Note: This page contains sample records for the topic "analysis including computer" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


1

Computer applications for engineering/structural analysis  

SciTech Connect

Analysts and organizations have a tendency to lock themselves into specific codes, with the obvious consequences of not addressing the real problem and thus reaching the wrong conclusion. This paper discusses the role of the analyst in selecting computer codes. The participation and support of a computation division in modifying the source program, configuration management, and pre- and post-processing of codes are among the subjects discussed. Specific examples illustrating the computer code selection process are described in the following problem areas: soil-structure interaction, structural analysis of nuclear reactors, analysis of waste tanks where fluid-structure interaction is important, analysis of equipment, structure-structure interaction, analysis of the operation of the Superconducting Super Collider, which involves friction and transient temperature, and 3D analysis of the 10-meter telescope being built in Hawaii. Validation and verification of computer codes and their impact on the selection process are also discussed.

Zaslawsky, M.; Samaddar, S.K.

1991-01-01T23:59:59.000Z

2

Accounting for the Energy Consumption of Personal Computing Including Portable Devices  

E-Print Network (OSTI)

...the impact of energy consumed by the computing sector on the environment, as well as on the electricity cost... what we can expect in the future. This requires a detailed analysis of energy consumption... Keywords: Energy, Electricity, Computing, Portable Devices, Environment, Networking.

Namboodiri, Vinod

3

Global Analysis of Solar Neutrino Oscillations Including SNO CC Measurement  

E-Print Network (OSTI)

For active and sterile neutrinos, we present the globally allowed solutions for two neutrino oscillations. We include the SNO CC measurement and all other relevant solar neutrino and reactor data. Five active neutrino oscillation solutions (LMA, LOW, SMA, VAC, and Just So2) are currently allowed at 3 sigma; three sterile neutrino solutions (Just So2, SMA, and VAC) are allowed at 3 sigma. The goodness of fit is satisfactory for all eight solutions. We also investigate the robustness of the allowed solutions by carrying out global analyses with and without: 1) imposing solar model constraints on the 8B neutrino flux, 2) including the Super-Kamiokande spectral energy distribution and day-night data, 3) including a continuous mixture of active and sterile neutrinos, 4) using an enhanced CC cross section for deuterium (due to radiative corrections), and 5) an optimistic, hypothetical reduction by a factor of three of the error of the SNO CC rate. For every analysis strategy used in this paper, the most favored solutions all involve large mixing angles: LMA, LOW, or VAC. The favored solutions are robust, but the presence at 3 sigma of individual sterile solutions and the active Just So2 solution is sensitive to the analysis assumptions.
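
For reference, the two-flavor vacuum survival probability underlying such oscillation fits has the standard textbook form (stated here for context; it is not quoted in the abstract):

    P_{ee} = 1 - \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)

where theta is the mixing angle, Delta m^2 the mass-squared splitting, L the baseline, and E the neutrino energy; the named solutions (LMA, SMA, LOW, VAC, Just So2) correspond to different allowed regions in the (mixing angle, Delta m^2) plane.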

John N. Bahcall; M. C. Gonzalez-Garcia; Carlos Pena-Garay

2001-06-25T23:59:59.000Z

4

Analysis of alpha Centauri AB including seismic constraints  

E-Print Network (OSTI)

Detailed models of alpha Cen A and B based on new seismological data for alpha Cen B by Carrier & Bourban (2003) have been computed using the Geneva evolution code including atomic diffusion. Taking into account the numerous observational constraints now available for the alpha Cen system, we find a stellar model which is in good agreement with the astrometric, photometric, spectroscopic and asteroseismic data. The global parameters of the alpha Cen system are now firmly constrained to an age of t=6.52+-0.30 Gyr, an initial helium mass fraction Y_i=0.275+-0.010 and an initial metallicity (Z/X)_i=0.0434+-0.0020. Thanks to these numerous observational constraints, we confirm that the mixing-length parameter alpha of the B component is larger than that of the A component, as already suggested by many authors (Noels et al. 1991, Fernandes & Neuforge 1995 and Guenther & Demarque 2000): alpha_B is about 8% larger than alpha_A (alpha_A=1.83+-0.10 and alpha_B=1.97+-0.10). Moreover, we show that asteroseismic measurements enable us to determine the radii of both stars with very high precision (errors smaller than 0.3%). The radii deduced from seismological data are compatible with the new interferometric results of Kervella et al. (2003) even if they are slightly larger than the interferometric radii (differences smaller than 1%).
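
The sub-percent radius precision quoted above rests on a standard asteroseismic scaling relation (a textbook result, not spelled out in the abstract): the large frequency separation of p-modes tracks the mean stellar density,

    \Delta\nu \equiv \nu_{n+1,\ell} - \nu_{n,\ell} \propto \sqrt{M/R^3}

so a measured Delta nu, combined with the astrometric masses, constrains each radius R very tightly.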

P. Eggenberger; C. Charbonnel; S. Talon; G. Meynet; A. Maeder; F. Carrier; G. Bourban

2004-01-29T23:59:59.000Z

5

Search for Earth-like planets includes LANL star analysis  

NLE Websites -- All DOE Office Websites (Extended Search)

Search for Earth-like planets includes LANL star analysis. The mission will not only be able to search for planets around other stars, but also yield new insights into the parent stars themselves. March 6, 2009. Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico, with the Jemez mountains as a backdrop to research and innovation covering multiple disciplines, from bioscience and sustainable energy sources to plasma physics and new materials.

6

Analysis of 70 Ophiuchi AB including seismic constraints  

E-Print Network (OSTI)

The analysis of solar-like oscillations for stars belonging to a binary system provides a unique opportunity to probe the internal stellar structure and to test our knowledge of stellar physics. Such oscillations have been recently observed and characterized for the A component of the 70 Ophiuchi system. A model of 70 Ophiuchi AB that correctly reproduces all observational constraints available for both stars is determined. An age of 6.2 +- 1.0 Gyr is found with an initial helium mass fraction Y_i=0.266 +- 0.015 and an initial metallicity (Z/X)_i=0.0300 +- 0.0025 when atomic diffusion is included and a solar value of the mixing-length parameter assumed. A precise and independent determination of the value of the mixing-length parameter needed to model 70 Oph A requires accurate measurement of the mean small separation, which is not yet available. Current asteroseismic observations, however, suggest that the value of the mixing-length parameter of 70 Oph A is lower than or equal to the solar calibrated value. The effects of atomic diffusion and of the choice of the adopted solar mixture were also studied. We also tested and compared the theoretical tools used for the modeling of stars for which p-mode frequencies are detected by performing this analysis with three different stellar evolution codes and two different calibration methods. We found that the different evolution codes and calibration methods we used led to perfectly coherent results.
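
The "mean small separation" referred to above is the standard asteroseismic diagnostic (definition supplied here; the abstract does not state it):

    \delta\nu_{02} \equiv \nu_{n,0} - \nu_{n-1,2}

averaged over radial order n. It probes the sound-speed gradient in the stellar core, which is why it discriminates between models with different mixing-length parameters.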

P. Eggenberger; A. Miglio; F. Carrier; J. Fernandes; N. C. Santos

2008-02-25T23:59:59.000Z

7

Certification plan for reactor analysis computer codes  

Science Conference Proceedings (OSTI)

A certification plan for reactor analysis computer codes used in Technical Specifications development and for other safety and production support calculations has been prepared. An action matrix, checklists, a time schedule, and a resource commitment table have been included in the plan. These items identify what is required to achieve certification of the codes, the time table that this will be accomplished on, and the resources needed to support such an effort.

Toffer, H.; Crowe, R.D.; Schwinkendorf, K.N. [Westinghouse Hanford Co., Richland, WA (United States)]; Pevey, R.E. [Westinghouse Savannah River Co., Aiken, SC (United States)]

1990-01-01T23:59:59.000Z

8

Analysis of 70 Ophiuchi AB including seismic constraints  

E-Print Network (OSTI)

The analysis of solar-like oscillations for stars belonging to a binary system provides a unique opportunity to probe the internal stellar structure and to test our knowledge of stellar physics. Such oscillations have been recently observed and characterized for the A component of the 70 Ophiuchi system. A model of 70 Ophiuchi AB that correctly reproduces all observational constraints available for both stars is determined. An age of 6.2 +- 1.0 Gyr is found with an initial helium mass fraction Y_i=0.266 +- 0.015 and an initial metallicity (Z/X)_i=0.0300 +- 0.0025 when atomic diffusion is included and a solar value of the mixing-length parameter assumed. A precise and independent determination of the value of the mixing-length parameter needed to model 70 Oph A requires accurate measurement of the mean small separation, which is not yet available. Current asteroseismic observations, however, suggest that the value of the mixing-length parameter of 70 Oph A is lower than or equal to the solar calibrated value. The e...

Eggenberger, P; Carrier, F; Fernandes, J; Santos, N C

2008-01-01T23:59:59.000Z

9

Including uncertainty in hazard analysis through fuzzy measures  

Science Conference Proceedings (OSTI)

This paper presents a method for capturing the uncertainty expressed by a Hazard Analysis (HA) expert team when estimating the frequencies and consequences of accident sequences and provides a sound mathematical framework for propagating this uncertainty to the risk estimates for these accident sequences. The uncertainty is readily expressed as distributions that can visually aid the analyst in determining the extent and source of risk uncertainty in HA accident sequences. The results also can be expressed as single statistics of the distribution in a manner analogous to expressing a probabilistic distribution as a point-value statistic such as a mean or median. The study discussed here used data collected during the elicitation portion of an HA on a high-level waste transfer process to demonstrate the techniques for capturing uncertainty. These data came from observations of the uncertainty that HA team members expressed in assigning frequencies and consequences to accident sequences during an actual HA. This uncertainty was captured and manipulated using ideas from possibility theory. The result of this study is a practical method for displaying and assessing the uncertainty in the HA team estimates of the frequency and consequences for accident sequences. This uncertainty provides potentially valuable information about accident sequences that typically is lost in the HA process.
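
A minimal sketch of the alpha-cut propagation idea from possibility theory, in the spirit of the method described above. The triangular distribution shapes, the numbers, and the frequency-times-consequence risk measure are illustrative assumptions, not the paper's actual elicitation model:

    import numpy as np

    def tri_alpha_cut(low, mode, high, alpha):
        """Interval [a, b] of a triangular possibility distribution at level alpha."""
        return (low + alpha * (mode - low), high - alpha * (high - mode))

    def propagate_risk(freq, cons, alphas=np.linspace(0.0, 1.0, 11)):
        """Multiply two triangular numbers interval-wise at each alpha-cut."""
        cuts = []
        for a in alphas:
            f_lo, f_hi = tri_alpha_cut(*freq, a)
            c_lo, c_hi = tri_alpha_cut(*cons, a)
            # the product interval is bounded by products of the endpoints
            p = [f_lo * c_lo, f_lo * c_hi, f_hi * c_lo, f_hi * c_hi]
            cuts.append((a, min(p), max(p)))
        return cuts

    # hypothetical elicited values: frequency (events/yr), consequence (severity)
    for alpha, lo, hi in propagate_risk((1e-4, 1e-3, 1e-2), (1.0, 5.0, 20.0)):
        print(f"alpha={alpha:.1f}: risk in [{lo:.3e}, {hi:.3e}]")

The resulting family of nested intervals is exactly the kind of distribution-like object the paper describes for displaying risk uncertainty.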

Bott, T.F.; Eisenhawer, S.W.

1997-12-01T23:59:59.000Z

10

Accounting for the energy consumption of personal computing including portable devices  

Science Conference Proceedings (OSTI)

In light of the increased awareness of global energy consumption, questions are also being asked about the contribution of computing equipment. Though studies have documented the share of energy consumption due to this equipment over the years, these ... Keywords: computing, electricity, energy, environment, networking, portable devices
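
The accounting idea itself reduces to summing power draw times usage over device states. A minimal sketch with illustrative numbers (assumptions of this example, not the paper's measured data):

    # annual electricity of personal computing devices: sum of P x t per state
    devices = {
        # name: (active W, active h/day, idle W, idle h/day)
        "desktop": (120.0, 4.0, 60.0, 4.0),
        "laptop": (30.0, 5.0, 8.0, 3.0),
        "phone": (5.0, 2.0, 0.2, 22.0),
    }

    kwh_per_year = sum(
        (pa * ha + pi * hi) * 365.0 / 1000.0
        for pa, ha, pi, hi in devices.values())
    print(f"total: {kwh_per_year:.0f} kWh/yr")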

Pavel Somavat; Shraddha Jadhav; Vinod Namboodiri

2010-04-01T23:59:59.000Z

11

A Diagnostic Method for Computing the Surface Wind from the Geostrophic Wind Including the Effects of Baroclinity  

Science Conference Proceedings (OSTI)

A diagnostic procedure to compute the surface wind from the geostrophic wind including the effects of baroclinity is designed and tested. Expressions are derived to calculate the similarity functions A and B for use when only the surface ...
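
The similarity functions A and B mentioned in the abstract enter the classical Rossby-number similarity (geostrophic drag) laws of the planetary boundary layer; one common statement (standard boundary-layer theory, not quoted from the paper, using the stress-aligned convention for the Northern Hemisphere) is

    \frac{\kappa u_g}{u_*} = \ln\frac{u_*}{f z_0} - A, \qquad \frac{\kappa v_g}{u_*} = -B

where kappa is the von Karman constant, u_* the friction velocity, f the Coriolis parameter, and z_0 the roughness length. Inverting these relations, with baroclinic corrections to A and B, yields the surface wind from the geostrophic wind.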

Maurice Danard

1988-12-01T23:59:59.000Z

12

Organizational Analysis in Computer Science  

E-Print Network (OSTI)

...systems design, development, and use in diverse application domains, including CASE tools, instructional...

Kling, Rob

1993-01-01T23:59:59.000Z

13

Computer applications for engineering/structural analysis. Revision 1  

SciTech Connect

Analysts and organizations have a tendency to lock themselves into specific codes with the obvious consequences of not addressing the real problem and thus reaching the wrong conclusion. This paper discusses the role of the analyst in selecting computer codes. The participation and support of a computation division in modifying the source program, configuration management, and pre- and post-processing of codes are among the subjects discussed. Specific examples illustrating the computer code selection process are described in the following problem areas: soil structure interaction, structural analysis of nuclear reactors, analysis of waste tanks where fluid structure interaction is important, analysis of equipment, structure-structure interaction, analysis of the operation of the superconductor supercollider which includes friction and transient temperature, and 3D analysis of the 10-meter telescope being built in Hawaii. Validation and verification of computer codes and their impact on the selection process are also discussed.

Zaslawsky, M.; Samaddar, S.K.

1991-12-31T23:59:59.000Z

14

A Computer Package for Transmission Line Analysis  

Science Conference Proceedings (OSTI)

A computer program, LIGNE, has been developed as a comprehensive teaching aid for the section on transmission line theory in an electromagnetics course. The program assists the student in the analysis of transmission lines, enabling him to quickly assess ...
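
The kind of computation such a teaching aid automates can be illustrated with the standard lossless-line input-impedance formula; this is a generic sketch, since LIGNE's actual feature set is only partially described in the snippet above:

    import math

    def input_impedance(z0, zl, length, wavelength):
        """Zin of a lossless line; length and wavelength in the same units."""
        beta = 2 * math.pi / wavelength   # phase constant
        t = math.tan(beta * length)       # tan(beta * l)
        return z0 * (zl + 1j * z0 * t) / (z0 + 1j * zl * t)

    # example: 75-ohm load seen through an eighth-wavelength 50-ohm line
    print(input_impedance(50.0, 75.0, 0.125, 1.0))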

Georges-Andre Chaudron; Manfred Nachman

1987-11-01T23:59:59.000Z

15

Mathematics of Computer Algebra and Analysis Project ... - CECM  

E-Print Network (OSTI)

MITACS Seminar Series on Mathematics of Computer Algebra and Analysis. Large Expression Management in Computer Algebra for Symbolic Modelling.

16

Temporal fringe pattern analysis with parallel computing  

SciTech Connect

Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution periods were reduced by 1.6 times when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated for data transfer, data read, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reduce execution times in temporal fringe pattern analysis.
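
A minimal illustration of the single-program multiple-data pattern the authors describe, assuming Python's multiprocessing module as a stand-in for their cluster middleware (their actual implementation is not detailed in the abstract):

    import numpy as np
    from multiprocessing import Pool

    def analyze_chunk(chunk):
        """Toy per-pixel temporal analysis: frame index of peak intensity."""
        return np.argmax(chunk, axis=0)

    if __name__ == "__main__":
        frames = np.random.rand(100, 64, 64)         # (time, y, x) fringe stack
        columns = np.array_split(frames, 4, axis=2)  # partition work along x
        with Pool(processes=4) as pool:              # same program, 4 workers
            parts = pool.map(analyze_chunk, columns)
        result = np.concatenate(parts, axis=1)       # reassemble the field
        print(result.shape)

As the abstract notes, the payoff depends on keeping data transfer and wait time small relative to the per-chunk compute time.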

Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

2005-11-20T23:59:59.000Z

17

Computation of Domain-Averaged Irradiance with a Simple Two-Stream Radiative Transfer Model Including Vertical Cloud Property Correlations  

NLE Websites -- All DOE Office Websites (Extended Search)

Computation of Domain-Averaged Irradiance with a Simple Two-Stream Radiative Transfer Model Including Vertical Cloud Property Correlations. S. Kato, Center for Atmospheric Sciences, Hampton University, Hampton, Virginia. Introduction: Recent development of remote sensing instruments by the Atmospheric Radiation Measurement (ARM) Program provides information on the spatial and temporal variability of cloud structures. However, it is not clear what cloud properties are required to express complicated cloud fields in a realistic way, or how to use them in a relatively simple one-dimensional (1D) radiative transfer model to compute the domain-averaged irradiance. To address this issue, a simple shortwave radiative transfer model that can treat the vertical cloud optical property correlation is developed. The model is based on the gamma-weighted...

18

DOE-2 building energy analysis computer program  

SciTech Connect

Concern with energy conservation requirements has resulted in a growing awareness throughout the architectural/engineering community of the need for an easy-to-use, fast-running, completely documented, public-domain computer program for the energy-use analysis of buildings. DOE-2 has been developed to meet these needs. The program emphasizes ease of input, efficiency of computation, flexibility of operation, and usefulness of output. A key factor in meeting these requirements has been achieved by the development of a free-format Building Design Language (BDL) that greatly facilitates the user's task in defining the building; its heating, ventilating, and air conditioning (HVAC) systems; and its operation. The DOE-2 program is described.

Hunn, B.D.

1979-01-01T23:59:59.000Z

19

Computational Challenges and Analysis under Increasingly Dynamic and Uncertain Electric Power System Conditions

E-Print Network (OSTI)

Computational Challenges and Analysis under Increasingly Dynamic and Uncertain Electric Power System Conditions. Thrust Area 5 White Paper. Empowering Minds to Engineer the Future Electric Energy System.

20

Application of redundant computation in software performance analysis  

Science Conference Proceedings (OSTI)

Redundant computation is an execution of a program statement(s) that does not contribute to the program output. The same statement on one execution may exhibit redundant computation whereas on a different execution, it contributes to the program output. ... Keywords: control dependence, data dependence, dependence analysis, performance analysis, redundant code, redundant computation
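
A tiny example of the definition (constructed for this note, not taken from the paper): the same assignment is redundant on some executions and contributes to the output on others.

    def f(x):
        t = x * x      # executed on every run
        if x > 0:
            return t   # here t contributes to the program output
        return 0       # here t was computed but never used: redundant computation

    print(f(3), f(-3))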

Zakarya Alzamil; Bogdan Korel

2005-07-01T23:59:59.000Z



21

Seismic fracture analysis of concrete gravity dams including dam-reservoir interaction  

Science Conference Proceedings (OSTI)

In this study, the seismic fracture response of concrete gravity dams is investigated, taking into account the effects of dam-reservoir interaction. A co-axial rotating crack model (CRCM), which includes strain-softening behavior, is selected for concrete ... Keywords: Concrete gravity dam, Dam-reservoir interaction, Non-linear analysis, Seismic fracture

Yusuf Calayir; Muhammet Karaton

2005-07-01T23:59:59.000Z

22

Comments on “A Diagnostic Method for Computing the Surface Wind from the Geostrophic Wind Including the Effects of Baroclinity”  

Science Conference Proceedings (OSTI)

A numerical method for solving the generalized Ekman problem in which both geostrophic wind and eddy diffusivity are allowed to vary with height is used to compute surface cross-isobaric flow angles and surface winds. These are compared with ...

Lynn L. LeBlanc; Eric A. Pani

1989-10-01T23:59:59.000Z

23

The Design and Analysis of Computer Experiments | Open Energy...  

Open Energy Info (EERE)

2003. DOI not provided; check for DOI availability at http://crossref.org. Online/Internet link for The Design and Analysis of Computer Experiments. Citation: Thomas J....

24

Use of high performance computing in neutronics analysis activities  

NLE Websites -- All DOE Office Websites (Extended Search)

Use of high performance computing in neutronics analysis activities. M.A. Smith, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA. Abstract: Reactor design is...

25

Design and Analysis of Computer Experiments | Open Energy Information  

Open Energy Info (EERE)

Journal Article: Design and Analysis of Computer Experiments. Abstract: Many scientific phenomena are now investigated by complex computer models or codes. A computer experiment is a number of runs of the code with various inputs. A feature of many computer experiments is that the output is deterministic--rerunning the code with the same inputs gives identical observations. Often, the codes are computationally expensive to run, and a common objective of an experiment is to fit a cheaper predictor of the output to the data. Our approach is to model the deterministic output as the realization of a stochastic process, thereby providing a statistical...
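
The stochastic-process model alluded to in the truncated final sentence is conventionally written as (the standard kriging formulation for computer experiments, supplied here since the abstract is cut off):

    y(\mathbf{x}) = \sum_j \beta_j f_j(\mathbf{x}) + Z(\mathbf{x}), \qquad
    \mathrm{Cov}\,[Z(\mathbf{x}), Z(\mathbf{x}')] = \sigma^2 \prod_i \exp\!\big(-\theta_i \lvert x_i - x_i' \rvert^{p_i}\big)

where the regression terms capture known trends and the Gaussian process Z interpolates the deterministic code output exactly at the design points, providing the "cheaper predictor" mentioned above.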

26

An analysis of the cloud computing platform  

E-Print Network (OSTI)

A slew of articles have been written about the fact that computing will eventually go in the direction of electricity. Just as most software users these days also own the hardware that runs the software, electricity users ...

Bhattacharjee, Ratnadeep

2009-01-01T23:59:59.000Z

27

Argonne Transportation Research and Analysis Computing Center...  

NLE Websites -- All DOE Office Websites (Extended Search)

...Department of State list of "State Sponsors of Terrorism." T-4 countries currently are Cuba, Iran, North Korea, Sudan, and Syria. A request for TRACC computer access by a T-4 country...

28

Method for including operation and maintenance costs in the economic analysis of active solar energy systems  

DOE Green Energy (OSTI)

For a developing technology such as solar energy, the costs for operation and maintenance (O and M) can be substantial. In the past, most economic analyses included these costs by simply assuming that an annual cost will be incurred that is proportional to the initial cost of the system. However, in assessing the economics of new systems proposed for further research and development, such a simplification can obscure the issues. For example, when the typical method for including O and M costs in an economic analysis is used, the O and M costs associated with a newly developed, more reliable, and slightly more expensive controller will be assumed to increase - an obvious inconsistency. The method presented in this report replaces this simplistic approach with a representation of the O and M costs that explicitly accounts for the uncertainties and risks inherent in the operation of any equipment. A detailed description of the data inputs required by the method is included as well as a summary of data sources and an example of the method as applied to an active solar heating system.
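
The contrast the abstract draws can be made concrete with a sketch: instead of charging O&M as a fixed fraction of first cost, build it up from component failure rates and repair costs. The components and numbers below are illustrative assumptions, not data from the report:

    components = [
        # (name, failures/yr, $ per repair, scheduled $ per yr)
        ("collector loop pump", 0.10, 350.0, 20.0),
        ("controller", 0.05, 150.0, 0.0),
        ("antifreeze check", 0.00, 0.0, 45.0),
    ]

    expected_om = sum(rate * repair + sched
                      for _, rate, repair, sched in components)
    print(f"expected O&M cost: ${expected_om:.2f}/yr")

Under this accounting, a more reliable controller lowers the O&M estimate even if its first cost is higher, avoiding the inconsistency described above.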

Short, W.D.

1986-08-01T23:59:59.000Z

29

Virtualization for Security: Including Sandboxing, Disaster Recovery, High Availability, Forensic Analysis, and Honeypotting  

Science Conference Proceedings (OSTI)

One of the biggest buzzwords in the IT industry for the past few years, virtualization has matured into a practical requirement for many best-practice business scenarios, becoming an invaluable tool for security professionals at companies of every size. ... Keywords: Applied, Business Software, Computer Science, Computers, Information Management, Security

John Hoopes

2008-12-01T23:59:59.000Z

30

This book is intended for a wide readership including engineers, applied mathematicians, computer scientists, and graduate students...

E-Print Network (OSTI)

Preface: This book is intended for a wide readership including engineers, applied mathematicians... on the Lyapunov matrix equation. The book presents different techniques for solving and analyzing the algebraic... The book provides easy and quick references for the solution of many engineering and mathematical...
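
As a concrete instance of the algebraic Lyapunov equation the book treats, A X + X A^T + Q = 0 can be solved numerically in a few lines. The SciPy call is this sketch's assumption (the book itself presents the underlying algorithms):

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[-2.0, 1.0],
                  [0.0, -3.0]])           # a stable system matrix (example)
    Q = np.eye(2)                         # symmetric positive-definite term
    X = solve_continuous_lyapunov(A, -Q)  # solves A X + X A^T = -Q
    print(np.allclose(A @ X + X @ A.T, -Q))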

Gajic, Zoran

31

Global sensitivity analysis of stochastic computer models with joint metamodels  

Science Conference Proceedings (OSTI)

The global sensitivity analysis method used to quantify the influence of uncertain input variables on the variability in numerical model responses has already been applied to deterministic computer codes; deterministic means here that the same set of ... Keywords: Computer experiment, Gaussian process, Generalized additive model, Joint modeling, Sobol indices, Uncertainty
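
For reference, the Sobol indices named in the keywords are the standard variance-based sensitivity measures (definitions from the literature, since the abstract is truncated):

    S_i = \frac{\mathrm{Var}\big(\mathbb{E}[Y \mid X_i]\big)}{\mathrm{Var}(Y)}, \qquad
    S_{T_i} = 1 - \frac{\mathrm{Var}\big(\mathbb{E}[Y \mid X_{\sim i}]\big)}{\mathrm{Var}(Y)}

where S_i is the first-order effect of input X_i and S_{T_i} its total effect including interactions; the joint metamodel of the title extends their estimation to stochastic codes whose output varies between replicate runs.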

Amandine Marrel; Bertrand Iooss; Sébastien Veiga; Mathieu Ribatet

2012-05-01T23:59:59.000Z

32

Application of the Computer Program SASSI for Seismic SSI Analysis...

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities. Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED). Presented by Lisa Anderson (BNI), US DOE NPH Workshop, October...

33

Information management for global environmental change, including the Carbon Dioxide Information Analysis Center  

Science Conference Proceedings (OSTI)

The issue of global change is international in scope. A body of international organizations oversees the worldwide coordination of research and policy initiatives. In the US the National Science and Technology Council (NSTC) was established in November of 1993 to provide coordination of science, space, and technology policies throughout the federal government. NSTC is organized into nine proposed committees. The Committee on Environment and Natural Resources (CENR) oversees the US Global Change Research Program (USGCRP). As part of the USGCRP, the US Department of Energy's Global Change Research Program aims to improve the understanding of Earth systems and to strengthen the scientific basis for the evaluation of policy and government action in response to potential global environmental changes. This paper examines the information and data management roles of several international and national programs, including Oak Ridge National Laboratory's (ORNL's) global change information programs. An emphasis will be placed on the Carbon Dioxide Information Analysis Center (CDIAC), which also serves as the World Data Center-A for Atmospheric Trace Gases.

Stoss, F.W. [Oak Ridge National Lab., TN (United States). Carbon Dioxide Information Analysis Center]

1994-06-01T23:59:59.000Z

34

Additive spectral method for fuzzy cluster analysis of similarity data including community structure and affinity matrices  

Science Conference Proceedings (OSTI)

An additive spectral method for fuzzy clustering is proposed. The method operates on a clustering model which is an extension of the spectral decomposition of a square matrix. The computation proceeds by extracting clusters one by one, which makes the ... Keywords: Additive fuzzy clustering, Community structure, Lapin transformation, One-by-one clustering, Research activity structure, Spectral fuzzy clustering

Boris Mirkin; Susana Nascimento

2012-01-01T23:59:59.000Z

35

Markedness and frequency: a computational analysis  

Science Conference Proceedings (OSTI)

When the markedness analysis is extended to the lexical and grammatical levels, the question arises whether an analogue of the markedness/frequency correlation, observed in phonology, also exists on these higher linguistic levels. This article presents ...

Henry Kučera

1982-07-01T23:59:59.000Z

36

DeCompactionTool: Software for subsidence analysis including statistical error quantification  

Science Conference Proceedings (OSTI)

Subsidence analysis based on decompaction of the sedimentary record is a standard method for reconstructing the evolution of sedimentary basins. For such an analysis, data on strata thickness, lithology, age constraints, lithological properties, porosity, ... Keywords: Basin analysis, Error quantification, Monte Carlo simulation, Subsidence, Vienna Basin
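
The decompaction step at the core of such tools typically rests on an exponential porosity-depth law and conservation of solid grain volume (standard backstripping relations, assumed here since the abstract does not spell them out):

    \phi(z) = \phi_0 e^{-cz}, \qquad
    \int_{z_1}^{z_2} \big(1 - \phi(z)\big)\,dz = \int_{z_1'}^{z_2'} \big(1 - \phi(z)\big)\,dz

where phi_0 and c are lithology-dependent constants and the equality moves a layer from present burial depths (z_1, z_2) to restored depths (z_1', z_2') while preserving its grain thickness; Monte Carlo sampling of phi_0, c, ages, and thicknesses then yields the statistical error quantification of the title.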

Monika Hölzel; Robert Faber; Michael Wagreich

2008-11-01T23:59:59.000Z

37

Multiscale analysis of nonlinear systems using computational homology  

DOE Green Energy (OSTI)

This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists the major directions of research pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (a) a clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (b) an investigation of homology as a probe for flow dynamics, and (c) the construction of a new convection apparatus for probing the effects of large aspect ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - Two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields form complex patterns of 'force chains' that may be amenable to analysis using computational homology. (5) Microstructure Characterization - We extended our previous work on studying the time evolution of patterns associated with phase separation in conserved concentration fields. (6) Probabilistic Homology Validation - Work on microstructure characterization is based on numerically studying the homology of certain sublevel sets of a function, whose evolution is described by deterministic or stochastic evolution equations. (7) Computational Homology and Dynamics - Topological methods can be used to rigorously describe the dynamics of nonlinear systems. We are approaching this problem from several perspectives and through a variety of systems. (8) Stress Networks in Polycrystals - We have characterized stress networks in polycrystals. This part of the project is aimed at developing homological metrics which can aid in distinguishing not only microstructures, but also derived mechanical response fields. (9) Microstructure-Controlled Drug Release - This part of the project is concerned with the development of topological metrics in the context of controlled drug delivery systems, such as drug-eluting stents. We are particularly interested in developing metrics which can be used to link the processing stage to the resulting microstructure, and ultimately to the achieved system response in terms of drug release profiles. (10) Microstructure of Fuel Cells - We have been using our computational homology software to analyze the topological structure of the void, metal and ceramic components of a Solid Oxide Fuel Cell.
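
As a small self-contained illustration of the kind of topological quantity involved, the sketch below counts connected components (the zeroth Betti number) of a sublevel set of a 2-D scalar field using union-find. It is a toy stand-in, not the project's actual computational homology software:

    import numpy as np

    def beta0_sublevel(u, threshold):
        """Number of connected components of {u <= threshold} (4-connectivity)."""
        mask = u <= threshold
        parent = {}

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]   # path compression
                a = parent[a]
            return a

        ny, nx = u.shape
        for i in range(ny):
            for j in range(nx):
                if mask[i, j]:
                    parent[(i, j)] = (i, j)
                    if i > 0 and mask[i - 1, j]:      # merge with pixel above
                        parent[find((i, j))] = find((i - 1, j))
                    if j > 0 and mask[i, j - 1]:      # merge with pixel to the left
                        parent[find((i, j))] = find((i, j - 1))
        return len({find(p) for p in parent})

    field = np.random.rand(64, 64)
    print(beta0_sublevel(field, 0.3))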

Konstantin Mischaikow, Rutgers University / Georgia Institute of Technology; Michael Schatz, Georgia Institute of Technology; William Kalies, Florida Atlantic University; Thomas Wanner, George Mason University

2010-05-19T23:59:59.000Z

38

Multiscale analysis of nonlinear systems using computational homology  

DOE Green Energy (OSTI)

This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists the major directions of research pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (a) a clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (b) an investigation of homology as a probe for flow dynamics, and (c) the construction of a new convection apparatus for probing the effects of large aspect ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - Two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields form complex patterns of 'force chains' that may be amenable to analysis using computational homology. (5) Microstructure Characterization - We extended our previous work on studying the time evolution of patterns associated with phase separation in conserved concentration fields. (6) Probabilistic Homology Validation - Work on microstructure characterization is based on numerically studying the homology of certain sublevel sets of a function, whose evolution is described by deterministic or stochastic evolution equations. (7) Computational Homology and Dynamics - Topological methods can be used to rigorously describe the dynamics of nonlinear systems. We are approaching this problem from several perspectives and through a variety of systems. (8) Stress Networks in Polycrystals - We have characterized stress networks in polycrystals. This part of the project is aimed at developing homological metrics which can aid in distinguishing not only microstructures, but also derived mechanical response fields. (9) Microstructure-Controlled Drug Release - This part of the project is concerned with the development of topological metrics in the context of controlled drug delivery systems, such as drug-eluting stents. We are particularly interested in developing metrics which can be used to link the processing stage to the resulting microstructure, and ultimately to the achieved system response in terms of drug release profiles. (10) Microstructure of Fuel Cells - We have been using our computational homology software to analyze the topological structure of the void, metal and ceramic components of a Solid Oxide Fuel Cell.

Konstantin Mischaikow; Michael Schatz; William Kalies; Thomas Wanner

2010-05-24T23:59:59.000Z

39

Present and Future Computational Requirements, General Plasma Physics: Center for Integrated Computation and Analysis of Reconnection and Turbulence (CICART)

NLE Websites -- All DOE Office Websites (Extended Search)

Present and Future Computational Requirements, General Plasma Physics: Center for Integrated Computation and Analysis of Reconnection and Turbulence (CICART). Kai Germaschewski, Homa Karimabadi, Amitava Bhattacharjee, Fatima Ebrahimi, Will Fox, Liwei Lin. CICART, Space Science Center / Dept. of Physics, University of New Hampshire, March 18, 2013. Outline: (1) Project Information; (2) Computational Strategies; (3) Current HPC usage and methods; (4) HPC requirements for 2017; (5) Strategies for New Architectures. Director: Amitava Bhattacharjee, PPPL.

40

COMPUTATIONAL FLUID DYNAMICS MODELING ANALYSIS OF COMBUSTORS  

DOE Green Energy (OSTI)

In the current fiscal year FY01, several CFD simulations were conducted to investigate the effects of moisture in biomass/coal, particle injection locations, and flow parameters on carbon burnout and NOx inside a 150 MW GEEZER industrial boiler. Various simulations were designed to predict the suitability of biomass cofiring in coal combustors, and to explore the possibility of using biomass as a reburning fuel to reduce NOx. Some additional CFD simulations were also conducted on the CERF combustor to examine the combustion characteristics of pulverized coal in enriched O2/CO2 environments. Most of the CFD models available in the literature treat particles as point masses with uniform temperature inside the particles. This isothermal condition may not be suitable for larger biomass particles. To this end, a stand-alone program was developed from first principles to account for heat conduction from the surface of the particle to its center. It is envisaged that the recently developed non-isothermal stand-alone module will be integrated with the Fluent solver during the next fiscal year to accurately predict the carbon burnout from larger biomass particles. Anisotropy in heat transfer will be explored using different conductivities in the radial and axial directions. The above models will be validated/tested on various full-scale industrial boilers. The current NOx modules will be modified to account for local CH, CH2, and CH3 radical chemistry; currently they are based on global chemistry. It may also be worth exploring the effect of enriched O2/CO2 environments on carbon burnout and NOx concentration. The research objective of this study is to develop a 3-dimensional combustor model for biomass co-firing and reburning applications using the Fluent computational fluid dynamics code.
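
A minimal sketch of the non-isothermal particle idea: explicit finite differences for radial heat conduction in a sphere, dT/dt = alpha (1/r^2) d/dr (r^2 dT/dr), resolving the gradient from surface to center. Parameter values and the fixed-surface-temperature boundary are illustrative assumptions, not the module the authors coupled to Fluent:

    import numpy as np

    alpha = 1e-7                       # thermal diffusivity, m^2/s (assumed)
    R, n = 1e-3, 50                    # particle radius (m), radial nodes
    dr = R / (n - 1)
    dt = 0.1 * dr**2 / alpha           # stable explicit time step
    r = np.linspace(0.0, R, n)
    T = np.full(n, 300.0)              # initial particle temperature, K
    T[-1] = 1200.0                     # hot gas-side surface temperature, K

    for _ in range(2000):
        Tn = T.copy()
        # interior nodes: dT/dt = alpha * (T'' + (2/r) T')
        Tn[1:-1] = T[1:-1] + dt * alpha * (
            (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dr**2
            + (2.0 / r[1:-1]) * (T[2:] - T[:-2]) / (2.0 * dr))
        # center node: symmetry limit gives dT/dt = 6 alpha (T[1]-T[0]) / dr^2
        Tn[0] = T[0] + dt * alpha * 6.0 * (T[1] - T[0]) / dr**2
        T = Tn
    print(f"center {T[0]:.0f} K, surface {T[-1]:.0f} K")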

Mathur, M.P.; Freeman, Mark (U.S. DOE National Energy Technology Laboratory); Gera, Dinesh (Fluent, Inc.)

2001-11-06T23:59:59.000Z



41

Computer-aided visualization and analysis system for sequence evaluation  

DOE Patents (OSTI)

A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

Chee, Mark S. (3199 Waverly St., Palo Alto, CA 94306)

1998-08-18T23:59:59.000Z

42

Computer-aided visualization and analysis system for sequence evaluation  

DOE Patents (OSTI)

A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

Chee, Mark S. (Palo Alto, CA)

2001-06-05T23:59:59.000Z

43

Computer-aided visualization and analysis system for sequence evaluation  

DOE Patents (OSTI)

A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

Chee, Mark S. (Palo Alto, CA)

1999-10-26T23:59:59.000Z

44

Modeling and analysis of transient vehicle underhood thermo-hydrodynamic events using computational fluid dynamics and high performance computing.

DOE Green Energy (OSTI)

This work has explored the preliminary design of a Computational Fluid Dynamics (CFD) tool for the analysis of transient vehicle underhood thermo-hydrodynamic events using high performance computing platforms. The goal of this tool will be to extend the capabilities of an existing established CFD code, STAR-CD, allowing the car manufacturers to analyze the impact of transient operational events on the underhood thermal management by exploiting the computational efficiency of modern high performance computing systems. In particular, the project has focused on the CFD modeling of the radiator behavior during a specified transient. The 3-D radiator calculations were performed using STAR-CD, which can perform both steady-state and transient calculations, on the cluster computer available at ANL in the Nuclear Engineering Division. Specified transient boundary conditions, based on experimental data provided by Adapco and DaimlerChrysler were used. The possibility of using STAR-CD in a transient mode for the entire period of time analyzed has been compared with other strategies which involve the use of STAR-CD in a steady-state mode at specified time intervals, while transient heat transfer calculations would be performed for the rest of the time. The results of these calculations have been compared with the experimental data provided by Adapco/DaimlerChrysler and recommendations for future development of an optimal strategy for the CFD modeling of transient thermo-hydrodynamic events have been made. The results of this work open the way for the development of a CFD tool for the transient analysis of underhood thermo-hydrodynamic events, which will allow the integrated transient thermal analysis of the entire cooling system, including both the engine block and the radiator, on high performance computing systems.

Tentner, A.; Froehle, P.; Wang, C.; Nuclear Engineering Division

2004-01-01T23:59:59.000Z

45

Modeling and analysis of transient vehicle underhood thermo-hydrodynamic events using computational fluid dynamics and high performance computing.

DOE Green Energy (OSTI)

This work has explored the preliminary design of a Computational Fluid Dynamics (CFD) tool for the analysis of transient vehicle underhood thermo-hydrodynamic events using high performance computing platforms. The goal of this tool will be to extend the capabilities of an existing established CFD code, STAR-CD, allowing the car manufacturers to analyze the impact of transient operational events on the underhood thermal management by exploiting the computational efficiency of modern high performance computing systems. In particular, the project has focused on the CFD modeling of the radiator behavior during a specified transient. The 3-D radiator calculations were performed using STAR-CD, which can perform both steady-state and transient calculations, on the cluster computer available at ANL in the Nuclear Engineering Division. Specified transient boundary conditions, based on experimental data provided by Adapco and DaimlerChrysler were used. The possibility of using STAR-CD in a transient mode for the entire period of time analyzed has been compared with other strategies which involve the use of STAR-CD in a steady-state mode at specified time intervals, while transient heat transfer calculations would be performed for the rest of the time. The results of these calculations have been compared with the experimental data provided by Adapco/DaimlerChrysler and recommendations for future development of an optimal strategy for the CFD modeling of transient thermo-hydrodynamic events have been made. The results of this work open the way for the development of a CFD tool for the transient analysis of underhood thermo-hydrodynamic events, which will allow the integrated transient thermal analysis of the entire cooling system, including both the engine block and the radiator, on high performance computing systems.

Froehle, P.; Tentner, A.; Wang, C.

2003-09-05T23:59:59.000Z

46

MMA, A Computer Code for Multi-Model Analysis  

Science Conference Proceedings (OSTI)

This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
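
For the information-criterion methods, the posterior model probabilities MMA reports take the familiar exponential-weight form; the snippet below is a generic sketch of that formula, not MMA's input/output format:

    import numpy as np

    def model_weights(ic_values):
        """w_i = exp(-delta_i/2) / sum_j exp(-delta_j/2), delta_i = IC_i - min IC."""
        ic = np.asarray(ic_values, dtype=float)
        delta = ic - ic.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # hypothetical AICc values for three alternative calibrated models
    print(model_weights([210.4, 212.1, 218.9]))

Model-averaged parameter estimates and predictions then follow as the weighted sums of the individual models' values.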

Eileen P. Poeter and Mary C. Hill

2007-08-20T23:59:59.000Z

47

Determination of pion and kaon fragmentation functions including spin asymmetries data in a global analysis  

E-Print Network (OSTI)

We present a new functional form of pion and kaon fragmentation functions up to next-to-leading order obtained through a global fit to single-inclusive electron-positron annihilation data, and also employ the semi-inclusive deep inelastic scattering asymmetry data from HERMES and COMPASS to determine FFs. In this analysis we consider the impact of semi-inclusive deep inelastic scattering asymmetry data on the fragmentation functions, where the produced hadrons of different electric charge are identified. We break the symmetry assumption between quark and anti-quark fragmentation functions for favored partons by using the asymmetry data. The results of our analysis are in good agreement with electron-positron annihilation data and also with all the semi-inclusive deep inelastic scattering asymmetry data. We also apply the obtained fragmentation functions to predict the scaled-energy distribution of $\pi^+/K^+$ inclusively produced in top-quark decays at next-to-leading order in the zero-mass variable-flavor-number scheme, exploiting the universality and scaling violations of fragmentation functions.
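
The "functional form" referred to above is conventionally an ansatz of the type (the generic shape used in NLO fragmentation-function fits; the paper's exact parameterization is not reproduced in this abstract):

    D_i^h(z, \mu_0^2) = N_i\, z^{\alpha_i} (1 - z)^{\beta_i}

specified at a starting scale mu_0 and evolved to the scale of the data with the DGLAP equations; breaking the favored quark/anti-quark symmetry amounts to fitting independent N, alpha, beta for q and qbar.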

M. Soleymaninia; A. N. Khorramian; S. M. Moosavinejad; F. Arbabifar

2013-06-07T23:59:59.000Z

48

A compendium of computer codes used in particle accelerator design and analysis  

Science Conference Proceedings (OSTI)

We present a compilation of computer codes used in the design and analysis of particle accelerators. This document describes each code on a one- or two-page data sheet. All codes included in this compilation are filed at Los Alamos. (AIP)

Los Alamos Accelerator Code Group

1989-01-01T23:59:59.000Z

49

Computer analysis of the two versions of Byzantine chess  

E-Print Network (OSTI)

In the Byzantine Empire of the 11th-15th centuries CE, chess was played on a circular board. Two versions were known - REGULAR and SYMMETRIC. The difference between them is simple: the white queen is placed either on a light (regular) or on a dark square (symmetric). However, computer analysis reveals the results of this 'small perturbation'.

Anatole Khalfine; Ed Troyan

2007-01-21T23:59:59.000Z

50

Advanced Computer Methods for Grounding Analysis

E-Print Network (OSTI)

We present the foundations of a numerical formulation based on the Boundary Element Method for grounding analysis. The analysis of grounding grids of large electrical substations in practical cases presents some difficulties, mainly due...

Colominas, Ignasi

51

RDI's Wisdom Way Solar Village Final Report: Includes Utility Bill Analysis of Occupied Homes  

SciTech Connect

In 2010, Rural Development, Inc. (RDI) completed construction of Wisdom Way Solar Village (WWSV), a community of ten duplexes (20 homes) in Greenfield, MA. RDI was committed to very low energy use from the beginning of the design process throughout construction. Key features include:
1. Careful site plan so that all homes have solar access (for active and passive);
2. Cellulose insulation providing R-40 walls, R-50 ceiling, and R-40 floors;
3. Triple-pane windows;
4. Airtight construction (~0.1 CFM50/ft2 enclosure area);
5. Solar water heating systems with tankless, gas, auxiliary heaters;
6. PV systems (2.8 or 3.4 kW STC);
7. 2-4 bedrooms, 1,100-1,700 ft2.
The design heating loads in the homes were so small that each home is heated with a single, sealed-combustion, natural gas room heater. The cost savings from the simple HVAC systems made possible the tremendous investments in the homes' envelopes. The Consortium for Advanced Residential Buildings (CARB) monitored temperatures and comfort in several homes during the winter of 2009-2010. In the spring of 2011, CARB obtained utility bill information from 13 occupied homes. Because of efficient lights, appliances, and conscientious home occupants, the energy generated by the solar electric systems exceeded the electric energy used in most homes. Most homes, in fact, had a net credit from the electric utility over the course of a year. On the natural gas side, total gas costs averaged $377 per year (for heating, water heating, cooking, and clothes drying). Total energy costs were even less - $337 per year, including all utility fees. The highest annual energy bill for any home evaluated was $458; the lowest was $171.

Robb Aldrich, Steven Winter Associates

2011-07-01T23:59:59.000Z

52

RDI's Wisdom Way Solar Village Final Report: Includes Utility Bill Analysis of Occupied Homes  

DOE Green Energy (OSTI)

...7. 2-4 bedrooms, 1,100-1,700 ft2. The design heating loads in the homes were so small that each home is heated with a single, sealed-combustion, natural gas room heater. The cost savings from the simple HVAC systems made possible the tremendous investments in the homes' envelopes. The Consortium for Advanced Residential Buildings (CARB) monitored temperatures and comfort in several homes during the winter of 2009-2010. In the spring of 2011, CARB obtained utility bill information from 13 occupied homes. Because of efficient lights, appliances, and conscientious home occupants, the energy generated by the solar electric systems exceeded the electric energy used in most homes. Most homes, in fact, had a net credit from the electric utility over the course of a year. On the natural gas side, total gas costs averaged $377 per year (for heating, water heating, cooking, and clothes drying). Total energy costs were even less - $337 per year, including all utility fees. The highest annual energy bill for any home evaluated was $458; the lowest was $171.

Robb Aldrich, Steven Winter Associates

2011-07-01T23:59:59.000Z

53

GOCE DATA ANALYSIS: REALIZATION OF THE INVARIANTS APPROACH IN A HIGH PERFORMANCE COMPUTING ENVIRONMENT  

E-Print Network (OSTI)

GOCE DATA ANALYSIS: REALIZATION OF THE INVARIANTS APPROACH IN A HIGH PERFORMANCE COMPUTING ENVIRONMENT... implementation of the algorithms on high performance computing platforms. 2. INVARIANTS REPRESENTATION...

Stuttgart, Universität

54

Structural analysis of magnetic fusion energy systems in a combined interactive/batch computer environment  

SciTech Connect

A system of computer programs has been developed to aid in the preparation of input data for and the evaluation of output data from finite element structural analyses of magnetic fusion energy devices. The system utilizes the NASTRAN structural analysis computer program and a special set of interactive pre- and post-processor computer programs, and has been designed for use in an environment wherein a time-share computer system is linked to a batch computer system. In such an environment, the analyst must only enter, review and/or manipulate data through interactive terminals linked to the time-share computer system. The primary pre-processor programs include NASDAT, NASERR and TORMAC. NASDAT and TORMAC are used to generate NASTRAN input data. NASERR performs routine error checks on this data. The NASTRAN program is run on a batch computer system using data generated by NASDAT and TORMAC. The primary post-processing programs include NASCMP and NASPOP. NASCMP is used to compress the data initially stored on magnetic tape by NASTRAN so as to facilitate interactive use of the data. NASPOP reads the data stored by NASCMP and reproduces NASTRAN output for selected grid points, elements and/or data types.

Johnson, N.E.; Singhal, M.K.; Walls, J.C.; Gray, W.H.

1979-01-01T23:59:59.000Z

55

Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes - Update to Include Evaluation of Impact of Including a Humidifier Option  

SciTech Connect

The long range strategic goal of the Department of Energy's Building Technologies (DOE/BT) Program is to create, by 2020, technologies and design approaches that enable the construction of net-zero energy homes at low incremental cost (DOE/BT 2005). A net zero energy home (NZEH) is a residential building with greatly reduced needs for energy through efficiency gains, with the balance of energy needs supplied by renewable technologies. While initially focused on new construction, these technologies and design approaches are intended to have application to buildings constructed before 2020 as well, resulting in substantial reduction in energy use for all building types and ages. DOE/BT's Emerging Technologies (ET) team is working to support this strategic goal by identifying and developing advanced heating, ventilating, air-conditioning, and water heating (HVAC/WH) technology options applicable to NZEHs. In FY05 ORNL conducted an initial Stage 1 (Applied Research) scoping assessment of HVAC/WH system options for future NZEHs to help DOE/BT identify and prioritize alternative approaches for further development. Eleven system concepts with central air distribution ducting and nine multi-zone systems were selected and their annual and peak demand performance estimated for five locations: Atlanta (mixed-humid), Houston (hot-humid), Phoenix (hot-dry), San Francisco (marine), and Chicago (cold). Performance was estimated by simulating the systems using the TRNSYS simulation engine (Solar Energy Laboratory et al. 2006) in two 1800-ft² houses--a Building America (BA) benchmark house and a prototype NZEH taken from BEopt results at the take-off (or crossover) point (i.e., a house incorporating those design features such that further progress towards ZEH is through the addition of photovoltaic power sources, as determined by current BEopt analyses conducted by NREL). Results were summarized in a project report, HVAC Equipment Design Options for Near-Zero-Energy Homes--A Stage 2 Scoping Assessment, ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. In 2006, the two top-ranked options from the 2005 study, air-source and ground-source versions of a centrally ducted integrated heat pump (IHP) system, were subjected to an initial business case study. The IHPs were subjected to a more rigorous hourly-based assessment of their performance potential compared to a baseline suite of equipment of legally minimum efficiency that provided the same heating, cooling, water heating, demand dehumidification, and ventilation services as the IHPs. Results were summarized in a project report, Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes, ORNL/TM-2006/130 (Baxter 2006a). The present report is an update to that document which summarizes results of an analysis of the impact of adding a humidifier to the HVAC system to maintain minimum levels of space relative humidity (RH) in winter. The space RH in winter has a direct impact on occupant comfort and on control of dust mites, many types of disease bacteria, and 'dry air' electric shocks. Chapter 8 in ASHRAE's 2005 Handbook of Fundamentals (HOF) suggests a 30% lower limit on RH for indoor temperatures in the range of approximately 68-69°F based on comfort (ASHRAE 2005). 
Table 3 in chapter 9 of the same reference suggests a 30-55% RH range for winter as established by a Canadian study of exposure limits for residential indoor environments (EHD 1987). Harriman et al. (2001) note that for RH levels of 35% or higher, electrostatic shocks are minimized and that dust mites cannot live at RH levels below 40%. They also indicate that many disease bacteria life spans are minimized when space RH is held within a 30-60% range. From the foregoing it is reasonable to assume that a winter space RH range of 30-40% would be an acceptable compromise between comfort considerations and limitation of growth rates for dust mites and many bacteria. In addition it reports som
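
A quick psychrometric check shows why a humidifier option matters in winter. The sketch below is illustrative only and not from the report; it assumes the Magnus approximation for saturation vapor pressure (the constants 6.112, 17.62, and 243.12 are one common parameterization) and invented outdoor conditions.

    import math

    def p_sat(t_c):
        """Saturation vapor pressure (hPa) via the Magnus approximation."""
        return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

    # Outdoor air at 0 C and 80% RH, heated to ~68 F (20 C) with no added
    # moisture: the vapor pressure is unchanged while p_sat rises, so RH falls.
    p_vap = 0.80 * p_sat(0.0)
    rh_indoor = p_vap / p_sat(20.0)
    print(f"indoor RH after heating: {rh_indoor:.0%}")  # ~21%, below the 30% comfort floor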

Baxter, Van D [ORNL

2007-02-01T23:59:59.000Z

56

Engineering Analysis of Intermediate Loop and Process Heat Exchanger Requirements to Include Configuration Analysis and Materials Needs  

SciTech Connect

The need to locate advanced hydrogen production facilities a finite distance away from a nuclear power source necessitates the need for an intermediate heat transport loop (IHTL). This IHTL must not only efficiently transport energy over distances up to 500 meters but must also be capable of operating at high temperatures (>850°C) for many years. High temperature, long term operation raises concerns of material strength, creep resistance and general material stability (corrosion resistance). IHTL design is currently in the initial stages. Many questions remain to be answered before intelligent design can begin. The report begins to look at some of the issues surrounding the main components of an IHTL. Specifically, a stress analysis of a compact heat exchanger design under expected operating conditions is reported. Also the results of a thermal analysis performed on two IHTL pipe configurations for different heat transport fluids are presented. The configurations consist of separate hot supply and cold return legs as well as an annular design in which the hot fluid is carried in an inner pipe and the cold return fluid travels in the opposite direction in the annular space around the hot pipe. The effects of insulation configurations on pipe configuration performance are also reported. Finally, a simple analysis of two different process heat exchanger designs, one a tube-in-shell type and the other a compact or microchannel reactor, are evaluated in light of catalyst requirements. Important insights into the critical areas of research and development are gained from these analyses, guiding the direction of future areas of research.
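
As a rough illustration of the pipe-configuration thermal analysis described above, the following sketch computes steady radial heat loss through a cylindrical insulation shell, Q = 2*pi*k*L*(T_hot - T_amb)/ln(r_out/r_in). All geometry and property values are placeholders, not figures from the study.

    import math

    def radial_heat_loss(t_hot, t_amb, r_in, r_out, k_ins, length):
        """Steady conduction through a cylindrical insulation shell, in W."""
        return 2.0 * math.pi * k_ins * length * (t_hot - t_amb) / math.log(r_out / r_in)

    # Hypothetical hot leg: 850 C fluid, 30 C ambient, 0.25 m pipe radius,
    # 0.15 m of insulation with k = 0.2 W/m-K, over a 500 m run.
    q = radial_heat_loss(850.0, 30.0, 0.25, 0.40, 0.2, 500.0)
    print(f"heat loss: {q/1e3:.0f} kW")  # ~1100 kW for these placeholder values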

T.M. Lillo; R.L. Williamson; T.R. Reed; C.B. Davis; D.M. Ginosar

2005-09-01T23:59:59.000Z

57

Enhanced Computational Infrastructure for Data Analysis at the DIII-D National Fusion Facility  

SciTech Connect

Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available between pulses, giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator, since the DIII-D National Team consists of scientists from 9 national laboratories, 19 foreign laboratories, 16 universities, and 5 industrial partnerships. As a result of this work, DIII-D data is available on a 24 x 7 basis from a set of viewing and analysis tools that can be run either on the collaborators' or DIII-D's computer systems. Additionally, a Web-based data and code documentation system has been created to aid the novice and expert user alike.

Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; Meyer, W.H.; Parker, C.T.

1999-08-01T23:59:59.000Z

58

Computer Aided Fault Tree Analysis System (CAFTA), Version 6.0 Demo  

Science Conference Proceedings (OSTI)

CAFTA is a computer software program used for developing reliability models of large complex systems, using fault tree and event tree methodology. CAFTA is designed to meet the many needs of reliability analysts while performing fault tree/event tree analysis on a system or group of systems. It includes: a Fault Tree Editor for building, updating and printing fault tree models; an Event Tree Editor for building, ...

2013-02-18T23:59:59.000Z

59

Mathematical modelling, analysis and computation of some complex and nonlinear flow problems.  

E-Print Network (OSTI)

This thesis consists of two parts: (I) modelling, analysis and computation of sweat transport in textile media; (II) unconditional convergence and optimal error analysis of…

Li, Buyang (???)

2012-01-01T23:59:59.000Z

60

Uncertainty and sensitivity analysis for long-running computer codes : a critical review  

E-Print Network (OSTI)

This thesis presents a critical review of existing methods for performing probabilistic uncertainty and sensitivity analysis for complex, computationally expensive simulation models. Uncertainty analysis (UA) methods ...

Langewisch, Dustin R

2010-01-01T23:59:59.000Z



61

Empirical Performance Analysis of High Performance Computing Benchmarks Across Variations in Cloud Computing.  

E-Print Network (OSTI)

High Performance Computing (HPC) applications are data-intensive scientific software requiring significant CPU and data storage capabilities. Researchers have examined the performance of Amazon Elastic Compute…

Mani, Sindhu

2012-01-01T23:59:59.000Z

62

Validation of IVA Computer Code for Flow Boiling Stability Analysis  

SciTech Connect

IVA is a computer code for modeling of transient multiphase, multi-component, non-equilibrium flows in arbitrary geometry, including flow boiling in 3D nuclear reactors. This work presents part of the verification procedure of the code. We analyze the stability of flow boiling in a natural circulation loop. Experimental results collected on the AREVA/FANP KATHY loop regarding frequencies, mass flows and decay ratio of the oscillations are used for comparison. The comparison demonstrates the capability of the code to successfully simulate this class of processes. (author)
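
For orientation, the decay ratio and frequency metrics mentioned above can be extracted from an oscillation trace in a few lines; a minimal sketch using a synthetic signal (not KATHY data) and SciPy's peak finder:

    import numpy as np
    from scipy.signal import find_peaks

    t = np.linspace(0.0, 20.0, 4000)
    # Synthetic decaying mass-flow oscillation: 0.5 Hz, decay ratio ~0.8 per cycle
    flow = 1.0 + 0.2 * np.exp(-0.11 * t) * np.sin(2 * np.pi * 0.5 * t)

    peaks, _ = find_peaks(flow)
    amps = flow[peaks] - 1.0                     # peak amplitudes about the mean
    decay_ratio = np.mean(amps[1:] / amps[:-1])  # ratio of successive peaks
    freq = 1.0 / np.mean(np.diff(t[peaks]))      # oscillation frequency, Hz
    print(f"decay ratio ~ {decay_ratio:.2f}, frequency ~ {freq:.2f} Hz")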

Ivanov Kolev, Nikolay [Framatome-ANP, PO Box 3220, D-91058, Erlangen (Germany)

2006-07-01T23:59:59.000Z

63

Tuning and Analysis Utilities (TAU) | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Data Storage & File Systems Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BG/Q Performance Counters BGPM Openspeedshop Scalasca BG/Q DGEMM Performance Software & Libraries IBM References Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Tuning and Analysis Utilities (TAU) References TAU Project Site TAU Instrumentation Methods TAU Compilation Options TAU Fortran Instrumentation FAQ TAU Leap to Petascale 2009 Presentation

64

The CDF computing and analysis system: First experience  

SciTech Connect

The Collider Detector at Fermilab (CDF) collaboration records and analyzes proton-antiproton interactions with a center-of-mass energy of 2 TeV at the Tevatron. A new collider run, Run II, of the Tevatron started in April. During its more than two year duration the CDF experiment expects to record about 1 PetaByte of data. With its multi-purpose detector and center-of-mass energy at the frontier, the experimental program is large and versatile. The over 500 scientists of CDF will engage in searches for new particles, like the Higgs boson or supersymmetric particles, precision measurements of electroweak parameters, like the mass of the W boson, measurements of top quark parameters, and a large spectrum of B physics. The experiment has taken data and analyzed them in previous runs. For Run II, however, the computing model was changed to incorporate new methodologies, the file format switched, and both data handling and analysis system redesigned to cope with the increased demands. This paper (4-036 at CHEP 2001) gives an overview of the CDF Run II compute system with emphasis on areas where the current system does not match initial estimates and projections. For the data handling and analysis system a more detailed description is given.

R. Colombo et al.

2001-11-02T23:59:59.000Z

65

Mathematics of Computer Algebra and Analysis - MOCAA - CECM  

E-Print Network (OSTI)

The Computer Algebra group at Simon Fraser, the Symbolic Computation group at Waterloo, and the ORCCA group at Western meet approximately biweekly.

66

Challenges in computer applications for ship and floating structure design and analysis  

Science Conference Proceedings (OSTI)

This paper presents a review on the key research areas in the design and analysis of ships and floating structures. The major areas of computer application are identified in several stages of ship/floating structure design and analysis with the principal ... Keywords: Boundary element method, Computational fluid dynamics, Computer applications, Computer-aided ship and floating structure design, Finite element analysis, Hydrodynamics, Production, Structures

R. Sharma; Tae-wan Kim; Richard Lee Storch; Hans (J. J. ) Hopman; Stein Ove Erikstad

2012-03-01T23:59:59.000Z

67

NALDA (Naval Aviation Logistics Data Analysis) CAI (computer aided instruction)  

SciTech Connect

Data Systems Engineering Organization (DSEO) personnel developed a prototype computer aided instruction (CAI) system for the Naval Aviation Logistics Data Analysis (NALDA) system. The objective of this project was to provide a CAI prototype that could be used as an enhancement to existing NALDA training. The CAI prototype project was performed in phases. The task undertaken in Phase I was to analyze the problem and the alternative solutions and to develop a set of recommendations on how best to proceed. The findings from Phase I are documented in Recommended CAI Approach for the NALDA System (Duncan et al., 1987). In Phase II, a structured design and specifications were developed, and a prototype CAI system was created. A report, NALDA CAI Prototype: Phase II Final Report, was written to record the findings and results of Phase II. NALDA CAI: Recommendations for an Advanced Instructional Model is composed of related papers encompassing research on computer aided instruction (CAI), newly developing training technologies, instructional systems development, and an Advanced Instructional Model. These topics were selected because of their relevancy to the CAI needs of NALDA. These papers provide general background information on various aspects of CAI and give a broad overview of new technologies and their impact on the future design and development of training programs. The papers within have been indexed separately elsewhere.

Handler, B.H. (Oak Ridge K-25 Site, TN (USA)); France, P.A.; Frey, S.C.; Gaubas, N.F.; Hyland, K.J.; Lindsey, A.M.; Manley, D.O. (Oak Ridge Associated Universities, Inc., TN (USA)); Hunnum, W.H. (North Carolina Univ., Chapel Hill, NC (USA)); Smith, D.L. (Memphis State Univ., TN (USA))

1990-07-01T23:59:59.000Z

68

Computational methods for criticality safety analysis within the scale system  

SciTech Connect

The criticality safety analysis capabilities within the SCALE system are centered around the Monte Carlo codes KENO IV and KENO V.a, which are both included in SCALE as functional modules. The XSDRNPM-S module is also an important tool within SCALE for obtaining multiplication factors for one-dimensional system models. This paper reviews the features and modeling capabilities of these codes along with their implementation within the Criticality Safety Analysis Sequences (CSAS) of SCALE. The CSAS modules provide automated cross-section processing and user-friendly input that allow criticality safety analyses to be done in an efficient and accurate manner. 14 refs., 2 figs., 3 tabs.
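
For context, the multiplication factor these modules compute reduces, in the simplest one-group infinite-medium picture, to k_inf = nu * Sigma_f / Sigma_a; a minimal sketch with illustrative cross-section values (real analyses use multigroup data processed by the CSAS sequences, not hand-picked constants):

    def k_infinity(nu, sigma_f, sigma_a):
        """One-group infinite-medium multiplication factor.
        nu: neutrons per fission; sigma_f, sigma_a: macroscopic fission
        and absorption cross sections (1/cm)."""
        return nu * sigma_f / sigma_a

    # Illustrative numbers only, not SCALE library data.
    print(k_infinity(nu=2.43, sigma_f=0.065, sigma_a=0.10))  # ~1.58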

Parks, C.V.; Petrie, L.M.; Landers, N.F.; Bucholz, J.A.

1986-01-01T23:59:59.000Z

69

MADCAP The Microwave Anisotropy Dataset Computational Analysis Package  

E-Print Network (OSTI)

Realizing the extraordinary scientific potential of the CMB requires precise measurements of its tiny anisotropies over a significant fraction of the sky at very high resolution. The analysis of the resulting datasets is a serious computational challenge. Existing algorithms require terabytes of memory and hundreds of years of CPU time. We must therefore both maximize our resources by moving to supercomputers and minimize our requirements by algorithmic development. Here we will outline the nature of the challenge, present our current optimal algorithm, and discuss its implementation as the MADCAP software package and application to data from the North American test flight of the joint Italian-U.S. BOOMERanG experiment on the Cray T3E at NERSC and CINECA. A documented beta-release of MADCAP is publicly available at http://cfpa.berkeley.edu/~borrill/cmb/madcap.html
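
As standard background (not taken from this record), the computational challenge comes from evaluating the Gaussian pixel-domain likelihood of the map vector d with pixel-pixel covariance C(C_l):

    \mathcal{L}(C_\ell \mid d) \;=\;
        \frac{\exp\!\left(-\tfrac{1}{2}\, d^{T}\, C(C_\ell)^{-1}\, d\right)}
             {\sqrt{(2\pi)^{N_{\mathrm{pix}}}\,\det C(C_\ell)}}

Direct evaluation requires O(N_pix^3) operations and O(N_pix^2) memory for an N_pix-pixel map, which is what pushes megapixel analyses onto supercomputers.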

Borrill, J

1999-01-01T23:59:59.000Z

70

MADCAP - The Microwave Anisotropy Dataset Computational Analysis Package  

E-Print Network (OSTI)

Realizing the extraordinary scientific potential of the CMB requires precise measurements of its tiny anisotropies over a significant fraction of the sky at very high resolution. The analysis of the resulting datasets is a serious computational challenge. Existing algorithms require terabytes of memory and hundreds of years of CPU time. We must therefore both maximize our resources by moving to supercomputers and minimize our requirements by algorithmic development. Here we will outline the nature of the challenge, present our current optimal algorithm, and discuss its implementation as the MADCAP software package and application to data from the North American test flight of the joint Italian-U.S. BOOMERanG experiment on the Cray T3E at NERSC and CINECA. A documented beta-release of MADCAP is publicly available at http://cfpa.berkeley.edu/~borrill/cmb/madcap.html

Julian Borrill

1999-11-19T23:59:59.000Z

71

Data analysis using the Gnu R system for statistical computation  

Science Conference Proceedings (OSTI)

R is a language system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R are: it has become the standard language for developing statistical techniques, it is being actively developed by a large and growing global user community, it is open source software, it is highly portable (Linux, OS-X and Windows), it has a built-in documentation system, it produces high quality graphics and it is easily extensible with over four thousand extension library packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits for lattice n-pt correlation functions.
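
The report's fits are done in R; as a language-neutral sketch of the same chi-square minimization idea, here is a Python version with a fake exponential correlator (the model, data, and uncertainties are invented stand-ins for lattice results):

    import numpy as np
    from scipy.optimize import minimize

    # Fake two-point correlator data: C(t) ~ A * exp(-m * t) with 2% noise
    rng = np.random.default_rng(0)
    t = np.arange(1, 12)
    c = 1.5 * np.exp(-0.35 * t) * (1 + 0.02 * rng.standard_normal(t.size))
    sigma = 0.02 * c  # per-point uncertainties

    def chi2(params):
        A, m = params
        resid = (c - A * np.exp(-m * t)) / sigma
        return np.sum(resid**2)

    fit = minimize(chi2, x0=[1.0, 0.5], method="Nelder-Mead")
    print("A, m =", fit.x, " chi2/dof =", fit.fun / (t.size - 2))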

Simone, James; /Fermilab

2011-07-01T23:59:59.000Z

72

The role of computer-aided drafting, analysis, and design software in structural engineering practice  

E-Print Network (OSTI)

Perhaps the greatest innovation in engineering in the last fifty years, computer software has changed the way structural engineers conduct nearly every aspect of their daily business. Computer-aided drafting, analysis, and ...

De los Reyes, Adrian

2006-01-01T23:59:59.000Z

73

Visualizing Atmospheric Fields on a Personal Computer: Application to Potential Vorticity Analysis  

Science Conference Proceedings (OSTI)

A four-dimensional computer-analysis program for visualizing atmospheric fields on a personal computer is presented. The program can display, on screen, a fast-time animation of meteorological fields on various surfaces or cross sections.

B. U. Neeman; P. Alpert

1990-02-01T23:59:59.000Z

74

OTEC cold water pipe: a survey of available shell analysis computer programs and implications of hydrodynamic loadings  

DOE Green Energy (OSTI)

The design and analysis of the cold water pipe (CWP) is one of the most important technological problems to be solved in the OTEC ocean engineering program. Analytical computer models have to be developed and verified in order to provide an engineering approach for the OTEC CWP with regard to environmental factors such as waves, currents, platform motions, etc., and for various structural configurations and materials such as rigid wall CWP, compliant CWP, stockade CWP, etc. To this end, Analysis and Technology, Inc. has performed a review and evaluation of shell structural analysis computer programs applicable to the design of an OTEC CWP. Included in this evaluation are discussions of the hydrodynamic flow field, structure-fluid interaction and the state-of-the-art analytical procedures for analysis of offshore structures. The analytical procedures which must be incorporated into the design of a CWP are described. A brief review of the state of the art for analysis of offshore structures and the need for a shell analysis for the OTEC CWP are included. A survey of available shell computer programs, both special purpose and general purpose, and discussions of the features of these dynamic shell programs and how the hydrodynamic loads are represented within the computer programs are included. The hydrodynamic loads design criteria for the CWP are described. An assessment of the current state of knowledge for hydrodynamic loads is presented. (WHK)

Pompa, J.A.; Allik, H.; Webman, K.; Spaulding, M.

1979-02-01T23:59:59.000Z

75

Mathematics of Computer Algebra and Analysis Project ... - CECM  

E-Print Network (OSTI)

Jun 20, 2007 ... Title: Ajax & RIA for Computer Algebra Systems. Keehong Song, Department of Mathematics Education, Pusan National University, Korea.

76

Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Computing and Storage Requirements for FES. J. Candy, General Atomics, San Diego, CA. Presented at the DOE Technical Program Review, Hilton Washington DC/Rockville, Rockville, MD, 19-20 March 2013. Topics: drift waves and tokamak plasma turbulence and their role in the context of fusion research. Plasma performance: in tokamak plasmas, performance is limited by turbulent radial transport of both energy and particles. Gradient-driven: this turbulent transport is caused by drift-wave instabilities, driven by free energy in plasma temperature and density gradients. Unavoidable: these instabilities will persist in a reactor. Various types (asymptotic theory): ITG, TIM, TEM, ETG, plus electromagnetic variants (AITG, etc.). Also covered: the Fokker-Planck theory of plasma transport.

77

HEAP: heat energy analysis program. A computer model simulating solar receivers  

DOE Green Energy (OSTI)

Thermal design of solar receivers is commonly accomplished via approximate models, where the receiver is treated as an isothermal box with lumped quantities of heat losses to the surroundings by radiation, conduction and convection. These approximate models, though adequate for preliminary design purposes, are not detailed enough to distinguish between different receiver designs, or to predict transient performance under variable solar flux, ambient temperatures, etc. A computer code has been written for this purpose and is given the name HEAP, an acronym for Heat Energy Analysis Program. HEAP has a basic structure that fits a general heat transfer problem, but with specific features that are custom-made for solar receivers. The code is written in MBASIC computer language. This document explains the detailed methodology followed in solving the heat transfer problem, and includes a program flow chart, an explanation of input and output tables, and an example of the simulation of a cavity-type solar receiver.
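
The "approximate model" the abstract contrasts HEAP against is easy to state explicitly; a minimal sketch of an isothermal-box loss budget, with placeholder geometry and property values (HEAP itself solves a detailed nodal network rather than this lumped form):

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2-K^4

    def lumped_losses(t_recv, t_amb, area, emissivity, h_conv, ua_cond):
        """Radiation + convection + conduction losses from an isothermal receiver, W."""
        q_rad = emissivity * SIGMA * area * (t_recv**4 - t_amb**4)
        q_conv = h_conv * area * (t_recv - t_amb)
        q_cond = ua_cond * (t_recv - t_amb)
        return q_rad + q_conv + q_cond

    # Hypothetical cavity receiver at 1100 K in 300 K surroundings
    print(lumped_losses(1100.0, 300.0, area=10.0, emissivity=0.85,
                        h_conv=15.0, ua_cond=50.0) / 1e3, "kW")  # ~860 kW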

Lansing, F.L.

1979-01-15T23:59:59.000Z

78

Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.  

SciTech Connect

This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the experienced user-base and the experimental validation base was decaying away quickly.

Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache - CEA, France)

2011-06-01T23:59:59.000Z

79

Foam computer model helps in analysis of underbalanced drilling  

Science Conference Proceedings (OSTI)

A new mechanistic model attempts to overcome many of the problems associated with existing foam flow analyses. The model calculates varying Fanning friction factors, rather than assumed constant factors, along the flow path. Foam generated by mixing gas and liquid for underbalanced drilling has unique rheological characteristics, making it very difficult to accurately predict the pressure profile. A user-friendly personal-computer program was developed to solve the mechanical energy balance equation for compressible foam flow. The program takes into account influxes of gas, liquid, and oil from formations. The pressure profile, foam quality, density, and cuttings transport are predicted by the model. A sensitivity analysis window allows the user to quickly optimize the hydraulics program by selecting the best combination of injection pressure, back pressure, and gas/liquid injection rates. This new model handles inclined and horizontal well bores and provides handy engineering and design tools for underbalanced drilling, well bore cleanout, and other foam operations. The paper describes rheological models, foam flow equations, equations of state, mechanical energy equations, pressure drop across nozzles, influx modeling, program operation, comparisons to other models, lab data, and field data, and results.
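
At its core such a program marches the mechanical energy balance along the flow path; the sketch below is heavily simplified, assuming a constant Fanning friction factor and an isothermal ideal-gas foam (the actual model computes a varying friction factor from foam rheology, and all numbers here are invented):

    def pressure_profile(p_surface, depth, n_steps, rho_liq, quality_surface,
                         d_hyd, velocity, f_fanning=0.01, g=9.81):
        """March pressure downward: dp/dz = rho*g + 2*f*rho*v^2/d_hyd."""
        dz = depth / n_steps
        p = p_surface
        for _ in range(n_steps):
            # Gas compresses with pressure, so foam quality (gas fraction) falls
            quality = quality_surface * p_surface / p
            rho = (1.0 - quality) * rho_liq  # neglect the gas-phase mass
            dp_dz = rho * g + 2.0 * f_fanning * rho * velocity**2 / d_hyd
            p += dp_dz * dz
        return p

    print(pressure_profile(p_surface=2.0e5, depth=1500.0, n_steps=1500,
                           rho_liq=1000.0, quality_surface=0.9,
                           d_hyd=0.1, velocity=3.0) / 1e5, "bar")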

Liu, G.; Medley, G.H. Jr. [Maurer Engineering Inc., Houston, TX (United States)

1996-07-01T23:59:59.000Z

80

Mathematics of Computer Algebra and Analysis - MOCAA - CECM  

E-Print Network (OSTI)

Department: Computer Science, Western. E-mail: mchowdh3@uwo.ca. MSc. Thesis: Homotopy techniques for multiplication modulo triangular sets, April 2009.



81

Mathematics of Computer Algebra and Analysis - MOCAA - CECM  

E-Print Network (OSTI)

Project Leaders: Dr. George Labahn (University of Waterloo) and Dr. Michael Monagan (Simon Fraser University). The Computer Algebra research community in ...

82

Mathematics of Computer Algebra and Analysis - MOCAA - CECM  

E-Print Network (OSTI)

We hold two seminar series. One is run by the Ontario Research Center for Computer Algebra at the University of Waterloo and the University of Western Ontario ...

83

Mathematics of Computer Algebra and Analysis - MOCAA - CECM  

E-Print Network (OSTI)

Dec 1, 2011 ... Name: George Labahn. Department: School of Computer Science. University: University of Waterloo. Mailing Address: Waterloo, ON, N2L 3G1, ...

84

Mathematics of Computer Algebra and Analysis Project (MOCAA)  

E-Print Network (OSTI)

Dec 5, 2007... to support high-performance computer algebra on symmetric multi-processor machines and multicores. A component-level parallel solver for ...

85

Computer analysis of the language of deaf children  

Science Conference Proceedings (OSTI)

The use of a computer parsing system for analyzing speech and language development in deaf children is being investigated. To this end

Barbara G. Parkhurst; Marian P. Mac Eachron

1979-01-01T23:59:59.000Z

86

Monodromy analysis of the computational power of the Ising topological quantum computer  

E-Print Network (OSTI)

We show that all quantum gates which could be implemented by braiding of Ising anyons in the Ising topological quantum computer preserve the n-qubit Pauli group. Analyzing the structure of the Pauli group's centralizer, also known as the Clifford group, for n ≥ 3 qubits, we prove that the image of the braid group is a non-trivial subgroup of the Clifford group and therefore not all Clifford gates could be implemented by braiding. We show explicitly the Clifford gates which cannot be realized by braiding, estimating in this way the ultimate computational power of the Ising topological quantum computer.

Andre Ahlbrecht; Lachezar S. Georgiev; Reinhard F. Werner

2009-11-13T23:59:59.000Z

87

MADCAP - The Microwave Anisotropy Dataset Computational Analysis Package  

E-Print Network (OSTI)

In the standard model of cosmology the universe starts with a hot Big Bang. As the universe expands it cools, and after 300,000 years it drops below the ionisation temperature of hydrogen. The previously free electrons become bound to protons, and with no electrons for the photons to scatter off they continue undeflected to us today. This image of the surface of last-scattering is what we call the Cosmic (because it fills the universe) Microwave (because of the frequency at which its black body spectrum peaks today) Background (because it originates behind all other light sources). Despite its stunning uniformity - isotropic to a few parts in a million - it is the tiny perturbations in the CMB that give us an unprecedented view of the early universe. First detected by the COBE satellite in 1991, these anisotropies are an imprint of the primordial density fluctuations needed to seed the development of gravitationally bound objects in the universe, and are potentially the most powerful discriminant between cosmological models. Realizing the extraordinary scientific potential of the CMB requires precise measurements of these tiny anisotropies over a significant fraction of the sky at very high resolution. The analysis of the resulting datasets is a serious computational challenge. Existing algorithms require terabytes of memory and hundreds of years of CPU time. We must therefore both maximize our resources by moving to supercomputers and minimize our requirements by algorithmic development. Here we will outline the nature of the challenge, present our current optimal algorithm, discuss its implementation - as the MADCAP software package - and its application to data from the North American test flight of the joint Italian-U.S. BOOMERanG experiment on the Cray T3E at NERSC...

Julian Borrill

1999-01-01T23:59:59.000Z

88

Computer image analysis of seed shape and seed color for flax cultivar description  

Science Conference Proceedings (OSTI)

We applied computer image analysis to group together flax cultivars (Linum usitatissimum L.) according to their similarity in commercially important dry seed traits. Both the seed shape and seed-color traits were tested on 53 cultivars from world germplasm ... Keywords: Agro-biodiversity preservation, Computer image analysis, Flax cultivar clustering, Flax seed descriptors, Hierarchical clustering, Multivariate analysis, Principal component analysis, Seed color, Seed shape

Wiesnerová Dana; Wiesner Ivo

2008-05-01T23:59:59.000Z

89

COMFAR III: Computer Model for Feasibility Analysis and Reporting | Open Energy Info

Open Energy Info (EERE)

COMFAR III: Computer Model for Feasibility Analysis and Reporting. Tool Summary - Name: COMFAR III: Computer Model for Feasibility Analysis and Reporting; Agency/Company/Organization: United Nations Industrial Development Organization; Focus Area: Industry; Resource Type: Software/modeling tools; User Interface: Desktop Application; Website: www.unido.org/index.php?id=o3470; Languages: Arabic, Chinese, English, French, German, Japanese, Portuguese, Russian, Spanish (Castilian).

90

A review of motion analysis methods for human Nonverbal Communication Computing  

Science Conference Proceedings (OSTI)

Human Nonverbal Communication Computing aims to investigate how people exploit nonverbal aspects of their communication to coordinate their activities and social relationships. Nonverbal behavior plays important roles in message production and processing, ... Keywords: Face tracking, Facial expression recognition, Gesture recognition, Group activity analysis, Motion analysis, Nonverbal Communication Computing

Dimitris Metaxas, Shaoting Zhang

2013-06-01T23:59:59.000Z

91

18F-FDG PET imaging analysis for computer aided Alzheimer's diagnosis  

Science Conference Proceedings (OSTI)

Finding sensitive and appropriate technologies for non-invasive observation and early detection of Alzheimer's disease (AD) is of fundamental importance to develop early treatments. In this work we develop a fully automatic computer aided diagnosis (CAD) ... Keywords: Alzheimer's disease (AD), Computer aided diagnosis, FDG-PET, Independent component analysis (ICA), Principal component analysis (PCA), Supervised learning, Support vector machine (SVM)
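
The described pipeline (feature reduction by PCA or ICA followed by an SVM classifier) maps directly onto standard tooling; a minimal sketch with scikit-learn on synthetic stand-in data (the voxel matrix and labels below are random placeholders, not FDG-PET scans):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.standard_normal((80, 5000))  # 80 scans x 5000 voxels (synthetic)
    y = rng.integers(0, 2, 80)           # 0 = control, 1 = AD (synthetic)

    # Standardize voxels, project onto 20 principal components, classify with SVM
    clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=5)
    print("CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))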

I. A. Illán; J. M. Górriz; J. Ramírez; D. Salas-Gonzalez; M. M. López; F. Segovia; R. Chaves; M. Gómez-Rio; C. G. Puntonet

2011-02-01T23:59:59.000Z

92

Computer simulation and economic analysis for ammonia fiber explosion (AFEX) pretreatment process  

E-Print Network (OSTI)

The ammonia fiber explosion (AFEX) process is a promising new pretreatment for enhancing the reactivity of lignocellulose materials with many advantages over existing processes. The material is soaked in high-pressure liquid ammonia for a few minutes then the pressure is explosively released. A combined chemical (cellulose decrystallization) and physical (increased surface area) effect increases the enzymatic digestibility of lignocellulose. The laboratory phase of AFEX development is nearing completion, and a brief preliminary economic analysis (without detailed sizing) was finished. However, a commercial size plant has not been developed. This study was undertaken in an effort to support and assist AFEX commercialization through process simulation and cost analysis. In this study, a steady state computer simulation package was developed for the AFEX process. Corn fiber was used as the representative biomass treated by AFEX. Different ammonia loadings, water loadings, temperatures and pressures were used as operational parameters. Mass balances and energy balances are the major determinants of the equipment selected and sized. Thermodynamic models or kinetic models are also included. A preliminary cost estimate includes total purchased-equipment cost using the equipment cost ratio method. The process computer simulation model was programmed in FORTRAN. FORTRAN subroutine libraries from IMSL (International Mathematical and Statistics Library), Inc. were used as needed. To increase the portability of the program, the programming was done on an IBM compatible PC.

Wang, Lin

1996-01-01T23:59:59.000Z

93

NSLS beam line data acquisition and analysis computer system  

SciTech Connect

A versatile computer environment to manage instrumentation alignment and experimental control at NSLS beam lines has been developed. The system is based on a 386/486 personal computer running under a UNIX operating system with X11 Windows. It offers an ideal combination of capability, flexibility, compatibility, and cost. With a single personal computer, the beam line user can run a wide range of scattering and spectroscopy experiments using a multi-tasking data collection program which can interact with CAMAC, GPIB and AT-Bus interfaces, and simultaneously examine and analyze data and communicate with remote network nodes.

Feng-Berman, S.K.; Siddons, D.P.; Berman, L.

1993-11-01T23:59:59.000Z

94

Computational illumination  

Science Conference Proceedings (OSTI)

The field of computational photography includes computational imaging techniques that enhance or extend the capabilities of digital photography, a combination of computer vision, computer graphics, and applied optics. Computational illumination is an ...

Matthew Turk

2010-11-01T23:59:59.000Z

95

Computational Analysis of a Pylon-Chevron Core Nozzle Interaction  

Science Conference Proceedings (OSTI)

In typical engine installations, the pylon of an engine creates a flow disturbance that interacts with the engine exhaust flow. This interaction of the pylon with the exhaust flow from a dual stream nozzle was studied computationally. The dual stream ...

Thomas R. H.; Kinzie K. W.; Pao S. Paul

2001-05-01T23:59:59.000Z

96

Computational tool in infrastructure emergency total evacuation analysis  

Science Conference Proceedings (OSTI)

Investigation has been made in the total evacuation of high-profile infrastructures such as airport terminals, super-highrise buildings, racecourses and tunnels. With the recent advancement of computer technologies, a number of evacuation modelling techniques ...

Kelvin H. L. Wong; Mingchun Luo

2005-05-01T23:59:59.000Z

97

Residential energy-consumption analysis utilizing the DOE-1 computer program  

SciTech Connect

The DOE-1 computer program is used to examine energy consumption in a typical middle-class household in Cincinnati, Ohio. The program is used to compare energy consumption under different structural and environmental conditions, including various levels of insulation in the walls and ceiling, double and single glazing of windows, and thermostat setback schedules. In addition, the DOE-1 program is used to model the house under three energy distribution systems: a unit heater, a single-zone fan system with optional subzone reheat; and a unitary heat pump. A plant equipment simulation is performed to model the heating and cooling plant currently installed in the house. A simple economic analysis of life-cycle costs for the house is done utilizing the economic simulation portion of DOE-1. Utility bills over the past six years are analyzed to gain an actual energy-use profile for the house to compare with computer results. Results indicate that a 35% savings in heating load may be obtained with addition of proper amounts of insulation as compared with the house with no insulation. The installation of double glazing on windows may save close to 6% on heating load. Thermostat setbacks may result in savings of around 25% on energy consumed for heating. Similar results are achieved with regard to cooling load. Comparison of actual energy consumed by the household (from utility bills) with the computer results shows a 4.25% difference in values between the two. This small percent difference certainly strengthens the case for future use of computer programs in comparing construction alternatives and predicting building energy consumption.

Arentsen, S K

1979-04-01T23:59:59.000Z

98

Computational design and analysis of flatback airfoil wind tunnel experiment.  

DOE Green Energy (OSTI)

A computational fluid dynamics study of thick wind turbine section shapes in the test section of the UC Davis wind tunnel at a chord Reynolds number of one million is presented. The goals of this study are to validate standard wind tunnel wall corrections for high solid blockage conditions and to reaffirm the favorable effect of a blunt trailing edge or flatback on the performance characteristics of a representative thick airfoil shape prior to building the wind tunnel models and conducting the experiment. The numerical simulations prove the standard wind tunnel corrections to be largely valid for the proposed test of 40% maximum thickness to chord ratio airfoils at a solid blockage ratio of 10%. Comparison of the computed lift characteristics of a sharp trailing edge baseline airfoil and derived flatback airfoils reaffirms the earlier observed trend of reduced sensitivity to surface contamination with increasing trailing edge thickness.

Mayda, Edward A. (University of California, Davis, CA); van Dam, C.P. (University of California, Davis, CA); Chao, David D. (University of California, Davis, CA); Berg, Dale E.

2008-03-01T23:59:59.000Z

99

4D frequency analysis of computational cameras for depth of field extension  

Science Conference Proceedings (OSTI)

Depth of field (DOF), the range of scene depths that appear sharp in a photograph, poses a fundamental tradeoff in photography---wide apertures are important to reduce imaging noise, but they also increase defocus blur. Recent advances in computational ... Keywords: Fourier analysis, computational camera, depth of field, light field

Anat Levin; Samuel W. Hasinoff; Paul Green; Frédo Durand; William T. Freeman

2009-07-01T23:59:59.000Z

100

WINACS: construction and analysis of web-based computer science information networks  

Science Conference Proceedings (OSTI)

WINACS (Web-based Information Network Analysis for Computer Science) is a project that incorporates many recent, exciting developments in data sciences to construct a Web-based computer science information network and to discover, retrieve, rank, cluster, ... Keywords: WINACS, information networks, web mining

Tim Weninger; Marina Danilevsky; Fabio Fumarola; Joshua Hailpern; Jiawei Han; Thomas J. Johnston; Surya Kallumadi; Hyungsul Kim; Zhijin Li; David McCloskey; Yizhou Sun; Nathan E. TeGrotenhuis; Chi Wang; Xiao Yu

2011-06-01T23:59:59.000Z



101

Experimental Analysis of Task-based Energy Consumption in Cloud Computing Systems  

E-Print Network (OSTI)

A major concern is that large cloud data centres consume large amounts of energy and produce significant carbon footprints. Based on this model, we have conducted extensive experiments to profile the energy consumption in cloud computing systems.
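
Task-level energy profiling ultimately reduces to integrating sampled power over each task's run; a minimal sketch with invented readings (the sampling interval and wattages are placeholders):

    import numpy as np

    def task_energy(power_watts, dt_seconds):
        """Energy in joules from evenly spaced power samples (trapezoidal rule)."""
        return np.trapz(power_watts, dx=dt_seconds)

    # One-second samples over a hypothetical 5-minute task
    samples = 120.0 + 30.0 * np.random.default_rng(1).random(300)
    print(task_energy(samples, 1.0) / 3600.0, "Wh")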

Schneider, Jean-Guy

102

C-Meter: A Framework for Performance Analysis of Computing Clouds  

E-Print Network (OSTI)

... virtualization and network communications overheads. To address these issues, we have designed and implemented C-Meter, a framework for performance analysis of computing clouds. Then, we present the architecture of the C-Meter framework and discuss several cloud

Epema, Dick H.J.

103

Coupling of a multizone airflow simulation program with computational fluid dynamics for indoor environmental analysis  

E-Print Network (OSTI)

Current design of building indoor environment comprises macroscopic approaches, such as the CONTAM multizone airflow analysis tool, and microscopic approaches that apply Computational Fluid Dynamics (CFD). Each has certain ...

Gao, Yang, 1974-

2002-01-01T23:59:59.000Z

104

High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)  

SciTech Connect

Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

Oehmen, Chris [PNNL

2010-01-25T23:59:59.000Z

105

Mathematics of Computer Algebra and Analysis - MOCAA - CECM  

E-Print Network (OSTI)

... record; the management track record; the track record in training of Highly Qualified ... The scope of the research program includes problems like exact definite ...

106

Sieveless particle size distribution analysis of particulate materials through computer vision  

DOE Green Energy (OSTI)

This paper explores the inconsistency of length-based separation by mechanical sieving of particulate materials with standard sieves, which is the standard method of particle size distribution (PSD) analysis. We observed inconsistencies of length-based separation of particles using standard sieves compared with manual measurements, which showed deviations of 17 to 22 times. In addition, we have demonstrated that the falling-through effect of particles cannot be avoided irrespective of the wall thickness of the sieve. We proposed and utilized computer vision with image processing as an alternative approach, wherein a user-coded Java ImageJ plugin was developed to evaluate PSD based on length of particles. A regular flatbed scanner acquired digital images of particulate material. The plugin determines particle lengths from Feret's diameter and width from the pixel-march method, or minor axis, or the minimum dimension of the bounding rectangle, utilizing the digital images after assessing the particles' area and shape (convex or nonconvex). The plugin also included the determination of several significant dimensions and PSD parameters. Test samples utilized were ground biomass obtained from the first thinning and mature stand of southern pine forest residues, oak hard wood, switchgrass, elephant grass, giant miscanthus, wheat straw, as well as Basmati rice. A sieveless PSD analysis method utilized the true separation of all particles into groups based on their distinct length (419 to 639 particles based on samples studied), with each group truly represented by its exact length. This approach ensured length-based separation without the inconsistencies observed with mechanical sieving. Image-based sieve simulation (developed separately) indicated a significant effect (P < 0.05) of the number of sieves used in PSD analysis, especially with non-uniform material such as ground biomass, and more than 50 equally spaced sieves were required to match the sieveless all-distinct-particles PSD analysis. Results substantiate that mechanical sieving, owing to handling limitations and inconsistent length-based separation of particles, is inadequate in determining the PSD of non-uniform particulate samples. The developed computer vision sieveless PSD analysis approach has the potential to replace standard mechanical sieving. The plugin can be readily extended to model (e.g., Rosin-Rammler) the PSD of materials, and to mass-based analysis, while providing several advantages such as accuracy, speed, low cost, automated analysis, and reproducible results.
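
The per-particle length measure described (Feret's diameter from a scanned image) is also exposed by common open-source libraries; a minimal sketch assuming scikit-image, whose regionprops provides feret_diameter_max (the Otsu thresholding here is a stand-in for the plugin's own area/shape handling):

    import numpy as np
    from skimage import filters, measure

    def particle_lengths(gray_image, pixels_per_mm):
        """Feret diameters (mm) of dark particles in a grayscale scanner image."""
        binary = gray_image < filters.threshold_otsu(gray_image)
        labels = measure.label(binary)
        props = measure.regionprops(labels)
        return np.array([p.feret_diameter_max for p in props]) / pixels_per_mm

    # lengths = particle_lengths(img, pixels_per_mm=11.8)  # e.g. a 300 dpi scan
    # Each distinct length then defines its own size group, as in the paper.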

Igathinathane, C. [Mississippi State University (MSU); Pordesimo, L. O. [Mississippi State University (MSU); Columbus, Eugene P [ORNL; Batchelor, William D [ORNL; Sokhansanj, Shahabaddine [ORNL

2009-05-01T23:59:59.000Z

107

Applicaiton of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities. Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED), presented by Lisa Anderson (BNI) at the US DOE NPH Workshop, October 25, 2011. Background: the SASSI computer code was developed in the early 1980's to solve Soil-Structure-Interaction (SSI) problems; the original version of SASSI was based on the direct solution method for embedded structures, which requires that each soil node in the excavated soil volume be an interaction node; the subtraction solution method was introduced in 1998.

108

A Computational Analysis of Lower Bounds for Big Bucket ...  

E-Print Network (OSTI)

to solve, and this is addressed by a thorough analysis in the paper. We conclude with ... Pentium 4 2.53 GHz processor and 1 GB of RAM. All the formulations ...

109

Inferring Group Processes from Computer-Mediated Affective Text Analysis  

SciTech Connect

Political communications in the form of unstructured text convey rich connotative meaning that can reveal underlying group social processes. Previous research has focused on sentiment analysis at the document level, but we extend this analysis to sub-document levels through a detailed analysis of affective relationships between entities extracted from a document. Instead of pure sentiment analysis, which is just positive or negative, we explore nuances of affective meaning in 22 affect categories. Our affect propagation algorithm automatically calculates and displays extracted affective relationships among entities in graphical form in our prototype (TEAMSTER), starting with seed lists of affect terms. Several useful metrics are defined to infer underlying group processes by aggregating affective relationships discovered in a text. Our approach has been validated with annotated documents from the MPQA corpus, achieving a performance gain of 74% over comparable random guessers.

Schryver, Jack C [ORNL; Begoli, Edmon [ORNL; Jose, Ajith [Missouri University of Science and Technology; Griffin, Christopher [Pennsylvania State University

2011-02-01T23:59:59.000Z

110

A Bayesian Approach to the Design and Analysis of Computer Experiments  

DOE Green Energy (OSTI)

We consider the problem of designing and analyzing experiments for prediction of the function y(t), t ∈ T, where y is evaluated by means of a computer code (typically by solving complicated equations that model a physical system), and T represents the domain of inputs to the code. We use a Bayesian approach, in which uncertainty about y is represented by a spatial stochastic process (random function); here we restrict attention to stationary Gaussian processes. The posterior mean function can be used as an interpolating function, with uncertainties given by the posterior standard deviations. Instead of completely specifying the prior process, we consider several families of priors, and suggest some cross-validational methods for choosing one that performs relatively well on the function at hand. As a design criterion, we use the expected reduction in the entropy of the random vector y(T*), where T* ⊂ T is a given finite set of ''sites'' (input configurations) at which predictions are to be made. We describe an exchange algorithm for constructing designs that are optimal with respect to this criterion. To demonstrate the use of these design and analysis methods, several examples are given, including one experiment on a computer model of a thermal energy storage device and another on an integrated circuit simulator.
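
The posterior mean and standard deviation described can be written in closed form for a stationary Gaussian process; a minimal sketch with a squared-exponential covariance (the kernel choice, noise-free interpolation, and toy data are assumptions, not details from the report):

    import numpy as np

    def sq_exp_kernel(a, b, scale=1.0, length=0.3):
        d = a[:, None] - b[None, :]
        return scale * np.exp(-0.5 * (d / length) ** 2)

    def gp_posterior(x_train, y_train, x_test, jitter=1e-10):
        """Posterior mean and sd of a zero-mean GP interpolator at x_test."""
        k_tt = sq_exp_kernel(x_train, x_train) + jitter * np.eye(x_train.size)
        k_ts = sq_exp_kernel(x_train, x_test)
        k_ss = sq_exp_kernel(x_test, x_test)
        alpha = np.linalg.solve(k_tt, y_train)
        mean = k_ts.T @ alpha
        cov = k_ss - k_ts.T @ np.linalg.solve(k_tt, k_ts)
        return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

    x = np.array([0.0, 0.3, 0.6, 1.0])  # input configurations already run
    y = np.sin(2 * np.pi * x)           # stand-in for the code's outputs
    m, s = gp_posterior(x, y, np.linspace(0.0, 1.0, 5))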

Currin, C.

1988-01-01T23:59:59.000Z

111

The economic impacts of the September 11 terrorist attacks: a computable general equilibrium analysis  

SciTech Connect

This paper develops a bottom-up approach that focuses on behavioral responses in estimating the total economic impacts of the September 11, 2001, World Trade Center (WTC) attacks. The estimation includes several new features. First is the collection of data on the relocation of firms displaced by the attack, the major source of resilience in muting the direct impacts of the event. Second is a new estimate of the major source of impacts off-site -- the ensuing decline of air travel and related tourism in the U.S. due to the social amplification of the fear of terrorism. Third, the estimation is performed for the first time using Computable General Equilibrium (CGE) analysis, including a new approach to reflecting the direct effects of external shocks. This modeling framework has many advantages in this application, such as the ability to include behavioral responses of individual businesses and households, to incorporate features of inherent and adaptive resilience at the level of the individual decision maker and the market, and to gauge quantity and price interaction effects across sectors of the regional and national economies. We find that the total business interruption losses from the WTC attacks on the U.S. economy were only slightly over $100 billion, or less than 1.0% of Gross Domestic Product. The impacts were only a loss of $14 billion of Gross Regional Product for the New York Metropolitan Area.

Oladosu, Gbadebo A [ORNL; Rose, Adam [University of Southern California, Los Angeles; Bumsoo, Lee [University of Illinois; Asay, Gary [University of Southern California

2009-01-01T23:59:59.000Z

112

An analysis of computational workloads for the ORNL Jaguar system  

Science Conference Proceedings (OSTI)

This study presents an analysis of science application workloads for the Jaguar Cray XT5 system during its tenure as a 2.3 petaflop supercomputer at Oak Ridge National Laboratory. Jaguar was the first petascale system to be deployed for open science ... Keywords: applications, cray, exascale, hpc, metrics, ornl, petascale, scaling, science, workload

Wayne Joubert; Shi-Quan Su

2012-06-01T23:59:59.000Z

113

Negotiation among autonomous computational agents: principles, analysis and challenges  

Science Conference Proceedings (OSTI)

Automated negotiation systems with software agents representing individuals or organizations and capable of reaching agreements through negotiation are becoming increasingly important and pervasive. Examples, to mention a few, include the industrial ... Keywords: Automated negotiation, Autonomous agents, Bargaining, Impasse, Multi-agent systems, Negotiation framework, Negotiation systems, Pre-negotiation, Renegotiation

Fernando Lopes; Michael Wooldridge; A. Q. Novais

2008-03-01T23:59:59.000Z

114

The CACHE Study: Group Effects in Computer-supported Collaborative Analysis  

Science Conference Proceedings (OSTI)

The present experiment investigates effects of group composition in computer-supported collaborative intelligence analysis. Human cognition, though highly adaptive, is also quite limited, leading to systematic errors and limitations in performance --- ... Keywords: CACHE, CSCW, collaboration, group bias, group decision-making, intelligence analysis

Gregorio Convertino; Dorrit Billman; Peter Pirolli; J. P. Massar; Jeff Shrager

2008-08-01T23:59:59.000Z

115

A High-Performance Hybrid Computing Approach to Massive Contingency Analysis in the Power Grid  

Science Conference Proceedings (OSTI)

Operating the electrical power grid to prevent power black-outs is a complex task. An important aspect of this is contingency analysis, which involves understanding and mitigating potential failures in power grid elements such as transmission lines. ... Keywords: hybrid computational systems, middleware, power grid, contingency analysis

Ian Gorton; Zhenyu Huang; Yousu Chen; Benson Kalahar; Shuangshuang Jin; Daniel Chavarría-Miranda; Doug Baxter; John Feo

2009-12-01T23:59:59.000Z
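
A toy illustration of the kernel such hybrid systems parallelize: every single-line outage is screened with a DC power flow, and the surviving branch flows are checked against limits. The three-bus network, injections, and limits are invented for the example; production contingency analysis runs full AC power flows over thousands of cases.

    import numpy as np

    # Lines: (from_bus, to_bus, reactance_pu, flow_limit_pu); bus 0 is the slack.
    lines = [(0, 1, 0.1, 0.9), (1, 2, 0.1, 0.9), (0, 2, 0.2, 0.9)]
    p_inj = np.array([0.0, 1.0, -1.0])   # net bus injections in pu
    n_bus = 3

    def dc_flows(active):
        """Solve B*theta = P with the slack angle fixed, return line flows."""
        B = np.zeros((n_bus, n_bus))
        for f, t, x, _ in active:
            B[f, f] += 1 / x; B[t, t] += 1 / x
            B[f, t] -= 1 / x; B[t, f] -= 1 / x
        theta = np.zeros(n_bus)
        theta[1:] = np.linalg.solve(B[1:, 1:], p_inj[1:])
        return [(theta[f] - theta[t]) / x for f, t, x, _ in active]

    # N-1 screening loop: each outage case is independent (embarrassingly parallel).
    for k in range(len(lines)):
        active = lines[:k] + lines[k + 1:]
        for (f, t, x, lim), flow in zip(active, dc_flows(active)):
            if abs(flow) > lim:
                print(f"outage of line {k}: line {f}-{t} loads to {flow:.2f} pu")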

116

Computable General Equilibrium Models for the Analysis of Energy and Climate Policies  

E-Print Network (OSTI)

An introduction to computable general equilibrium (CGE) models and their use for the analysis of the economy-wide impacts of energy and climate policies. Among the most important applications of these models is the analysis of energy and environmental policies. JEL Classification: C68, D58, H22, Q43

Wing, Ian Sue

117

Computer Based Training: Engineering Technical Training Modules - Finite Element Analysis v1.0  

Science Conference Proceedings (OSTI)

Finite Element Analysis ETTM, Version 1.0 is a computer-based training module that allows users to access training when desired and review it at their own pace. This module provides information about the basics of finite element analysis and modeling. This training should be used for position-specific and/or continuing training for individuals involved with finite element analysis. This computer-based training (CBT) module is intended for use by new engineers as well as engineers changing jobs where bas...

2009-12-01T23:59:59.000Z

118

Back analysis of microplane model parameters using soft computing methods  

E-Print Network (OSTI)

A new procedure based on layered feed-forward neural networks for the identification of microplane material model parameters is proposed in the present paper. Novelties include the use of the Latin Hypercube Sampling method for the generation of training sets, a systematic employment of stochastic sensitivity analysis, and training of the neural network by a genetic (evolutionary) algorithm. Advantages and disadvantages of this approach together with possible extensions are thoroughly discussed and analyzed.

Kucerova, A; Zeman, J

2009-01-01T23:59:59.000Z
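
A minimal sketch of the Latin Hypercube Sampling step used above to build neural-network training sets: each parameter range is cut into n strata, one sample is drawn per stratum, and the strata are randomly paired across dimensions. The parameter names and bounds are made up; requires NumPy 1.20+ for Generator.permuted.

    import numpy as np

    def latin_hypercube(n_samples, bounds, rng=None):
        """bounds: list of (low, high) pairs, one per parameter dimension."""
        rng = rng or np.random.default_rng(0)
        dim = len(bounds)
        # One permutation of stratum indices per dimension, plus in-stratum jitter.
        strata = rng.permuted(np.tile(np.arange(n_samples), (dim, 1)), axis=1).T
        u = (strata + rng.uniform(size=(n_samples, dim))) / n_samples
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        return lo + u * (hi - lo)

    # Hypothetical microplane parameter ranges, e.g. Young's modulus and k1.
    training_inputs = latin_hypercube(30, [(20.0, 50.0), (5e-5, 3e-4)])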

119

Computational complexity

E-Print Network (OSTI)

In 1965, the year Juris Hartmanis became Chair, Hartmanis and Stearns published On the computational complexity of algorithms in the Transactions of the American Mathematical Society. The paper drew the best talent to the field, and theoretical computer science was immediately broadened from automata theory

Keinan, Alon

120

AIR INGRESS ANALYSIS: PART 2 – COMPUTATIONAL FLUID DYNAMIC MODELS  

Science Conference Proceedings (OSTI)

The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air ingress-related models and verification and validation data are a very high priority. Following a loss of coolant and system depressurization incident, air will enter the core of the High Temperature Gas Cooled Reactor through the break, possibly causing oxidation of the in-core and reflector graphite structures. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate with additional heat generated from the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to reactor safety. The computational fluid dynamic models developed in this study will improve our understanding of this phenomenon. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena. A portion of the results for the density-driven stratified flow in the inlet pipe will be compared with experimental results.

Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang

2011-01-01T23:59:59.000Z

121

Computational features of the CACECO containment analysis code. [LMFBR  

DOE Green Energy (OSTI)

A code, CACECO, has been written to assist in the analysis of containment situations peculiar to sodium cooled reactors. Typically, these situations involve relatively slow energy release processes and chemical reaction heat. Two examples are given to illustrate some of the code's features. These particular cases illustrate the potential for hydrogen formation in the containment building, but show that time is available to take corrective action. The code is suitable for other problems involving passive heat absorption in massive structures over long periods of time.

Peak, R.D.; Stepnewski, D.D.

1975-05-29T23:59:59.000Z

122

CFAST Computer Code Application Guidance for Documented Safety Analysis, Final Report  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

CFAST Computer Code Application Guidance for Documented Safety Analysis, Final Report. U.S. Department of Energy, Office of Environment, Safety and Health, 1000 Independence Ave., S.W., Washington, DC 20585-2040, July 2004 (DOE/NNSA-DP Technical Report). This document provides guidance to Department of Energy (DOE) facility analysts in the use of the CFAST computer software for supporting Documented Safety Analysis applications. Information is provided herein that supplements information found in the CFAST documentation

123

Analysis of deformable image registration accuracy using computational modeling  

SciTech Connect

Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations have been quantitatively evaluated using numerical phantoms. The results show that parameter selection for optimal accuracy is closely related to the intensity gradients of the underlying images. Also, the result that the DIR algorithms produce much lower errors in heterogeneous lung regions relative to homogeneous (low intensity gradient) regions, suggests that feature-based evaluation of deformable image registration accuracy must be viewed cautiously.

Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J. [Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan 48202 (United States)

2010-03-15T23:59:59.000Z
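
For orientation, a single-iteration sketch of the classic (Thirion-style) Demons force with Gaussian regularization of the update field, loosely the kind of update at the heart of the Demons registrations evaluated above. The smoothing width and image arrays are placeholders; a full registration would warp the moving image and iterate to convergence.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def demons_step(fixed, moving, sigma=2.0, eps=1e-9):
        """One Demons update: u = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2)."""
        gy, gx = np.gradient(fixed)
        diff = moving - fixed
        denom = gx ** 2 + gy ** 2 + diff ** 2 + eps
        uy = diff * gy / denom
        ux = diff * gx / denom
        # Gaussian smoothing regularizes the displacement update field.
        return gaussian_filter(uy, sigma), gaussian_filter(ux, sigma)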

124

New Technologies and Methods to Improve Computational Speed and Robustness of Power Flow Analysis  

Science Conference Proceedings (OSTI)

The power flow problem consists of determining the steady-state operating point of an electrical transmission network under specific loading conditions. This report describes the development of power flow techniques designed to improve the efficiency and reliability of an electrical power network. Leveraging advancements in computing technologies, data processing, and sophisticated computational methods can improve the performance of power system analysis tools, specifically their accuracy, speed, ...

2013-12-20T23:59:59.000Z

125

Computerized Games and Simulations in Computer-Assisted Language Learning: A Meta-Analysis of Research  

Science Conference Proceedings (OSTI)

This article explores research on the use of computerized games and simulations in language education. The author examined the psycholinguistic and sociocultural constructs proposed as a basis for the use of games and simulations in computer-assisted ... Keywords: CALL, MMORPG, MOO, computer-assisted language learning, computerized game, computerized simulation, effective language learning, gaming, meta-analysis, psycholinguistic construct, research, second language acquisition, simulation, sociocultural construct, theories of language acquisition, virtual world

Mark Peterson

2010-02-01T23:59:59.000Z

126

Technical support document: Energy conservation standards for consumer products: Dishwashers, clothes washers, and clothes dryers including: Environmental impacts; regulatory impact analysis  

SciTech Connect

The Energy Policy and Conservation Act, as amended (P.L. 94-163), establishes energy conservation standards for 12 of the 13 types of consumer products specifically covered by the Act. The legislation requires the Department of Energy (DOE) to consider new or amended standards for these and other types of products at specified times. This Technical Support Document presents the methodology, data and results from the analysis of the energy and economic impacts of standards on dishwashers, clothes washers, and clothes dryers. The economic impact analysis is performed in several major areas: An Engineering Analysis, which establishes technical feasibility and product attributes including costs of design options to improve appliance efficiency. A Consumer Analysis at two levels: national aggregate impacts, and impacts on individuals. The national aggregate impacts include forecasts of appliance sales, efficiencies, energy use, and consumer expenditures. The individual impacts are analyzed by Life-Cycle Cost (LCC), Payback Periods, and Cost of Conserved Energy (CCE), which evaluate the savings in operating expenses relative to increases in purchase price. A Manufacturer Analysis, which provides an estimate of manufacturers' response to the proposed standards. Their response is quantified by changes in several measures of financial performance for a firm. An Industry Impact Analysis, which shows financial and competitive impacts on the appliance industry. A Utility Analysis, which measures the impacts of the altered energy-consumption patterns on electric utilities. An Environmental Effects analysis, which estimates changes in emissions of carbon dioxide, sulfur oxides, and nitrogen oxides, due to reduced energy consumption in the home and at the power plant. A Regulatory Impact Analysis, which collects the results of all the analyses into the net benefits and costs from a national perspective. 47 figs., 171 tabs. (JF)

1990-12-01T23:59:59.000Z
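
A worked sketch of the three consumer metrics named in the analysis (life-cycle cost change, payback period, and cost of conserved energy); the appliance numbers below are invented purely for illustration.

    # Hypothetical design option: $40 price increase saving 120 kWh/yr,
    # electricity at $0.08/kWh, 14-year product life, 6% discount rate.
    dprice, dkwh, price_kwh, life, rate = 40.0, 120.0, 0.08, 14, 0.06

    annual_savings = dkwh * price_kwh                 # $/yr
    payback = dprice / annual_savings                 # simple payback, years
    crf = rate / (1 - (1 + rate) ** -life)            # capital recovery factor
    cce = dprice * crf / dkwh                         # $ per kWh conserved
    lcc_change = dprice - annual_savings / crf        # negative = net PV saving

    print(f"payback {payback:.1f} yr, CCE ${cce:.3f}/kWh, LCC change ${lcc_change:.0f}")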

127

Technical support document: Energy conservation standards for consumer products: Dishwashers, clothes washers, and clothes dryers including: Environmental impacts; regulatory impact analysis  

SciTech Connect

The Energy Policy and Conservation Act, as amended (P.L. 94-163), establishes energy conservation standards for 12 of the 13 types of consumer products specifically covered by the Act. The legislation requires the Department of Energy (DOE) to consider new or amended standards for these and other types of products at specified times. This Technical Support Document presents the methodology, data and results from the analysis of the energy and economic impacts of standards on dishwashers, clothes washers, and clothes dryers. The economic impact analysis is performed in several major areas: An Engineering Analysis, which establishes technical feasibility and product attributes including costs of design options to improve appliance efficiency. A Consumer Analysis at two levels: national aggregate impacts, and impacts on individuals. The national aggregate impacts include forecasts of appliance sales, efficiencies, energy use, and consumer expenditures. The individual impacts are analyzed by Life-Cycle Cost (LCC), Payback Periods, and Cost of Conserved Energy (CCE), which evaluate the savings in operating expenses relative to increases in purchase price. A Manufacturer Analysis, which provides an estimate of manufacturers' response to the proposed standards. Their response is quantified by changes in several measures of financial performance for a firm. An Industry Impact Analysis, which shows financial and competitive impacts on the appliance industry. A Utility Analysis, which measures the impacts of the altered energy-consumption patterns on electric utilities. An Environmental Effects analysis, which estimates changes in emissions of carbon dioxide, sulfur oxides, and nitrogen oxides, due to reduced energy consumption in the home and at the power plant. A Regulatory Impact Analysis, which collects the results of all the analyses into the net benefits and costs from a national perspective. 47 figs., 171 tabs. (JF)

Not Available

1990-12-01T23:59:59.000Z

128

TASK XII ANALYTICAL REPORT--SM-1 TRANSIENT ANALYSIS BY ANALOG COMPUTER METHODS  

SciTech Connect

The voltage and frequency response of selected SM-1 plant system parameters to step load changes was analyzed using analog computer measurements. The analog model was that developed for analysis of the SM-2 design. The approach to the analysis, formulation of the model, and analog recordings are presented. The data will be used to prove the reliability of the analog model by comparing analog data with test data to be taken at SM-1. (auth)

Barrett, J.A.

1961-05-26T23:59:59.000Z

129

A computational analysis of the ballistic performance of light-weight hybrid composite armors  

E-Print Network (OSTI)

The ability of a hybrid composite laminate armor to withstand the impact of a fragment simulating projectile (FSP) is investigated using a non-linear dynamics transient computational analysis. The hybrid armor is constructed using

Grujicic, Mica

130

Computation of Ground Surface Conduction Heat Flux by Fourier Analysis of Surface Temperature  

Science Conference Proceedings (OSTI)

A method for computing the ground surface heat flux density is tested at two places in West Africa during the rainy season and during the dry season. This method is based upon the Fourier analysis of the experimental ground surface temperature. ...

Guy Cautenet; Michel Legrand; Yaya Coulibaly; Christian Boutin

1986-03-01T23:59:59.000Z
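
A sketch of the harmonic method the title implies: each Fourier component of the measured surface temperature is converted to a conduction-flux component by the amplitude factor sqrt(omega*lambda*C) and a 45-degree phase lead, then summed back. The soil properties and the synthetic temperature series are assumptions for a homogeneous semi-infinite soil.

    import numpy as np

    def surface_heat_flux(t_surf, dt, lam=0.3, heat_cap=1.2e6):
        """lam: conductivity (W/m/K); heat_cap: volumetric heat capacity (J/m3/K)."""
        n = len(t_surf)
        coef = np.fft.rfft(t_surf - t_surf.mean())
        omega = 2 * np.pi * np.fft.rfftfreq(n, d=dt)    # angular frequencies
        # Each flux harmonic leads its temperature harmonic by pi/4, with
        # amplitude scaled by sqrt(omega * lambda * C).
        gain = np.sqrt(omega * lam * heat_cap) * np.exp(1j * np.pi / 4)
        return np.fft.irfft(coef * gain, n)

    hours = np.arange(24) * 3600.0                       # one day, hourly samples
    temp = 25.0 + 10.0 * np.sin(2 * np.pi * hours / 86400.0)
    flux = surface_heat_flux(temp, 3600.0)               # W/m2, leads temp by 3 h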

131

Improvement of the computer methods for grounding analysis in layered soils by using  

E-Print Network (OSTI)

Thus, when a fault condition occurs, the grounding grid transports and dissipates the fault current into the soil of the substation site. The computer methods for grounding analysis in layered soils are improved by extending an approach originally developed for grounding systems embedded in uniform soils; this approach has been implemented in a CAD system

Colominas, Ignasi

132

Heart sound analysis for symptom detection and computer-aided diagnosis  

E-Print Network (OSTI)

Heart auscultation (the interpretation by a physician of heart sounds) is a fundamental component of cardiac diagnosis. We describe a model for the production of heart sounds, and demonstrate its utility in identifying features useful in diagnosis.

Reed, Nancy E.

133

Geometry, analysis, and computation in mathematics and applied sciences. Final report  

SciTech Connect

Since 1993, the GANG laboratory has been co-directed by David Hoffman, Rob Kusner and Peter Norman. A great deal of mathematical research has been carried out here by them and by GANG faculty members Franz Pedit and Nate Whitaker. Also, new communication tools, such as the GANG Webserver, have been developed. GANG has trained and supported nearly a dozen graduate students, and at least half as many undergrads in REU projects. The GANG Seminar continues to thrive, making Amherst a site for short and long term visitors to come to work with the GANG. Some of the highlights of recent or ongoing research at GANG include: CMC surfaces, minimal surfaces, fluid dynamics, harmonic maps, isometric immersions, knot energies, foam structures, high dimensional soap film singularities, elastic curves and surfaces, self-similar curvature evolution, integrable systems and theta functions, fully nonlinear geometric PDE, geometric chemistry and biology. This report is divided into the following sections: (1) geometric variational problems; (2) soliton geometry; (3) embedded minimal surfaces; (4) numerical fluid dynamics and mathematical modeling; (5) GANG graphics and mathematical software; (6) description of the computational and visual analysis facility; and (7) research by undergraduates and GANG graduate seminar.

Kusner, R.B.; Hoffman, D.A.; Norman, P.; Pedit, F.; Whitaker, N.; Oliver, D.

1995-12-31T23:59:59.000Z

134

RISKIND: An enhanced computer code for National Environmental Policy Act transportation consequence analysis  

Science Conference Proceedings (OSTI)

The RISKIND computer program was developed for the analysis of radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel (SNF) or other radioactive materials. The code is intended to provide scenario-specific analyses when evaluating alternatives for environmental assessment activities, including those for major federal actions involving radioactive material transport as required by the National Environmental Policy Act (NEPA). As such, rigorous procedures have been implemented to enhance the code's credibility and strenuous efforts have been made to enhance ease of use of the code. To increase the code's reliability and credibility, a new version of RISKIND was produced under a quality assurance plan that covered code development and testing, and a peer review process was conducted. During development of the new version, the flexibility and ease of use of RISKIND were enhanced through several major changes: (1) a Windows™ point-and-click interface replaced the old DOS menu system, (2) the remaining model input parameters were added to the interface, (3) databases were updated, (4) the program output was revised, and (5) on-line help has been added. RISKIND has been well received by users and has been established as a key component in radiological transportation risk assessments through its acceptance by the U.S. Department of Energy community in recent environmental impact statements (EISs) and its continued use in the current preparation of several EISs.

Biwer, B.M.; LePoire, D.J.; Chen, S.Y.

1996-03-01T23:59:59.000Z

135

SEACC: the systems engineering and analysis computer code for small wind systems  

DOE Green Energy (OSTI)

The systems engineering and analysis (SEA) computer program (code) evaluates complete horizontal-axis SWECS performance. Rotor power output as a function of wind speed and energy production at various wind regions are predicted by the code. Efficiencies of components such as gearbox, electric generators, rectifiers, electronic inverters, and batteries can be included in the evaluation process to reflect the complete system performance. Parametric studies can be carried out for blade design characteristics such as airfoil series, taper rate, twist degrees and pitch setting; and for geometry such as rotor radius, hub radius, number of blades, coning angle, rotor rpm, etc. Design tradeoffs can also be performed to optimize system configurations for constant rpm, constant tip speed ratio and rpm-specific rotors. SWECS energy supply as compared to the load demand for each hour of the day and during each season of the year can be assessed by the code if the diurnal wind and load distributions are known. Also available during each run of the code is blade aerodynamic loading information.

Tu, P.K.C.; Kertesz, V.

1983-03-01T23:59:59.000Z
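
A back-of-the-envelope sketch of the rotor-to-bus power chain such a code evaluates. The power coefficient and component efficiencies here are placeholder constants, whereas SEACC derives rotor performance from the blade design itself.

    import numpy as np

    def electrical_power(v, radius=5.0, rho=1.225, cp=0.40,
                         eta_gearbox=0.95, eta_generator=0.90):
        """P = 0.5 * rho * A * v^3 * Cp, degraded by drivetrain efficiencies."""
        area = np.pi * radius ** 2
        return 0.5 * rho * area * v ** 3 * cp * eta_gearbox * eta_generator

    for v in range(4, 13):                       # wind speeds in m/s
        print(f"{v:2d} m/s -> {electrical_power(float(v)) / 1000.0:6.1f} kW")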

136

Uncertainty Studies of Real Anode Surface Area in Computational Analysis for Molten Salt Electrorefining  

SciTech Connect

This study examines how much cell potential changes with five differently assumed real anode surface area cases. Determining real anode surface area is a significant issue to be resolved for precisely modeling molten salt electrorefining. Based on a three-dimensional electrorefining model, calculated cell potentials are compared with an experimental cell potential variation over 80 hours of operation of the Mark-IV electrorefiner with driver fuel from the Experimental Breeder Reactor II. We succeeded in achieving good agreement with the overall trend of the experimental data through appropriate selection of a model for the real anode surface area, but there are still local inconsistencies between theoretical calculation and experimental observation. In addition, the results were validated and compared with two-dimensional results to identify possible uncertainty factors that had to be further considered in a computational electrorefining analysis. These uncertainty factors include material properties, heterogeneous material distribution, surface roughness, and current efficiency. Zirconium's abundance and complex behavior have more impact on uncertainty towards the latter period of electrorefining for a given batch of fuel. The benchmark results found that anode materials would be dissolved from both axial and radial directions, at least for low burn-up metallic fuels, after the active liquid sodium bonding was dissolved.

Sungyeol Choi; Jaeyeong Park; Robert O. Hoover; Supathorn Phongikaroon; Michael F. Simpson; Kwang-Rag Kim; Il Soon Hwang

2011-09-01T23:59:59.000Z

137

INTELLIGENT COMPUTING SYSTEM FOR RESERVOIR ANALYSIS AND RISK ASSESSMENT OF THE RED RIVER FORMATION  

SciTech Connect

Integrated software has been written that comprises the tool kit for the Intelligent Computing System (ICS). Luff Exploration Company is applying these tools for analysis of carbonate reservoirs in the southern Williston Basin. The integrated software programs are designed to be used by a small team consisting of an engineer, a geologist, and a geophysicist. The software tools are flexible and robust, allowing application in many environments for hydrocarbon reservoirs. Keystone elements of the software tools include clustering and neural-network techniques. The tools are used to transform seismic attribute data to reservoir characteristics such as storage (phi-h), probable oil-water contacts, structural depths and structural growth history. When these reservoir characteristics are combined with neural network or fuzzy logic solvers, they can provide a more complete description of the reservoir. This leads to better estimates of hydrocarbons in place, areal limits and potential for infill or step-out drilling. These tools were developed and tested using seismic, geologic and well data from the Red River Play in Bowman County, North Dakota and Harding County, South Dakota. The geologic setting for the Red River Formation is shallow-shelf carbonate at a depth from 8000 to 10,000 ft.

Kenneth D. Luff

2002-06-30T23:59:59.000Z

138

INTELLIGENT COMPUTING SYSTEM FOR RESERVOIR ANALYSIS AND RISK ASSESSMENT OF THE RED RIVER FORMATION  

SciTech Connect

Integrated software has been written that comprises the tool kit for the Intelligent Computing System (ICS). Luff Exploration Company is applying these tools for analysis of carbonate reservoirs in the southern Williston Basin. The integrated software programs are designed to be used by a small team consisting of an engineer, a geologist, and a geophysicist. The software tools are flexible and robust, allowing application in many environments for hydrocarbon reservoirs. Keystone elements of the software tools include clustering and neural-network techniques. The tools are used to transform seismic attribute data to reservoir characteristics such as storage (phi-h), probable oil-water contacts, structural depths and structural growth history. When these reservoir characteristics are combined with neural network or fuzzy logic solvers, they can provide a more complete description of the reservoir. This leads to better estimates of hydrocarbons in place, areal limits and potential for infill or step-out drilling. These tools were developed and tested using seismic, geologic and well data from the Red River Play in Bowman County, North Dakota and Harding County, South Dakota. The geologic setting for the Red River Formation is shallow-shelf carbonate at a depth from 8000 to 10,000 ft.

Kenneth D. Luff

2002-09-30T23:59:59.000Z

139

An economic and energy-aware analysis of the viability of outsourcing cluster computing to a cloud  

Science Conference Proceedings (OSTI)

This paper compares the total cost of ownership of a physical cluster with the cost of a virtual cloud-based cluster. For that purpose, cost models for both a physical cluster and a cluster on a cloud have been developed. The model for the physical cluster ... Keywords: Cloud computing, Cluster computing, Cost analysis, Green computing

Carlos De Alfonso; Miguel Caballer; Fernando Alvarruiz; GermáN Moltó

2013-03-01T23:59:59.000Z

140

INTELLIGENT COMPUTING SYSTEM FOR RESERVOIR ANALYSIS AND RISK ASSESSMENT OF THE RED RIVER FORMATION  

SciTech Connect

Integrated software has been written that comprises the tool kit for the Intelligent Computing System (ICS). The software tools in ICS have been developed for characterization of reservoir properties and evaluation of hydrocarbon potential using a combination of inter-disciplinary data sources such as geophysical, geologic and engineering variables. The ICS tools provide a means for logical and consistent reservoir characterization and oil reserve estimates. The tools can be broadly characterized as (1) clustering tools, (2) neural solvers, (3) multiple-linear regression, (4) entrapment-potential calculator and (5) file utility tools. ICS tools are extremely flexible in their approach and use, and applicable to most geologic settings. The tools are primarily designed to correlate relationships between seismic information and engineering and geologic data obtained from wells, and to convert or translate seismic information into engineering and geologic terms or units. It is also possible to apply ICS in a simple framework that may include reservoir characterization using only engineering, seismic, or geologic data in the analysis. ICS tools were developed and tested using geophysical, geologic and engineering data obtained from an exploitation and development project involving the Red River Formation in Bowman County, North Dakota and Harding County, South Dakota. Data obtained from 3D seismic surveys, and 2D seismic lines encompassing nine prospective field areas were used in the analysis. The geologic setting of the Red River Formation in Bowman and Harding counties is that of a shallow-shelf, carbonate system. Present-day depth of the Red River formation is approximately 8000 to 10,000 ft below ground surface. This report summarizes production results from well demonstration activity, results of reservoir characterization of the Red River Formation at demonstration sites, descriptions of ICS tools and strategies for their application.

Mark A. Sippel; William C. Carrigan; Kenneth D. Luff; Lyn Canter

2003-11-12T23:59:59.000Z

141

BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis, Version III  

Science Conference Proceedings (OSTI)

This report is a condensed documentation for VERSION III of the BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis. An experienced analyst should be able to use this system routinely for solving problems by referring to this document. Individual reports must be referenced for details. This report covers basic input instructions and describes recent extensions to the modules as well as to the interface data file specifications. Some application considerations are discussed and an elaborate sample problem is used as an instruction aid. Instructions for creating the system on IBM computers are also given.

Vondy, D.R.; Fowler, T.B.; Cunningham, G.W. III.

1981-06-01T23:59:59.000Z

142

Computer code input for thermal hydraulic analysis of Multi-Function Waste Tank Facility Title II design  

Science Conference Proceedings (OSTI)

The input files to the P/Thermal computer code are documented for the thermal hydraulic analysis of the Multi-Function Waste Tank Facility Title II design.

Cramer, E.R.

1994-10-01T23:59:59.000Z

143

Methods and apparatuses for information analysis on shared and distributed computing systems  

DOE Patents (OSTI)

Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.

Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

2011-02-22T23:59:59.000Z
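
A single-machine sketch of the pattern the patent claims: each worker computes local term statistics for its own distinct set of documents in parallel, the local sets are merged into a global set, and frequent terms form a major term set. The documents and the frequency cutoff are toys.

    from collections import Counter
    from multiprocessing import Pool

    def local_term_stats(docs):
        """Per-process term statistics for one distinct set of documents."""
        counts = Counter()
        for doc in docs:
            counts.update(doc.lower().split())
        return counts

    if __name__ == "__main__":
        doc_sets = [["solar wind data", "wind farm siting"],
                    ["wind turbine loads", "solar panel data"]]
        with Pool(len(doc_sets)) as pool:
            local_sets = pool.map(local_term_stats, doc_sets)
        global_stats = sum(local_sets, Counter())       # contribute to global set
        major_terms = [t for t, n in global_stats.items() if n >= 2]
        print(global_stats, major_terms)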

144

Analysis of porosity in lower Ismay phylloid algal packstone using high-resolution computed x-ray tomography  

SciTech Connect

Three-dimensional images of porosity were created using high-resolution computed tomographic (CT) analysis as part of a larger study of phylloid algal packstone from bioherms in the lower Ismay (Des Moinesian, Paradox Formation). The sample imaged was collected at Eight Foot Rapids along the San Juan River in southeastern Utah 40 km west of the Aneth field. The larger study includes analysis of lithofacies, diagenesis, and quantitative analysis of porosity. Our goal is to predictively model porosity in phylloid algal reservoirs. Field observations suggest a relationship between porosity and lithology. Porosity is best established in phylloid algal packstone such as the one chosen for three-dimensional imaging. Locally, porosity is also associated with fractures and stylolitization. Petrographic observations suggest that formation of moldic and vuggy porosity in this sample was controlled by multiple episodes of dissolution and infill of blocky calcite. Porosity in thin section (5.94%) was measured using NIH Image (public domain) on a Macintosh desktop computer. High-resolution CT radiography of a 2.3-cm-diameter cylindrical sample generated a series of 110 images at 0.1 mm intervals. Three-dimensional isosurface images of porosity reveal the degree of interconnection, pore size (up to 12 mm long and from 0.5 mm to 7 mm wide), and their highly irregular shape. These images can also be used to create animations of scans through the rock and three-dimensional, rotating images of the pores.

Beall, J.L.; Gordon, I.T.; Gournay, J.P. (Univ. of Texas, Austin, TX (United States)) (and others)

1996-01-01T23:59:59.000Z
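
A minimal sketch of how a two-dimensional porosity figure like the 5.94% thin-section value can be measured from a grayscale image: threshold to a pore mask and take the pore-pixel fraction. The threshold and the synthetic image are stand-ins for a real calibrated slice.

    import numpy as np

    def porosity_fraction(img, pore_threshold):
        """Pixels darker than the threshold are counted as pore space."""
        return (img < pore_threshold).mean()

    rng = np.random.default_rng(1)
    slice_img = rng.integers(0, 256, size=(512, 512))   # stand-in for a CT slice
    print(f"porosity = {porosity_fraction(slice_img, 15):.2%}")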

145

Maintaining SCALE as a reliable computational system for criticality safety analysis

SciTech Connect

Accurate and reliable computational methods are essential for nuclear criticality safety analyses. The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer code system was originally developed at Oak Ridge National Laboratory (ORNL) to enable users to easily set up and perform criticality safety analyses, as well as shielding, depletion, and heat transfer analyses. Over the fifteen-year life of SCALE, the mainstay of the system has been the criticality safety analysis sequences that have featured the KENO-IV and KENO-V.A Monte Carlo codes and the XSDRNPM one-dimensional discrete-ordinates code. The criticality safety analysis sequences provide automated material and problem-dependent resonance processing for each criticality calculation. This report details configuration management, which is essential because SCALE consists of more than 25 computer codes (referred to as modules) that share libraries of commonly used subroutines. Changes to a single subroutine in some cases affect almost every module in SCALE! Controlled access to program source and executables and accurate documentation of modifications are essential to maintaining SCALE as a reliable code system. The modules and subroutine libraries in SCALE are programmed by a staff of approximately ten Code Managers. The SCALE Software Coordinator maintains the SCALE system and is the only person who modifies the production source, executables, and data libraries. All modifications must be authorized by the SCALE Project Leader prior to implementation.

Bowmann, S.M.; Parks, C.V.; Martin, S.K.

1995-04-01T23:59:59.000Z

146

COBRA-SFS (Spent Fuel Storage): A thermal-hydraulic analysis computer code: Volume 2, User's manual  

Science Conference Proceedings (OSTI)

COBRA-SFS (Spent Fuel Storage) is a general thermal-hydraulic analysis computer code used to predict temperatures and velocities in a wide variety of systems. The code was refined and specialized for spent fuel storage system analyses for the US Department of Energy's Commercial Spent Fuel Management Program. The finite-volume equations governing mass, momentum, and energy conservation are written for an incompressible, single-phase fluid. The flow equations model a wide range of conditions including natural circulation. The energy equations include the effects of solid and fluid conduction, natural convection, and thermal radiation. The COBRA-SFS code is structured to perform both steady-state and transient calculations; however, the transient capability has not yet been validated. This volume contains the input instructions for COBRA-SFS and an auxiliary radiation exchange factor code, RADX-1. It is intended to aid the user in becoming familiar with the capabilities and modeling conventions of the code.

Rector, D.R.; Cuta, J.M.; Lombardo, N.J.; Michener, T.E.; Wheeler, C.L.

1986-11-01T23:59:59.000Z

147

COBRA-SFS (Spent Fuel Storage): A thermal-hydraulic analysis computer code: Volume 1, Mathematical models and solution method  

Science Conference Proceedings (OSTI)

COBRA-SFS (Spent Fuel Storage) is a general thermal-hydraulic analysis computer code used to predict temperatures and velocities in a wide variety of systems. The code was refined and specialized for spent fuel storage system analyses for the US Department of Energy's Commercial Spent Fuel Management Program. The finite-volume equations governing mass, momentum, and energy conservation are written for an incompressible, single-phase fluid. The flow equations model a wide range of conditions including natural circulation. The energy equations include the effects of solid and fluid conduction, natural convection, and thermal radiation. The COBRA-SFS code is structured to perform both steady-state and transient calculations; however, the transient capability has not yet been validated. This volume describes the finite-volume equations and the method used to solve these equations. It is directed toward the user who is interested in gaining a more complete understanding of these methods.

Rector, D.R.; Wheeler, C.L.; Lombardo, N.J.

1986-11-01T23:59:59.000Z
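
A much-reduced sketch of the finite-volume energy bookkeeping such a code performs: a single heated coolant channel marched axially, with T(i+1) = T(i) + q/(mdot*cp) per cell. The power, flow rate, and helium properties are arbitrary; the real code couples many channels with conduction, natural convection, and radiation.

    import numpy as np

    def axial_temperatures(t_inlet, mdot, cp, q_cell):
        """Steady single-phase energy balance, cell by cell along one channel."""
        t = np.empty(len(q_cell) + 1)
        t[0] = t_inlet
        for i, q in enumerate(q_cell):
            t[i + 1] = t[i] + q / (mdot * cp)   # temperature gain across cell i
        return t

    q_cell = np.full(10, 50.0)                   # W deposited per axial cell
    print(axial_temperatures(300.0, 0.01, 5193.0, q_cell))  # helium cp, J/kg/K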

148

Center for Integrated Computation and Analysis of Reconnection and Turbulence (CICART)  

NLE Websites -- All DOE Office Websites (Extended Search)

Center for Integrated Computation and Analysis of Reconnection and Turbulence (CICART). Kai Germaschewski, Amitava Bhattacharjee, Barrett Rogers, Will Fox, Yi-Min Huang, and others. CICART, Space Science Center / Dept. of Physics, University of New Hampshire, August 3, 2010. Outline: 1. Project Information; 2. Project summary and scientific objectives; 3. Current HPC usage and methods; 4. HPC requirements in 5 years; 5. New science with new resources. PI: Amitava Bhattacharjee. CICART has a dual mission in research: it seeks fundamental advances in physical understanding, and works to achieve these advances

149

Certification process of safety analysis and risk management computer codes at the Savannah River Site  

Science Conference Proceedings (OSTI)

The commitment by Westinghouse Savannah River Company (WSRC) to bring safety analysis and risk management codes into compliance with national and sitewide quality assurance requirements necessitated a systematic, structured approach. As a part of this effort, WSRC, in cooperation with the Westinghouse Hanford Company, has developed and implemented a certification process for the development and control of computer software. Safety analysis and risk management computer codes pertinent to reactor analyses were selected for inclusion in the certification process. As a first step, documented plans were developed for implementing verification and validation of the codes, and establishing configuration control. User qualification guidelines were determined. The plans were followed with an extensive assessment of the codes with respect to certification status. Detailed schedules and work plans were thus determined for completing certification of the codes considered. Although the software certification process discussed is specific to the application described, it is sufficiently general to provide useful insights and guidance for certification of other software.

Ades, M.J. (Westinghouse Savannah River Co., Aiken, SC (United States)); Toffer, H.; Lewis, C.J.; Crowe, R.D. (Westinghouse Hanford Co., Richland, WA (United States))

1992-01-01T23:59:59.000Z

150

Certification process of safety analysis and risk management computer codes at the Savannah River Site  

Science Conference Proceedings (OSTI)

The commitment by Westinghouse Savannah River Company (WSRC) to bring safety analysis and risk management codes into compliance with national and sitewide quality assurance requirements necessitated a systematic, structured approach. As a part of this effort, WSRC, in cooperation with the Westinghouse Hanford Company, has developed and implemented a certification process for the development and control of computer software. Safety analysis and risk management computer codes pertinent to reactor analyses were selected for inclusion in the certification process. As a first step, documented plans were developed for implementing verification and validation of the codes, and establishing configuration control. User qualification guidelines were determined. The plans were followed with an extensive assessment of the codes with respect to certification status. Detailed schedules and work plans were thus determined for completing certification of the codes considered. Although the software certification process discussed is specific to the application described, it is sufficiently general to provide useful insights and guidance for certification of other software.

Ades, M.J. [Westinghouse Savannah River Co., Aiken, SC (United States); Toffer, H.; Lewis, C.J.; Crowe, R.D. [Westinghouse Hanford Co., Richland, WA (United States)

1992-05-01T23:59:59.000Z

151

Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study  

SciTech Connect

This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered as a supplement to the Title I Design Report dated March 1986, wherein energy related issues are discussed pertaining to building envelope and orientation as well as electrical systems design.

Krstulovich, S.F.

1986-11-12T23:59:59.000Z

152

A Preliminary Computer Pattern Analysis of Satellite Images of Mature Extratropical Cyclones  

Science Conference Proceedings (OSTI)

This study has applied computerized pattern analysis techniques to the location and classification of features of several mature extratropical cyclones that were depicted in GOES satellite images. These features include the location of the center ...

Craig R. Burfeind; James A. Weinman; Bruce R. Barkstrom

1987-02-01T23:59:59.000Z

153

Computational atmospheric trajectory simulation analysis of spin-stabilised projectiles and small bullets  

Science Conference Proceedings (OSTI)

A mathematical model is based on the full equations of motion set up in the no-roll body reference frame and is integrated numerically from given initial conditions at the firing site. The computational flight analysis takes into consideration the ... Keywords: Coriolis effect, Mach number, Magnus effect, aerodynamic jump, atmospheric trajectory, constant aerodynamic coefficients, equations of motion, flight analysis, flight trajectories, gyroscopic stability, no-roll body reference frame, simulation, small bullets, spin-stabilised projectiles, static stability, total angle of attack, variable aerodynamic coefficients

D. N. Gkritzapis; E. E. Panagiotopoulos; D. P. Margaris; D. G. Papanikas

2008-07-01T23:59:59.000Z

154

Computer shadow analysis technique for tilted windows shaded by overhangs, vertical projections, and side fins  

SciTech Connect

This paper expands upon previously published techniques for calculating window shadow areas by computer to include tilted and horizontal glazing systems as well as vertical glazing systems. This methodology may be used for any rectangular window shaded by rectangular overhangs and/or side fins perpendicular to the plane of the window. Rectangular projections suspended from the end of an overhang are also accommodated. The technique yields a precise solution and requires minimum input. Computer processing is rapid because iterative algorithms are avoided. Shadow overlaps and end effects are completely treated. The glazing system may have any degree of tilt from horizontal (looking upward) through vertical to horizontal (looking downward). Techniques for sorting window shadow shapes and equations for calculating shadow areas are included.

Bekooy, R.G.

1983-01-01T23:59:59.000Z
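
A sketch of the core overhang geometry such techniques build on: the shadow cast down a vertical window by an overhang of depth D has length D*tan(alpha)/cos(gamma), where alpha is the solar altitude and gamma the surface-solar azimuth, so the sunlit fraction of a rectangular window follows directly. Tilted glazing, fins, and suspended projections need the fuller treatment of the paper, and the dimensions below are hypothetical.

    import math

    def sunlit_fraction(window_height, overhang_depth, gap_above,
                        sun_altitude_deg, surface_solar_azimuth_deg):
        """Fraction of a vertical window left sunlit below an overhang."""
        if abs(surface_solar_azimuth_deg) >= 90.0:
            return 0.0                            # sun is behind the facade
        alpha = math.radians(sun_altitude_deg)
        gamma = math.radians(surface_solar_azimuth_deg)
        shadow = overhang_depth * math.tan(alpha) / math.cos(gamma)
        shaded = min(max(shadow - gap_above, 0.0), window_height)
        return 1.0 - shaded / window_height

    # 1.5 m window under a 0.6 m overhang mounted 0.2 m above the glass.
    print(sunlit_fraction(1.5, 0.6, 0.2, 60.0, 20.0))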

155

SPECTRAL SHIFT CONTROL REACTOR BASIC PHYSICS PROGRAM-THEORETICAL ANALYSIS. PART II. BPG COMPUTER PROGRAM REPORT  

DOE Green Energy (OSTI)

The BPG code was written to compute the criticality of reactors using all kinds of moderators, specifically including mixtures of H2O and D2O. Since slowing down with hydrogen moderator is susceptible to exact calculation, and since the Fermi age equation accurately treats heavy moderators such as carbon, the important innovations in this code deal with the transition from light to heavy scatter, and particularly with the treatment of moderation by deuterium. Included in the description of the code are derivation of the equations, physical data for the calculations, input and output format, and general operating conditions for use with the Burroughs 205 computer. (M.C.G.)

de Coulon, G.A.G.; Gates, L.D.; Worley, W.R.

1962-01-01T23:59:59.000Z
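
For orientation, the bare, one-region limit of the two-energy-group criticality relation that codes of this kind solve (in BPG's case with the deuterium refinements described above) can be written, in standard notation with no upscatter and fission neutrons born fast, as:

    k_{eff} = \frac{\nu\Sigma_{f1}\,(\Sigma_{a2} + D_2 B^2) + \nu\Sigma_{f2}\,\Sigma_{1\to 2}}
                   {(\Sigma_{a1} + \Sigma_{1\to 2} + D_1 B^2)\,(\Sigma_{a2} + D_2 B^2)}

where \Sigma_{1\to 2} is the slowing-down (removal) cross section from the fast to the thermal group and B^2 is the geometric buckling.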

156

Coupled computational fluid dynamics and heat transfer analysis of the VHTR lower plenum.  

SciTech Connect

The very high temperature reactor (VHTR) concept is being developed by the US Department of Energy (DOE) and other groups around the world for the future generation of electricity at high thermal efficiency (> 48%) and co-generation of hydrogen and process heat. This Generation-IV reactor would operate at elevated exit temperatures of 1,000-1,273 K, and the fueled core would be cooled by forced convection helium gas. For the prismatic-core VHTR, which is the focus of this analysis, the velocity of the hot helium flow exiting the core into the lower plenum (LP) could be 35-70 m/s. The impingement of the resulting gas jets onto the adiabatic plate at the bottom of the LP could develop hot spots and thermal stratification and inadequate mixing of the gas exiting the vessel to the turbo-machinery for energy conversion. The complex flow field in the LP is further complicated by the presence of large cylindrical graphite posts that support the massive core and inner and outer graphite reflectors. Because there are approximately 276 channels in the VHTR core from which helium exits into the LP and a total of 155 support posts, the flow field in the LP includes cross flow, multiple jet flow interaction, flow stagnation zones, vortex interaction, vortex shedding, entrainment, large variation in Reynolds number (Re), recirculation, and mixing enhancement and suppression regions. For such a complex flow field, experimental results at operating conditions are not currently available. Instead, the objective of this paper is to numerically simulate the flow field in the LP of a prismatic core VHTR using the Sandia National Laboratories Fuego, which is a 3D, massively parallel generalized computational fluid dynamics (CFD) code with numerous turbulence and buoyancy models and simulation capabilities for complex gas flow fields, with and without thermal effects. The code predictions for simpler flow fields of single and swirling gas jets, with and without a cross flow, are validated using reported experimental data and theory. The key processes in the LP are identified using phenomena identification and ranking table (PIRT). It may be argued that a CFD code that accurately simulates simplified, single-effect flow fields with increasing complexity is likely to adequately model the complex flow field in the VHTR LP, subject to a future experimental validation. The PIRT process and spatial and temporal discretizations implemented in the present analysis using Fuego established confidence in the validation and verification (V and V) calculations and in the conclusions reached based on the simulation results. The performed calculations included the helicoid vortex swirl model, the dynamic Smagorinsky large eddy simulation (LES) turbulence model, participating media radiation (PMR), and 1D conjugate heat transfer (CHT). The full-scale, half-symmetry LP mesh used in the LP simulation included unstructured hexahedral elements and accounted for the graphite posts, the helium jets, the exterior walls, and the bottom plate with an adiabatic outer surface. Results indicated significant enhancements in heat transfer, flow mixing, and entrainment in the VHTR LP when using swirling inserts at the exit of the helium flow channels into the LP. The impact of using various swirl angles on the flow mixing and heat transfer in the LP is qualified, including the formation of the central recirculation zone (CRZ), and the effect of LP height. 
Results also showed that in addition to the enhanced mixing, the swirling inserts result in negligible additional pressure losses and are likely to eliminate the formation of hot spots.

El-Genk, Mohamed S. (University of New Mexico, Albuquerque, NM); Rodriguez, Salvador B.

2010-12-01T23:59:59.000Z

157

Analysis and selection of optimal function implementations in massively parallel computer  

DOE Patents (OSTI)

An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.

Archer, Charles Jens (Rochester, MN); Peters, Amanda (Rochester, MN); Ratterman, Joseph D. (Rochester, MN)

2011-05-31T23:59:59.000Z
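
A toy sketch of the collect-then-dispatch idea in the patent: time a set of implementations across one input dimension, record the best at each sampled size, and dispatch future calls to the implementation measured fastest near the given size. The two list-summing variants stand in for, e.g., alternative collective-operation implementations on a parallel machine.

    import timeit

    def impl_loop(xs):
        total = 0.0
        for x in xs:
            total += x
        return total

    def impl_builtin(xs):
        return sum(xs)

    implementations = [impl_loop, impl_builtin]
    sizes = [10, 1000, 100000]            # sampled points in the input dimension

    best = {}                             # collected performance data
    for n in sizes:
        data = list(range(n))
        times = [min(timeit.repeat(lambda f=f: f(data), number=10, repeat=3))
                 for f in implementations]
        best[n] = implementations[times.index(min(times))]

    def selector(xs):
        """Call the implementation measured fastest at the nearest sampled size."""
        n = min(sizes, key=lambda s: abs(s - len(xs)))
        return best[n](xs)

    print(selector(list(range(500))))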

158

GENERAL REACTOR ANALYSIS COMPUTER PROGRAM FOR THE IBM 704, PROGRAM GEORGE  

SciTech Connect

Program George is an IBM 704 code that combines standard ANPD reactor analysis methods for computing multiplication constant and spatial distribution of flux and power, as well as other nuclear parameters, in aircraft (heterogeneous) reactors. The elapsed time and the chance for human error between initial input and final output are greatly reduced in Program George. A matched set of reflector savings for a reflected, cylindrical core, which was formerly obtained by hand, is now computed by the program. In the C2 portion of Program George, the code computes the neutron slowing-down density and flux at 19 lethargy levels plus a thermal group in bare, homogeneous compositions with normal mode, energy-dependent bucklings. The diffusion coefficient contains a transport correction as well as a Behrens correction for heterogeneous structure. Flux-depression factors at thermal and two epithermal levels are computed in annular, cylindrical geometry by a one-energy, P3 transport-theory approximation in the I2 portion of the program, or they may be given as input. Flux weighting of various cross sections, transmission factors, and other parameters is computed in a multigroup representation. Two-energy-group constants for the regional compositions are supplied in the F2 portion of the program, in which a two-energy-group, one-space-dimensional, multiregion diffusion calculation yields multiplication constant, normalized fission density, fast flux, and slow flux in each one-dimensional reactor portrayed. If a reactivity match is requested, the reflector savings of the specified compositions are adjusted until the multiplication constants in the radial F2, longitudinal F2, and the reference bare equivalent core of a cylindrical reflected reactor converge to the same value. (auth)

Hoffman, T.A.; Henderson, W.B.

1959-10-01T23:59:59.000Z

159

COBRA-SFS (Spent Fuel Storage): A thermal-hydraulic analysis computer code: Volume 3, Validation assessments  

Science Conference Proceedings (OSTI)

This report presents the results of the COBRA-SFS (Spent Fuel Storage) computer code validation effort. COBRA-SFS, while refined and specialized for spent fuel storage system analyses, is a lumped-volume thermal-hydraulic analysis computer code that predicts temperature and velocity distributions in a wide variety of systems. Through comparisons of code predictions with spent fuel storage system test data, the code's mathematical, physical, and mechanistic models are assessed, and empirical relations defined. The six test cases used to validate the code and code models include single-assembly and multiassembly storage systems under a variety of fill media and system orientations and include unconsolidated and consolidated spent fuel. In its entirety, the test matrix investigates the contributions of convection, conduction, and radiation heat transfer in spent fuel storage systems. To demonstrate the code's performance for a wide variety of storage systems and conditions, comparisons of code predictions with data are made for 14 runs from the experimental data base. The cases selected exercise the important code models and code logic pathways and are representative of the types of simulations required for spent fuel storage system design and licensing safety analyses. For each test, a test description, a summary of the COBRA-SFS computational model, assumptions, and correlations employed are presented. For the cases selected, axial and radial temperature profile comparisons of code predictions with test data are provided, and conclusions drawn concerning the code models and the ability to predict the data and data trends. Comparisons of code predictions with test data demonstrate the ability of COBRA-SFS to successfully predict temperature distributions in unconsolidated or consolidated single and multiassembly spent fuel storage systems.

Lombardo, N.J.; Cuta, J.M.; Michener, T.E.; Rector, D.R.; Wheeler, C.L.

1986-12-01T23:59:59.000Z

160

Computational Fluid Dynamics in Support of the SNS Liquid Mercury Thermal-Hydraulic Analysis  

SciTech Connect

Experimental and computational thermal-hydraulic research is underway to support the liquid mercury target design for the Spallation Neutron Source (SNS) facility. The SNS target will be subjected to internal nuclear heat generation that results from pulsed proton beam collisions with the mercury nuclei. Recirculation and stagnation zones within the target are of particular concern because of the likelihood that they will result in local hot spots and diminished heat removal from the target structure. Computational fluid dynamics (CFD) models are being used as a part of this research. Recent improvements to the 3D target model include the addition of the flow adapter which joins the inlet/outlet coolant pipes to the target body and an updated heat load distribution at the new baseline proton beam power level of 2 MW. Two thermal-hydraulic experiments are planned to validate the CFD model.

Siman-Tov, M.; Wendel, M.W.; Yoder, G.L.

1999-11-14T23:59:59.000Z


161

Technical support document: Energy efficiency standards for consumer products: Refrigerators, refrigerator-freezers, and freezers including draft environmental assessment, regulatory impact analysis  

Science Conference Proceedings (OSTI)

The Energy Policy and Conservation Act (P.L. 94-163), as amended by the National Appliance Energy Conservation Act of 1987 (P.L. 100-12), by the National Appliance Energy Conservation Amendments of 1988 (P.L. 100-357), and by the Energy Policy Act of 1992 (P.L. 102-486), provides energy conservation standards for 12 of the 13 types of consumer products covered by the Act, and authorizes the Secretary of Energy to prescribe amended or new energy standards for each type (or class) of covered product. The assessment of the proposed standards for refrigerators, refrigerator-freezers, and freezers presented in this document is designed to evaluate their economic impacts according to the criteria in the Act. It includes an engineering analysis of the cost and performance of design options to improve the efficiency of the products; forecasts of the number and average efficiency of products sold, the amount of energy the products will consume, and their prices and operating expenses; a determination of change in investment, revenues, and costs to manufacturers of the products; a calculation of the costs and benefits to consumers, electric utilities, and the nation as a whole; and an assessment of the environmental impacts of the proposed standards.

NONE

1995-07-01T23:59:59.000Z

162

User's manual for MOCUS-BACKFIRE [i.e., MOCUS-BACFIRE]: a computer program for common cause failure analysis  

E-Print Network (OSTI)

This report is the user's manual for MOCUS-BACFIRE, a computer program for qualitative common cause analysis. The MOCUS-BACFIRE package code was developed by coupling the MOCUS code and the BACFIRE code. The MOCUS code is a ...

Heising, Carolyn D.

1981-01-01T23:59:59.000Z

163

An introduction to computer viruses  

Science Conference Proceedings (OSTI)

This report on computer viruses is based upon a thesis written for the Master of Science degree in Computer Science from the University of Tennessee in December 1989 by David R. Brown. This thesis is entitled An Analysis of Computer Virus Construction, Proliferation, and Control and is available through the University of Tennessee Library. This paper contains an overview of the computer virus arena that can help the reader to evaluate the threat that computer viruses pose. The extent of this threat can only be determined by evaluating many different factors. These factors include the relative ease with which a computer virus can be written, the motivation involved in writing a computer virus, the damage and overhead incurred by infected systems, and the legal implications of computer viruses, among others. Based upon the research, the development of a computer virus seems to require more persistence than technical expertise. This is a frightening proclamation to the computing community. The education of computer professionals to the dangers that viruses pose to the welfare of the computing industry as a whole is stressed as a means of inhibiting the current proliferation of computer virus programs. Recommendations are made to assist computer users in preventing infection by computer viruses. These recommendations support solid general computer security practices as a means of combating computer viruses.

Brown, D.R.

1992-03-01T23:59:59.000Z

164

TEMPEST: A computer code for three-dimensional analysis of transient fluid dynamics  

SciTech Connect

TEMPEST (Transient Energy Momentum and Pressure Equations Solutions in Three dimensions) is a powerful tool for solving engineering problems in nuclear energy, waste processing, chemical processing, and environmental restoration because it provides 3-D, time-dependent computational fluid dynamics and heat transfer analysis. It is a family of codes with two primary versions, an N-Version (available to the public) and a T-Version (not currently available to the public). This handout discusses its capabilities, applications, numerical algorithms, development status, and availability and assistance.

Fort, J.A.

1995-06-01T23:59:59.000Z

165

SPECIFIC HEAT DATA ANALYSIS PROGRAM FOR THE IBM 704 DIGITAL COMPUTER  

SciTech Connect

A computer program was developed to calculate the specific heat of a substance in the temperature range from 0.3 to 4.2 deg K, given temperature calibration data for a carbon resistance thermometer, experimental temperature drift, and heating period data. The specific heats calculated from these data are then fitted with a curve by the method of least squares, and the specific heats are corrected for the effect of the curvature of the data. The method, operation, program details, and program stops are discussed. A program listing is included. (M.C.G.)
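A minimal sketch of the least-squares step, assuming the conventional low-temperature form c(T) = a*T + b*T^3 (an assumption of this illustration; the report does not state the fitting function) and hypothetical data:

    import numpy as np

    # Hypothetical measurements: temperature (K) and specific heat (arbitrary units)
    T = np.array([0.3, 0.6, 1.0, 1.5, 2.2, 3.0, 4.2])
    c = np.array([0.02, 0.05, 0.11, 0.21, 0.40, 0.68, 1.20])

    # Least-squares fit of c(T) = a*T + b*T**3 via the design matrix [T, T^3]
    A = np.column_stack([T, T ** 3])
    (a, b), *_ = np.linalg.lstsq(A, c, rcond=None)
    print(f"c(T) ~= {a:.4f}*T + {b:.5f}*T^3")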

Roach, P.R.

1962-01-01T23:59:59.000Z

166

Verification methodology for the DOE-1 building energy analysis computer program  

SciTech Connect

The "verification" for the real-system simulations of DOE-1 (formerly Cal-ERDA) will center on the task of determining the range of applicability (limitations) of the model and the desired accuracy within this range. Phase I is primarily an analytical verification test that includes a series of tests and crosschecks for exercising the DOE-1 program as a computational unit rather than as separate algorithms. Phase II is a field verification test designed to examine the DOE-1 program on an individual algorithm basis. Tasks in the project are listed. (MHR)

Diamond, S.C.; Hunn, B.D.; McDonald, T.E.

1978-01-01T23:59:59.000Z

167

Computer/Controller 1588  

Science Conference Proceedings (OSTI)

... Computer/Controller? What is a computer or controller? ... Computer/controllers in a system supporting IEEE 1588 will typically include a 1588 clock. ...

2010-10-29T23:59:59.000Z

168

Petroleum Gasoline & Distillate Needs Including the Energy ...  

U.S. Energy Information Administration (EIA)


169

GEOCITY: a computer model for systems analysis of geothermal district heating and cooling costs  

DOE Green Energy (OSTI)

GEOCITY is a computer-simulation model developed to study the economics of district heating/cooling using geothermal energy. GEOCITY calculates the cost of district heating/cooling based on climate, population, resource characteristics, and financing conditions. The basis for our geothermal-energy cost analysis is the unit cost of energy which will recover all the costs of production. The calculation of the unit cost of energy is based on life-cycle costing and discounted-cash-flow analysis. A wide variation can be expected in the range of potential geothermal district heating and cooling costs. The range of costs is determined by the characteristics of the resource, the characteristics of the demand, and the distance separating the resource and the demand. GEOCITY is a useful tool for estimating costs for each of the main parts of the production process and for determining the sensitivity of these costs to several significant parameters under a consistent set of assumptions.
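As an illustration of the discounted-cash-flow principle the abstract describes, the sketch below computes a levelized unit energy cost, i.e., the constant price whose discounted revenues recover all discounted costs. All numbers and names are hypothetical, not taken from GEOCITY.

    def levelized_unit_cost(capital, annual_om, annual_mwh, rate, years):
        """Constant $/MWh whose discounted revenues equal discounted costs."""
        pv_costs = capital + sum(annual_om / (1 + rate) ** t
                                 for t in range(1, years + 1))
        pv_energy = sum(annual_mwh / (1 + rate) ** t
                        for t in range(1, years + 1))
        return pv_costs / pv_energy

    # Hypothetical district-heating system: $40M capital, $1.5M/yr O&M,
    # 200,000 MWh/yr delivered, 8% discount rate, 30-year economic life.
    print(levelized_unit_cost(40e6, 1.5e6, 200_000, 0.08, 30))  # ~= $25.3/MWh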

Fassbender, L.L.; Bloomster, C.H.

1981-06-01T23:59:59.000Z

170

TRUMP-BD: A computer code for the analysis of nuclear fuel assemblies under severe accident conditions  

Science Conference Proceedings (OSTI)

TRUMP-BD (Boil Down) is an extension of the TRUMP (Edwards 1972) computer program for the analysis of nuclear fuel assemblies under severe accident conditions. This extension allows prediction of the heat transfer rates, metal-water oxidation rates, fission product release rates, steam generation and consumption rates, and temperature distributions for nuclear fuel assemblies under core uncovery conditions. The heat transfer processes include conduction in solid structures, convection across fluid-solid boundaries, and radiation between interacting surfaces. Metal-water reaction kinetics are modeled with empirical relationships to predict the oxidation rates of steam-exposed Zircaloy and uranium metal. The metal-water oxidation models are parabolic in form with an Arrhenius temperature dependence. Uranium oxidation begins when fuel cladding failure occurs; Zircaloy oxidation occurs continuously at temperatures above 1300°F when metal and steam are available. From the metal-water reactions, the hydrogen generation rate, total hydrogen release, and temporal and spatial distribution of oxide formations are computed. Consumption of steam from the oxidation reactions and the effect of hydrogen on the coolant properties are modeled for independent coolant flow channels. Fission product release from exposed uranium metal, Zircaloy-clad fuel is modeled using empirical time and temperature relationships that consider the release to be subject to oxidation and volatilization/diffusion ("bake-out") release mechanisms. Release of the volatile species of iodine (I), tellurium (Te), cesium (Cs), ruthenium (Ru), strontium (Sr), zirconium (Zr), cerium (Ce), and barium (Ba) from uranium metal fuel may be modeled.
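The abstract gives only the functional form of the oxidation kinetics; written out, the generic parabolic rate law with Arrhenius temperature dependence it implies is

\[ w^2 = k_p(T)\,t, \qquad k_p(T) = A \exp\!\left(-\frac{E}{RT}\right), \]

where w is the oxide mass gain per unit area (or layer thickness), t is exposure time at absolute temperature T, R is the gas constant, and A and E are material-specific constants not given in the abstract. At fixed temperature the oxide therefore grows as w(t) = \sqrt{k_p\,t}.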

Lombardo, N.J.; Marseille, T.J.; White, M.D.; Lowery, P.S.

1990-06-01T23:59:59.000Z

171

Subsurface stratigraphy and petrophysical analysis of the Middle Devonian interval, including the Marcellus Shale, of the central Appalachian basin; northwestern Pennsylvania.  

E-Print Network (OSTI)

In the central Appalachian basin, the multiple organic-rich intervals of the Middle Devonian, including the Marcellus Shale, are an emerging large resource play with high…

Yanni, Anne.

2010-01-01T23:59:59.000Z

172

SuperComputing | Mathematics | ORNL  

NLE Websites -- All DOE Office Websites (Extended Search)

The Computational Mathematics activities include developing and deploying computational and applied mathematical capabilities for modeling, simulating, and predicting complex phenomena of importance to the Department of Energy (DOE). A particular focus is on developing novel scalable algorithms for exploiting novel high-performance computing resources for scientific discovery as well as decision sciences.

173

National cyber defense high performance computing and analysis : concepts, planning and roadmap.  

SciTech Connect

There is a national cyber dilemma that threatens the very fabric of government, commercial and private use operations worldwide. Much is written about 'what' the problem is, and though the basis for this paper is an assessment of the problem space, we target the 'how' solution space of the wide-area national information infrastructure through the advancement of science, technology, evaluation and analysis with actionable results intended to produce a more secure national information infrastructure and a comprehensive national cyber defense capability. This cybersecurity High Performance Computing (HPC) analysis concepts, planning and roadmap activity was conducted as an assessment of cybersecurity analysis as a fertile area of research and investment for high value cybersecurity wide-area solutions. This report and a related SAND2010-4765 Assessment of Current Cybersecurity Practices in the Public Domain: Cyber Indications and Warnings Domain report are intended to provoke discussion throughout a broad audience about developing a cohesive HPC centric solution to wide-area cybersecurity problems.

Hamlet, Jason R.; Keliiaa, Curtis M.

2010-09-01T23:59:59.000Z

174

Steam Generator Management Program: Thermal-Hydraulic Analysis of a Recirculating Steam Generator Using Commercial Computational Fluid Dynamics Software  

Science Conference Proceedings (OSTI)

The objective of this research was to demonstrate that a commercial computational fluid dynamics code can be set up to model the thermal-hydraulic physics that occur during the operation of a steam generator. Specific complexities in steam-generator thermal-hydraulic modeling include: phase change and two-phase fluid mechanics, hydrodynamic representation of the tube bundle, and thermal coupling between the primary and secondary sides. A commercial computational fluid dynamics code was used without any s...

2012-02-21T23:59:59.000Z

175

Intruder dose pathway analysis for the onsite disposal of radioactive wastes: The ONSITE/MAXI1 computer program  

Science Conference Proceedings (OSTI)

This document summarizes initial efforts to develop human-intrusion scenarios and a modified version of the MAXI computer program for potential use by the NRC in reviewing applications for onsite radioactive waste disposal. Supplement 1 of NUREG/CR-3620 (1986) summarized modifications and improvements to the ONSITE/MAXI1 software package. This document summarizes a modified version of the ONSITE/MAXI1 computer program. This modified version of the computer program operates on a personal computer and permits the user to optionally select radiation dose conversion factors published by the International Commission on Radiological Protection (ICRP) in their Publication No. 30 (ICRP 1979-1982) in place of those published by the ICRP in their Publication No. 2 (ICRP 1959) (as implemented in the previous versions of the ONSITE/MAXI1 computer program). The pathway-to-human models used in the computer program have not been changed from those described previously. Computer listings of the ONSITE/MAXI1 computer program and supporting data bases are included in the appendices of this document.

Kennedy, W.E. Jr.; Peloquin, R.A.; Napier, B.A.; Neuder, S.M.

1987-02-01T23:59:59.000Z

176

Computational Fluid Dynamic Analysis of the VHTR Lower Plenum Standard Problem  

DOE Green Energy (OSTI)

The United States Department of Energy is promoting the resurgence of nuclear power in the U. S. for both electrical power generation and production of process heat required for industrial processes such as the manufacture of hydrogen for use as a fuel in automobiles. The DOE project is called the next generation nuclear plant (NGNP) and is based on a Generation IV reactor concept called the very high temperature reactor (VHTR), which will use helium as the coolant at temperatures ranging from 450 ºC to perhaps 1000 ºC. While computational fluid dynamics (CFD) has not been used for past safety analysis for nuclear reactors in the U. S., it is being considered for safety analysis for existing and future reactors. It is fully recognized that CFD simulation codes will have to be validated for flow physics reasonably close to actual fluid dynamic conditions expected in normal and accident operational situations. To this end, experimental data have been obtained in a scaled model of a narrow slice of the lower plenum of a prismatic VHTR. The present report presents results of CFD examinations of these data to explore potential issues with the geometry, the initial conditions, the flow dynamics and the data needed to fully specify the inlet and boundary conditions; results for several turbulence models are examined. Issues are addressed and recommendations about the data are made.

Richard W. Johnson; Richard R. Schultz

2009-07-01T23:59:59.000Z

177

Performance analysis of a file catalog for the LHC computing grid  

Science Conference Proceedings (OSTI)

The Large Hadron Collider (LHC) at CERN, the European Organization for Nuclear Research, will produce unprecedented volumes of data when it starts operation in 2007. To provide for its computational needs, the LHC computing grid (LCG) should be deployed ...

J.-P. Baud; J. Casey; S. Lemaitre; C. Nicholson

2005-07-01T23:59:59.000Z

178

COMPUTATIONAL CHALLENGES IN PROCESSING AND ANALYSIS OF FULL-WATERCOLUMN MULTIBEAM SONAR DATA  

E-Print Network (OSTI)

Abstract: Several multibeam sonar systems are now capable of collecting and recording data samples covering the full-watercolumn, not just the seabed. Such systems, while still facing hardware challenges such as limited dynamic range and bandwidth, collect vast quantities of data, generally an order of magnitude more than conventional hydrographic multibeam or scientific single beam sonar systems. In this paper, the challenges faced by data processing systems for analysis of full-watercolumn multibeam sonar data are explored. Full-watercolumn multibeam data sets are valuable to scientists from traditionally diverse fields, providing simultaneous information about bathymetry, seabed type and habitats, and biomass in the watercolumn. Aspects of the data processing pipeline that are considered in this paper include raw data storage, data pre-processing, visualization and exploratory data analysis, statistical data analysis and post-processing, and presentation and interpretation of results. A general framework is outlined, and specific aspects applicable to the kind of data and problems at hand are emphasized. Proposed solutions to some of the challenges are reviewed and placed within an overall framework of multibeam sonar watercolumn data analysis. It will become clear that successful contributions to the field have been made, but that a general analysis method has yet to emerge. 1.

Edited S. M. Jesus; O. C. Rodríguez; Arthur Sale

2006-01-01T23:59:59.000Z

179

HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments  

Science Conference Proceedings (OSTI)

HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

McCann, R.A.; Lowery, P.S.

1987-10-01T23:59:59.000Z

180

HYDRA-II: A hydrothermal analysis computer code: Volume 1, Equations and numerics  

Science Conference Proceedings (OSTI)

HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits of modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. This volume, Volume I - Equations and Numerics, describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. The final volume, Volume III - Verification/Validation Assessments, presents results of numerical simulations of single- and multiassembly storage systems and comparisons with experimental data. 4 refs.
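For orientation, the mass, momentum, and energy balances that a code of this type discretizes have the generic single-phase forms below; this is a standard textbook statement, not the porosity- and permeability-modified equations actually documented in this volume:

\[ \frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) = 0, \]
\[ \frac{\partial (\rho\,\mathbf{u})}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}\,\mathbf{u}) = -\nabla p + \nabla\cdot\boldsymbol{\tau} + \rho\,\mathbf{g}, \]
\[ \frac{\partial (\rho\,c_p T)}{\partial t} + \nabla\cdot(\rho\,c_p T\,\mathbf{u}) = \nabla\cdot(k\,\nabla T) + q''', \]

where \(\mathbf{u}\) is velocity, \(\boldsymbol{\tau}\) the viscous stress tensor, and \(q'''\) a volumetric heat source.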

McCann, R.A.

1987-04-01T23:59:59.000Z

Note: This page contains sample records for the topic "analysis including computer" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


181

Analysis of fuel savings associated with fuel computers in multifamily buildings. Final report  

SciTech Connect

This research was undertaken to quantify the energy savings associated with the installation of a direct monitoring control system (DMC) on steam heating plants in multi-family buildings located in the New York City metropolitan area. The primary objective was to determine whether fuel consumption was lower in buildings employing a DMC relative to those using the more common indirect monitoring control system (IMC) and if so, to what extent. The analysis compares the fuel consumption of 442 buildings over 12 months. The type of control system installed in these buildings was either a Heat-Timer (identified as IMC equipment) or a computer-based unit (identified as DMC equipment). IMC provides control by running the boiler for longer or shorter periods depending on outdoor temperature. This system is termed indirect because there is no feedback from indoor (apartment) temperatures to the control. DMC provides control by sensing apartment temperatures. In a typical multifamily building, sensors are hard-wired in between 5 and 10 apartments. The annual savings and simple payback were computed for the DMC buildings by comparing annual fuel consumption among the building groupings. The comparison is based on mean BTUs per degree day consumed annually and normalized for building characteristics, such as equipment maintenance and boiler steady-state efficiency, as well as weather conditions. The average annual energy consumption for the DMC buildings was 14.1 percent less than the annual energy consumption for the IMC buildings. This represents 3,826 gallons of No. 6 fuel oil or $2,295 at a price of $0.60 per gallon. A base DMC system costs from $8,400 to $10,000 installed, depending on the number of sensors and complexity of the system. The standard IMC system costs from $2,000 to $3,000 installed. Based on this analysis, the average simple payback is 2.9 or 4.0 years, depending on whether the system is an upgrade from IMC to DMC (4.0 years) or a new installation (2.9 years).
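The payback figures quoted above can be reproduced directly; the midpoint installed costs used below ($9,200 for DMC, $2,500 for IMC) are an assumption of this sketch, taken as the midpoints of the quoted ranges.

    # Figures from the study: 14.1% savings = 3,826 gal/yr of No. 6 oil at $0.60/gal
    annual_savings = 3826 * 0.60            # ~= $2,295 per year

    dmc_installed = 9200                    # midpoint of $8,400-$10,000 (assumed)
    imc_installed = 2500                    # midpoint of $2,000-$3,000 (assumed)

    # New installation: only the DMC premium over an IMC is attributed to the DMC
    print((dmc_installed - imc_installed) / annual_savings)  # ~= 2.9 years
    # Upgrade of an existing IMC building: the full DMC cost applies
    print(dmc_installed / annual_savings)                    # ~= 4.0 years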

McNamara, M.; Anderson, J.; Huggins, E. [EME Group, New York, NY (US)]

1993-06-01T23:59:59.000Z

182

Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis  

Science Conference Proceedings (OSTI)

Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure; as long as a single VM is running, it can make progress, while as soon as one MPI node fails the whole analysis job fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.

Hasenkamp, Daren; Sim, Alexander; Wehner, Michael; Wu, Kesheng

2010-09-30T23:59:59.000Z

183

THE GREEN BANK TELESCOPE 350 MHz DRIFT-SCAN SURVEY II: DATA ANALYSIS AND THE TIMING OF 10 NEW PULSARS, INCLUDING A RELATIVISTIC BINARY  

SciTech Connect

We have completed a 350 MHz Drift-scan Survey using the Robert C. Byrd Green Bank Telescope with the goal of finding new radio pulsars, especially millisecond pulsars that can be timed to high precision. This survey covered approximately 10,300 deg² and all of the data have now been fully processed. We have discovered a total of 31 new pulsars, 7 of which are recycled pulsars. A companion paper by Boyles et al. describes the survey strategy, sky coverage, and instrumental setup, and presents timing solutions for the first 13 pulsars. Here we describe the data analysis pipeline, survey sensitivity, and follow-up observations of new pulsars, and present timing solutions for 10 other pulsars. We highlight several sources: two interesting nulling pulsars, an isolated millisecond pulsar with a measurement of proper motion, and a partially recycled pulsar, PSR J0348+0432, which has a white dwarf companion in a relativistic orbit. PSR J0348+0432 will enable unprecedented tests of theories of gravity.

Lynch, Ryan S.; Kaspi, Victoria M.; Archibald, Anne M.; Karako-Argaman, Chen [Department of Physics, McGill University, 3600 University Street, Montreal, QC H3A 2T8 (Canada)]; Boyles, Jason; Lorimer, Duncan R.; McLaughlin, Maura A.; Cardoso, Rogerio F. [Department of Physics, West Virginia University, 111 White Hall, Morgantown, WV 26506 (United States)]; Ransom, Scott M. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States)]; Stairs, Ingrid H.; Berndsen, Aaron; Cherry, Angus; McPhee, Christie A. [Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 (Canada)]; Hessels, Jason W. T.; Kondratiev, Vladislav I.; Van Leeuwen, Joeri [ASTRON, The Netherlands Institute for Radio Astronomy, Postbus 2, 7990-AA Dwingeloo (Netherlands)]; Epstein, Courtney R. [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States)]; Pennucci, Tim [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904 (United States)]; Roberts, Mallory S. E. [Eureka Scientific Inc., 2452 Delmer Street, Suite 100, Oakland, CA 94602 (United States)]; Stovall, Kevin, E-mail: rlynch@physics.mcgill.ca [Center for Advanced Radio Astronomy and Department of Physics and Astronomy, University of Texas at Brownsville, Brownsville, TX 78520 (United States)]

2013-02-15T23:59:59.000Z

184

User manual for INVICE 0.1-beta : a computer code for inverse analysis of isentropic compression experiments.  

Science Conference Proceedings (OSTI)

INVICE (INVerse analysis of Isentropic Compression Experiments) is a FORTRAN computer code that implements the inverse finite-difference method to analyze velocity data from isentropic compression experiments. This report gives a brief description of the methods used and the options available in the first beta version of the code, as well as instructions for using the code.

Davis, Jean-Paul

2005-03-01T23:59:59.000Z

185

High Performance Computing in the U.S. 1995- An Analysis on the Basis of the TOP500 List  

Science Conference Proceedings (OSTI)

In 1993, a list of the top 500 supercomputer sites worldwide was made available for the first time. The TOP500 list allows a much more detailed and well-founded analysis of the state of high-performance computing. Previously, data such as the number ...

Jack J. Dongarra; Horst D. Simon

1995-11-01T23:59:59.000Z

186

Computer-assisted comparison of analysis and test results in transportation experiments  

SciTech Connect

As a part of its ongoing research efforts, Sandia National Laboratories' Transportation Surety Center investigates the integrity of various containment methods for hazardous materials transport, subject to anomalous structural and thermal events such as free-fall impacts, collisions, and fires in both open and confined areas. Since it is not possible to conduct field experiments for every set of possible conditions under which an actual transportation accident might occur, accurate modeling methods must be developed which will yield reliable simulations of the effects of accident events under various scenarios. This requires computer software which is capable of assimilating and processing data from experiments performed as benchmarks, as well as data obtained from numerical models that simulate the experiment. Software tools which can present all of these results in a meaningful and useful way to the analyst are a critical aspect of this process. The purpose of this work is to provide software resources on a long term basis, and to ensure that the data visualization capabilities of the Center keep pace with advancing technology. This will provide leverage for its modeling and analysis abilities in a rapidly evolving hardware/software environment.

Knight, R.D. [Gram, Inc., Albuquerque, NM (United States)]; Ammerman, D.J.; Koski, J.A. [Sandia National Labs., Albuquerque, NM (United States)]

1998-05-10T23:59:59.000Z

187

Computational Analysis of an Evolutionarily Conserved VertebrateMuscle Alternative Splicing Program  

SciTech Connect

A novel exon microarray format that probes gene expression with single exon resolution was employed to elucidate critical features of a vertebrate muscle alternative splicing program. A dataset of 56 microarray-defined, muscle-enriched exons and their flanking introns were examined computationally in order to investigate coordination of the muscle splicing program. Candidate intron regulatory motifs were required to meet several stringent criteria: significant over-representation near muscle-enriched exons, correlation with muscle expression, and phylogenetic conservation among genomes of several vertebrate orders. Three classes of regulatory motifs were identified in the proximal downstream intron, within 200nt of the target exons: UGCAUG, a specific binding site for Fox-1 related splicing factors; ACUAAC, a novel branchpoint-like element; and UG-/UGC-rich elements characteristic of binding sites for CELF splicing factors. UGCAUG was remarkably enriched, being present in nearly one-half of all cases. These studies suggest that Fox and CELF splicing factors play a major role in enforcing the muscle-specific alternative splicing program, facilitating expression of a set of unique isoforms of cytoskeletal proteins that are critical to muscle cell differentiation. Supplementary materials: There are four supplementary tables and one supplementary figure. The tables provide additional detailed information concerning the muscle-enriched datasets, and about over-represented oligonucleotide sequences in the flanking introns. The supplementary figure shows RT-PCR data confirming the muscle-enriched expression of exons predicted from the microarray analysis.

Das, Debopriya; Clark, Tyson A.; Schweitzer, Anthony; Marr, Henry; Yamamoto, Miki L.; Parra, Marilyn K.; Arribere, Josh; Minovitsky, Simon; Dubchak, Inna; Blume, John E.; Conboy, John G.

2006-06-15T23:59:59.000Z

188

Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors  

SciTech Connect

An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

John D. Bess; Keith C. Bledsoe; Bradley T. Rearden

2011-02-01T23:59:59.000Z

189

Radiological Safety Analysis Computer (RSAC) Program Version 7.0 Users’ Manual  

Science Conference Proceedings (OSTI)

The Radiological Safety Analysis Computer (RSAC) Program Version 7.0 (RSAC-7) is the newest version of the RSAC legacy code. It calculates the consequences of a release of radionuclides to the atmosphere. A user can generate a fission product inventory from either reactor operating history or a nuclear criticality event. RSAC-7 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates the decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated for inhalation, air immersion, ground surface, ingestion, and cloud gamma pathways. RSAC-7 can be used as a tool to evaluate accident conditions in emergency response scenarios, radiological sabotage events and to evaluate safety basis accident consequences. This users’ manual contains the mathematical models and operating instructions for RSAC-7. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-7. This program was designed for users who are familiar with radiological dose assessment methods.
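As a hedged illustration of the decay-and-ingrowth bookkeeping described above, the sketch below evaluates the classic two-member Bateman solution for a single parent-daughter pair; the nuclide data are hypothetical and the code is not part of RSAC.

    import math

    def decay_ingrowth(n_parent0, lam1, lam2, t):
        """Parent/daughter atoms after time t (daughter initially absent, lam1 != lam2)."""
        n_parent = n_parent0 * math.exp(-lam1 * t)
        n_daughter = (n_parent0 * lam1 / (lam2 - lam1)
                      * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))
        return n_parent, n_daughter

    lam = lambda half_life: math.log(2) / half_life
    # Hypothetical pair: 8-day parent, 2-day daughter, 1e20 atoms, t = 4 days
    print(decay_ingrowth(1e20, lam(8.0), lam(2.0), 4.0))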

Dr. Bradley J Schrader

2009-03-01T23:59:59.000Z

190

Radiological Safety Analysis Computer (RSAC) Program Version 7.2 Users’ Manual  

SciTech Connect

The Radiological Safety Analysis Computer (RSAC) Program Version 7.2 (RSAC-7) is the newest version of the RSAC legacy code. It calculates the consequences of a release of radionuclides to the atmosphere. A user can generate a fission product inventory from either reactor operating history or a nuclear criticality event. RSAC-7 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates the decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated for inhalation, air immersion, ground surface, ingestion, and cloud gamma pathways. RSAC-7 can be used as a tool to evaluate accident conditions in emergency response scenarios, radiological sabotage events and to evaluate safety basis accident consequences. This users’ manual contains the mathematical models and operating instructions for RSAC-7. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-7. This program was designed for users who are familiar with radiological dose assessment methods.

Dr. Bradley J Schrader

2010-10-01T23:59:59.000Z

191

User manual for GEOCITY: a computer model for geothermal district heating cost analysis  

DOE Green Energy (OSTI)

A computer model called GEOCITY has been developed to systematically calculate the potential cost of district heating using hydrothermal geothermal resources. GEOCITY combines climate, demographic factors, and heat demand of the city, resource conditions, well drilling costs, design of the distribution system, tax rates, and financial factors into one systematic model. The GEOCITY program provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heat from a geothermal resource. Both the geothermal reservoir and distribution system are simulated to model the complete district heating system. GEOCITY consists of two major parts: the geothermal reservoir submodel and the distribution submodel. The reservoir submodel calculates the unit cost of energy by simulating the exploration, development, and operation of a geothermal reservoir and the transmission of this energy to a distribution center. The distribution submodel calculates the unit cost of heat by simulating the design and operation of a district heating distribution system. GEOCITY calculates the unit cost of energy and the unit cost of heat for the district heating system based on the principle that the present worth of the revenues will be equal to the present worth of the expenses including investment return over the economic life of the distribution system.
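The pricing principle stated above can be written compactly; the notation here is introduced for illustration and is not the manual's. With initial investment C_0, year-t revenues R_t and expenses E_t, discount rate i, and economic life N, the unit cost is the price that satisfies

\[ \sum_{t=1}^{N} \frac{R_t}{(1+i)^{t}} \;=\; C_0 + \sum_{t=1}^{N} \frac{E_t}{(1+i)^{t}}, \]

with R_t equal to the unit cost times the energy delivered in year t.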

Huber, H.D.; McDonald, C.L.; Bloomster, C.H.; Schulte, S.C.

1978-10-01T23:59:59.000Z

192

User manual for GEOCOST: a computer model for geothermal cost analysis. Volume 2. Binary cycle version  

DOE Green Energy (OSTI)

A computer model called GEOCOST has been developed to simulate the production of electricity from geothermal resources and calculate the potential costs of geothermal power. GEOCOST combines resource characteristics, power recovery technology, tax rates, and financial factors into one systematic model and provides the flexibility to individually or collectively evaluate their impacts on the cost of geothermal power. Both the geothermal reservoir and power plant are simulated to model the complete energy production system. In the version of GEOCOST in this report, geothermal fluid is supplied from wells distributed throughout a hydrothermal reservoir through insulated pipelines to a binary power plant. The power plant is simulated using a binary fluid cycle in which the geothermal fluid is passed through a series of heat exchangers. The thermodynamic state points in basic subcritical and supercritical Rankine cycles are calculated for a variety of working fluids. Working fluids which are now in the model include isobutane, n-butane, R-11, R-12, R-22, R-113, R-114, and ammonia. Thermodynamic properties of the working fluids at the state points are calculated using empirical equations of state. The Starling equation of state is used for hydrocarbons and the Martin-Hou equation of state is used for fluorocarbons and ammonia. Physical properties of working fluids at the state points are calculated.
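To make the state-point bookkeeping concrete, the following first-law sketch computes the thermal efficiency of a simple binary Rankine cycle from four state-point enthalpies; the numbers are hypothetical illustration values, not outputs of GEOCOST or its equations of state.

    # State-point enthalpies (kJ/kg) for a simple subcritical cycle (hypothetical)
    h1 = 270.0   # saturated liquid leaving the condenser
    h2 = 276.0   # compressed liquid after the feed pump
    h3 = 700.0   # vapor at the turbine inlet, after the heat exchangers
    h4 = 640.0   # turbine exhaust

    w_turbine = h3 - h4          # specific turbine work
    w_pump = h2 - h1             # specific pump work
    q_in = h3 - h2               # heat added from the geothermal fluid

    eta = (w_turbine - w_pump) / q_in
    print(f"cycle thermal efficiency ~= {eta:.1%}")   # ~= 12.7%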

Huber, H.D.; Walter, R.A.; Bloomster, C.H.

1976-03-01T23:59:59.000Z

193

A versatile computer model for the design and analysis of electric and hybrid vehicles  

E-Print Network (OSTI)

The primary purpose of the work reported in this thesis was to develop a versatile computer model to facilitate the design and analysis of hybrid vehicle drive-trains. A hybrid vehicle is one in which power for propulsion comes from two distinct sources, usually an internal combustion engine and an electric motor. Because of the design flexibility inherent in a propulsion system that has more than one source of energy, computer modeling is necessary to identify which parameters are mainly responsible for the performance of the power-plant and to determine which designs are most viable. The modeling system described in this thesis was developed to accommodate a wide range of vehicle components and modeling techniques. The modeling framework to which the drive-train component models are attached emphasizes the functional role of components and not their implementation. This creates a uniform component interface which limits access to the inner workings of a component model and improves compatibility between various types of models. Conceptual levels of abstraction are identified in this thesis which can be used to organize information in a hybrid vehicle model. By incorporating these levels into the modeling system, the tasks associated with creating a hybrid vehicle are separated, allowing the designer to focus on one aspect at a time. The modeling of the various levels occurs at independent locations in the model and the interfaces between the conceptual levels are defined so that changing the implementation of a particular level does not affect its interaction with other levels. A simulation study is then detailed to show how the model can be used to create and analyze hybrid vehicle designs. The study focuses on two control algorithms which implement a sustainable, electrically-peaking, parallel hybrid design. The first algorithm reduces fuel consumption by minimizing the amount of time that the internal combustion engine is operated. The second algorithm reduces the load on the electric motor by operating the internal combustion engine over its entire speed range. The simulation results indicate that both algorithms can successfully maintain the battery state of charge over the given drive-cycle. Finally, conclusions about the model and recommendations for future studies are discussed.
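A minimal sketch of the first control strategy described above (minimize engine on-time in an electrically-peaking parallel hybrid); the thresholds, ratings, and function names are hypothetical, not taken from the thesis.

    def split_power(demand_kw, soc, engine_on,
                    soc_low=0.55, soc_high=0.65, engine_rated_kw=40.0):
        """Return (engine_kw, motor_kw, engine_on) for one control step."""
        # Hysteresis on battery state of charge keeps the engine from
        # cycling rapidly: start below soc_low, stop above soc_high.
        if soc < soc_low:
            engine_on = True
        elif soc > soc_high:
            engine_on = False
        engine_kw = min(engine_rated_kw, demand_kw) if engine_on else 0.0
        motor_kw = demand_kw - engine_kw   # the electric motor "peaks" the load
        return engine_kw, motor_kw, engine_on

Each simulation step would feed the returned engine_on flag back in and integrate motor_kw against a battery model to update the state of charge.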

Stevens, Kenneth Michael

1996-01-01T23:59:59.000Z

194

Visual Analysis of I/O System Behavior for HighEnd Computing  

E-Print Network (OSTI)

at the Argonne Leadership Computing Facility (ALCF). On the ALCF systems, we use the 40-rack Intrepid Blue Gene network. When tracing applications in the ALCF environment, we set up a temporary PVFS2 storage cluster by the ALCF. The extra compute nodes we allocate for the I/O software are accessible only by our application

Islam, M. Saif

195

Application of soft computing models to hourly weather analysis in southern Saskatchewan, Canada  

Science Conference Proceedings (OSTI)

Accurate weather forecasts are necessary for planning our day-to-day activities. However, the dynamic behavior of weather makes forecasting a formidable challenge. This study presents a soft computing model based on a radial basis function network (RBFN) ... Keywords: Artificial neural networks, Decision support, Forecasting, Modeling, Simulation, Soft computing, Weather

Imran Maqsood; Muhammad Riaz Khan; Guo H. Huang; Rifaat Abdalla

2005-02-01T23:59:59.000Z

196

COMPUTATIONAL SCIENCE CENTER  

SciTech Connect

Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long-range plans to provide leadership-class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.

DAVENPORT, J.

2006-11-01T23:59:59.000Z

197

Analysis, Tuning and Comparison of Two General Sparse Solvers for Distributed Memory Computers  

E-Print Network (OSTI)

We describe the work performed in the context of a Franco-Berkeley funded project between NERSC-LBNL located in Berkeley (USA) and CERFACS-ENSEEIHT located in Toulouse (France). We discuss both the tuning and performance analysis of two distributed memory sparse solvers (SuperLU from Berkeley and MUMPS from Toulouse) on the 512 processor Cray T3E from NERSC (Lawrence Berkeley National Laboratory). This project gave us the opportunity to improve the algorithms and add new features to the codes. We then quite extensively analyse and compare the two approaches on a set of large problems from real applications. We further explain the main differences in the behaviour of the approaches on artificial regular grid problems. As a conclusion to this activity report, we mention a set of parallel sparse solvers on which this type of study should be extended. Keywords: sparse linear systems, distributed memory codes, multifrontal, supernodal, direct methods, comparison of codes. AMS(MOS) subject classifications: 65F05, 65F50.

Patrick R. Amestoy; Iain S. Duff; Jean-Yves L' Excellent; Xiaoye S. Li

2000-01-01T23:59:59.000Z

198

TEMPEST: A three-dimensional time-dependent computer program for hydrothermal analysis: Volume 1, Numerical methods and input instructions  

SciTech Connect

This document describes the numerical methods, current capabilities, and the use of the TEMPEST (Version L, MOD 2) computer program. TEMPEST is a transient, three-dimensional, hydrothermal computer program that is designed to analyze a broad range of coupled fluid dynamic and heat transfer systems of particular interest to the Fast Breeder Reactor thermal-hydraulic design community. The full three-dimensional, time-dependent equations of motion, continuity, and heat transport are solved for either laminar or turbulent fluid flow, including heat diffusion and generation in both solid and liquid materials. 10 refs., 22 figs., 2 tabs.

Trent, D.S.; Eyler, L.L.; Budden, M.J.

1983-09-01T23:59:59.000Z

199

Computational Biology | More Science | ORNL  

NLE Websites -- All DOE Office Websites (Extended Search)

Computational Biology research encompasses many important aspects including molecular biophysics for bio-energy, genetic level...

200

Information regarding previous INCITE awards including selected highlights  

Office of Science (SC) Website

Information regarding previous INCITE awards, including selected highlights, from the Advanced Scientific Computing Research (ASCR) program, U.S. Department of Energy.


201

Use of a Real-Time Computer Graphics System in Analysis and Forecasting  

Science Conference Proceedings (OSTI)

Real-time computer graphics systems are being introduced into weather stations throughout the United States. A sample of student forecasters used such a system to solve specific specialized forecasting problems. Results suggest that for some ...

John J. Cahir; John M. Norman; Dale A. Lowry

1981-03-01T23:59:59.000Z

202

Analysis of on-premise to cloud computing migration strategies for enterprises  

E-Print Network (OSTI)

In recent years, offerings and maturity in the cloud computing space have gained significant momentum. CIOs are looking at the cloud seriously because of bottom-line savings and scalability advantages. According to Gartner's survey ...

Dhiman, Ashok

2011-01-01T23:59:59.000Z

203

A neuro-computational intelligence analysis of the ecological footprint of nations  

Science Conference Proceedings (OSTI)

The per capita ecological footprint (EF) is one of the most-widely recognized measures of environmental sustainability. It seeks to quantify the Earth's biological capacity required to support human activity. This study uses three neuro-computational ...

Mohamed M. Mostafa; Rajan Nataraajan

2009-07-01T23:59:59.000Z

204

Computer Science & Computer Engineering  

E-Print Network (OSTI)

Computer scientists and computer engineers design and implement efficient software and hardware solutions to computer-solvable problems. They are involved in virtual reality and robotics. Within the Computer Science department, we offer four exciting majors from

Rohs, Remo

205

Computer resources Computer resources  

E-Print Network (OSTI)

Computer resources available to the LEAD group. Cédric David, 30 September 2009. Outline: UT computer resources and services; JSG computer resources and services; LEAD computers. UT Austin services: UT EID and Password (https://utdirect.utexas.edu).

Yang, Zong-Liang

206

Solar energy conversion systems engineering and economic analysis radiative energy input/thermal electric output computation. Volume III  

DOE Green Energy (OSTI)

This report presents the direct energy flux analytical model and an analysis of its results, a brief description of a non-steady-state model of a thermal solar energy conversion system implemented in the SIRR2 code, and the coupling of CIRR2, which computes global solar flux on a collector, with SIRR2. It is shown how the CIRR2 and, mainly, the SIRR2 codes may be used for a proper design of a solar collector system. (LEW)

Russo, G.

1982-09-01T23:59:59.000Z

207

BPO crude oil analysis data base user's guide: Methods, publications, computer access correlations, uses, availability  

SciTech Connect

The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.

Sellers, C.; Fox, B.; Paulz, J.

1996-03-01T23:59:59.000Z

208

Towards Real-Time High Performance Computing For Power Grid Analysis  

SciTech Connect

Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems, and indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance, cluster-computing systems, remains relatively unexplored. Imposing real-time constraints on a parallel (cluster) computing environment introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we give a motivating example to demonstrate the need for such a system (an application to estimate the electromechanical states of the power grid), and we introduce a formal method for performing verification of certain temporal properties within a system of parallel processes. We describe our work towards a full real-time implementation of the target application: our progress towards extracting a key mathematical kernel from the application, the formal process by which we analyze the intricate timing behavior of the processes on the cluster, as well as timing measurements taken on our test cluster to demonstrate use of these concepts.

Hui, Peter SY; Lee, Barry; Chikkagoudar, Satish

2012-11-16T23:59:59.000Z

209

Computational analysis of heat and water transfer in a PEM fuel cell  

Science Conference Proceedings (OSTI)

Proton exchange membrane (PEM) fuel cells are promising power-generation sources for mobile and stationary applications. In this paper a non-isothermal, single-domain and two-dimensional computational fluid dynamics model is presented to investigate ... Keywords: CFD, PEM fuel cell, heat, non-isothermal, single-domain

Ebrahim Afshari; Seyed Ali Jazayeri

2008-02-01T23:59:59.000Z

210

Analysis of LMFBR primary system response to an HCDA using an Eulerian computer code  

SciTech Connect

Applications of an Eulerian code to predict the response of LMFBR containment and primary piping systems to hypothetical core disruptive accidents (HCDA), and to analyze sodium spillage problems, are described. The computer code is an expanded version of the ICECO code. Sample problems are presented for slug impact and sodium spillage, dynamics of the HCDA bubbles, and response of a piping loop. (JWR)

Chang, Y.W.; Wang, C.Y.; Chu, H.Y.; Abdel-Moneim, M.T.; Gvildys, J.

1975-01-01T23:59:59.000Z

211

integrating Data for Analysis, Anonymization and Sharing: A National Center for Biomedical Computing  

E-Print Network (OSTI)

Despite its young age, iDASH has already developed different models, tools and infrastructure for data sharing; resources and infrastructure need to be coordinated to ensure efficient utilization. iDASH is a National Center for Biomedical Computing, funded in late 2010. Its goal is to develop infrastructure, services, and tools to allow ...

Bandettini, Peter A.

212

A neuro-computational intelligence analysis of the global consumer software piracy rates  

Science Conference Proceedings (OSTI)

Software piracy represents a major damage to the moral fabric associated with the respect of intellectual property. The rate of software piracy appears to be increasing globally, suggesting that additional research that uses new approaches is necessary ... Keywords: Bayesian regression, Ethical behavior, Evolutionary computation models, Global software piracy, Neural networks

Mohamed M. Mostafa

2011-07-01T23:59:59.000Z

213

Visual Analysis of I/O System Behavior for High-End Computing  

E-Print Network (OSTI)

Acknowledgments fragment: the authors thank colleagues (including Robert Wisniewski) for advice on the Blue Gene hardware, and Argonne ALCF staff, including Timothy Williams, for advice on the ALCF early-access Blue Gene/Q machine; the work cites the Argonne Leadership Computing Facility's Intrepid Blue Gene system (http://www.alcf...).

214

PV1 : an interactive computer model to support commercialization policy for photovoltaics policy analysis  

E-Print Network (OSTI)

The purpose of this report is to demonstrate the use of PV1 as a policy analysis device. This analysis consists of the creation of a base case and the subsequent running of 50 additional cases to demonstrate the effects ...

Wulfe, Martin

1981-01-01T23:59:59.000Z

215

High Performance Computing in the U.S. in 1995 -- An Analysis on the Basis of the TOP500 List  

E-Print Network (OSTI)

In 1993, for the first time, a list of the top 500 supercomputer sites worldwide was made available. The TOP500 list allows a much more detailed and well-founded analysis of the state of high performance computing. Previously, data such as the number and geographical distribution of supercomputer installations were difficult to obtain, and only a few analysts undertook the effort to track the press releases by dozens of vendors. With the TOP500 report now generally and easily available, it is possible to present an analysis of the state of High Performance Computing (HPC) in the U.S. This note summarizes some of the most important observations about HPC in the U.S. as of late 1995, in particular the continued dominance of the world market in HPC by the U.S., the market penetration by commodity microprocessor based systems, and the growing industrial use of supercomputers. The rapid transformation of the high performance computing market in the U.S., which began in 1994, ...

Jack Dongarra; Horst D. Simon

1996-01-01T23:59:59.000Z

216

High Performance Computing @ Fermilab  

NLE Websites -- All DOE Office Websites (Extended Search)

Fermilab computing supports simulations and research and development of physics analysis software, spanning data handling and storage, networking, and analysis software for the LHC CMS experiment, one of many experiments ...

217

Components of disaster-tolerant computing: analysis of disaster recovery, IT application downtime and executive visibility  

Science Conference Proceedings (OSTI)

This paper provides a review of disaster-tolerant Information Technology (IT). The state of traditional disaster recovery approaches is outlined. The risks of IT application downtime attributable to the increasing dependence on critical information ... Keywords: IT application availability, IT application downtime, business continuity, complex infrastructure systems, criticality-driven, disaster recovery, disaster tolerance, disaster-tolerant computing, emergency management, executive visibility, information technology, interaction, interdependent, survivability

Chad M. Lawler; Michael A. Harper; Stephen A. Szygenda; Mitchell A. Thornton

2008-02-01T23:59:59.000Z

218

Computer hardware fault administration  

DOE Patents (OSTI)

Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
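The mechanism is easier to see in miniature. The sketch below (hypothetical names, not the patented implementation) finds a path that avoids the identified defective link in the first network and falls back to the independent second network when none exists.

    from collections import deque

    def shortest_path(links, src, dst):
        """Breadth-first search over an undirected set of links {(a, b), ...}."""
        adj = {}
        for a, b in links:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
        prev, seen, q = {}, {src}, deque([src])
        while q:
            node = q.popleft()
            if node == dst:                 # reconstruct the path back to src
                path = [dst]
                while path[-1] != src:
                    path.append(prev[path[-1]])
                return path[::-1]
            for nxt in adj.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    prev[nxt] = node
                    q.append(nxt)
        return None

    def route_around_fault(net_a, net_b, defective, src, dst):
        """Route on network A without the defective link, else via network B."""
        path = shortest_path(net_a - {defective}, src, dst)
        return path or shortest_path(net_b, src, dst)

    net_a = {(0, 1), (1, 2), (2, 3)}   # first data communications network
    net_b = {(0, 2), (2, 3)}           # independent second network
    print(route_around_fault(net_a, net_b, defective=(1, 2), src=0, dst=3))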

Archer, Charles J. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

2010-09-14T23:59:59.000Z

219

Computers and Computer Networks  

NLE Websites -- All DOE Office Websites (Extended Search)

A directory of computing resources and documentation: computer science research and services at the Lab; supercomputing; computer graphics; Perl and UNIX documentation; shareware sites; MBONE and videoconferencing; computer networks and related documentation; the World Wide Web; TeX and LaTeX FAQs, documents, and archives; MacInTouch (current Macintosh information); The Free On-line Dictionary of Computing; PDS (the Performance Database Server of computer benchmarks); and Usenet newsgroups and mailing-list software and information.

220

An Analysis Framework for Investigating the Trade-offs Between System Performance and Energy Consumption in a Heterogeneous Computing Environment  

SciTech Connect

Rising costs of energy consumption and an ongoing effort for increases in computing performance are leading to a significant need for energy-efficient computing. Before systems such as supercomputers, servers, and datacenters can begin operating in an energy-efficient manner, the energy consumption and performance characteristics of the system must be analyzed. In this paper, we provide an analysis framework that will allow a system administrator to investigate the tradeoffs between system energy consumption and utility earned by a system (as a measure of system performance). We model these trade-offs as a bi-objective resource allocation problem. We use a popular multi-objective genetic algorithm to construct Pareto fronts to illustrate how different resource allocations can cause a system to consume significantly different amounts of energy and earn different amounts of utility. We demonstrate our analysis framework using real data collected from online benchmarks, and further provide a method to create larger data sets that exhibit similar heterogeneity characteristics to real data sets. This analysis framework can provide system administrators with insight to make intelligent scheduling decisions based on the energy and utility needs of their systems.
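The bi-objective view taken here can be illustrated in a few lines. The sketch below filters a set of hypothetical (energy, utility) resource allocations down to their Pareto front (minimize energy, maximize utility); it is a generic dominance filter, not the paper's genetic algorithm, and the numbers are invented.

    def pareto_front(points):
        """Keep allocations not dominated by any other.

        points: iterable of (energy, utility); lower energy and
        higher utility are both better.
        """
        front = []
        for e, u in points:
            dominated = any(e2 <= e and u2 >= u and (e2, u2) != (e, u)
                            for e2, u2 in points)
            if not dominated:
                front.append((e, u))
        return sorted(front)

    allocations = [(120, 7.0), (150, 9.5), (100, 6.0), (150, 8.0), (90, 6.0)]
    print(pareto_front(allocations))   # -> [(90, 6.0), (120, 7.0), (150, 9.5)]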

Friese, Ryan [Colorado State University, Fort Collins; Khemka, Bhavesh [Colorado State University, Fort Collins; Maciejewski, Anthony A [Colorado State University, Fort Collins; Siegel, Howard Jay [Colorado State University, Fort Collins; Koenig, Gregory A [ORNL; Powers, Sarah S [ORNL; Hilton, Marcia M [ORNL; Rambharos, Rajendra [ORNL; Okonski, Gene D [ORNL; Poole, Stephen W [ORNL

2013-01-01T23:59:59.000Z



221

Development of a computer-aided fault tree synthesis methodology for quantitative risk analysis in the chemical process industry  

E-Print Network (OSTI)

There has been growing public concern regarding the threat to people and environment from industrial activities, thus more rigorous regulations. The investigation of almost all the major accidents shows that we could have avoided those tragedies with effective risk analysis and safety management programs. High-quality risk analysis is absolutely necessary for sustainable development. As a powerful and systematic tool, fault tree analysis (FTA) has been adapted to the particular need of chemical process quantitative risk analysis (CPQRA) and found great applications. However, the application of FTA in the chemical process industry (CPI) is limited. One major barrier is the manual synthesis of fault trees. It requires a thorough understanding of the process and is vulnerable to individual subjectivity. The quality of FTA can be highly subjective and variable. The availability of a computer-based FTA methodology will greatly benefit the CPI. The primary objective of this research is to develop a computer-aided fault tree synthesis methodology for CPQRA. The central idea is to capture the cause-and-effect logic around each item of equipment directly into mini fault trees. Special fault tree models have been developed to manage special features. Fault trees created by this method are expected to be concise. A prototype computer program is provided to illustrate the methodology. Ideally, FTA can be standardized through a computer package that reads information contained in process block diagrams and provides automatic aids to assist engineers in generating and analyzing fault trees. Another important issue with regard to QRA is the large uncertainty associated with available failure rate data. In the CPI, the ranges of failure rates observed could be quite wide. Traditional reliability studies using point values of failure rates may result in misleading conclusions. This dissertation discusses the uncertainty with failure rate data and proposes a procedure to deal with data uncertainty in determining safety integrity level (SIL) for a safety instrumented system (SIS). Efforts must be carried out to obtain more accurate values of those data that might actually impact the estimation of SIL. This procedure guides process hazard analysts toward a more accurate SIL estimation and avoids misleading results due to data uncertainty.
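The synthesis method itself is the dissertation's contribution; as background, the sketch below shows the kind of quantitative evaluation a finished fault tree supports: top-event probability from basic-event probabilities under an independence assumption. The tree structure and failure rates are made up for illustration.

    from math import prod

    def p_and(children):  # all children must fail
        return prod(children)

    def p_or(children):   # any child failing suffices (independent events)
        return 1.0 - prod(1.0 - p for p in children)

    def evaluate(node, basic):
        """node: basic-event name (str) or (gate, [subtrees]), gate in {'AND', 'OR'}."""
        if isinstance(node, str):
            return basic[node]
        gate, subtrees = node
        probs = [evaluate(t, basic) for t in subtrees]
        return p_and(probs) if gate == 'AND' else p_or(probs)

    # Top event: pump fails AND (valve stuck OR sensor fault) -- a toy tree
    tree = ('AND', ['pump_fail', ('OR', ['valve_stuck', 'sensor_fault'])])
    basic = {'pump_fail': 1e-3, 'valve_stuck': 5e-4, 'sensor_fault': 2e-4}
    print(evaluate(tree, basic))   # ~ 7.0e-7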

Wang, Yanjun

2004-12-01T23:59:59.000Z

222

Comments on the use of computer models for merger analysis in the electricity industry  

E-Print Network (OSTI)

The ability to profitably pursue such a strategy is the primary concern of market power analysis. Models designed to aid in the analysis of market power must be able to incorporate strategic firm behavior, as well as factors on which information is available in the electricity industry.

California at Berkeley. University of

223

Sieveless particle size distribution analysis of particulate materials through computer vision  

Science Conference Proceedings (OSTI)

This paper explores the inconsistency of "length-based separation" by mechanical sieving of particulate materials with standard sieves, which is the standard method of particle size distribution (PSD) analysis. We observed inconsistencies of length-based ... Keywords: Biomass sieve analysis, Dimension, Image processing, ImageJ plugin, Particle size distribution, Physical property

C. Igathinathane; L. O. Pordesimo; E. P. Columbus; W. D. Batchelor; S. Sokhansanj

2009-05-01T23:59:59.000Z

224

Visual analysis of I/O system behavior for high-end computing  

Science Conference Proceedings (OSTI)

As supercomputers grow ever larger, so too do application run times and data requirements. The operational patterns of modern parallel I/O systems are far too complex to allow for a direct analysis of their trace logs. Several visualization methods have ... Keywords: information visualization, parallel I/O, performance analysis tools

Christopher Muelder; Carmen Sigovan; Kwan-Liu Ma; Jason Cope; Sam Lang; Kamil Iskra; Pete Beckman; Robert Ross

2011-06-01T23:59:59.000Z

225

Computer analysis of the sensitivity of the integrated assessment model MERGE-5I  

Science Conference Proceedings (OSTI)

This paper reports on an application of a large-scale non-linear optimization model to the analysis of environmental problems. The authors present first results of the initial stage of the study, a sensitivity analysis of the model's input parameters. ...

Vyacheslav Maksimov; Leo Schrattenholzer; Yaroslav Minullin

2005-09-01T23:59:59.000Z

226

Quantum Computing for Computer Scientists  

E-Print Network (OSTI)

Front matter and contents fragment: Quantum Computing for Computer Scientists, Noson S. Yanofsky and Mirco A. Mannucci (© May 2007). Contents include: ... of Vector Spaces; 3. The Leap from Classical to Quantum; 3.1 Classical Deterministic Systems; 3.2 Classical ...

Yanofsky, Noson S.

227

CORCON-MOD3: An integrated computer model for analysis of molten core-concrete interactions. User's manual  

SciTech Connect

The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.

Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O. [Sandia National Labs., Albuquerque, NM (United States)

1993-10-01T23:59:59.000Z

228

Future computing needs for Fermilab  

SciTech Connect

The following recommendations are made: (1) Significant additional computing capacity and capability beyond the present procurement should be provided by 1986. A working group with representation from the principal computer user community should be formed to begin immediately to develop the technical specifications. High priority should be assigned to providing a large user memory, software portability and a productive computing environment. (2) A networked system of VAX-equivalent super-mini computers should be established with at least one such computer dedicated to each reasonably large experiment for both online and offline analysis. The laboratory staff responsible for mini computers should be augmented in order to handle the additional work of establishing, maintaining and coordinating this system. (3) The laboratory should move decisively to a more fully interactive environment. (4) A plan for networking both inside and outside the laboratory should be developed over the next year. (5) The laboratory resources devoted to computing, including manpower, should be increased over the next two to five years. A reasonable increase would be 50% over the next two years increasing thereafter to a level of about twice the present one. (6) A standing computer coordinating group, with membership of experts from all the principal computer user constituents of the laboratory, should

Not Available

1983-12-01T23:59:59.000Z

229

Analysis of operating alternatives for the Naval Computer and Telecommunications Station Cogeneration Facility at Naval Air Station North Island, San Diego, California  

SciTech Connect

The Naval Facilities Engineering Command Southwestern Division commissioned Pacific Northwest Laboratory (PNL), in support of the US Department of Energy (DOE) Federal Energy Management Program (FEMP), to determine the most cost-effective approach to the operation of the cogeneration facility in the Naval Computer and Telecommunications Station (NCTS) at the Naval Air Station North Island (NASNI). Nineteen alternative scenarios were analyzed by PNL on a life-cycle cost basis to determine whether to continue operating the cogeneration facility or convert the plant to emergency-generator status. This report provides the results of the analysis performed by PNL for the 19 alternative scenarios. A narrative description of each scenario is provided, including information on the prime mover, electrical generating efficiency, thermal recovery efficiency, operational labor, and backup energy strategy. Descriptions of the energy and energy cost analysis, operations and maintenance (O&M) costs, emissions and related costs, and implementation costs are also provided for each alternative. A summary table presents the operational cost of each scenario and presents the result of the life-cycle cost analysis.

Parker, S.A.; Carroll, D.M.; McMordie, K.L.; Brown, D.R.; Daellenbach, K.K.; Shankle, S.A.; Stucky, D.J.

1993-12-01T23:59:59.000Z

230

Computing Frontier: Distributed Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Computing Frontier: Distributed Computing and Facility Infrastructures. Conveners: Kenneth Bloom (Department of Physics and Astronomy, University of Nebraska-Lincoln) and Richard Gerber (National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory). 1.1 Introduction. The field of particle physics has become increasingly reliant on large-scale computing resources to address the challenges of analyzing large datasets, completing specialized computations and simulations, and allowing for wide-spread participation of large groups of researchers. For a variety of reasons, these resources have become more distributed over a large geographic area, and some resources are highly specialized computing machines. In this report for the Snowmass Computing Frontier Study, we consider several questions about distributed computing

231

MSc Computer Networks Analysis and Development of IT Security at TopSoft  

E-Print Network (OSTI)

I would like to thank Dr. Orhan Gemikonakli of the School of Computing Science for his kind supervision and guidance throughout this project. He was a constant inspiration throughout my stay at Middlesex for my Masters. I sincerely thank him for his considerate and caring efforts. I would like to take this opportunity to thank my parents who have supported me in every possible way throughout my studies. They have not spared any opportunity to encourage me. I have no words to express my gratitude towards them. My industrial placement provided me with my first professional experience. Mr. Bernard Parsons and Mr. Tony Jennings were really helpful and supportive. The time I have spent at TopSoft will always be a golden memory. Bernard was an inspiration while Dan (Daniel Clarke) taught me how to stay motivated and focussed as a true professional. I would also like to appreciate

Dr. Orhan Gemikonakli; Siraj Ahmed Shaikh

2001-01-01T23:59:59.000Z

232

Distributional semantics with eyes: using image analysis to improve computational representations of word meaning  

Science Conference Proceedings (OSTI)

The current trend in image analysis and multimedia is to use information extracted from text and text processing techniques to help vision-related tasks, such as automated image annotation and generating semantically rich descriptions of images. In this ... Keywords: language, object recognition, semantics, visual words

Elia Bruni; Jasper Uijlings; Marco Baroni; Nicu Sebe

2012-10-01T23:59:59.000Z

233

Geometric and biomechanical analysis for computer-aided design of assistive medical devices  

Science Conference Proceedings (OSTI)

This paper presents geometric and biomechanical analysis for designing elastic braces used to restrict the motion of injured joints. Towards the ultimate goal of the brace research, which is to design custom-made braces of the stiffness prescribed by ... Keywords: Assistive medical device design, Biomechanics, Mesh simplification, Strain energy, Strain energy density function, Surface parametrization

Taeseung D. Yoo; Eunyoung Kim; JungHyun Han; Daniel K. Bogen

2005-12-01T23:59:59.000Z

234

Math & Computational Sciences Division: High Performance Computing and Visualization  

E-Print Network (OSTI)

Math & Computational Sciences Division, High Performance Computing and Visualization. Research and Development in Visual Analysis: Judith Devaney, Terrence Griffin, John ...

Perkins, Richard A.

235

Typologies of Computation and Computational Models  

E-Print Network (OSTI)

We need a much better understanding of information processing and of computation as its primary form. Future progress on new computational devices capable of dealing with problems of big data, the internet of things, the semantic web, cognitive robotics and neuroinformatics depends on adequate models of computation. In this article we first present the current state of the art through a systematization of existing models and mechanisms, and outline a basic structural framework of computation. We argue that, defining computation as information processing and given that there is no information without (physical) representation, the dynamics of information on the fundamental level is physical/intrinsic/natural computation. As a special case, intrinsic computation is used for designed computation in computing machinery. Intrinsic natural computation occurs on a variety of levels of physical processes, including the levels of computation of living organisms (among them highly intelligent animals) as well as designed computational devices. The present article offers a typology of current models of computation and indicates future paths for the advancement of the field, both through the development of new computational models and by learning from nature how to better compute using different mechanisms of intrinsic computation.

Mark Burgin; Gordana Dodig-Crnkovic

2013-12-09T23:59:59.000Z

236

Multiscale Thermal Analysis for Nanometer-Scale ... (IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 6, June 2009, p. 860)  

E-Print Network (OSTI)

Author affiliation fragment: Department of Electrical, Computer, and Energy Engineering, University of Colorado, Boulder, CO 80309 USA. Abstract fragment: increasing power densities are making this problem more important; the method characterizes the thermal profile of an IC, implemented in software and used for the full-chip thermal analysis and temperature-dependent leakage analysis of an IC.

Dick, Robert

237

Computational Methods for Simulating Quantum Computers  

E-Print Network (OSTI)

This review gives a survey of numerical algorithms and software to simulate quantum computers. It covers the basic concepts of quantum computation and quantum algorithms and includes a few examples that illustrate the use of simulation software for ideal and physical models of quantum computers.

H. De Raedt; K. Michielsen

2004-06-27T23:59:59.000Z

238

The PCMDI visualization and computation system (VCS): A workbench for climate data display and analysis  

SciTech Connect

This software was developed by the Program for Climate Model Diagnosis and Intercomparison (PCMDI) at the Lawrence Livermore National Laboratory in Livermore, California. It was designed to provide some of the basic capabilities needed for validating, comparing, and diagnosing climate model behavior. It can be controlled either interactively, or from a script file, or control can alternate between these modes during a session. A script can be saved during an interactive session and merely replayed, or it can be edited and replayed. The state-of-the-system can be dumped, as a script, at any instant, and that script can be used later to restore that instant of the session. Attributes for data can describe variables existing in a file or variables to be computed as a function of previously selected variables. The dimensions of variables can be subset, reversed, transposed, wrapped-around, and thinned by selecting either a stride of nodes or by randomly selecting individual nodes. Grid transformations are supported by allowing a different set of dimension vectors to be specified in the dimension descriptors. A display page can be output as either Adobe PostScript for hardcopy, or as a raster image for hardcopy or animation.
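The dimension operations listed above (subsetting, reversing, transposing, thinning by stride, and picking individual nodes) map directly onto array slicing. The snippet below illustrates them with NumPy as a neutral stand-in; it does not reproduce VCS's own scripting syntax, and the array is invented.

    import numpy as np

    field = np.arange(24.0).reshape(4, 6)   # e.g. a (latitude, longitude) grid

    subset     = field[1:3, 2:5]      # select a dimension range
    reversed_  = field[::-1, :]       # reverse the first dimension
    transposed = field.T              # transpose dimensions
    thinned    = field[:, ::2]        # thin by a stride of every 2nd node
    picked     = field[:, [0, 3, 5]]  # individually selected nodes

    print(subset.shape, reversed_.shape, transposed.shape,
          thinned.shape, picked.shape)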

Williams, D.N.; Mobley, R.L.

1994-03-01T23:59:59.000Z

239

SIMMER-II: A computer program for LMFBR disrupted core analysis  

SciTech Connect

SIMMER-2 (Version 12) is a computer program to predict the coupled neutronic and fluid-dynamics behavior of liquid-metal fast reactors during core-disruptive accident transients. The modeling philosophy is based on the use of general, but approximate, physics to represent interactions of accident phenomena and regimes rather than a detailed representation of specialized situations. Reactor neutronic behavior is predicted by solving space (r,z), energy, and time-dependent neutron conservation equations (discrete ordinates transport or diffusion). The neutronics and the fluid dynamics are coupled via temperature- and background-dependent cross sections and the reactor power distribution. The fluid-dynamics calculation solves multicomponent, multiphase, multifield equations for mass, momentum, and energy conservation in (r,z) or (x,y) geometry. A structure field with nine density and five energy components; a liquid field with eight density and six energy components; and a vapor field with six density and one energy component are coupled by exchange functions representing a modified-dispersed flow regime with a zero-dimensional intra-cell structure model.
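The conservation equations are named but not written out; for orientation, the generic multifield mass balance solved by codes of this type has the form (a schematic common to multiphase codes, not SIMMER-II's exact formulation):

    \frac{\partial (\alpha_k \rho_k)}{\partial t}
      + \nabla \cdot (\alpha_k \rho_k \mathbf{u}_k) = \Gamma_k

where \alpha_k, \rho_k, and \mathbf{u}_k are the volume fraction, density, and velocity of field k, and \Gamma_k is the net mass exchanged with the other fields; analogous balances for momentum and energy are coupled through the exchange functions mentioned above.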

Bohl, W.R.; Luck, L.B.

1990-06-01T23:59:59.000Z

240

For a social network analysis of computer networks: a sociological perspective on collaborative work and virtual community  

Science Conference Proceedings (OSTI)

Keywords: computer supported cooperative work, electronic mail, informal relationships, social networks, telework

Barry Wellman

1996-04-01T23:59:59.000Z



241

Computational Study and Analysis of Structural Imperfections in 1D and 2D Photonic Crystals  

SciTech Connect

Dielectric reflectors that are periodic in one or two dimensions, also known as 1D and 2D photonic crystals, have been widely studied for many potential applications due to the presence of wavelength-tunable photonic bandgaps. However, the unique optical behavior of photonic crystals is based on theoretical models of perfect analogues. Little is known about the practical effects of dielectric imperfections on their technologically useful optical properties. In order to address this issue, a finite-difference time-domain (FDTD) code is employed to study the effect of three specific dielectric imperfections in 1D and 2D photonic crystals. The first imperfection investigated is dielectric interfacial roughness in quarter-wave tuned 1D photonic crystals at normal incidence. This study reveals that the reflectivity of some roughened photonic crystal configurations can change up to 50% at the center of the bandgap for RMS roughness values around 20% of the characteristic periodicity of the crystal. However, this reflectivity change can be mitigated by increasing the index contrast and/or the number of bilayers in the crystal. In order to explain these results, the homogenization approximation, which is usually applied to single rough surfaces, is applied to the quarter-wave stacks. The results of the homogenization approximation match the FDTD results extremely well, suggesting that the main role of the roughness features is to grade the refractive index profile of the interfaces in the photonic crystal rather than diffusely scatter the incoming light. This result also implies that the amount of incoherent reflection from the roughened quarter-wave stacks is extremely small. This is confirmed through direct extraction of the amount of incoherent power from the FDTD calculations. Further FDTD studies are done on the entire normal incidence bandgap of roughened 1D photonic crystals. These results reveal a narrowing and red-shifting of the normal incidence bandgap with increasing RMS roughness. Again, the homogenization approximation is able to predict these results. The problem of surface scratches on 1D photonic crystals is also addressed. Although the reflectivity decreases are lower in this study, up to a 15% change in reflectivity is observed in certain scratched photonic crystal structures. However, this reflectivity change can be significantly decreased by adding a low index protective coating to the surface of the photonic crystal. Again, application of homogenization theory to these structures confirms its predictive power for this type of imperfection as well. Additionally, the problem of circular pores in 2D photonic crystals is investigated, showing that almost a 50% change in reflectivity can occur for some structures. Furthermore, this study reveals trends that are consistent with the 1D simulations: parameter changes that increase the absolute reflectivity of the photonic crystal will also increase its tolerance to structural imperfections. Finally, experimental reflectance spectra from roughened 1D photonic crystals are compared to the results predicted computationally in this thesis. Both the computed and experimental spectra correlate favorably, validating the findings presented herein.
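The FDTD results above are benchmarked against ideal structures; for reference, the ideal normal-incidence reflectivity of an unroughened quarter-wave stack can be reproduced with the standard characteristic-matrix (transfer-matrix) method. The sketch below is generic thin-film optics, not the thesis code, and the indices and layer counts are invented.

    import numpy as np

    def stack_reflectivity(n0, ns, layers, lam):
        """Normal-incidence reflectivity of a 1D multilayer.

        layers: list of (refractive index, thickness); lam: vacuum wavelength.
        Characteristic-matrix method (Born & Wolf).
        """
        M = np.eye(2, dtype=complex)
        for n, d in layers:
            delta = 2.0 * np.pi * n * d / lam       # phase thickness of the layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        (m11, m12), (m21, m22) = M
        num = n0 * m11 + n0 * ns * m12 - m21 - ns * m22
        den = n0 * m11 + n0 * ns * m12 + m21 + ns * m22
        return abs(num / den) ** 2

    lam0 = 600e-9                      # design wavelength
    n_hi, n_lo, n_bilayers = 2.3, 1.5, 6
    bilayer = [(n_hi, lam0 / (4 * n_hi)), (n_lo, lam0 / (4 * n_lo))]
    R = stack_reflectivity(n0=1.0, ns=1.5, layers=bilayer * n_bilayers, lam=lam0)
    print(f"R at band-gap center: {R:.4f}")   # approaches 1 as bilayers are added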

K.R. Maskaly

2005-06-01T23:59:59.000Z

242

The Use Of Computational Human Performance Modeling As Task Analysis Tool  

SciTech Connect

During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
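The study's task-network model was built in commercial discrete-event software; the toy Monte Carlo below conveys the flavor of such an analysis: sampling task durations and a per-step slip probability to estimate completion time and the chance of at least one hazardous event. All task names and numbers are invented, not taken from the study.

    import random

    # (task name, mean duration in s, probability of a hazardous slip during it)
    tasks = [("walk to canal", 40.0, 0.001),
             ("position tool", 25.0, 0.004),
             ("lift fuel element", 60.0, 0.010),
             ("inspect element", 90.0, 0.006)]

    def run_once(rng):
        t, slipped = 0.0, False
        for _, mean, p_slip in tasks:
            t += rng.expovariate(1.0 / mean)   # sampled task duration
            slipped |= rng.random() < p_slip   # did a hazardous event occur?
        return t, slipped

    rng = random.Random(1)
    runs = [run_once(rng) for _ in range(100_000)]
    mean_time = sum(t for t, _ in runs) / len(runs)
    p_hazard = sum(s for _, s in runs) / len(runs)
    print(f"mean completion time: {mean_time:.0f} s, "
          f"P(hazardous event): {p_hazard:.4f}")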

Jacques Hugo; David Gertman

2012-07-01T23:59:59.000Z

243

SUPERENERGY-2: a multiassembly, steady-state computer code for LMFBR core thermal-hydraulic analysis  

SciTech Connect

Core thermal-hydraulic design and performance analyses for Liquid Metal Fast Breeder Reactors (LMFBRs) require repeated detailed multiassembly calculations to determine radial temperature profiles and subchannel outlet temperatures for various core configurations and subassembly structural analyses. At steady-state, detailed core-wide temperature profiles are required for core restraint calculations and subassembly structural analysis. In addition, sodium outlet temperatures are routinely needed for each reactor operating cycle. The SUPERENERGY-2 thermal-hydraulic code was designed specifically to meet these designer needs. It is applicable only to steady-state, forced-convection flow in LMFBR core geometries.

Basehore, K.L.; Todreas, N.E.

1980-08-01T23:59:59.000Z

244

Computational Biology | Supercomputing & Computation | ORNL  

NLE Websites -- All DOE Office Websites (Extended Search)

Computational Biology research encompasses many important ...

245

Kalaupapa, Molokai, Hawaii wind-turbine and battery-storage analysis using the SOLSTOR II computer code  

Science Conference Proceedings (OSTI)

The feasibility of a wind-turbine collector and battery-storage system on the island of Molokai, Hawaii, was investigated using the SOLSTOR II optimizing computer code. Three system configurations were evaluated: utility-connected, stand-alone with generator backup, and stand-alone without generator backup. The utility-connected version considered both sell-back of energy to the utility and no sell-back. Major analysis conclusions are: the wind regime used in the simulation is extremely good, the annualized specific energy costs for all simulation cases are considerably lower than the current utility electric rate, and moderate battery storage capacity is economically attractive on the island of Molokai.
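Life-cycle comparisons of this kind reduce, at their core, to an annualized-cost computation. A minimal sketch with invented numbers (not the SOLSTOR II model) is:

    def annualized_energy_cost(capital, om_per_year, annual_kwh, rate, years):
        """Levelized $/kWh: capital spread by the capital recovery factor, plus O&M."""
        crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
        return (capital * crf + om_per_year) / annual_kwh

    # Hypothetical wind turbine + battery system
    print(annualized_energy_cost(capital=450_000, om_per_year=12_000,
                                 annual_kwh=300_000, rate=0.08, years=20))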

Murphy, K.D.

1983-01-01T23:59:59.000Z

246

Plant Engineering: Guideline for the Acceptance of Commercial-Grade Design and Analysis Computer Programs Used in Nuclear Safety-Rel ated Applications  

Science Conference Proceedings (OSTI)

This report provides methodology that can be used to perform safety classification of non-process computer programs, such as design and analysis tools, that are not resident or embedded (installed as part of) plant systems, structures, and components. The report also provides guidance for using commercial-grade dedication methodology to accept commercially procured computer programs that perform a safety-related function. The guidance is intended for use by subject matter experts in the acceptance of com...

2012-06-04T23:59:59.000Z

247

Plant Engineering: Guideline for the Acceptance of Commercial-Grade Design and Analysis Computer Programs Used in Nuclear Safety-Related Applications: Revision 1 of 1025243  

Science Conference Proceedings (OSTI)

This report supersedes EPRI 1025243 and provides methodology that can be used to perform safety classification of non-process computer programs, such as design and analysis tools, that are not resident or embedded (installed as part of) plant systems, structures, and components. The report also provides guidance for using commercial-grade dedication methodology to accept commercially procured computer programs that perform a safety-related function. The guidance is intended for use by subject matter ...

2013-12-19T23:59:59.000Z

248

Analysis, tuning and comparison of two general sparse solvers for distributed memory computers  

Science Conference Proceedings (OSTI)

We describe the work performed in the context of a Franco-Berkeley funded project between NERSC-LBNL located in Berkeley (USA) and CERFACS-ENSEEIHT located in Toulouse (France). We discuss both the tuning and performance analysis of two distributed memory sparse solvers (superlu from Berkeley and mumps from Toulouse) on the 512 processor Cray T3E from NERSC (Lawrence Berkeley National Laboratory). This project gave us the opportunity to improve the algorithms and add new features to the codes. We then quite extensively analyze and compare the two approaches on a set of large problems from real applications. We further explain the main differences in the behavior of the approaches on artificial regular grid problems. As a conclusion to this activity report, we mention a set of parallel sparse solvers on which this type of study should be extended.
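Neither solver's interface is reproduced in the abstract; for readers who want to experiment, SciPy exposes the sequential SuperLU factorization (not the distributed superlu or mumps versions compared in the report) as follows. The matrix here is a small invented example.

    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import splu

    # A small sparse system; splu performs a sparse LU factorization
    A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                             [1.0, 3.0, 1.0],
                             [0.0, 1.0, 2.0]]))
    b = np.array([1.0, 2.0, 3.0])

    lu = splu(A)        # factorize once ...
    x = lu.solve(b)     # ... then solve (possibly for many right-hand sides)
    print(x, np.allclose(A @ x, b))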

Amestoy, P.R.; Duff, I.S.; L'Excellent, J.-Y.; Li, X.S.

2000-06-30T23:59:59.000Z

249

Argonne's Laboratory computing center - 2007 annual report.  

Science Conference Proceedings (OSTI)

Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

Bair, R.; Pieper, G. W.

2008-05-28T23:59:59.000Z

250

Computational Analysis of Thermo-Fluidic Characteristics of a Carbon Nano-Fin  

E-Print Network (OSTI)

Miniaturization of electronic devices for enhancing their performance is associated with higher heat fluxes and cooling requirements. Surface modification by texturing or coating is the most cost-effective approach to enhance the cooling of electronic devices. Experiments on carbon nanotube coated heater surfaces have shown heat transfer enhancement of 60 percent. In addition, silicon nanotubes etched on the silicon substrates have shown heat flux enhancement by as much as 120 percent. The heat flux augmentation is attributed to the combined effects of increase in the surface area due to the protruding nanotubes (nano-fin effect), disruption of vapor films and modification of the thermal/mass diffusion boundary layers. Since the effects of disruption of vapor films and modification of the thermal/mass diffusion boundary layers are similar in the above experiments, the difference in enhancement in heat transfer is the consequence of dissimilar nano-fin effect. The thermal conductivity of carbon nanotubes is of the order of 6000 W/mK while that of silicon is 150 W/mK. However, in the experiments, carbon nanotubes have shown poor performance compared to silicon. This is the consequence of interfacial thermal resistance between the carbon nanotubes and the surrounding fluid, since earlier studies have shown that there is comparatively smaller interface resistance to the heat flow from the silicon surface to the surrounding liquids. At the molecular level, atomic interactions of the coolant molecules with the solid substrate as well as their thermal-physical-chemical properties can play a vital role in the heat transfer from the nanotubes. Characterization of the effect of the molecular scale chemistry and structure can help to simulate the performance of a nano-fin in different kinds of coolants. So in this work, to elucidate the effect of the molecular composition and structures on the interfacial thermal resistance, water, ethyl alcohol, 1-hexene, n-heptane and its isomers and chains are considered. Non-equilibrium molecular dynamics simulations have been performed to compute the interfacial thermal resistance between the carbon nanotube and different coolants as well as to study the different modes of heat transfer. The approach used in these simulations is based on the lumped capacitance method. This method is applicable due to the very high thermal conductivity of the carbon nanotubes, leading to orders of magnitude smaller temperature gradients within the nanotube than between the nanotube and the coolants. To perform the simulations, a single wall carbon nanotube (nano-fin) is placed at the center of the simulation domain surrounded by fluid molecules. The system is minimized and equilibrated to a certain reference temperature. Subsequently, the temperature of the nanotube is raised and the system is allowed to relax under constant energy. The heat transfer from the nano-fin to the surrounding fluid molecules is calculated as a function of time. The temperature decay rate of the nanotube is used to estimate the relaxation time constant and hence the effective thermal interfacial resistance between the nano-fin and the fluid molecules. From the results it can be concluded that the interfacial thermal resistance depends upon the chemical composition, molecular structure, size of the polymer chains and the composition of their mixtures. By calculating the vibration spectra of the molecules of the fluids, it was observed that the heat transfer from the nanotube to the surrounding fluid occurs mutually via the coupling of the low frequency vibration modes.
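The lumped-capacitance step can be stated compactly. With nanotube heat capacity C, interfacial area A, interfacial resistance R, and fluid temperature T_f, the energy balance and its solution are (standard lumped-capacitance theory, consistent with the procedure described above):

    C \frac{dT}{dt} = -\frac{A}{R}\,(T - T_f), \qquad
    T(t) = T_f + (T_0 - T_f)\, e^{-t/\tau}, \qquad
    \tau = \frac{R\,C}{A},

so fitting the observed temperature decay for \tau yields the interfacial thermal resistance R = \tau A / C.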

Singh, Navdeep

2010-12-01T23:59:59.000Z

251

Continuously variable transmission. Final technical report. [ANSYS Computer Code for performance analysis  

DOE Green Energy (OSTI)

Ford Motor Company over the past two decades has been studying various types of Infinitely Variable (I.V.) Transmissions for improving passenger car fuel economy. Of the traction drive mechanisms investigated, the Forster I.V. transmission appeared the most attractive because it reduces the complexity and manufacturing costs associated with other traction drives by virtue of its unique ratio-changing mechanism. Ford Motor Company was awarded contract E(11-1)-2674 on July 1, 1975 to study an infinitely variable traction drive transmission based upon the Forster Concept. The program plan was set-up in two phases. Phase I consisted of a design study, whereby the traction drive mechanism was experimentally and analytically evaluated and an I.V. transmission designed. Phase II, contingent on the outcome of Phase I, was to cover the build, vehicle evaluation and a manufacturing cost study. Testing and stress analysis of the flexible discs proved that the concept was not a feasible design, therefore Phase II (transmission build) was not recommended. In an effort to obtain baseline data on traction coefficients and efficiencies of traction drive transmissions a contract extension was awarded. Rigid discs with fixed geometry were designed to develop this data.

Hughson, D.; Emmadi, R.; Topouzian, A.; Lampinen, B.; Bhavsar, C.

1977-12-01T23:59:59.000Z

252

Developing a paradigm for visualizing architecture using computational methods : an analysis of the Havana Project by Lebbeus Woods  

E-Print Network (OSTI)

This thesis is concerned with developing a more detailed and efficient process for visualizing architectural forms with computational tools. The thesis will examine the origins of computer visualization and its current ...

Anderson, Gregory E. (Gregory Eugene)

1996-01-01T23:59:59.000Z

253

User's guide for the Data Analysis, Retrieval, and Tabulation System (DARTS), revised edition: A mainframe computer code for generating cross-tabulation reports  

SciTech Connect

A computer system known as the Data Analysis, Retrieval, and Tabulation System (DARTS) was developed by the Energy Systems Division at Argonne National Laboratory to generate tables of descriptive statistics derived from analyses of housing and energy data sources. Through a simple input command, the user can request the preparation of a hierarchical table based on any combination of several hundred of the most commonly analyzed variables. The system was written in the Statistical Analysis System (SAS) language and designed for use on a large-scale IBM mainframe computer.
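DARTS itself was written in SAS for an IBM mainframe; purely as a modern point of comparison, the same kind of cross-tabulation can be requested in one call with pandas. The column names and data below are invented.

    import pandas as pd

    df = pd.DataFrame({"region":  ["NE", "NE", "South", "South", "West"],
                       "fuel":    ["gas", "electric", "gas", "gas", "electric"],
                       "use_kwh": [820, 640, 910, 870, 700]})

    # Mean consumption cross-tabulated by region and fuel, with margins
    table = pd.crosstab(df["region"], df["fuel"], values=df["use_kwh"],
                        aggfunc="mean", margins=True)
    print(table)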

Anderson, J.L.

1990-10-01T23:59:59.000Z

254

Mathematical Foundations of Quantum Information and Computation and Its Applications to Nano- and Bio-systems  

Science Conference Proceedings (OSTI)

This monograph provides a mathematical foundation to the theory of quantum information and computation, with applications to various open systems including nano and bio systems. It includes introductory material on algorithms, functional analysis, probability ...

Masanori Ohya, I. Volovich

2013-02-01T23:59:59.000Z

255

Image analysis algorithms for estimating porous media multiphase flow variables from computed microtomography data: a validation study  

Science Conference Proceedings (OSTI)

Image analysis of three-dimensional microtomographic image data has become an integral component of pore scale investigations of multiphase flow through porous media. This study focuses on the validation of image analysis algorithms for identifying phases and estimating porosity, saturation, solid surface area, and interfacial area between fluid phases from gray-scale X-ray microtomographic image data. The data used in this study consisted of (1) a two-phase high precision bead pack from which porosity and solid surface area estimates were obtained and (2) three-phase cylindrical capillary tubes of three different radii, each containing an air-water interface, from which interfacial area was estimated. The image analysis algorithm employed here combines an anisotropic diffusion filter to remove noise from the original gray-scale image data, a k-means cluster analysis to obtain segmented data, and the construction of isosurfaces to estimate solid surface area and interfacial area. Our method was compared with laboratory measurements, as well as estimates obtained from a number of other image analysis algorithms presented in the literature. Porosity estimates for the two-phase bead pack were within 1.5% error of laboratory measurements and agreed well with estimates obtained using an indicator kriging segmentation algorithm. Additionally, our method estimated the solid surface area of the high precision beads within 10% of the laboratory measurements, whereas solid surface area estimates obtained from voxel counting and two-point correlation functions overestimated the surface area by 20--40%. Interfacial area estimates for the air-water menisci contained within the capillary tubes were obtained using our image analysis algorithm, and using other image analysis algorithms, including voxel counting, two-point correlation functions, and the porous media marching cubes. Our image analysis algorithm, and other algorithms based on marching cubes, resulted in errors ranging from 1% to 20% of the analytical interfacial area estimates, whereas voxel counting and two-point correlation functions overestimated the analytical interfacial area by 20--40%. In addition, the sensitivity of the image analysis algorithms on the resolution of the microtomographic image data was investigated, and the results indicated that there was little or no improvement in the comparison with laboratory estimates for the resolutions and conditions tested.
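The full pipeline above combines anisotropic diffusion, k-means clustering, and isosurface construction; the fragment below sketches only the middle step: segmenting a gray-scale volume into phases by k-means on voxel intensities and counting voxels for porosity. A Gaussian filter stands in for the anisotropic diffusion stage, and the volume is synthetic, not microtomography data.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.cluster.vq import kmeans2

    rng = np.random.default_rng(0)
    volume = rng.random((40, 40, 40))            # stand-in for CT image data
    smoothed = gaussian_filter(volume, sigma=1)  # noise removal (diffusion stand-in)

    # k-means on voxel intensities: k=2 phases (solid, pore)
    intensities = smoothed.reshape(-1, 1)
    centroids, labels = kmeans2(intensities, 2, minit="++")

    pore_label = int(np.argmin(centroids))       # darker cluster taken as pore space
    porosity = np.mean(labels == pore_label)     # voxel-counting porosity estimate
    print(f"estimated porosity: {porosity:.3f}")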

Porter, Mark L.; Wildenschild, Dorthe; (Oregon State U.)

2010-09-03T23:59:59.000Z

256

Study and Analysis of 100-Car Naturalistic Driving Data  

E-Print Network (OSTI)

Amanda Justiniano (Dr. Eliza Y. Du), Department of Electrical and Computer Engineering, Purdue School of Engineering, Indianapolis, IN 46202. The study uses facilities such as car simulators (Drive Safety DS-600c) directed towards the research ...

Zhou, Yaoqi

257

Supercomputing | Computer Science | ORNL  

NLE Websites -- All DOE Office Websites (Extended Search)

Computer Science at ORNL involves extreme scale scientific simulations through research and engineering efforts advancing the state of the art in algorithms, programming environments, tools, and system software. ORNL's work is strongly motivated by, and often carried out in direct ...

258

Appendix F Cultural Resources, Including  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Appendix F: Cultural Resources, Including Section 106 Consultation. Correspondence from the California Office of Historic Preservation, Department of Parks and Recreation (Sacramento, CA; June 14, 2011; reference DOE110407A) to Angela Colamaria, Loan Programs Office, Environmental Compliance Division, Department of Energy, regarding the Topaz Solar Farm, San Luis Obispo County, California: "Thank you for seeking my consultation regarding the above noted undertaking. Pursuant to 36 CFR Part 800 (as amended 8-05-04) regulations implementing Section ..."

259

Display computers  

E-Print Network (OSTI)

A Display Computer (DC) is an everyday object: Display Computer = Display + Computer. The "Display" part is the standard viewing surface found on everyday objects that conveys information or art. The "Computer" is found on the same everyday object; but by its ubiquitous nature, it will be relatively unnoticeable by the DC user, as it is manufactured "in the margins". A DC may be mobile, moving with us as part of the everyday object we are using. DCs will be ubiquitous: "effectively invisible", available at a glance, and seamlessly integrated into the environment. A DC should be an example of Weiser's calm technology: encalming to the user, providing peripheral awareness without information overload. A DC should provide unremarkable computing in support of our daily routines in life. The nbaCub (nightly bedtime ambient Cues utility buddy) prototype illustrates a sample application of how DCs can be useful in the everyday environment of the home of the future. Embedding a computer into a toy, such that the display is the only visible portion, can present many opportunities for seamless and nontraditional uses of computing technology for our youngest user community. A field study was conducted in the home environment of a five-year old child over ten consecutive weeks as an informal, proof of concept of what Display Computers for children can look like and be used for in the near future. The personalized nbaCub provided lightweight, ambient information during the necessary daily routines of preparing for bed (evening routine) and preparing to go to school (morning routine). To further understand the child's progress towards learning abstract concepts of time passage and routines, a novel "test by design" activity was included. Here, the role of the subject changed to primary designer/director. Final post-testing showed the subject knew both morning and bedtime routines very well and correctly answered seven of eight questions based on abstract images of time passage. Thus, the subject was in the process of learning the more abstract concept of time passage, but was not totally comfortable with the idea at the end of the study.

Smith, Lisa Min-yi Chen

260

TEMPEST: A three-dimensional time-dependent computer program for hydrothermal analysis: Volume 2, Assessment and verification results  

Science Conference Proceedings (OSTI)

During the course of the TEMPEST computer code development a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor application. 47 refs., 94 figs., 6 tabs.

Eyler, L.L.; Trent, D.S.; Budden, M.J.

1983-09-01T23:59:59.000Z



261

Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis  

E-Print Network (OSTI)

Chang. Changes in Tropical Cyclone Number, Duration, and ... Simulation of Future Tropical Cyclone Statistics in a High- ... Finding Tropical Cyclones on a Cloud Computing Cluster: ...

Hasenkamp, Daren

2011-01-01T23:59:59.000Z

262

Countries Gasoline Prices Including Taxes  

Gasoline and Diesel Fuel Update (EIA)

Countries (U.S. dollars per gallon, including taxes)

Date       Belgium  France  Germany  Italy  Netherlands  UK    US
01/13/14   7.83     7.76    7.90     8.91   8.76         8.11  3.68
01/06/14   8.00     7.78    7.94     8.92   8.74         8.09  3.69
12/30/13   NA       NA      NA       NA     NA           NA    3.68
12/23/13   NA       NA      NA       NA     NA           NA    3.63
12/16/13   7.86     7.79    8.05     9.00   8.78         8.08  3.61
12/9/13    7.95     7.81    8.14     8.99   8.80         8.12  3.63
12/2/13    7.91     7.68    8.07     8.85   8.68         8.08  3.64
11/25/13   7.69     7.61    8.07     8.77   8.63         7.97  3.65
11/18/13   7.99     7.54    8.00     8.70   8.57         7.92  3.57
11/11/13   7.63     7.44    7.79     8.63   8.46         7.85  3.55
11/4/13    7.70     7.51    7.98     8.70   8.59         7.86  3.61
10/28/13   8.02     7.74    8.08     8.96   8.79         8.04  3.64
10/21/13   7.91     7.71    8.11     8.94   8.80         8.05  3.70
10/14/13   7.88     7.62    8.05     8.87   8.74         7.97  3.69

263

Computationally unifying urban masterplanning  

Science Conference Proceedings (OSTI)

Architectural design, particularly in large scale masterplanning projects, has yet to fully undergo the computational revolution experienced by other design-led industries such as automotive and aerospace. These industries use computational frameworks ... Keywords: architectural analysis, architectural design, engineering analysis, unification, urban masterplanning

David Birch; Paul H. J. Kelly; Anthony J. Field; Alvise Simondetti

2013-05-01T23:59:59.000Z

264

COMPUTATIONAL SCIENCE CENTER  

SciTech Connect

The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security.

DAVENPORT,J.

2004-11-01T23:59:59.000Z

265

NIST.gov - Computer Security Division - Computer Security ...  

Science Conference Proceedings (OSTI)

... security including, but not limited to: accreditation, audit trails, authorization ... US Department of Energy Computer Incident Advisory Capability (CIAC ...

266

Evaluation of computer-based ultrasonic inservice inspection systems  

SciTech Connect

This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into 9 sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems.

Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T. [Pacific Northwest Lab., Richland, WA (United States)

1994-03-01T23:59:59.000Z

267

Computational Combustion  

DOE Green Energy (OSTI)

Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark ignition, diesel and homogeneous charge, compression ignition engines, surface and catalytic combustion, pulse combustion, and detonations are described. Finally, the current state of combustion modeling is illustrated by descriptions of a very large jet lifted 3D turbulent hydrogen flame with direct numerical simulation and 3D large eddy simulations of practical gas burner combustion devices.

Westbrook, C K; Mizobuchi, Y; Poinsot, T J; Smith, P J; Warnatz, J

2004-08-26T23:59:59.000Z

268

CAL2: Computer Aided Learning in Computer Architecture

E-Print Network (OSTI)

CAL2: Computer Aided Learning in Computer Architecture Laboratory. Jovan Djordjevic, Bosko Nikolic, Tanja Borozan (Computer Engineering Department, Faculty of Electrical Engineering, University of Belgrade, Belgrade, Serbia); Aleksandar Milenkovic (Electrical and Computer Engineering Department

Milenkovi, Aleksandar

269

Scientific computations section monthly report, November 1993  

Science Conference Proceedings (OSTI)

This progress report from the Savannah River Technology Center contains abstracts from papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include: engineering modeling and process simulation, criticality methods and analysis, plutonium disposition.

Buckner, M.R.

1993-12-30T23:59:59.000Z

270

ATLAS distributed computing: experience and evolution  

E-Print Network (OSTI)

The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb⁻¹ of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

Nairz, A; The ATLAS collaboration

2013-01-01T23:59:59.000Z

271

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

in-depth tracking and analysis of job failures, and support for automatic analysis after batch compute jobs complete.

Gerber, Richard A.

2011-01-01T23:59:59.000Z

272

Data Analysis Activities and Problems for the Computer Science Major in a Post-calculus Introductory Statistics Course  

E-Print Network (OSTI)

Allen 1990) that the number of customers in the system has a ... So the expected number of customers is ... Computer Scientists ... number of customers in the system (or in the queue) and the average amount of time a customer
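
The excerpt above is extraction-garbled, but the underlying material is standard queueing analysis. As an illustrative sketch (an assumed example, not taken from the paper), the M/M/1 queue gives the expected number of customers in the system, and Little's law (L = lambda * W) converts that into the mean time a customer spends there:

# Illustrative M/M/1 queue metrics (assumed example; not from the paper).
# lam: arrival rate, mu: service rate, both in customers per unit time.
def mm1_metrics(lam: float, mu: float):
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu             # server utilization
    L = rho / (1.0 - rho)      # expected number of customers in the system
    W = L / lam                # mean time in system, by Little's law L = lam * W
    Lq = L - rho               # expected number waiting in the queue
    Wq = Lq / lam              # mean waiting time in the queue
    return L, W, Lq, Wq

print(mm1_metrics(lam=8.0, mu=10.0))   # -> (4.0, 0.5, 3.2, 0.4)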

Juana Sanchez

2011-01-01T23:59:59.000Z

273

High Performance Computing in the U.S. - An Analysis on the Basis of the TOP500 List Horst D. Simon  

E-Print Network (OSTI)

In 1993, for the first time, a list of the top 500 supercomputer sites worldwide was made available. The TOP500 list allows a much more detailed and well-founded analysis of the state of high performance computing. Previously, data such as the number and geographical distribution of supercomputer installations were difficult to obtain, and only a few analysts undertook the effort to track the press releases by dozens of vendors. With the TOP500 report now generally and easily available, it is possible to present an analysis of the state of High Performance Computing (HPC) in the U.S. This note summarizes some of the most important observations about HPC in the U.S.

Applied Research Branch; Horst D. Simon

1994-01-01T23:59:59.000Z

274

Faculty of Science Computer Science  

E-Print Network (OSTI)

Faculty of Science, Computer Science: computer software engineering, network and system analysis. uwindsor.ca/computerscience. The University of Windsor offers a variety of computer science programs to prepare students for a career in the technology industry or in research and academia. A computer science degree provides an in-depth understanding

275

Modeling resource-coupled computations  

Science Conference Proceedings (OSTI)

Increasingly massive datasets produced by simulations beg the question: how will we connect this data to the computational and display resources that support visualization and analysis? This question is driving research into new approaches to allocating ... Keywords: coupled computations, data intensive computing, high-performance computing, simulation

Mark Hereld; Joseph A. Insley; Eric C. Olson; Michael E. Papka; Thomas D. Uram; Venkatram Vishwanath

2009-11-01T23:59:59.000Z

276

COMPUTATIONAL SCIENCE CENTER  

SciTech Connect

The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

DAVENPORT, J.

2005-11-01T23:59:59.000Z

277

Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis  

E-Print Network (OSTI)

scale the parallel data analysis job to a modest number of ... MPI node fails the whole analysis job fails. In short, this

Hasenkamp, Daren

2011-01-01T23:59:59.000Z

278

PDSF, NERSC's Physics Computing Cluster  

NLE Websites -- All DOE Office Websites (Extended Search)

PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations. For more details see About PDSF. [Usage graphs: running jobs, pending jobs, 24-hour rolling usage, pending jobs by group.] About: find out more about PDSF, including a general overview and information about research groups and staff. Getting Started: guidance on obtaining a new user account, access, passwords, and setup files. Hardware Configuration: guidance on hardware configurations, including login, compute, grid and transfer nodes, and working with particular file systems.

279

The DIII-D Computing Environment: Characteristics and Recent Changes  

Science Conference Proceedings (OSTI)

The DIII-D tokamak national fusion research facility along with its predecessor Doublet III has been operating for over 21 years. The DIII-D computing environment consists of real-time systems controlling the tokamak, heating systems, and diagnostics, and systems acquiring experimental data from instrumentation; major data analysis server nodes performing short term and long term data access and data analysis; and systems providing mechanisms for remote collaboration and the dissemination of information over the world wide web. Computer systems for the facility have undergone incredible changes over the course of time as the computer industry has changed dramatically. Yet there are certain valuable characteristics of the DIII-D computing environment that have been developed over time and have been maintained to this day. Some of these characteristics include: continuous computer infrastructure improvements, distributed data and data access, computing platform integration, and remote collaborations. These characteristics are being carried forward as well as new characteristics resulting from recent changes which have included: a dedicated storage system and a hierarchical storage management system for raw shot data, various further infrastructure improvements including deployment of Fast Ethernet, the introduction of MDSplus, LSF and common IDL based tools, and improvements to remote collaboration capabilities. This paper will describe this computing environment, important characteristics that over the years have contributed to the success of DIII-D computing systems, and recent changes to computer systems.

McHarg, B.B., Jr.

1999-07-01T23:59:59.000Z

280

IEEE Computer Society: http://computer.org Computer: http://computer.org/computer  

E-Print Network (OSTI)

IEEE Computer Society: http://computer.org. Computer: http://computer.org/computer, computer@computer.org. IEEE Computer Society Publications Office: +1 714 821 8380. Cover feature (guest editor's introduction): "Computational Photography--The Next Big Step," Oliver Bimber. Computational photography extends

Stanford University



281

Computer Reconstruction

NLE Websites -- All DOE Office Websites (Extended Search)

Computer Reconstruction: The detector registers millions of data points for every collision event. A computer is therefore needed to process this volume of data: the...

282

RELAP5-3D Thermal Hydraulics Computer Program Analysis Coupled with DAKOTA and STAR-CCM+ Codes  

E-Print Network (OSTI)

RELAP5-3D has been coupled with both DAKOTA and STAR-CCM+ in order to expand the capability of the thermal-hydraulic code and facilitate complex studies of desired systems. In the first study, RELAP5-3D was coupled with DAKOTA to perform a sensitivity study of the South Texas Project (STP) power plant during steady-state and transient scenarios. The coupled software was validated by analyzing the simulation results with respect to the physical expectations and behavior of the power plant, and the thermal-hydraulic parameters causing the greatest sensitivity were identified: core inlet temperature and reactor thermal power. These variables, along with break size and discharge coefficients, were used for further investigation of the sensitivity of the RELAP5-3D LOCA transient simulation under three different cases: a two-inch break, a six-inch break, and a guillotine break. Reactor thermal power, core inlet temperature, and break size were identified as producing the greatest sensitivity; therefore, future research would include uncertainty quantification for these parameters. In the second study, a small-scale experimental facility, designed to study the thermal-hydraulic phenomena of the Reactor Cavity Cooling System (RCCS) for a Very High Temperature Reactor (VHTR), was used as a model to test the capabilities of coupling STAR-CCM+ and RELAP5-3D. This chapter discusses the capabilities and limitations of the STAR-CCM+/RELAP5-3D coupling. A simulation of the RCCS facility was performed using STAR-CCM+ to study the flow patterns where complex flow phenomena are expected to occur, and RELAP5-3D for the complete system. The code was unable to perform flow-coupling simulations and cannot, at this time, handle closed-loop systems. The thermal coupling simulation was successful and showed qualitative results congruent with physical expectations. Large fluid vortices were located specifically in the pipes closest to the inlet of the bottom manifold. In conclusion, simulations using coupled codes were presented that greatly improved on the capabilities of stand-alone RELAP5-3D and reduced the computational time required to perform complex thermal-hydraulic studies. These improvements are of great benefit for industrial applications, enabling large-scale thermal-hydraulic system studies with greater accuracy and reduced simulation time.

Rodriguez, Oscar

2012-12-01T23:59:59.000Z

283

Models of Procyon A including seismic constraints  

E-Print Network (OSTI)

Detailed models of Procyon A based on new asteroseismic measurements by Eggenberger et al. (2004) have been computed using the Geneva evolution code including shellular rotation and atomic diffusion. By combining all non-asteroseismic observables now available for Procyon A with these seismological data, we find that the observed mean large spacing of 55.5 ± 0.5 µHz favours a mass of 1.497 M_sol for Procyon A. We also determine the following global parameters of Procyon A: an age of t = 1.72 ± 0.30 Gyr, an initial helium mass fraction Y_i = 0.290 ± 0.010, a nearly solar initial metallicity (Z/X)_i = 0.0234 ± 0.0015 and a mixing-length parameter alpha = 1.75 ± 0.40. Moreover, we show that the effects of rotation on the inner structure of the star may be revealed by asteroseismic observations if frequencies can be determined with a high precision. Existing seismological data of Procyon A are unfortunately not accurate enough to really test these differences in the input physics of our models.
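
For context, a standard asteroseismic relation (not quoted from this paper): the mean large spacing is the frequency separation between p modes of consecutive radial order and, to leading order, it measures the inverse sound-travel time across the star, hence the mean density:

\Delta\nu \equiv \nu_{n+1,\ell} - \nu_{n,\ell}
         \simeq \left( 2 \int_0^R \frac{\mathrm{d}r}{c(r)} \right)^{-1}
         \propto \sqrt{M / R^3}

A measured \Delta\nu of 55.5 µHz therefore constrains M/R^3, which, combined with an independently determined radius, yields the mass quoted above.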

P. Eggenberger; F. Carrier; F. Bouchy

2005-01-14T23:59:59.000Z

284

Computer Forensics  

Science Conference Proceedings (OSTI)

Computer Forensics. National Software Reference Library (NSRL) -- The National Software Reference Library (NSRL) is ...

2010-10-05T23:59:59.000Z

285

Quantum Computational Complexity  

E-Print Network (OSTI)

This article surveys quantum computational complexity, with a focus on three fundamental notions: polynomial-time quantum computations, the efficient verification of quantum proofs, and quantum interactive proof systems. Properties of quantum complexity classes based on these notions, such as BQP, QMA, and QIP, are presented. Other topics in quantum complexity, including quantum advice, space-bounded quantum computation, and bounded-depth quantum circuits, are also discussed.

John Watrous

2008-04-21T23:59:59.000Z

286

High-resolution structural and thermodynamic analysis of extreme stabilization of human procarboxypeptidase by computational protein design  

E-Print Network (OSTI)

Recent efforts to design de novo or redesign the sequence and structure of proteins using computational techniques have met with significant success. Most, if not all, of these computational methodologies attempt to model atomic-level interactions, and hence high-resolution structural characterization of the designed proteins is critical for evaluating the atomic-level accuracy of the underlying design force-fields. We previously used our computational protein design protocol RosettaDesign to completely redesign the sequence of the activation domain of human procarboxypeptidase A2. With 68% of the wild-type sequence changed, the designed protein, AYEdesign, is over 10 kcal/mol more stable than the wild-type protein. Here, we describe the high-resolution crystal structure and solution NMR structure of AYEdesign, which show that the experimentally determined backbone and side-chain conformations are effectively superimposable with the computational model at atomic resolution. To isolate the origins of the remarkable stabilization, we have designed and

Gautam Dantas; Colin Corrent; Steve L. Reichow; James J. Havranek; Ziad M. Eletr; Nancy G. Isern; Brian Kuhlman; Gabriele Varani; Ethan A. Merritt; David Baker; Howard Hughes Medical

2007-01-01T23:59:59.000Z

287

Analysis of in-core experiment activities for the MIT Research Reactor using the ORIGEN computer code  

E-Print Network (OSTI)

The objective of this study is to devise a method for utilizing the ORIGEN-S computer code to calculate the activation products generated in in-core experimental assemblies at the MIT Research Reactor (MITR-II). ORIGEN-S ...

Helvenston, Edward M. (Edward March)

2006-01-01T23:59:59.000Z

288

Non-convex systems of sets for numerical analysis (Computing, DOI 10.1007/s00607-012-0241-9)

E-Print Network (OSTI)

Abstract The notion of a system of sets generated by a family of functionals is introduced. A generalization of the classical support function of convex subsets of Rd allows to transfer the concept of the convex hull to these systems of sets. Approximation properties of the generalized convex hull and its use for practical computations are investigated.
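
As background, the classical definition being generalized (standard convex analysis, not quoted from the paper): the support function of a set A in R^d is

h_A(x) = \sup_{a \in A} \langle x, a \rangle , \qquad x \in \mathbb{R}^d ,

and the closed convex hull is recovered as the intersection of the half-spaces it defines,

\overline{\operatorname{conv}}(A) = \{\, y \in \mathbb{R}^d : \langle y, x \rangle \le h_A(x) \ \text{for all } x \in \mathbb{R}^d \,\} .

Replacing the linear functionals y ↦ ⟨y, x⟩ by a more general family of functionals is, as the abstract indicates, what transfers the hull construction to non-convex systems of sets.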

Janosch Rieger

2012-01-01T23:59:59.000Z

289

Equation-Free Multiscale Computations in Social Networks: from Agent-based Modelling to Coarse-grained Stability and Bifurcation Analysis  

E-Print Network (OSTI)

We focus on the interface between multiscale computations, bifurcation theory and social networks. In particular we address how the Equation-Free approach, a recently developed computational framework, can be exploited to systematically extract coarse-grained, emergent dynamical information by bridging detailed, agent-based models of social interactions on networks with macroscopic, systems-level, continuum numerical analysis tools. For our illustrations we use a simple dynamic agent-based model describing the propagation of information between individuals interacting under mimesis in a social network with private and public information. We describe the rules governing the evolution of the agents' emotional state dynamics and discover, through simulation, multiple stable stationary states as a function of the network topology. Using the Equation-Free approach we track the dependence of these stationary solutions on network parameters and quantify their stability in the form of coarse-grained bifurcation diagr...

Tsoumanis, A C; Kevrekidis, Yu G; Bafas, G V

2009-01-01T23:59:59.000Z

290

Multi-processor including data flow accelerator module  

DOE Patents (OSTI)

An accelerator module for a data flow computer includes an intelligent memory. The module is added to a multiprocessor arrangement and uses a shared tagged memory architecture in the data flow computer. The intelligent memory module assigns locations for holding data values in correspondence with arcs leading to a node in a data dependency graph. Each primitive computation is associated with a corresponding memory cell, including a number of slots for operands needed to execute a primitive computation, a primitive identifying pointer, and linking slots for distributing the result of the cell computation to other cells requiring that result as an operand. Circuitry is provided for utilizing tag bits to determine automatically when all operands required by a processor are available and for scheduling the primitive for execution in a queue. Each memory cell of the module may be associated with any of the primitives, and the particular primitive to be executed by the processor associated with the cell is identified by providing an index, such as the cell number for the primitive, to the primitive lookup table of starting addresses. The module thus serves to perform functions previously performed by a number of sections of data flow architectures and coexists with conventional shared memory therein. A multiprocessing system including the module operates in a hybrid mode, wherein the same processing modules are used to perform some processing in a sequential mode, under immediate control of an operating system, while performing other processing in a data flow mode.
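
A minimal sketch of the tagged-memory firing rule described above; all names and structure here are illustrative assumptions, not the patented design:

# Illustrative sketch of a tagged data-flow memory cell (hypothetical names).
# A cell fires once every operand slot is tagged present, then distributes
# its result to the linked cells that consume it as an operand.
class Cell:
    def __init__(self, primitive, n_operands, links):
        self.primitive = primitive         # index into a primitive lookup table
        self.slots = [None] * n_operands   # operand slots
        self.tags = [False] * n_operands   # tag bits: operand present?
        self.links = links                 # (cell, slot) pairs receiving the result

    def deliver(self, slot, value, queue):
        self.slots[slot] = value
        self.tags[slot] = True
        if all(self.tags):                 # all operands available -> schedule
            queue.append(self)

PRIMITIVES = {0: lambda a, b: a + b, 1: lambda a, b: a * b}

def run(queue):
    while queue:
        cell = queue.pop(0)
        result = PRIMITIVES[cell.primitive](*cell.slots)
        if not cell.links:
            print(result)                  # no consumers: emit the value
        for target, slot in cell.links:
            target.deliver(slot, result, queue)

# Evaluate (2 + 3) * 4 purely by operand availability:
out = Cell(primitive=1, n_operands=2, links=[])
add = Cell(primitive=0, n_operands=2, links=[(out, 0)])
q = []
add.deliver(0, 2, q); add.deliver(1, 3, q); out.deliver(1, 4, q)
run(q)                                     # prints 20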

Davidson, George S. (Albuquerque, NM); Pierce, Paul E. (Albuquerque, NM)

1990-01-01T23:59:59.000Z

291

Soft Molecular Computing Computer Science  

E-Print Network (OSTI)

Soft Molecular Computing. Max Garzon, Computer Science, The University of Memphis, Memphis, TN 38152. Abstract: Molecular computing (MC) utilizes the complex interaction of biomolecules and molecular biology for computational purposes. Five years later, substantial obstacles remain to bring the potential of molecular

Deaton, Russell J.

292

Computing and Electronics Computer Technology  

E-Print Network (OSTI)

Computing and Electronics Technology. Computer Technology (Network Management option, Information Systems Management option), Computer System Technician, Electronics Technology, Energy Technology. ace.cte.umt.edu, www.cte.umt.edu. Department of Applied Computing and Electronics. Chair: Tom Gallagher. Phone: 406.243.7814. Email: Thomas

Crone, Elizabeth

293

Argonne's Laboratory computing resource center : 2006 annual report.  

SciTech Connect

Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

2007-05-31T23:59:59.000Z

294

Integrated Computational Materials Engineering: Digital Resource ...  

Science Conference Proceedings (OSTI)

Feb 15, 2007 ... For other multi-scale computational research project descriptions including: " Computational Modeling of Fatigue, Fracture, and Ductile Failure ...

295

NETL: Advanced Research - Computation Energy Sciences  

NLE Websites -- All DOE Office Websites (Extended Search)

Advanced Research > Computational Energy Sciences > APECS. APECS (Advanced Process Engineering Co-Simulator) is the first simulation software to combine the disciplines of process simulation and computational fluid dynamics (CFD). This unique combination makes it possible for engineers to create "virtual plants" and to follow complex thermal and fluid flow phenomena from unit to unit across the plant. Advanced visualization software tools aid in analysis and optimization of the entire plant's performance. This tool can significantly reduce the cost of power plant design and optimization with an emphasis on multiphase flows critical to advanced power cycles. A government-industry-university collaboration (including DOE, NETL, Ansys/

296

Natively probabilistic computation  

E-Print Network (OSTI)

I introduce a new set of natively probabilistic computing abstractions, including probabilistic generalizations of Boolean circuits, backtracking search and pure Lisp. I show how these tools let one compactly specify ...

Mansinghka, Vikash Kumar

2009-01-01T23:59:59.000Z

297

Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes -- Update to Include Analyses of an Economizer Option and Alternative Winter Water Heating Control Option  

Science Conference Proceedings (OSTI)

The long range strategic goal of the Department of Energy's Building Technologies (DOE/BT) Program is to create, by 2020, technologies and design approaches that enable the construction of net-zero energy homes at low incremental cost (DOE/BT 2005). A net zero energy home (NZEH) is a residential building with greatly reduced needs for energy through efficiency gains, with the balance of energy needs supplied by renewable technologies. While initially focused on new construction, these technologies and design approaches are intended to have application to buildings constructed before 2020 as well resulting in substantial reduction in energy use for all building types and ages. DOE/BT's Emerging Technologies (ET) team is working to support this strategic goal by identifying and developing advanced heating, ventilating, air-conditioning, and water heating (HVAC/WH) technology options applicable to NZEHs. Although the energy efficiency of heating, ventilating, and air-conditioning (HVAC) equipment has increased substantially in recent years, new approaches are needed to continue this trend. Dramatic efficiency improvements are necessary to enable progress toward the NZEH goals, and will require a radical rethinking of opportunities to improve system performance. The large reductions in HVAC energy consumption necessary to support the NZEH goals require a systems-oriented analysis approach that characterizes each element of energy consumption, identifies alternatives, and determines the most cost-effective combination of options. In particular, HVAC equipment must be developed that addresses the range of special needs of NZEH applications in the areas of reduced HVAC and water heating energy use, humidity control, ventilation, uniform comfort, and ease of zoning. In FY05 ORNL conducted an initial Stage 1 (Applied Research) scoping assessment of HVAC/WH systems options for future NZEHs to help DOE/BT identify and prioritize alternative approaches for further development. Eleven system concepts with central air distribution ducting and nine multi-zone systems were selected and their annual and peak demand performance estimated for five locations: Atlanta (mixed-humid), Houston (hot-humid), Phoenix (hot-dry), San Francisco (marine), and Chicago (cold). Performance was estimated by simulating the systems using the TRNSYS simulation engine (Solar Energy Laboratory et al. 2006) in two 1800-ft² houses--a Building America (BA) benchmark house and a prototype NZEH taken from BEopt results at the take-off (or crossover) point (i.e., a house incorporating those design features such that further progress towards ZEH is through the addition of photovoltaic power sources, as determined by current BEopt analyses conducted by NREL). Results were summarized in a project report, HVAC Equipment Design options for Near-Zero-Energy Homes--A Stage 2 Scoping Assessment, ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. In 2006, the two top-ranked options from the 2005 study, air-source and ground-source versions of an integrated heat pump (IHP) system, were subjected to an initial business case study. The IHPs were subjected to a more rigorous hourly-based assessment of their performance potential compared to a baseline suite of equipment of legally minimum efficiency that provided the same heating, cooling, water heating, demand dehumidification, and ventilation services as the IHPs. 
Results were summarized in a project report, Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes, ORNL/TM-2006/130 (Baxter 2006). The present report is an update to that document. Its primary purpose is to summarize results of an analysis of the potential of adding an outdoor air economizer operating mode to the IHPs to take advantage of free cooling (using outdoor air to cool the house) whenever possible. In addition it provides some additional detail for an alternative winter water heating/space heating (WH/S

Baxter, Van D [ORNL

2006-12-01T23:59:59.000Z

298

A Dynamic Parallel Data-Computing Environment for Cross-Sensor Satellite Data Merger and Scientific Analysis  

Science Conference Proceedings (OSTI)

The Data Processing and Error Analysis System (DPEAS) is a dynamic parallel data processing system for cross-sensor satellite data merger and analysis. Using a peer-to-peer methodology, DPEAS is able to distribute and simplify the near-real-time ...

Andrew S. Jones; Thomas H. Vonder Haar

2002-09-01T23:59:59.000Z

299

Need for computer-assisted qualitative data analysis in the strategic planning of e-government research  

Science Conference Proceedings (OSTI)

The eGovRTD2020 project developed a policy-oriented science and technology roadmapping method for strategic planning of eGovernment research. This method consists of for steps of investigations combining various research techniques. In these steps, a ... Keywords: e-government research, gap analysis, qualitative data analysis, scenario technique, state-of-play, technology roadmapping

Melanie Bicking; Maria A. Wimmer

2010-05-01T23:59:59.000Z

300

Information Science, Computing, Applied Math  

NLE Websites -- All DOE Office Websites (Extended Search)

threat-reduction activities, such as intelligence analysis, cybersecurity, and nuclear non-proliferation. Associate Director: John Sarrao, Theory, Simulation and Computation...



301

Spatial computation  

Science Conference Proceedings (OSTI)

This paper describes a computer architecture, Spatial Computation (SC), which is based on the translation of high-level language programs directly into hardware structures. SC program implementations are completely distributed, with no centralized ... Keywords: application-specific hardware, dataflow machine, low-power, spatial computation

Mihai Budiu; Girish Venkataramani; Tiberiu Chelcea; Seth Copen Goldstein

2004-12-01T23:59:59.000Z

302

Light Computing  

E-Print Network (OSTI)

A configuration of light pulses is generated, together with emitters and receptors, that allows computing. The computing rate, in flops per second, is extraordinarily high, exceeding the capability of a quantum computer for a given size and coherence region. The emitters and receptors are based on the quantum diode, which can emit and detect individual photons with high accuracy.

Gordon Chalmers

2006-10-13T23:59:59.000Z

303

HIGH PERFORMANCE COMPUTING TODAY Jack Dongarra  

E-Print Network (OSTI)

HIGH PERFORMANCE COMPUTING TODAY. Jack Dongarra, Computer Science Department, University ... detailed and well-founded analysis of the state of high performance computing. This paper summarizes some of the systems available for performing grid-based computing. Keywords: High performance computing, Parallel

Dongarra, Jack

304

Computer Science Research: Computation Directorate  

Science Conference Proceedings (OSTI)

This report contains short papers in the following areas: large-scale scientific computation; parallel computing; general-purpose numerical algorithms; distributed operating systems and networks; knowledge-based systems; and technology information systems.

Durst, M.J. (ed.); Grupe, K.F. (ed.)

1988-01-01T23:59:59.000Z

305

Power throttling of collections of computing elements  

Science Conference Proceedings (OSTI)

An apparatus and method for controlling power usage in a computer include a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computers. A plurality of sensors communicate with the computers to ascertain their power usage, and a system control device communicates with the computers to control that power usage.
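
A schematic sketch of the feedback logic such a system implies; the function and parameter names here are hypothetical illustrations, not the patented hardware design:

# Schematic power-capping loop (illustrative assumption; not the patent's design).
def throttle_step(readings_w, cap_w, duty):
    """Adjust a shared duty-cycle setting from aggregate power-sensor readings."""
    total = sum(readings_w)                     # aggregate power from sensors
    if total > cap_w:
        duty = max(0.1, duty * cap_w / total)   # back off proportionally
    else:
        duty = min(1.0, duty * 1.05)            # recover slowly toward full speed
    return duty

duty = 1.0
for readings in [[55.0, 60.0], [40.0, 42.0], [38.0, 39.0]]:
    duty = throttle_step(readings, cap_w=100.0, duty=duty)
    print(round(duty, 3))                       # 0.87, 0.913, 0.959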

Bellofatto, Ralph E. (Ridgefield, CT); Coteus, Paul W. (Yorktown Heights, NY); Crumley, Paul G. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Gooding, Thomas M. (Rochester, MN); Haring, Rudolf A. (Cortlandt Manor, NY); Megerian, Mark G. (Rochester, MN); Ohmacht, Martin (Yorktown Heights, NY); Reed, Don D. (Mantorville, MN); Swetz, Richard A. (Mahopac, NY); Takken, Todd (Brewster, NY)

2011-08-16T23:59:59.000Z

306

Computational methods in structural engineering.  

E-Print Network (OSTI)

The present research is focused on computational methods in structural engineering. The main work includes three parts: 1) the development of cubic B-spline finite elements...

Yang, Hao

2010-01-01T23:59:59.000Z

307

Science Accelerator content now includes multimedia  

Office of Scientific and Technical Information (OSTI)

Science Accelerator has expanded its suite of collections to include ScienceCinema, which contains videos produced by the U.S....

308

Hertzian Stress Field and Ring Crack Initiation Analysis Including ...  

Science Conference Proceedings (OSTI)

Author(s): Bhasker Paliwal, Rajan Tandon, Jeffrey M. Rodelas, Thomas E. Buchheit. On-Site Speaker (Planned): Bhasker Paliwal. Abstract Scope: We have ...

309

Computer applications: a service course  

Science Conference Proceedings (OSTI)

A paperless computer applications course, driven by an online syllabus, is described. Students demo their work on the computer rather than handing in paper assignments. Several advanced topics are included to challenge the students, and downloads ... Keywords: computer applications, data duplication, paperless

Abdul Sattar; Torben Lorenzen

2007-12-01T23:59:59.000Z

310

Center for Computational Medicine and Bioinformatics fostering interdisciplinary research in computational medicine and biology  

E-Print Network (OSTI)

... and administrative activities of CCMB. Facilities include high performance computing, file and database servers, workstations, web servers, networking, and printing services. CCDU supports multiple high performance computing

Rosenberg, Noah

311

Fostering Computational Thinking  

E-Print Network (OSTI)

Students taking introductory physics are rarely exposed to computational modeling. In a one-semester large lecture introductory calculus-based mechanics course at Georgia Tech, students learned to solve physics problems using the VPython programming environment. During the term 1357 students in this course solved a suite of fourteen computational modeling homework questions delivered using an online commercial course management system. Their proficiency with computational modeling was evaluated in a proctored environment using a novel central force problem. The majority of students (60.4%) successfully completed the evaluation. Analysis of erroneous student-submitted programs indicated that a small set of student errors explained why most programs failed. We discuss the design and implementation of the computational modeling homework and evaluation, the results from the evaluation and the implications for instruction in computational modeling in introductory STEM courses.
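
The proctored evaluation mentioned above used a central-force problem. A minimal sketch of the kind of program involved, written here in plain Python rather than VPython, with illustrative (assumed) parameter values:

# Euler-Cromer integration of motion under a central (gravitational) force,
# in the style of an introductory exercise; all values are illustrative.
import math

G, M, m = 6.674e-11, 5.972e24, 1000.0   # SI units; Earth-mass attractor
pos = [7.0e6, 0.0]                      # initial position (m)
vel = [0.0, 7.8e3]                      # initial velocity (m/s), near-circular
dt = 1.0                                # time step (s)

for step in range(6000):
    r = math.hypot(pos[0], pos[1])
    # central force on m, directed toward the origin: F = -G M m r_hat / r^2
    f = [-G * M * m * pos[0] / r**3, -G * M * m * pos[1] / r**3]
    vel = [vel[0] + f[0] / m * dt, vel[1] + f[1] / m * dt]
    pos = [pos[0] + vel[0] * dt, pos[1] + vel[1] * dt]  # uses updated velocity

print(pos, vel)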

Caballero, Marcos D; Schatz, Michael F

2011-01-01T23:59:59.000Z

312

EAGLES: An interactive environment for scientific computing  

Science Conference Proceedings (OSTI)

The EAGLES Project is creating a computing system and interactive environment for scientific applications using object-oriented software principles. This software concept leads to well defined data interfaces for integrating experiment control with acquisition and analysis codes. Tools for building object-oriented systems for user interfaces and codes are discussed. Also the terms of object-oriented programming are introduced and later defined in the appendix. These terms include objects, methods, messages, encapsulation and inheritance.
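
Since the abstract points to the appendix for its object-oriented vocabulary, here is a minimal sketch illustrating those terms (illustrative only, not EAGLES source code):

# Illustrative only -- not EAGLES source. Demonstrates the appendix terms:
# objects, methods, messages, encapsulation, and inheritance.
class Detector:                      # a class describing a kind of object
    def __init__(self, name):
        self._samples = []           # encapsulated state (internal by convention)
        self.name = name

    def acquire(self, value):        # a method, invoked by sending a message
        self._samples.append(value)

    def mean(self):
        return sum(self._samples) / len(self._samples)

class CalibratedDetector(Detector):  # inheritance: reuses Detector's interface
    def __init__(self, name, offset):
        super().__init__(name)
        self.offset = offset

    def mean(self):                  # overrides the inherited method
        return super().mean() + self.offset

d = CalibratedDetector("ion gauge", offset=0.5)
d.acquire(1.0); d.acquire(2.0)       # "messages" sent to the object
print(d.mean())                      # 2.0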

Lawver, B.S.; O'Brien, D.W.; Poggio, M.E.; Shectman, R.M.

1987-08-01T23:59:59.000Z

313

EAGLES: An interactive environment for scientific computing  

Science Conference Proceedings (OSTI)

The EAGLES Project is creating a computing system and interactive environment for scientific applications using object-oriented software principles. This software concept leads to well defined data interfaces for integrating experiment control with acquisition and analysis codes. Tools for building object-oriented systems for user interfaces and codes are discussed. Also the terms of object-oriented programming are introduced and later defined in the appendix. These terms include objects, methods, messages, encapsulation and inheritance.

Lawver, B.S.; O'Brien, D.W.; Poggio, M.E.; Shectman, R.M.

1987-05-11T23:59:59.000Z

314

Petroleum Gasoline & Distillate Needs Including the Energy ...  

U.S. Energy Information Administration (EIA)

Petroleum Gasoline & Distillate Needs Including the Energy Independence and Security Act (EISA) Impacts

315

Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research  

Science Conference Proceedings (OSTI)

Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems, and (4) design, situational awareness and control of complex networks. The program elements consist of a group of Complex Networked Systems Research Institutes (CNSRIs), tightly coupled to an associated individual-investigator-based Complex Networked Systems Basic Research (CNSBR) program. The CNSRIs will be principally located at the DOE National Laboratories and are responsible for identifying research priorities, developing and maintaining a networked systems modeling and simulation software infrastructure, operating summer schools, workshops and conferences, and coordinating with the CNSBR individual investigators. The CNSBR individual investigator projects will focus on specific challenges for networked systems. Relevancy of CNSBR research to DOE needs will be assured through the strong coupling provided between the CNSBR grants and the CNSRIs.

Brown, D L

2009-05-01T23:59:59.000Z

316

Computer Science  

NLE Websites -- All DOE Office Websites (Extended Search)

... in Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance and Statistics. CiteSeer: Department of Energy provided open access science research citations...

317

Computational Chemistry  

Science Conference Proceedings (OSTI)

... and numerical tools to quantify uncertainties for computational quantum chemistry. ... Results appear in the issue of The Journal of Chemical Physics. ...

2010-10-05T23:59:59.000Z

318

Computational Analysis of the Pyrolysis of β-O4 Lignin Model Compounds: Concerted vs. Homolytic Fragmentation

SciTech Connect

The thermochemical conversion of biomass to liquid transportation fuels is a very attractive technology for expanding the utilization of carbon neutral processes and reducing dependency on fossil fuel resources. As with all such emerging technologies, biomass conversion through gasification or pyrolysis has a number of obstacles that need to be overcome to make these processes cost competitive with the refining of fossil fuels. Our current efforts have focused on the investigation of the thermochemistry of the linkages between lignin units using ab initio calculations on dimeric lignin model compounds. All calculations were carried out using M062X density functional theory with the 6-311++G(d,p) basis set. The M062X method has been shown to be consistent with the CBS-QB3 method while being significantly less computationally expensive. To date we have only completed the study on the β-O4 compounds. The theoretical calculations performed in the study indicate that concerted elimination pathways dominate over bond homolysis reactions under typical pyrolysis conditions. However, this does not mean that concerted elimination will be the dominant loss process for lignin. Bimolecular radical chemistry could very well dwarf the unimolecular pathways investigated in this study. These concerted pathways tend to form stable, reasonably non-reactive products that would be more suited to producing a fungible bio-oil for the production of liquid transportation fuels.

Clark, J. M.; Robichaud, D. J.; Nimlos, M. R.

2012-01-01T23:59:59.000Z

319

High Throughput Computing Impact on Meta Genomics (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)  

Science Conference Proceedings (OSTI)

This presentation includes a brief background on High Throughput Computing, correlating gene transcription factors, optical mapping, genotype to phenotype mapping via QTL analysis, and current work on next gen sequencing.

Gore, Brooklin [Morgridge Institute for Research

2011-10-12T23:59:59.000Z

320

Quantum computing  

E-Print Network (OSTI)

This article gives an elementary introduction to quantum computing. It is a draft for a book chapter of the "Handbook of Nature-Inspired and Innovative Computing", Eds. A. Zomaya, G.J. Milburn, J. Dongarra, D. Bader, R. Brent, M. Eshaghian-Wilner, F. Seredynski (Springer, Berlin Heidelberg New York, 2006).

J. Eisert; M. M. Wolf

2004-01-05T23:59:59.000Z



321

Computational analysis of coupled fluid, heat, and mass transport in ferrocyanide single-shell tanks: FY 1994 interim report. Ferrocyanide Tank Safety Project  

Science Conference Proceedings (OSTI)

A computer modeling study was conducted to determine whether natural convection processes in single-shell tanks containing ferrocyanide wastes could generate localized precipitation zones that significantly concentrate the major heat-generating radionuclide, 137Cs. A computer code was developed that simulates coupled fluid, heat, and single-species mass transport on a regular, orthogonal finite-difference grid. The analysis showed that development of a "hot spot" is critically dependent on the temperature dependence of the solubility of Cs2NiFe(CN)6 or CsNaNiFe(CN)6. For the normal case, where solubility increases with increasing temperature, the net effect of fluid flow, heat, and mass transport is to disperse any local zones of high heat generation rate. As a result, hot spots cannot physically develop for this case. However, assuming a retrograde solubility dependence, the simulations indicate the formation of localized deposition zones that concentrate the 137Cs near the bottom center of the tank where the temperatures are highest. Recent experimental studies suggest that Cs2NiFe(CN)6(c) does not exhibit retrograde solubility over the temperature range 25°C to 90°C and NaOH concentrations to 5 M. Assuming these preliminary results are confirmed, no natural mass transport process exists for generating a hot spot in the ferrocyanide single-shell tanks.

McGrail, B.P.

1994-11-01T23:59:59.000Z

322

Dedicated heterogeneous node scheduling including backfill scheduling  

DOE Patents (OSTI)

A method and system for job backfill scheduling of dedicated heterogeneous nodes in a multi-node computing environment. Heterogeneous nodes are grouped into homogeneous node sub-pools. For each sub-pool, a free node schedule (FNS) is created to chart the number of free nodes over time. For each prioritized job, the FNSs of the sub-pools having nodes usable by that job are used to determine the earliest time range (ETR) capable of running the job. Once the ETR is determined for a particular job, the job is scheduled to run in that ETR. If the ETR determined for a lower priority job (LPJ) has a start time earlier than a higher priority job (HPJ), then the LPJ is scheduled in that ETR if it would not disturb the anticipated start times of any HPJ previously scheduled for a future time. Thus, efficient utilization and throughput of such computing environments may be increased by utilizing resources that would otherwise remain idle.
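
A compact sketch of this backfill rule; the data structures and names are illustrative assumptions, not the patented implementation. Because each higher-priority job's nodes are subtracted from the free node schedule before lower-priority jobs are placed, a backfilled job can only start early in capacity that no scheduled higher-priority job needs:

# Illustrative backfill over a free-node schedule (FNS); hypothetical names.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int
    runtime: int   # in schedule ticks

def fits(fns, t, job):
    """True if enough nodes are free during every tick of [t, t + runtime)."""
    return all(fns[tick] >= job.nodes for tick in range(t, t + job.runtime))

def earliest_start(fns, job):
    """Earliest time range (ETR) with capacity for the job, or None."""
    for t in range(len(fns) - job.runtime + 1):
        if fits(fns, t, job):
            return t
    return None

def schedule(jobs_by_priority, horizon, total_nodes):
    fns = [total_nodes] * horizon          # free nodes per tick
    placed = []
    for job in jobs_by_priority:           # highest priority first
        t = earliest_start(fns, job)
        if t is None:
            continue
        for tick in range(t, t + job.runtime):
            fns[tick] -= job.nodes         # reserve nodes for this job
        placed.append((job.name, t))
    return placed

print(schedule([Job("HPJ", 8, 4), Job("MPJ", 8, 2), Job("LPJ", 2, 3)],
               horizon=10, total_nodes=10))
# -> [('HPJ', 0), ('MPJ', 4), ('LPJ', 0)]: the LPJ backfills at t=0 without
#    delaying either higher-priority job's anticipated start time.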

Wood, Robert R. (Livermore, CA); Eckert, Philip D. (Livermore, CA); Hommes, Gregg (Pleasanton, CA)

2006-07-25T23:59:59.000Z

323

Automotive Underhood Thermal Management Analysis Using 3-D Coupled Thermal-Hydrodynamic Computer Models: Thermal Radiation Modeling  

SciTech Connect

The goal of the radiation modeling effort was to develop and implement a radiation algorithm that is fast and accurate for the underhood environment. As part of this CRADA, a net-radiation model was chosen to simulate radiative heat transfer in the underhood of a car. The assumptions (diffuse-gray and uniform radiative properties in each element) reduce the problem tremendously, and all the view factors for radiative thermal calculations can be calculated once and for all at the beginning of the simulation. The cost for online integration of heat exchanges due to radiation is found to be less than 15% of the baseline CHAD code and thus very manageable. The off-line view factor calculation is constructed to be very modular and has been completely integrated to read CHAD grid files, and the output from this code can be read into the latest version of CHAD. Further integration has to be performed to accomplish the same with STAR-CD. The main outcome of this effort is a highly scalable and portable simulation capability to model view factors for the underhood environment (e.g., a view factor calculation that took 14 hours on a single processor took only 14 minutes on 64 processors). The code has also been validated using a simple test case where analytical solutions are available. This simulation capability gives underhood designers in the automotive companies the ability to account for thermal radiation, which is usually critical in the underhood environment and also turns out to be one of the most computationally expensive components of underhood simulations. This report starts with the original work plan as elucidated in the proposal in section B. This is followed by the technical work plan to accomplish the goals of the project in section C. In section D, background to the current work is provided with references to the previous efforts this project leverages. The results are discussed in section E. The report ends with conclusions and future scope of work in section F.
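
The net-radiation method described above amounts to a standard gray-diffuse radiosity solve in which the precomputed view factor matrix is reused at every step. A minimal sketch of that textbook formulation (generic, not the CRADA code; names are illustrative):

# Generic gray-diffuse radiosity solve (illustrative; not the CRADA code).
# F[i][j]: view factor from surface i to surface j, computed once offline.
import numpy as np

SIGMA = 5.670e-8                        # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiation(F, eps, T):
    """Net radiative heat flux leaving each surface, in W/m^2."""
    eb = SIGMA * T**4                   # blackbody emissive power
    # Radiosity balance: J = eps*eb + (1 - eps) * (F @ J)
    A = np.eye(len(T)) - (1.0 - eps)[:, None] * F
    J = np.linalg.solve(A, eps * eb)
    return J - F @ J                    # q_i = radiosity - irradiation

F = np.array([[0.0, 1.0], [1.0, 0.0]])  # two large parallel plates
eps = np.array([0.8, 0.8])              # emissivities
T = np.array([600.0, 300.0])            # surface temperatures, K
print(net_radiation(F, eps, T))         # ~ [+4593, -4593] W/m^2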

Pannala, S.; D'Azevedo, E.; Zacharia, T.

2002-02-26T23:59:59.000Z

324

Computer Science Sample Occupations  

E-Print Network (OSTI)

Computer Science Sample Occupations

COMPUTER OPERATIONS
Computer Hardware/Software Engineer
Computer Operator
Database Manager/Administrator
Data Entry Operator
Operations Manager

DESIGN & MANUFACTURING, ENGINEERING
Coder
CAD Computer Applications Engineers
Computer Research Scientist
Computer

Ronquist, Fredrik

325

Development of a Computer Heating Monitoring System and Its Applications  

E-Print Network (OSTI)

This paper develops a computer heating monitoring system, introduces the components and principles of the monitoring system, and provides a study on its application to residential building heating including analysis of indoor and outdoor air temperature, heating index and energy savings. The results show that the current heating system has a great potential for energy conservation.

Chen, H.; Li, D.; Shen, L.

2006-01-01T23:59:59.000Z

326

A Computer Analysis of Energy Use and Energy Conservation Options for a Twelve Story Office Building in Austin, Texas  

E-Print Network (OSTI)

The energy use of the Travis Building at Austin, Texas was analyzed using the DOE 2.1B building energy simulation program. An analysis was made for the building as specified in the building plans and as operated by the personnel currently occupying the building. The energy consumption of the building was compared with the energy consumption of the building modified to comply with the proposed ASHRAE 90.1p standards. The base design and the ASHRAE design of the Travis building were evaluated in Brownsville, Houston, Lubbock, and El Paso to study the influence of the weather on its energy consumption. In addition, a glass with high reflectivity and low overall heat transfer coefficient was used to study the reduction of glass conduction and glass solar loads. Finally, the energy consumption of the modified building was compared with the energy consumption of the modified building which conformed to the California energy standards.

Katipamula, S.; O'Neal, D. L.; Farad, M.

1986-01-01T23:59:59.000Z

327

Chromatin Computation  

E-Print Network (OSTI)

In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this "chromatin computer" to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal, and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines.

Barbara Bryant

2012-01-01T23:59:59.000Z
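
The read-write-rule model is easy to mimic in a few lines of Python. The sketch below is a toy re-creation under assumed semantics (invented marks "A"/"u" and a single spreading rule), not the paper's formal machinery.

    def apply_rules(tape, rules, max_sweeps=100):
        """Repeatedly rewrite windows of a nucleosome 'tape' (list of marks)
        using local read-write rules, until no rule fires (a fixpoint)."""
        tape = list(tape)
        for _ in range(max_sweeps):
            fired = False
            for i in range(len(tape)):
                for pattern, replacement in rules:
                    w = len(pattern)
                    if tuple(tape[i:i + w]) == pattern:
                        tape[i:i + w] = replacement
                        fired = True
            if not fired:
                return tape
        return tape

    # Hypothetical rule: an 'active' mark spreads across unmodified neighbors.
    rules = [(("A", "u"), ("A", "A"))]
    print(apply_rules(["A", "u", "u", "u"], rules))  # ['A', 'A', 'A', 'A']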

328

PNNL: Computational Sciences & Mathematics - Fundamental & Computational  

NLE Websites -- All DOE Office Websites (Extended Search)

Computational Sciences & Mathematics: We focus on merging high performance computing with data-centric analysis capabilities to solve significant problems in energy, the environment, and national security. PNNL has made scientific breakthroughs and advanced frontiers in high performance computer science, computational biology and bioinformatics, subsurface simulation modeling, and multiscale mathematics. Testing a Land Model's Water Cycle Simulation Skills: Scientists at Pacific Northwest National Laboratory and Oak Ridge National Laboratory, exploring new research territory in a popular Earth system model, applied a computational technique to systematically evaluate the

329

GENERAL COMPUTATIONAL UPDATE:  

E-Print Network (OSTI)

More is available at the site including more analysis of the digits. ... Two CPUs were definitely used through single job parallel processing for a total of four ...

330

Computing compliance  

Science Conference Proceedings (OSTI)

Inquisitive semantics (cf. Groenendijk, 2008) provides a formal framework for reasoning about information exchange. The central logical notion that the semantics gives rise to is compliance. This paper presents an algorithm that computes the set of compliant ...

Ivano Ciardelli; Irma Cornelisse; Jeroen Groenendijk; Floris Roelofsen

2009-10-01T23:59:59.000Z

331

Computational analysis of fluid flow and zonal deposition in ferrocyanide single-shell tanks. Ferrocyanide Safety Program  

SciTech Connect

Safety of single-shell tanks containing ferrocyanide wastes is of concern. Ferrocyanide in the presence of an oxidizer such as NaNO3 or NaNO2 is explosively combustible when concentrated and heated. Evaluating the processes that could affect the fuel content of the waste and the distribution of the tank heat load is important. Highly alkaline liquid wastes were transferred in and out of the tanks over several years. Since Na2NiFe(CN)6 is much more soluble in alkaline media, the ferrocyanide could be dispersed from the tank more easily. If Cs2NiFe(CN)6 or CsNaNiFe(CN)6 are also soluble in alkaline media, solubilization and transport of 137Cs could also occur. Transporting this heat-generating radionuclide to a localized area in the tanks is a potential mechanism for generating a "hot spot." Fluid convection could potentially speed the transport process considerably over aqueous diffusion alone. A stability analysis was performed for a dense fluid layer overlying a porous medium saturated by a less dense fluid, with the finding that the configuration is unconditionally unstable, independent of the properties of the porous medium or the magnitude of the fluid density difference. A parametric modeling study of the buoyancy-driven flow due to a thermal gradient was conducted to establish the relationship between the waste physical and thermal properties and natural convection heat transfer. The effects of diffusion and fluid convection on the redistribution of the 137Cs were evaluated with a 2-D coupled heat and mass transport model. The maximum predicted temperature rise associated with the formation of zones was only 5 degrees C and thus is of no concern in terms of generating a localized "hot spot."

McGrail, B.P.; Trent, D.S.; Terrones, G.; Hudson, J.D.; Michener, T.E.

1993-10-01T23:59:59.000Z

332

NERSC Computer Security  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Computer Security: NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or modification. Among NERSC's security goals are: (1) to protect NERSC systems from unauthorized access; (2) to prevent the interruption of services to its users; (3) to prevent misuse or abuse of NERSC resources. Security Incidents: If you think there has been a computer security incident, you should contact NERSC Security as soon as possible at security@nersc.gov. You may also call the NERSC consultants (or NERSC Operations during non-business hours) at 1-800-66-NERSC. Please save any evidence of the break-in and include as many details as possible in your communication with us. NERSC Computer Security Tutorial

333

Stencil Computation Optimization  

NLE Websites -- All DOE Office Websites (Extended Search)

Stencil Computation Optimization and Auto-tuning on State-of-the-Art Multicore Architectures. Kaushik Datta, Mark Murphy, Vasily Volkov, Samuel Williams, Jonathan Carter, Leonid Oliker, David Patterson, John Shalf, and Katherine Yelick. CRD/NERSC, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; Computer Science Division, University of California at Berkeley, Berkeley, CA 94720, USA. Abstract: Understanding the most efficient design and utilization of emerging multicore systems is one of the most challenging questions faced by the mainstream and scientific computing industries in several decades. Our work explores multicore stencil (nearest-neighbor) computations, a class of algorithms at the heart of many structured grid codes, including PDE solvers. We develop a number of effective optimization
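
For readers unfamiliar with the term, a stencil sweep updates each grid point from a fixed neighborhood pattern. The NumPy sketch below shows a plain, untuned 7-point Jacobi sweep on a 3D grid; the grid size, boundary treatment, and averaging weights are illustrative assumptions, not the paper's benchmark kernels.

    import numpy as np

    def jacobi_7pt(u, sweeps=10):
        """Plain 7-point stencil sweeps on a 3D grid with fixed boundary
        values: each interior point becomes the average of itself and its
        six face neighbors (an unoptimized reference implementation)."""
        u = u.copy()
        for _ in range(sweeps):
            new = u.copy()
            new[1:-1, 1:-1, 1:-1] = (u[1:-1, 1:-1, 1:-1]
                + u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1]
                + u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1]
                + u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]) / 7.0
            u = new
        return u

    grid = np.zeros((34, 34, 34))
    grid[0, :, :] = 100.0          # one hot face; heat diffuses inward
    print(jacobi_7pt(grid)[1, 16, 16])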

334

Notices ROUTINE USES OF RECORDS MAINTAINED IN THE SYSTEM, INCLUDING  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Federal Register / Vol. 78, No. 51 / Friday, March 15, 2013 / Notices. ROUTINE USES OF RECORDS MAINTAINED IN THE SYSTEM, INCLUDING CATEGORIES OF USERS AND THE PURPOSES OF SUCH USES: The Department may disclose information contained in a record in this system of records under the routine uses listed in this system of records without the consent of the individual if the disclosure is compatible with the purposes for which the record was collected. These disclosures may be made on a case-by-case basis or, if the Department has complied with the computer matching requirements of the Privacy Act of 1974, as amended (Privacy Act), under a computer matching agreement. Any disclosure of individually identifiable information from a record in this system must also comply with the requirements of section

335

Computing persistent homology  

Science Conference Proceedings (OSTI)

We study the homology of a filtered d-dimensional simplicial complex K as a single algebraic entity and establish a correspondence that provides a simple description over fields. Our analysis enables us to derive a natural algorithm for ... Keywords: computational topology, persistent homology

Afra Zomorodian; Gunnar Carlsson

2004-06-01T23:59:59.000Z
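
The core of most persistence computations is a column reduction of the boundary matrix over Z/2. The sketch below is a minimal Python rendering of that standard reduction with an assumed set-based column encoding; it illustrates the idea rather than reproducing the paper's algorithm verbatim.

    def persistence_pairs(boundary):
        """Column reduction over Z/2. `boundary[j]` is the set of row indices
        of simplices in the boundary of simplex j, filtration-ordered."""
        cols = [set(b) for b in boundary]
        low_of = {}            # lowest row index -> column that owns it
        pairs = []
        for j, col in enumerate(cols):
            while col and max(col) in low_of:
                col ^= cols[low_of[max(col)]]   # add owning column, mod 2
            if col:
                low_of[max(col)] = j
                pairs.append((max(col), j))     # (birth, death) indices
        return pairs

    # Filtration of a triangle: vertices 0,1,2; edges 3=(0,1), 4=(1,2), 5=(0,2).
    boundary = [set(), set(), set(), {0, 1}, {1, 2}, {0, 2}]
    print(persistence_pairs(boundary))  # [(1, 3), (2, 4)]; edge 5 stays unpaired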

336

Internode data communications in a parallel computer  

DOE Patents (OSTI)

Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

2013-09-03T23:59:59.000Z
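
In plain terms, the messaging unit acts as an early-arrival buffer. The Python sketch below re-creates the control flow schematically; the class and method names are hypothetical, and real implementations operate on dedicated messaging hardware rather than Python lists.

    class MessagingUnit:
        """Sketch of the patent's idea: at boot, reserve one buffer per
        expected process so messages arriving before a process initializes
        are not lost."""
        def __init__(self, expected_procs):
            self.early = {pid: [] for pid in expected_procs}  # unit buffers
            self.mailboxes = {}                 # main-memory buffers

        def deliver(self, pid, msg):
            if pid in self.mailboxes:           # process already initialized
                self.mailboxes[pid].append(msg)
            else:                               # stash in the messaging unit
                self.early[pid].append(msg)

        def init_process(self, pid):
            self.mailboxes[pid] = []            # establish main-memory buffer
            self.mailboxes[pid].extend(self.early.pop(pid, []))

    mu = MessagingUnit(expected_procs=[0, 1])
    mu.deliver(1, "hello before init")
    mu.init_process(1)
    print(mu.mailboxes[1])  # ['hello before init']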

337

High-Precision Computation and Mathematical Physics  

SciTech Connect

At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

Bailey, David H.; Borwein, Jonathan M.

2008-11-03T23:59:59.000Z
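
A quick taste of why extended precision matters, using the mpmath Python package (one of many such high-precision libraries). The example evaluates the classic near-integer exp(pi*sqrt(163)) from experimental mathematics, one of the application areas the survey mentions.

    from mpmath import mp

    mp.dps = 40                       # 40 significant decimal digits
    x = mp.exp(mp.pi * mp.sqrt(163))  # the "Ramanujan constant"
    print(x)  # 262537412640768743.9999999999992500725...
    # In IEEE 64-bit floating point, the .99999999999925... tail, which is
    # the mathematically interesting part, is lost entirely to rounding.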

338

Gas storage materials, including hydrogen storage materials  

DOE Patents (OSTI)

A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

2013-02-19T23:59:59.000Z

339

Polish contribution to the worldwide LHC computing  

Science Conference Proceedings (OSTI)

The computing requirements of LHC experiments, as well as their computing models, are briefly presented. The origin of grid technology and its development in high energy community is outlined, including the Polish participation. The LHC Computing Grid ... Keywords: LHC, Tier-2, WLCG, distributed computing, gLite, grid, high energy physics, middleware

Artur Binczewski; Michał Bluj; Antoni Cyz; Michał Dwużnik; Maciej Filocha; Łukasz Flis; Ryszard Gokieli; Jarosław Iwaszkiewicz; Marek Kowalski; Patryk Lasoń; Rafał Lichwała; Michał Łopuszyński; Marek Magryś; Piotr Malecki; Norbert Meyer; Krzysztof Nawrocki; Andrzej Olszewski; Andrzej Oziębło; Adam Padée; Henryk Pałka; Marcin Pospieszny; Marcin Radecki; Radosław Rowicki; Dorota Stojda; Marcin Stolarek; Tomasz Szepieniec; Tadeusz Szymocha; Michał Turała; Karol Wawrzyniak; Wojciech Wiślicki; Mariusz Witek; Paweł Wolniewicz

2012-01-01T23:59:59.000Z

340

Computational biology and high performance computing  

E-Print Network (OSTI)

Acknowledgements for Community White Paper in Computational Biology ... Computational Biology white paper ... Is there strong objection ... portions of community white paper on high end computing

Shoichet, Brian

2011-01-01T23:59:59.000Z



341

Computational biology and high performance computing  

E-Print Network (OSTI)

Computational Biology and High Performance Computing. Manfred Zorn, Teresa ... Presenters: Manfred ... 99, Portland ... High performance computing has become one of the

Shoichet, Brian

2011-01-01T23:59:59.000Z

342

Homepage: Computer, Computational, and Statistical Sciences,...  

NLE Websites -- All DOE Office Websites (Extended Search)

Computer, Computational, & Statistical Sciences (CCS), ADTSC. Groups: Computational Physics & Methods (CCS-2), Information Sciences...

343

Renaissance Computing  

E-Print Network (OSTI)

We describe version 2 of RENCI PowerMon, a device that can be inserted between a computer power supply and the computer's main board to measure power usage at each of the DC power rails supplying the board. PowerMon 2 provides a capability to collect accurate, frequent, and time-correlated measurements. Since the measurements occur after the AC power supply, this approach eliminates power supply efficiency and time-domain filtering perturbations of the power measurements. PowerMon 2 provides detail about the power consumption of the hardware subsystems connected to each of its eight measurement channels. The device fits in an internal 3.5" hard disk drive bay, thus allowing it to be used in a 1U server chassis. It cost less than $150 per unit to fabricate our small quantity of prototypes.

Daniel Bedard; Min Yeol Lim; Robert Fowler; Allan Porterfield

2009-01-01T23:59:59.000Z

344

Tomo-gravity: How to Compute Accurate Traffic Matrices

E-Print Network (OSTI)

Tomo-gravity: How to Compute Accurate Traffic Matrices for ... Stanford Shannon Lab. Want to know demands from source to destination. Problem: have link traffic measurements (from SNMP). Example application: reliability analysis.

Roughan, Matthew
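
The "gravity" half of tomo-gravity is simple enough to sketch: absent other information, traffic from node i to node j is assumed proportional to the product of i's outbound volume and j's inbound volume. The Python sketch below shows only this initial estimate; the tomographic least-squares correction against SNMP link counts is omitted, and the numbers are invented.

    import numpy as np

    def gravity_estimate(out_bytes, in_bytes):
        """Initial traffic-matrix estimate: demand i -> j is proportional to
        (traffic leaving i) * (traffic entering j) / total traffic."""
        out_bytes = np.asarray(out_bytes, dtype=float)
        in_bytes = np.asarray(in_bytes, dtype=float)
        total = out_bytes.sum()     # assumes conservation: sum(out) == sum(in)
        return np.outer(out_bytes, in_bytes) / total

    # Three edge routers: row i, column j gives estimated demand i -> j.
    print(gravity_estimate([60, 30, 10], [50, 40, 10]))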

345

Cloud Computing at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Cloud Computing, Energy Efficient Computing, Exascale Computing, Performance & Monitoring Tools, Petascale Initiative, Science Gateway Development, Storage and IO Technologies, Testbeds...

346

A computer music instrumentarium  

E-Print Network (OSTI)

Chapter 6. COMPUTERS: To Solder or Not to ... Music Models: A Computer Music Instrumentarium ... Interactive Computer Systems

Oliver La Rosa, Jaime Eduardo

2011-01-01T23:59:59.000Z

347

Argonne's Laboratory Computing Resource Center : 2005 annual report.  

Science Conference Proceedings (OSTI)

Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to develop comprehensive scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has begun developing a 'path forward' plan for additional computing resources.

Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

2007-06-30T23:59:59.000Z

348

Intentionally Including - Engaging Minorities in Physics Careers |  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Intentionally Including - Engaging Minorities in Physics Careers. April 24, 2013 - 4:37pm. Joining Director Dot Harris (second from left) were Marlene Kaplan, the Deputy Director of Education and director of EPP, National Oceanic and Atmospheric Administration; Claudia Rankins, a Program Officer with the National Science Foundation; and Jim Stith, the past Vice-President of the American Institute of Physics.

349

High Performance Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Information Science, Computing, Applied Math: High Performance Computing. Providing world-class high performance computing capability that enables...

350

NEWTON's Computer Science Videos  

NLE Websites -- All DOE Office Websites (Extended Search)

Computer Science Videos Do you have a great computer science video? Please click our Ideas page. Featured Videos: Computer Science Videos from Purdue Computer Science Videos from...

351

Transmission line including support means with barriers  

DOE Patents (OSTI)

A gas insulated transmission line includes an elongated outer sheath, a plurality of inner conductors disposed within and extending along the outer sheath, and an insulating gas which electrically insulates the inner conductors from the outer sheath. A support insulator insulatably supports the inner conductors within the outer sheath, with the support insulator comprising a main body portion including a plurality of legs extending to the outer sheath, and barrier portions which extend between the legs. The barrier portions have openings therein adjacent the main body portion through which the inner conductors extend.

Cookson, Alan H. (Pittsburgh, PA)

1982-01-01T23:59:59.000Z

352

Interactive computer graphics for computer aided design in civil engineering  

Science Conference Proceedings (OSTI)

Interactive computer graphics can be an effective and efficient aid in the analysis/design cycle of engineering problems. The vistas of much engineering research and design can be expanded with the tools and methodology available with the introduction ...

John L. Wilson; Charles R. Lansberry

1976-08-01T23:59:59.000Z

353

Graduate school introductory computational simulation course pedagogy  

E-Print Network (OSTI)

Numerical methods and algorithms have developed and matured vastly over the past three decades, now that computational analysis can be performed on almost any personal computer. There is a need to be able to teach and present ...

Proctor, Laura L. (Laura Lynne), 1975-

2009-01-01T23:59:59.000Z

354

DISASTER POLICY Including Extreme Emergent Situations (EES)  

E-Print Network (OSTI)

DISASTER POLICY, Including Extreme Emergent Situations (EES). The University of Connecticut ... on the ACGME website with information relating to the ACGME response to the disaster. 3. The University-specific Program Requirements. Defined Responsibilities Following the Declaration of a Disaster or Extreme Emergent ...

Oliver, Douglas L.

355

Institute of Computer Science Computational experience with ...  

E-Print Network (OSTI)

Institute of Computer Science, Academy of Sciences of the Czech Republic. Computational experience with modified conjugate gradient methods for ...
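
The record is truncated, but for orientation, here is the standard (unmodified) conjugate gradient iteration that such modified methods start from, as a minimal NumPy sketch for symmetric positive definite systems; the test matrix is invented.

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Textbook CG for a symmetric positive definite matrix A."""
        x = np.zeros_like(b)
        r = b - A @ x                  # residual
        p = r.copy()                   # search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p  # next conjugate direction
            rs = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))    # ~[0.0909, 0.6364]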

356

Computational Science

E-Print Network (OSTI)

Computational Science. Ken Hawick, k.a.hawick@massey.ac.nz, Massey University. Computational Science / eScience: Computational Science concerns the application of computer science to physics, mathematics, chemistry, biology

Hawick, Ken

357

User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal energy storage coupled with district heating or cooling systems. Volume I. Main text  

DOE Green Energy (OSTI)

A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains the main text, including the introduction, program description, input data instructions, a description of the output, and Appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.

Huber, H.D.; Brown, D.R.; Reilly, R.W.

1982-04-01T23:59:59.000Z
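
AQUASTOR's life-cycle costing is far more detailed than can be shown here, but the underlying idea of a levelized cost of delivered heat can be sketched in a few lines of Python: discount all costs and all delivered energy to present value and take the ratio. Every input below is a notional placeholder, not a value from the model.

    def levelized_cost(capital, annual_om, annual_energy_gj, years, rate):
        """Levelized cost of delivered heat: PV of all costs / PV of all
        delivered energy. A drastic simplification of a life-cycle model."""
        pv_cost = capital
        pv_energy = 0.0
        for t in range(1, years + 1):
            discount = (1.0 + rate) ** -t
            pv_cost += annual_om * discount
            pv_energy += annual_energy_gj * discount
        return pv_cost / pv_energy     # $/GJ delivered

    print(levelized_cost(capital=2.5e6, annual_om=120e3,
                         annual_energy_gj=40e3, years=20, rate=0.05))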

358

Broadcasting a message in a parallel computer  

DOE Patents (OSTI)

Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.

Berg, Jeremy E. (Rochester, MN); Faraj, Ahmad A. (Rochester, MN)

2011-08-02T23:59:59.000Z
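
On a 2D mesh, one Hamiltonian path that visits every node exactly once is the serpentine (boustrophedon) ordering, which makes the broadcast scheme easy to picture: the root sends once and each node forwards to its successor on the path. The Python sketch below is a schematic assumption of that idea, not the patented implementation.

    def snake_path(rows, cols):
        """A Hamiltonian path through a rows x cols mesh: left-to-right on
        even rows, right-to-left on odd rows, so consecutive nodes are
        always mesh neighbors."""
        path = []
        for r in range(rows):
            cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
            path.extend((r, c) for c in cs)
        return path

    def broadcast(rows, cols, message):
        """Root (first node on the path) sends; each node forwards onward."""
        path = snake_path(rows, cols)
        inbox = {path[0]: message}            # root already has the message
        for sender, receiver in zip(path, path[1:]):
            inbox[receiver] = inbox[sender]   # one hop per link on the path
        return inbox

    print(snake_path(3, 3))
    # [(0,0), (0,1), (0,2), (1,2), (1,1), (1,0), (2,0), (2,1), (2,2)]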

359

Broadcasting a message in a parallel computer  

Science Conference Proceedings (OSTI)

Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.

Berg, Jeremy E. (Rochester, MN); Faraj, Ahmad A. (Rochester, MN)

2011-08-02T23:59:59.000Z

360

Performing an allreduce operation on a plurality of compute nodes of a parallel computer  

DOE Patents (OSTI)

Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.

Faraj, Ahmad (Rochester, MN)

2012-04-17T23:59:59.000Z
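
A common way to realize an allreduce over a logical ring is a reduce-scatter pass followed by an allgather pass. The single-process NumPy sketch below simulates that generic ring pattern for p ranks; it illustrates the flavor of ring-based allreduce rather than the patent's specific core-per-node ring construction. Each rank's vector length is assumed divisible by p.

    import numpy as np

    def ring_allreduce(vectors):
        """Simulated ring allreduce: reduce-scatter, then allgather,
        each taking p-1 steps around the ring."""
        p = len(vectors)
        data = [v.astype(float).reshape(p, -1).copy() for v in vectors]
        for step in range(p - 1):            # reduce-scatter
            for r in range(p):
                seg = (r - step) % p         # segment rank r forwards now
                data[(r + 1) % p][seg] += data[r][seg]
        for step in range(p - 1):            # allgather completed segments
            for r in range(p):
                seg = (r + 1 - step) % p
                data[(r + 1) % p][seg] = data[r][seg].copy()
        return [d.reshape(-1) for d in data]

    ranks = [np.arange(4) + 10 * i for i in range(4)]  # 4 ranks, 4 elements
    print(ring_allreduce(ranks)[0])          # sums across ranks: [60. 64. 68. 72.]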



361

Buildings Included on EMS Reports"  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Office of Legacy Management: Buildings Included on EMS Reports
"Site","Property Name","Property ID","GSF","Incl. in Water Baseline (CY2007)","Water Baseline (sq. ft.)","Water CY2008 (sq. ft.)","Water CY2009 (sq. ft.)","Water Notes","Incl. in Energy Baseline (CY2003)","Energy Baseline (sq. ft.)","CY2008 Energy (sq. ft.)","CY2009 Energy (sq. ft.)","Energy Notes","Included as Existing Building","CY2008 Existing Building (sq. ft.)","Reason for Building Exclusion"
"Column Totals",,"Totals",115139,,10579,10579,22512,,,3183365,26374,115374,,,99476
"Durango, CO, Disposal/Processing Site","STORAGE SHED","DUD-BLDG-STORSHED",100,"no",,,,,"no",,,,"OSF","no",,"Less than 5,000 GSF"

362

Power generation method including membrane separation  

SciTech Connect

A method for generating electric power, such as at, or close to, natural gas fields. The method includes conditioning natural gas containing C.sub.3+ hydrocarbons and/or acid gas by means of a membrane separation step. This step creates a leaner, sweeter, drier gas, which is then used as combustion fuel to run a turbine, which is in turn used for power generation.

Lokhandwala, Kaaeid A. (Union City, CA)

2000-01-01T23:59:59.000Z

363

Electric power monthly, September 1990 [Glossary included]

SciTech Connect

The purpose of this report is to provide energy decision makers with accurate and timely information that may be used in forming various perspectives on electric issues. The power plants considered include coal, petroleum, natural gas, hydroelectric, and nuclear power plants. Data are presented for power generation, fuel consumption, fuel receipts and cost, sales of electricity, and unusual occurrences at power plants. Data are compared at the national, Census division, and state levels. 4 figs., 52 tabs. (CK)

1990-12-17T23:59:59.000Z

364

Nuclear reactor shield including magnesium oxide  

DOE Patents (OSTI)

An improvement in nuclear reactor shielding of a type used in reactor applications involving significant amounts of fast neutron flux, the reactor shielding including means providing structural support, neutron moderator material, neutron absorber material and other components as described below, wherein at least a portion of the neutron moderator material is magnesium in the form of magnesium oxide either alone or in combination with other moderator materials such as graphite and iron.

Rouse, Carl A. (Del Mar, CA); Simnad, Massoud T. (La Jolla, CA)

1981-01-01T23:59:59.000Z

365

Internal Controls Over Personal Computers at Los Alamos National...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

over the extensive inventory of laptop computers at Los Alamos National Laboratory (LANL). Computers are used in the full range of operations at LANL, to include processing...

366

New Ion Trap May Lead to Large Quantum Computers  

Science Conference Proceedings (OSTI)

... promising candidates for use as quantum bits (qubits) in quantum computers. ... all the basic building blocks for a quantum computer, including key ...

2013-08-07T23:59:59.000Z

367

RATIO COMPUTER  

DOE Patents (OSTI)

An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals, each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of the input signals, depending upon the relation of the input to fixed signals in the first-mentioned channel.

Post, R.F.

1958-11-11T23:59:59.000Z

368

High Performance Computing  

Science Conference Proceedings (OSTI)

High Performance Computing. Summary: High Performance Computing (HPC) enables work on challenging problems that ...

2012-03-05T23:59:59.000Z

369

Computational Intelligence: Concepts to Implementations  

Science Conference Proceedings (OSTI)

Russ Eberhart and Yuhui Shi have succeeded in integrating various natural and engineering disciplines to establish Computational Intelligence. This is the first comprehensive textbook, including lots of practical examples. -Shun-ichi Amari, RIKEN Brain ... Keywords: Artificial Intelligence, Neural Networks

Russell C. Eberhart

2007-08-01T23:59:59.000Z

370

Simulated annealing implementation with shorter Markov chain length to reduce computational burden and its application to the analysis of pulmonary airway architecture  

Science Conference Proceedings (OSTI)

A new way to implement the Simulated Annealing (SA) algorithm was developed and tested that improves computation performance by using shorter Markov chain length (inner iterations) and repeating the entire SA process until the final function value meets ... Keywords: CT image, Computational speed, Markov chain length, Pulmonary airway structure, Simulated annealing, Sprague Dawley rats

DongYoub Lee; Anthony S. Wexler

2011-08-01T23:59:59.000Z
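
The paper's strategy, short inner Markov chains plus repetition of the entire annealing run until the result is good enough, can be mimicked directly. The Python sketch below applies it to an invented 1D multimodal test function; the chain length, cooling schedule, and target are all illustrative assumptions.

    import math, random

    def sa_short_chains(f, x0, chain_len=20, t0=5.0, cool=0.9, t_min=1e-3):
        """One SA run with a deliberately short Markov chain per temperature."""
        x, fx = x0, f(x0)
        t = t0
        while t > t_min:
            for _ in range(chain_len):          # short inner loop
                x_new = x + random.gauss(0.0, 1.0)
                f_new = f(x_new)
                if f_new < fx or random.random() < math.exp((fx - f_new) / t):
                    x, fx = x_new, f_new
            t *= cool
        return x, fx

    def sa_with_restarts(f, target, max_restarts=50):
        """Repeat the whole annealing run until the result meets the target."""
        best = (None, float("inf"))
        for _ in range(max_restarts):
            x, fx = sa_short_chains(f, random.uniform(-10.0, 10.0))
            if fx < best[1]:
                best = (x, fx)
            if best[1] <= target:
                break
        return best

    f = lambda x: (x * x - 4.0) ** 2 + 0.3 * math.sin(8.0 * x)  # multimodal
    print(sa_with_restarts(f, target=-0.1))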

371

Computable randomness and betting for computable probability spaces  

E-Print Network (OSTI)

Unlike Martin-Löf randomness and Schnorr randomness, computable randomness has not been defined, except for a few ad hoc cases, outside of Cantor space. This paper offers such a definition (actually, many equivalent definitions), and further, provides a general method for abstracting "bit-wise" definitions of randomness from Cantor space to arbitrary computable probability spaces. This same method is also applied to give machine characterizations of computable and Schnorr randomness for computable probability spaces, extending the previously known results. This paper also addresses "Schnorr's Critique" that gambling characterizations of Martin-Löf randomness are not computable enough. The paper contains a new type of randomness, endomorphism randomness, which the author hopes will shed light on the open question of whether Kolmogorov-Loveland randomness is equivalent to Martin-Löf randomness. It ends with other possible applications of the methods presented, including a possible definition of computable...

Rute, Jason

2012-01-01T23:59:59.000Z

372

Grid Computing  

Science Conference Proceedings (OSTI)

... the development of measurement methods is needed for analysis and management. ... A set of metrics, scenarios, and methods against which to ...

2011-12-02T23:59:59.000Z

373

COMPUTER SCIENCE EECS Department  

E-Print Network (OSTI)

COMPUTER SCIENCE EECS Department The Electrical Engineering and Computer Science (EECS) Department at WSU offers undergraduate degrees in electrical engineering, computer engineering and computer science. The EECS Department offers master of science degrees in computer science, electrical engineering

374

Computer Science UNDERGRADUATE  

E-Print Network (OSTI)

Computer Science UNDERGRADUATE PROGRAMS The Department of Computer Science provides undergraduate instruction leading to the bachelor's degree in computer science. This program in computer science is accredited by the Computer Science Accreditation Board (CSAB), a specialized accrediting body recognized

Suzuki, Masatsugu

375

COMPUTER ENGINEERING EECS Department  

E-Print Network (OSTI)

COMPUTER ENGINEERING EECS Department The Electrical Engineering and Computer Science (EECS) Department at WSU offers undergraduate degrees in electrical engineering, computer engineering and computer science. The EECS Department offers Master of Science degrees in computer science, electrical engineering

376

Duality and Recycling Computing in Quantum Computers  

E-Print Network (OSTI)

Quantum computers possess quantum parallelism and offer great computing power over classical computers [er1, er2]. As is well known, a moving quantum object passing through a double slit exhibits particle-wave duality. A quantum computer is static and lacks this duality property. The recently proposed duality computer exploits this particle-wave duality property, and it may offer additional computing power [r1]. Simply put, a duality computer is a moving quantum computer passing through a double slit. A duality computer offers the capability to perform separate operations on the sub-waves coming out of the different slits, in the so-called duality parallelism. Here we show that an n-dubit duality computer can be modeled by an (n+1)-qubit quantum computer. In a duality mode, computing operations are not necessarily unitary. An n-qubit quantum computer can be used as an n-bit reversible classical computer and is energy efficient. Our result further enables an (n+1)-qubit quantum computer to run classical algorithms in an O(2^n)-bit classical computer. The duality mode provides a natural link between classical computing and quantum computing. Here we also propose a recycling computing mode, in which a quantum computer will continue to compute until the result is obtained. These two modes provide new tools for algorithm design. A search algorithm for the unsorted database search problem is designed.

Gui Lu Long; Yang Liu

2007-08-15T23:59:59.000Z

377

Thermovoltaic semiconductor device including a plasma filter  

DOE Patents (OSTI)

A thermovoltaic energy conversion device and related method for converting thermal energy into an electrical potential. An interference filter is provided on a semiconductor thermovoltaic cell to pre-filter black body radiation. The semiconductor thermovoltaic cell includes a P/N junction supported on a substrate which converts incident thermal energy below the semiconductor junction band gap into electrical potential. The semiconductor substrate is doped to provide a plasma filter which reflects back energy having a wavelength which is above the band gap and which is ineffectively filtered by the interference filter, through the P/N junction to the source of radiation thereby avoiding parasitic absorption of the unusable portion of the thermal radiation energy.

Baldasaro, Paul F. (Clifton Park, NY)

1999-01-01T23:59:59.000Z

378

User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal-energy storage coupled with district-heating or cooling systems. Volume II. Appendices

DOE Green Energy (OSTI)

A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains all the appendices, including supply and distribution system cost equations and models, descriptions of predefined residential districts, key equations for the cooling degree-hour methodology, a listing of the sample case output, and Appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.

Huber, H.D.; Brown, D.R.; Reilly, R.W.

1982-04-01T23:59:59.000Z

379

Software and Libraries | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Allinea DDT (+ddt, L): Multithreaded, multiprocess source code debugger for high performance computing.
bgqstack (@default, I): A tool to debug and provide postmortem analysis of...

380

COMPUTABLE CATEGORICITY VERSUS RELATIVE COMPUTABLE CATEGORICITY  

E-Print Network (OSTI)

COMPUTABLE CATEGORICITY VERSUS RELATIVE COMPUTABLE CATEGORICITY RODNEY G. DOWNEY, ASHER M. KACH, STEFFEN LEMPP, AND DANIEL D. TURETSKY Abstract. We study the notion of computable categoricity of computable structures, comparing it especially to the notion of relative computable cate- goricity and its



381

USERDA computer program summaries. Numbers 177--239  

SciTech Connect

Since 1960 the Argonne Code Center has served as a U. S. Atomic Energy Commission information center for computer programs developed and used primarily for the solution of problems in nuclear physics, reactor design, reactor engineering and operation. The Center, through a network of registered installations, collects, validates, maintains, and distributes a library of these computer programs and publishes a compilation of abstracts describing them. In 1972 the scope of the Center's activities was officially expanded to include computer programs developed in all of the U. S. Atomic Energy Commission program areas and the compilation and publication of this report. The Computer Program Summary report contains summaries of computer programs at the specification stage, under development, being checked out, in use, or available at ERDA offices, laboratories, and contractor installations. Programs are divided into the following categories: cross section and resonance integral calculations; spectrum calculations, generation of group constants, lattice and cell problems; static design studies; depletion, fuel management, cost analysis, and reactor economics; space-independent kinetics; space--time kinetics, coupled neutronics-- hydrodynamics--thermodynamics and excursion simulations; radiological safety, hazard and accident analysis; heat transfer and fluid flow; deformation and stress distribution computations, structural analysis and engineering design studies; gamma heating and shield design programs; reactor systems analysis; data preparation; data management; subsidiary calculations; experimental data processing; general mathematical and computing system routines; materials; environmental and earth sciences; space sciences; electronics and engineering equipment; chemistry; particle accelerators and high-voltage machines; physics; controlled thermonuclear research; biology and medicine; and data. (RWR)

1975-10-01T23:59:59.000Z

382

Computer analysis of four channel x-ray microscopy images to obtain source and spectral emission data on laser fusion targets  

SciTech Connect

A description is given of how a typical four channel microscope experiment is recorded and processed. The computer code MISE is described. Seventeen figures showing various results are given. (MOW)

Harper, T.L.

1975-12-01T23:59:59.000Z

383

Diagnostic computation of moisture budgets in the ERA-Interim Reanalysis with reference to analysis of CMIP-archived atmospheric model data  

Science Conference Proceedings (OSTI)

The diagnostic evaluation of moisture budgets in archived atmosphere model data is examined. Sources of error in diagnostic computation can arise from the use of numerical methods different to those used in the atmosphere model, the time and ...

Richard Seager; Naomi Henderson

384

Diagnostic Computation of Moisture Budgets in the ERA-Interim Reanalysis with Reference to Analysis of CMIP-Archived Atmospheric Model Data  

Science Conference Proceedings (OSTI)

The diagnostic evaluation of moisture budgets in archived atmosphere model data is examined. Sources of error in diagnostic computation can arise from the use of numerical methods different from those used in the atmosphere model, the time and ...

Richard Seager; Naomi Henderson

2013-10-01T23:59:59.000Z

385

Method and system for knowledge discovery using non-linear statistical analysis and a 1st and 2nd tier computer program  

DOE Patents (OSTI)

The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.

Hively, Lee M. (Philadelphia, TN)

2011-07-12T23:59:59.000Z

386

Rowan University Department of Computer Science  

E-Print Network (OSTI)

Project 0704.400 does not change) b. Sponsor: Ganesh R. Baliga, Associate Professor, Computer ScienceRowan University Department of Computer Science Minor Curricular Change Changing Prerequisites for Computer Science Senior Project 1. Details a. Change requested: Add the course Design and Analysis

Kay, Jennifer S.

387

Computational Identification of Thermal Technical Properties of Building Materials and its Reliability  

SciTech Connect

Reliable computational analysis of heat transfer in building materials needs realistic input data. New building constructions contain advanced composite materials whose macroscopic thermal technical (both insulation and accumulation) properties should be identified properly, following all available qualitative results from their microstructural analysis. However, the corresponding inverse problems are much more complicated than the original direct ones. The paper demonstrates the mathematical and computational support for the identification of basic thermal technical characteristics using a certain class of inexpensive primary non-stationary measurement devices, including the uncertainty analysis of the measurements.

Vala, J. [Brno University of Technology, Faculty of Civil Engineering, CZ-602 00 Brno, Veveri 95 (Czech Republic)

2010-09-30T23:59:59.000Z
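
To give the flavor of such an inverse identification, the Python sketch below recovers a single thermal time constant from a synthetic, noisy cooling curve by scanning a least-squares misfit. It is a deliberately crude lumped-parameter toy with invented numbers, far simpler than the paper's property-identification problem.

    import numpy as np

    # Synthetic "measurement": cooling curve T(t) = T_air + dT * exp(-t / tau).
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 3600.0, 60)                 # one hour, 60 samples
    tau_true, t_air, dt0 = 900.0, 20.0, 15.0
    temps = t_air + dt0 * np.exp(-t / tau_true) + rng.normal(0.0, 0.05, t.size)

    # Inverse step: identify tau by scanning a least-squares misfit.
    taus = np.linspace(300.0, 1800.0, 1501)
    misfit = [np.sum((t_air + dt0 * np.exp(-t / k) - temps) ** 2) for k in taus]
    tau_hat = taus[int(np.argmin(misfit))]
    print(tau_hat)   # close to 900 s; noise limits the attainable accuracy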

388

Computer Forensics In Forensis  

E-Print Network (OSTI)

U.N. In Proceedings of CMAD IV: Computer Misuse and Anomaly ... [4] J. P. Anderson. Computer Security Threat Monitoring and ... of the Fifth Annual Computer Security Applications ...

Peisert, Sean; Bishop, Matt; Marzullo, Keith

2008-01-01T23:59:59.000Z

389

Computational biology and high performance computing  

E-Print Network (OSTI)

Paper in Computational Biology ... The First Step Beyond the ... M. Glaeser, Mol. & Cell Biology, UCB and Life Sciences ... LBNL-44460 Computational Biology and High Performance ...

Shoichet, Brian

2011-01-01T23:59:59.000Z

390

Fermilab | Science at Fermilab | Computing | Grid Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

of Fermilab's Computing Division looked ahead to experiments like those at the Large Hadron Collider, which would collect more data than any computing center in existence could...

391

Time Series Dependent Analysis of Unparametrized Thomas Networks  

Science Conference Proceedings (OSTI)

This paper is concerned with the analysis of labeled Thomas networks using discrete time series. It focuses on refining the given edge labels and on assessing the data quality. The results are aimed at being exploitable for experimental design and include ... Keywords: Time series analysis, Regulators, Computational modeling, Time measurement, Bioinformatics, Computational biology, Labeling, constraint satisfaction, model checking, temporal logic, biology and genetics

Hannes Klarner; Heike Siebert; Alexander Bockmayr

2012-09-01T23:59:59.000Z

392

Introduction to Small-Scale Photovoltaic Systems (Including RETScreen Case  

Open Energy Info (EERE)

Introduction to Small-Scale Photovoltaic Systems (Including RETScreen Case Study) (Webinar). Tool Summary. Name: Introduction to Small-Scale Photovoltaic Systems (Including RETScreen Case Study) (Webinar). Focus Area: Solar. Topics: Market Analysis. Website: www.leonardo-energy.org/webinar-introduction-small-scale-photovoltaic- Equivalent URI: cleanenergysolutions.org/content/introduction-small-scale-photovoltaic. Language: English. Policies: Deployment Programs. DeploymentPrograms: Project Development. This video teaches the viewer about photovoltaic arrays and RETScreen's photovoltaic module, which can be used to project the cost and production of an array. An example case study was

393

Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 2 quarter 2 progress report  

SciTech Connect

The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water effects on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to improve design allowing for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, and CFD analysis of the operation of the wind tunnel in the TFHRC wind engineering laboratory. This quarterly report documents technical progress on the project tasks for the period of January through March 2012.

Lottes, S.A.; Bojanowski, C.; Shen, J.; Xie, Z.; Zhai, Y. (Energy Systems); (Turner-Fairbank Highway Research Center)

2012-06-28T23:59:59.000Z

394

Computational mechanics research and support for aerodynamics and hydraulics at TFHRC year 1 quarter 4 progress report.  

SciTech Connect

The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water effects on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, CFD analysis of the operation of the wind tunnel in the TFHRC wind engineering laboratory, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. This quarterly report documents technical progress on the project tasks for the period of July through September 2011.

Lottes, S.A.; Kulak, R.F.; Bojanowski, C. (Energy Systems)

2011-12-09T23:59:59.000Z

395

Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 2 quarter 1 progress report.  

SciTech Connect

The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water effects on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to improve design allowing for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, and CFD analysis of the operation of the wind tunnel in the TFHRC wind engineering laboratory. This quarterly report documents technical progress on the project tasks for the period of October through December 2011.

Lottes, S.A.; Bojanowski, C.; Shen, J.; Xie, Z.; Zhai, Y. (Energy Systems); (Turner-Fairbank Highway Research Center)

2012-04-09T23:59:59.000Z

396

Soft Computing for diagnostics in equipment service  

Science Conference Proceedings (OSTI)

We present methods and tools from the Soft Computing (SC) domain, which is used within the diagnostics and prognostics framework to accommodate imprecision of real systems. SC is an association of computing methodologies that includes as its principal ... Keywords: Diagnosis, Diagnostics, Hybrid Systems, Soft Computing

Piero Bonissone; Kai Goebel

2001-09-01T23:59:59.000Z

397

Quantum computers: Definition and implementations  

Science Conference Proceedings (OSTI)

The DiVincenzo criteria for implementing a quantum computer have been seminal in focusing both experimental and theoretical research in quantum-information processing. These criteria were formulated specifically for the circuit model of quantum computing. However, several new models for quantum computing (paradigms) have been proposed that do not seem to fit the criteria well. The question therefore arises: what are the general criteria for implementing a quantum computer? To this end, a formal operational definition of a quantum computer is introduced. It is then shown that, according to this definition, a device is a quantum computer if it obeys the following criteria: any quantum computer must consist of a quantum memory, with an additional structure that (1) facilitates a controlled quantum evolution of the quantum memory; (2) includes a method for information-theoretic cooling of the memory; and (3) provides a readout mechanism for subsets of the quantum memory. The criteria are met when the device is scalable and operates fault-tolerantly. We discuss various existing quantum computing paradigms and how they fit within this framework. Finally, we present a decision tree for selecting an avenue toward building a quantum computer. This is intended to help experimentalists determine the most natural paradigm given a particular physical implementation.

Perez-Delgado, Carlos A.; Kok, Pieter [Department of Physics and Astronomy, University of Sheffield, Hicks Building, Hounsfield Road, Sheffield, S3 7RH (United Kingdom)

2011-01-15T23:59:59.000Z

398

Final technical report for DOE Computational Nanoscience Project: Integrated Multiscale Modeling of Molecular Computing Devices  

Science Conference Proceedings (OSTI)

This document reports the outcomes of the Computational Nanoscience Project, "Integrated Multiscale Modeling of Molecular Computing Devices". It includes a list of participants and publications arising from the research supported.

Cummings, P. T.

2010-02-08T23:59:59.000Z

399

CDF GlideinWMS usage in grid computing of high energy physics  

SciTech Connect

Many members of large science collaborations already have specialized grids available to advance their research, but the need for more computing resources for data analysis has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment relies increasingly on glidein-based computing pools for data reconstruction, Monte Carlo production, and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) on top of the Condor batch system and the CDF Grid infrastructure. Condor has a distributed architecture, and its glidein mechanism of pilot jobs is ideal for abstracting Grid computing by forming a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for data reconstruction on the FNAL campus Grid, and for user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and setup, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing work like a local batch environment, with the ability to handle more than 10,000 running jobs at a time.
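The glidein pull model at the heart of this setup can be illustrated with a minimal Python sketch (all names below are hypothetical illustrations, not the glideinWMS or Condor API): a pilot lands on a Grid worker node, validates it, and then pulls user jobs from a central queue until none remain, which is what makes a heterogeneous Grid behave like a private local batch pool.

import queue

def run_pilot(central_queue, node_ok=True):
    # A pilot job lands on a Grid worker node and, if the node passes
    # validation, repeatedly pulls user payloads from the central queue.
    if not node_ok:
        return
    while True:
        try:
            job = central_queue.get_nowait()   # pull work; nothing is pushed to us
        except queue.Empty:
            break                              # queue drained: pilot exits
        job()                                  # run the user's payload

if __name__ == "__main__":
    q = queue.Queue()
    for i in range(3):
        q.put(lambda i=i: print("analysis job %d done" % i))
    run_pilot(q)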

Zvada, Marian; /Fermilab /Kosice, IEF; Benjamin, Doug; /Duke U.; Sfiligoi, Igor; /Fermilab

2010-01-01T23:59:59.000Z

400

Police Use of Computers  

E-Print Network (OSTI)

Colton, K. (1978). Police Computer Systems. Lexington, MA: ... The impact and use of computer technology by the police. ... K.L. (1986). People and Computers: The Impacts of Computing

Northrop, Alana

1993-01-01T23:59:59.000Z



401

AGILA: The Ateneo High Performance Computing System  

E-Print Network (OSTI)

A Beowulf cluster is a low-cost parallel high performance computing system that uses commodity hardware components such as personal computers and standard Ethernet adapters and switches and runs on freely available software such as Linux and LAM-MPI. In this paper the development of the AGILA HPCS, which stands for the Ateneo Gigaflops-Range Performance, Linux OS, and Athlon Processors High Performance Computing System, is discussed, including its hardware and software configurations and performance evaluation. Keywords: high-performance computing, commodity cluster computing, parallel computing, Beowulf-class cluster
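Clusters of this kind are typically programmed with message passing. As a minimal sketch of the style (using the mpi4py Python bindings; the AGILA system itself used LAM-MPI, so this is an illustrative analogue rather than the authors' code), each process sums its share of a range and rank 0 collects the total:

from mpi4py import MPI   # assumes mpi4py is installed; run via mpiexec

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each process sums its strided slice of 0..999.
local = sum(range(rank, 1000, size))
total = comm.reduce(local, op=MPI.SUM, root=0)

if rank == 0:
    print("total =", total)   # 499500 regardless of process count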

Rafael P. Saldaña; Felix P. Muga II; Jerrold J. Garcia; William Emmanuel S. Yu

2000-01-01T23:59:59.000Z

402

Applied and Computational Mathematics Division  

Science Conference Proceedings (OSTI)

Applied and Computational Mathematics Division. Topic Areas. Mathematics; Scientific Computing; Visualization; Quantum Computing. ...

2013-05-09T23:59:59.000Z

403

Keeneland: Computational Science Using Heterogeneous GPU Computing  

E-Print Network (OSTI)

Contemporary High Performance Computing: From Petascale toward Exascale. 7.1 Overview ... Computational Sciences, and Oak Ridge National Laboratory. NSF 08-573: High Performance Computing System ... The Keeneland project is led by the Georgia Institute of Technology (Georgia

Dongarra, Jack

404

COMPUTATIONAL ECONOMICS AT THE COMPUTATION INSTITUTE  

E-Print Network (OSTI)

COMPUTATIONAL ECONOMICS AT THE COMPUTATION INSTITUTE. Summary of 3-D Discussions, prepared by Ken Judd, Autumn Quarter, 2006. In the Autumn quarter, 2006, Computation Institute Director Ian Foster ... of some topic. The first set of 3-D talks brought together a variety of computational scientists

405

Computer Algebra Course - CECM  

E-Print Network (OSTI)

Michael Monagan: Computer Algebra. ... (1) Representing and simplifying mathematical formulae on a computer. (1) Data structures for multivariate polynomials.

406

Computer Algebra Course - CECM  

E-Print Network (OSTI)

Michael Monagan: Computer Algebra. ... Representing and simplifying mathematical formulae on a computer. Symbolic differentiation of a formula.

407

BNL | Computational Biology & Bioinformatics  

NLE Websites -- All DOE Office Websites (Extended Search)

Computational Biology & Bioinformatics: The Computational Biology and Bioinformatics group focuses on quantitative predictive models of complex biological systems and their underlying...

408

Computer Security Division Homepage  

Science Conference Proceedings (OSTI)

Computer Security Division. ... The 2012 Computer Security Division Annual Report (Special Publication 800-165) is now available. ...

2013-09-12T23:59:59.000Z

409

Quality Assurance of Computational Models

NLE Websites -- All DOE Office Websites (Extended Search)

Quality Assurance of Computational Models. Presented at the Annual Department of Energy Quality Council Meeting, Subir K. Sen, Office of Quality Assurance, HS-33, December 7, 2011. Outline: Introduction; GAO Report 11-143; National Research Council Focus; DOE Model Validation/Performance; Summary. Introduction: Computer models are used in EM's massive clean-up effort to model physical and biogeochemical processes. Results from these computational models are often used to make costly cleanup decisions including selection, performance assessment and annual

410

Computer Refurbishment  

SciTech Connect

The major activity for the 18-month refurbishment outage at the Point Lepreau Generating Station is the replacement of all 380 fuel channel and calandria tube assemblies and the lower portion of the connecting feeder pipes. New Brunswick Power would also take advantage of this outage to conduct a number of repairs, replacements, inspections, and upgrades (such as rewinding or replacing the generator, replacement of shutdown system trip computers, replacement of certain valves and expansion joints, inspection of systems not normally accessible, etc.). This would allow for an additional 25 to 30 years of operation. Among the systems to be replaced are the PDCs for both shutdown systems. Assessments have been completed for both the SDS1 and SDS2 PDCs, and it has been decided to replace the SDS2 PDCs with the same hardware and software approach that has been used successfully for the Wolsong 2, 3, and 4 and the Qinshan 1 and 2 SDS2 PDCs. For SDS1, it has been decided to use the same software development methodology that was used successfully for the Wolsong and Qinshan (called the I A) and to use a new hardware platform in order to ensure successful operation over the 25- to 30-year station operating life. The selected supplier is Triconex, whose triple modular redundant architecture will enhance the robustness and fault tolerance of the design with respect to equipment failures.
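The fault-tolerance claim rests on majority voting across three redundant channels. A minimal sketch of the 2-out-of-3 vote (illustrative only, not the Triconex implementation):

def tmr_vote(a, b, c):
    # Majority of three redundant trip signals: any single channel
    # failure, stuck high or stuck low, is outvoted by the other two.
    return (a and b) or (a and c) or (b and c)

print(tmr_vote(True, True, False))   # True: one faulty channel outvoted
print(tmr_vote(False, True, False))  # False: no spurious trip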

Ichiyen, Norman; Chan, Dominic; Thompson, Paul

2004-01-15T23:59:59.000Z

411

FAST Program Computer Based Training  

Science Conference Proceedings (OSTI)

FAST CBT is a computer-based training module that allows users to access training when desired and review it at their own pace. It provides graphics and interactive features to enhance learning. This computer-based training module was created to help instruct users on an existing EPRI engineering computer program, Product ID #1004565 (FAST 1.0, Flow Path Analysis for Steam Turbines, Version 1.0). Users of this training will consist mainly of steam turbine performance engineers and outage managers. A subjec...

2010-11-17T23:59:59.000Z

412

Computer Science

E-Print Network (OSTI)

Computer science is concerned with the study of computers and computing, focusing on algorithms, programs and programming, and computational systems. The main goal ... of computational systems and to show how this body of knowledge can be used to produce solutions to real

Richards-Kortum, Rebecca

413

Analysis Tools  

NLE Websites -- All DOE Office Websites (Extended Search)

Analysis Tools: Genome Channel, Generation, GRAIL, GRAILEXP, Pipeline, Domain Parser, Prospect, MIRA. We are the Computational Biology and Bioinformatics Group of the Biosciences Division...

414

The advanced computational testing and simulation toolkit (ACTS)  

DOE Green Energy (OSTI)

During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts, and tool maturity. This paper presents a brief introduction to the functionality available in ACTS.

Drummond, L.A.; Marques, O.

2002-05-21T23:59:59.000Z

415

Brain registration and subtraction - improved localization for SPECT analysis (B.R.A.S.I.L.): a computer-aided diagnosis in epilepsy tool kit  

Science Conference Proceedings (OSTI)

Surgery is an important option in the treatment of patients with medically intractable epilepsy. Traditional techniques for the localization of the epileptogenic zone (EZ), e.g. surface electroencephalography (EEG) and magnetic resonance (MR) imaging, ... Keywords: SPECT image, computer aided diagnosis (CAD), epilepsy, image registration

Lucas F. de Oliveira; Paulo M. de Azevedo-Marques; Lauro Wichert-Ana; Américo Ceiki Sakamoto

2008-03-01T23:59:59.000Z

416

Energy Efficient Computing at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Energy Monitoring to Improve Efficiency. NERSC has instrumented its machine room with state-of-the-art wireless monitoring technology from SynapSense. To date, the center has installed 834 sensors, which gather information on variables important to machine room operation, including air temperature, pressure, and humidity. The sensors signal temperature changes, and NERSC has already seen benefits in center reliability. For example, after cabinets of a large, decommissioned system were shut down and removed, cold air pockets developed near
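The kind of automated check such instrumentation enables can be sketched in a few lines of Python (sensor names and the threshold below are invented for illustration; SynapSense's software is its own product):

# Flag possible cold-air pockets from machine-room inlet sensors.
readings = {"row3-inlet": 14.2, "row3-outlet": 27.5, "row4-inlet": 18.9}
COLD_POCKET_C = 15.0   # hypothetical alert threshold, degrees Celsius

for sensor_id, temp in sorted(readings.items()):
    if temp < COLD_POCKET_C:
        print("possible cold-air pocket at %s: %.1f C" % (sensor_id, temp))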

417

Water is used for many purposes, including growing crops, producing copper,

E-Print Network (OSTI)

WATER USES. Water is used for many purposes, including growing crops, producing copper, generating electricity, watering lawns, keeping clean, drinking and recreation. Balancing the water budget comes down ... of the water budget. Reducing demand involves reducing how much water each person uses, limiting the number

418

Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 1 quarter 3 progress report.  

SciTech Connect

The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project in August 2010 to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with analysis capabilities based on high-performance computing. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water loads on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risk of structural failure that these loads pose. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of salt spray transport into bridge girders to address the suitability of using weathering steel in bridges, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. This quarterly report documents technical progress on the project tasks for the period of April through June 2011.

Lottes, S.A.; Kulak, R.F.; Bojanowski, C. (Energy Systems)

2011-08-26T23:59:59.000Z

419

Energy consumption of personal computer workstations  

SciTech Connect

A field study directly measured the electric demand of 189 personal computer workstations for 1-week intervals, and a survey recorded the connected equipment at 1,846 workstations in six buildings. Each separate workstation component (e.g., computer, monitor, printer, modem, and other peripheral) was individually monitored to obtain detailed electric demand profiles. Other analyses included comparison of nameplate power rating with measured power consumption and the energy savings potential and cost-effectiveness of a controller that automatically turns off computer workstation equipment during inactivity. An important outcome of the work is the development of a standard workstation demand profile and a technique for estimating a whole-building demand profile. Together, these provide a method for transferring this information to utility energy analysts, design engineers, building energy modelers, and others. A life-cycle cost analysis was used to determine the cost-effectiveness of three energy conservation measures: (1) energy awareness education, (2) retrofit power controller installation, and (3) purchase of energy-efficient PCs.
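The life-cycle cost screen behind such conclusions reduces to comparing a measure's first cost with the discounted stream of its energy savings. A minimal net-present-value sketch in Python (all dollar figures are invented for illustration, not the study's measured values):

def npv_of_savings(annual_savings, rate, years):
    # Present value of a constant annual saving at a given discount rate.
    return sum(annual_savings / (1 + rate) ** t for t in range(1, years + 1))

first_cost = 40.0   # e.g., a retrofit power controller, $ per workstation
savings = npv_of_savings(annual_savings=12.0, rate=0.05, years=5)
verdict = "cost-effective" if savings > first_cost else "not cost-effective"
print("NPV of savings $%.2f vs first cost $%.2f: %s" % (savings, first_cost, verdict))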

Szydlowski, R.F.; Chvala, W.D. Jr.

1994-08-01T23:59:59.000Z

420

Development of computer graphics  

SciTech Connect

The purpose of this project was to screen and evaluate three graphics packages as to their suitability for displaying concentration contour graphs. The information to be displayed comes from computer code simulations describing airborne contaminant transport. The three evaluated programs were MONGO (John Tonry, MIT, Cambridge, MA, 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence the subsequent work, described in this report, covers the implementation and testing of NCSA Image on both an Apple Mac II and a Sun 4 computer. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.

Nuttall, H.E. [Univ. of New Mexico, Albuquerque, NM (US). Dept. of Chemical and Nuclear Engineering

1989-07-01T23:59:59.000Z



421

Chronology of Computing Devices  

Science Conference Proceedings (OSTI)

A chronology of computing devices is given. It begins with the abacus and counting tables, and traces the development through desk calculators, analog computers, and finally stored program automatic digital computers. Significant dates relative to the ... Keywords: Calculating machines, chronology, computers, computing devices, history.

H. D. Huskey; V. R. Huskey

1976-12-01T23:59:59.000Z

422

Corporate Information & Computing Services  

E-Print Network (OSTI)

Corporate Information & Computing Services. High Performance Computing Report, March 2008. The University of Sheffield's High Performance Computing (HPC) facility is provided by CiCS. It consists of ... both Graduate Students and Staff.

Martin, Stephen John

423

High Performance Computing in Bioinformatics

E-Print Network (OSTI)

High Performance Computing in Bioinformatics. Thomas Ludwig (t.ludwig@computer.org), Ruprecht ... PART I: High Performance Computing (Thomas Ludwig); PART II: HPC Computing in Bioinformatics (Alexandros Stamatakis). © Thomas Ludwig, Alexandros Stamatakis, GCB'04. PART I: High Performance Computing - Introduction

Stamatakis, Alexandros

424

Student Manual Computers Gorlaeus  

E-Print Network (OSTI)

Student Manual Computers Gorlaeus, September 2007. Universiteit Leiden, Computerdienst Gorlaeus. Helpdesk@science.leidenuniv.nl, tel. (071-527) 4702. ... Computer Helpdesk: helpdesk@chem.leidenuniv.nl

Galis, Frietson

425

Combinatorial evaluation of systems including decomposition of a system representation into fundamental cycles  

DOE Patents (OSTI)

One embodiment of the present invention includes a computer operable to represent a physical system with a graphical data structure corresponding to a matroid. The graphical data structure corresponds to a number of vertices and a number of edges that each correspond to two of the vertices. The computer is further operable to define a closed pathway arrangement with the graphical data structure and identify each different one of a number of fundamental cycles by evaluating a different respective one of the edges with a spanning tree representation. The fundamental cycles each include three or more of the vertices.
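The decomposition described here, one fundamental cycle per non-tree edge of a spanning tree, can be sketched directly in Python (a minimal pure-Python illustration, not the patented implementation):

from collections import deque

def fundamental_cycles(vertices, edges):
    # Build a BFS spanning tree, then close one cycle per non-tree edge.
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = {v: None for v in vertices}
    seen, tree = {vertices[0]}, set()
    dq = deque([vertices[0]])
    while dq:
        u = dq.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                parent[v] = u
                tree.add(frozenset((u, v)))
                dq.append(v)

    def path_to_root(v):
        path = []
        while v is not None:
            path.append(v)
            v = parent[v]
        return path

    cycles = []
    for u, v in edges:
        if frozenset((u, v)) not in tree:            # non-tree edge -> cycle
            pu, pv = path_to_root(u), path_to_root(v)
            common = next(x for x in pu if x in pv)  # lowest common ancestor
            cycles.append(pu[:pu.index(common) + 1] + pv[:pv.index(common)][::-1])
    return cycles

# One independent cycle (a-b-c) plus a pendant vertex d:
print(fundamental_cycles(["a", "b", "c", "d"],
                         [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]))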

Oliveira, Joseph S. (Richland, WA); Jones-Oliveira, Janet B. (Richland, WA); Bailey, Colin G. (Wellington, NZ); Gull, Dean W. (Seattle, WA)

2008-07-01T23:59:59.000Z

426

NIST, Computer Security Division, Computer Security ...  

Science Conference Proceedings (OSTI)

... Standards. ITL January 1999, Secure Web-Based Access to High Performance Computing Resources. ITL November ...

427

NERSC seeks Computational Systems Group Lead  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC seeks Computational Systems Group Lead. January 6, 2011, by Katie Antypas. Note: This position is now closed. The Computational Systems Group provides production support and advanced development for the supercomputer systems at NERSC. Manage the Computational Systems Group (CSG), which provides production support and advanced development for the supercomputer systems at NERSC (National Energy Research Scientific Computing Center). These systems, which include the second fastest supercomputer in the U.S., provide 24x7 computational services for open (unclassified) science to worldwide researchers supported by DOE's Office of Science. Duties/Responsibilities: Manage the Computational Systems Group's staff of approximately 10

428

AGILA: The Ateneo High Performance Computing System  

E-Print Network (OSTI)

A Beowulf cluster is a low-cost parallel high performance computing system that uses commodity hardware components such as personal computers and standard Ethernet adapters and switches and runs on freely available software such as Linux and LAM-MPI. In this paper the development of the AGILA HPCS, which stands for the Ateneo Gigaflops-Range Performance, Linux OS, and Athlon Processors High Performance Computing System, is discussed, including its hardware and software configurations and performance evaluation. Keywords: high-performance computing, commodity cluster computing, parallel computing, Beowulf-class cluster. 1. INTRODUCTION. In the Philippines today, computing power in the gigaflops range is not generally available for use in research and development. Conventional supercomputers or high performance computing systems are very expensive and are beyond the budgets of most university research groups, especially in developing countries such as the Philippines. A lower cost option...

Rafael P. Saldaña; Felix P. Muga II; Jerrold J. Garcia; William Emmanuel S. Yu

2000-01-01T23:59:59.000Z

429

Oak Ridge Leadership Computing Facility  

NLE Websites -- All DOE Office Websites

Oak Ridge Leadership Computing Facility. The OLCF was established at Oak Ridge National Laboratory in 2004 with the mission of standing up a supercomputer 100 times more powerful than the leading systems of the day.

430

Fourth SIAM conference on mathematical and computational issues in the geosciences: Final program and abstracts  

Science Conference Proceedings (OSTI)

The conference focused on computational and modeling issues in the geosciences. Of the geosciences, problems associated with phenomena occurring in the earth's subsurface were best represented. Topics in this area included petroleum recovery, ground water contamination and remediation, seismic imaging, parameter estimation, upscaling, geostatistical heterogeneity, reservoir and aquifer characterization, optimal well placement and pumping strategies, and geochemistry. Additional sessions were devoted to the atmosphere, surface water and oceans. The central mathematical themes included computational algorithms and numerical analysis, parallel computing, mathematical analysis of partial differential equations, statistical and stochastic methods, optimization, inversion, homogenization and renormalization. The problem areas discussed at this conference are of considerable national importance, with the increasing importance of environmental issues, global change, remediation of waste sites, declining domestic energy sources and an increasing reliance on producing the most out of established oil reservoirs.

NONE

1997-12-31T23:59:59.000Z

431

Human computing and machine understanding of human behavior: a survey  

Science Conference Proceedings (OSTI)

A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next generation computing ... Keywords: affective computing, analysis, human behavior understanding, human sensing, multimodal data, socially-aware computing

Maja Pantic; Alex Pentland; Anton Nijholt; Thomas S. Huang

2007-01-01T23:59:59.000Z

432

Computing with almost periodic functions  

E-Print Network (OSTI)

The paper develops a method for discrete computational Fourier analysis of functions defined on quasicrystals and other almost periodic sets. A key point is to build the analysis around the emerging theory of quasicrystals and diffraction in the setting of local hulls and dynamical systems. Numerically computed approximations arising in this way are built out of the Fourier module of the quasicrystal in question, and approximate their target functions uniformly on the entire infinite space. The methods are entirely group theoretical, being based on finite groups and their duals, and they are practical and computable. Examples of functions based on the standard Fibonacci quasicrystal serve to illustrate the method (which is applicable to all quasicrystals modeled on the cut and project formalism).
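The flavor of such a computation can be sketched with numpy on the standard Fibonacci chain: generate point positions by the cut-and-project recipe and scan trial wavenumbers for strong exponential sums (a toy illustration only; the paper's construction is group-theoretical, not a brute-force scan):

import numpy as np

tau = (1 + np.sqrt(5)) / 2
n = np.arange(1, 200)
x = n + (1 / tau) * np.floor(n / tau)   # Fibonacci chain, tile lengths 1 and tau

k = np.linspace(0.0, 4.0, 2001)         # trial wavenumbers
amp = np.exp(-2j * np.pi * np.outer(k, x)).sum(axis=1) / x.size
peaks = k[np.abs(amp) > 0.3]            # threshold chosen for this toy only
print(np.round(peaks, 3))               # Bragg-like peaks of the Fourier module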

R. V. Moody; M. Nesterenko; J. Patera

2008-08-13T23:59:59.000Z

433

Defining computational aesthetics  

Science Conference Proceedings (OSTI)

This paper attempts to define the discipline of Computational Aesthetics in the context of computer science, partly reflecting the contributions and comprehensive discussions of the first EG Workshop on Computational Aesthetics in Graphics, Visualization ...

Florian Hoenig

2005-05-01T23:59:59.000Z

434

ComputerCluster - FCW

NLE Websites -- All DOE Office Websites (Extended Search)

CNMS Computer Cluster. This page describes the CNMS Computational Cluster, how to access it, and how to use it. (16 August 2010) N.B. The latest block of the CNMS Computer Cluster...

435

Computer Forensics In Forensis  

E-Print Network (OSTI)

... 2007. [62] K. J. Ziese. Computer based forensics – a case ... U.N. In Proceedings of CMAD IV: Computer Misuse and Anomaly ... investigations. Journal of Computer Security, 12(5):753–776,

Peisert, Sean; Bishop, Matt; Marzullo, Keith

2008-01-01T23:59:59.000Z

436

Compute Node MD3000 Storage Array  

E-Print Network (OSTI)

[Diagram: Dell 2950 head node, MD3000 storage array, two 24-port switches, and multiple compute nodes]

Weber, David J.

437

Computing trends using graphic processor in high energy physics  

E-Print Network (OSTI)

One of the main challenges in High Energy Physics is the fast analysis of large amounts of experimental and simulated data. At LHC-CERN one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain results quickly depends on high computational power. The main advantage of GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) are being ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends in HEP computing using GPUs.

Niculescu, Mihai

2011-01-01T23:59:59.000Z

438

Computing trends using graphic processor in high energy physics  

E-Print Network (OSTI)

One of the main challenges in High Energy Physics is the fast analysis of large amounts of experimental and simulated data. At LHC-CERN one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain results quickly depends on high computational power. The main advantage of GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) are being ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends in HEP computing using GPUs.

Mihai Niculescu; Sorin-Ion Zgura

2011-06-30T23:59:59.000Z

439

Review and evaluation of the RELAP5YA computer code and the Vermont Yankee LOCA (Loss-of-Coolant Accident) licensing analysis model for use in small and large break BWR (Boiling Water Reactor) LOCAs

SciTech Connect

A review has been completed of the RELAP5YA computer code to determine its acceptability for performing licensing analyses. The review was limited to Boiling Water Reactor (BWR) applications. In addition, a Loss-Of-Coolant Accident (LOCA) licensing analysis method, using the RELAP5YA computer code, has been reviewed. This method is applicable to the Vermont Yankee Nuclear Power Station for performing full-break-spectrum LOCA and fuel-cycle-independent analyses. The review of the RELAP5YA code consisted of an evaluation of all modifications incorporated by Yankee Atomic Electric Company (YAEC) into the RELAP5/MOD1 Cycle 18 computer code, from which the licensing version of the code originated. Qualifying separate and integral effects assessment calculations were reviewed to evaluate the validity and proper implementation of the various added models. The LOCA licensing method was assessed by reviewing two RELAP5YA system input models and evaluating several small and large break qualifying transient calculations. A review of the RELAP5YA code modifications and their assessments, as well as the submitted LOCA licensing method, is given, and the results of the review are provided.

Jones, J.L.

1987-01-01T23:59:59.000Z

440

MELCOR computer code manuals  

Science Conference Proceedings (OSTI)

MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L. [Sandia National Labs., Albuquerque, NM (United States); Hodge, S.A.; Hyman, C.R.; Sanders, R.L. [Oak Ridge National Lab., TN (United States)

1995-03-01T23:59:59.000Z



441

Computer-aided documentation  

Science Conference Proceedings (OSTI)

Current standards for high-quality documentation of complex computer systems include many criteria, based on the application and user levels. Important points common to many systems are: targeting to specific user groups; being complete, concise, and structured; containing both tutorials and reference material; being field-tested; and being timely in appearance relative to the software delivery. To achieve these goals, uniform quality standards should be more vigorously applied, the documentation development cycle should be shortened, more documentation/software help should be available on line, and more user interaction should be solicited. For future computer systems, the proposal is made that the documentation be machine comprehensible. This should be phased in, with the immediate goal being to facilitate user querying for information, and with an ultimate goal of providing a database for programmer apprentice artificial-intelligence programs that assist software development. This new functionality will be the result of several trends, including the drastically reduced cost of read-only online random-access storage via video optical disks, the ongoing successes of artificial-intelligence programs when applied to limited application areas, and the ever increasing cost of software programmers. 3 references.

Rosenberg, S.

1982-01-01T23:59:59.000Z

442

Exascale Computing at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Exascale Computing. CoDEx Project: A Hardware/Software Codesign Environment...

443

BNL | CFN: Theory & Computation  

NLE Websites -- All DOE Office Websites (Extended Search)

Theory and Computation. Contact: Mark Hybertsen. Advances in theory, numerical algorithms, and computational capabilities have enabled an unprecedented opportunity for fundamental...

444

Computer Algebra Course - CECM  

E-Print Network (OSTI)

Michael Monagan: Computer Algebra. ... Intro to Computer Algebra, Spring 2013. MACM 401/701 and CMPT 881/MATH 819, Spring 2013. Office hours: ...

445

NEWTON's Computer Science Archive  

NLE Websites -- All DOE Office Websites (Extended Search)

Computer Science Archive. Most Recent Computer Science Questions: Preparation for Video Game Creator; Most Common Programming and Web Script; How Does Spell Check Work? How...

446

BNL | Computational Science Center  

NLE Websites -- All DOE Office Websites (Extended Search)

Computational Science Center. Environmental, Biological, and Computational Sciences Directorate.

447

Computational Biology | Biosciences Division  

NLE Websites -- All DOE Office Websites (Extended Search)

Computational Biology, Biosciences Division, Argonne National Laboratory.

448

Programming a paintable computer  

E-Print Network (OSTI)

A paintable computer is defined as an agglomerate of numerous, finely dispersed, ultra-miniaturized computing particles; each positioned randomly, running asynchronously and communicating locally. Individual particles are ...

Butera, William J. (William Joseph)

2002-01-01T23:59:59.000Z

449

A Comparative Analysis of Computational Approaches to Relative Protein Quantification Using Peptide Peak Intensities in Label-free LC-MS Proteomics Experiments  

SciTech Connect

Liquid chromatography coupled with mass spectrometry (LC-MS) is widely used to identify and quantify peptides in complex biological samples. In particular, label-free shotgun proteomics is highly effective for the identification of peptides and subsequently obtaining a global protein profile of a sample. As a result, this approach is widely used for discovery studies. Typically, the objective of these discovery studies is to identify proteins that are affected by some condition of interest (e.g. disease, exposure). However, for complex biological samples, label-free LC-MS proteomics experiments measure peptides and do not directly yield protein quantities. Thus, protein quantification must be inferred from one or more measured peptides. In recent years, many computational approaches to relative protein quantification of label-free LC-MS data have been published. In this review, we examine the most commonly employed quantification approaches to relative protein abundance from peak intensity values, evaluate their individual merits, and discuss challenges in the use of the various computational approaches.
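A common baseline among the quantification approaches compared in such reviews is a simple roll-up: summarize each protein's relative abundance from its peptides' peak intensities, for example as a median in log space. A minimal Python sketch (peptide names, intensities, and the peptide-to-protein map are invented):

import math
import statistics
from collections import defaultdict

peptide_intensity = {"PEPTIDEA": 1.2e6, "PEPTIDEB": 9.8e5, "PEPTIDEC": 3.1e5}
peptide_protein = {"PEPTIDEA": "P1", "PEPTIDEB": "P1", "PEPTIDEC": "P2"}

# Group log2 peptide intensities by their parent protein.
by_protein = defaultdict(list)
for pep, intensity in peptide_intensity.items():
    by_protein[peptide_protein[pep]].append(math.log2(intensity))

for protein, logs in sorted(by_protein.items()):
    print("%s: median log2 intensity = %.2f" % (protein, statistics.median(logs)))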

Matzke, Melissa M.; Brown, Joseph N.; Gritsenko, Marina A.; Metz, Thomas O.; Pounds, Joel G.; Rodland, Karin D.; Shukla, Anil K.; Smith, Richard D.; Waters, Katrina M.; McDermott, Jason E.; Webb-Robertson, Bobbie-Jo M.

2013-02-01T23:59:59.000Z

450

Brook for GPUs: stream computing on graphics hardware  

Science Conference Proceedings (OSTI)

In this paper, we present Brook for GPUs, a system for general-purpose computation on programmable graphics hardware. Brook extends C to include simple data-parallel constructs, enabling the use of the GPU as a streaming co-processor. We present a compiler ... Keywords: Data Parallel Computing, GPU Computing, Brook, Programmable Graphics Hardware, Stream Computing
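Brook's core idea, writing a kernel once and applying it elementwise over whole streams so the runtime can map it to the GPU, can be mimicked with numpy vectorization (an analogy only; Brook itself extends C and targets graphics hardware):

import numpy as np

def saxpy_kernel(a, x, y):
    # Elementwise kernel applied across entire input streams at once.
    return a * x + y

x = np.arange(8, dtype=np.float32)   # input stream
y = np.ones(8, dtype=np.float32)     # second input stream
print(saxpy_kernel(2.0, x, y))       # output stream: [1. 3. 5. ... 15.]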

Ian Buck; Tim Foley; Daniel Horn; Jeremy Sugerman; Kayvon Fatahalian; Mike Houston; Pat Hanrahan

2004-08-01T23:59:59.000Z

451

Can Cloud Computing Address the Scientific Computing Requirements...  

NLE Websites -- All DOE Office Websites (Extended Search)

Can Cloud Computing Address the Scientific Computing Requirements for DOE Researchers? Well, Yes, No and Maybe

452

Materials Reliability Project: Benchmark Study of Reactor Pressure Vessel Integrity Probabilistic Computational Results Using the Fracture Analysis of Vessels – Oak Ridge (FAVOR) Software Code (MRP-371)  

Science Conference Proceedings (OSTI)

This report presents the results from the Fracture Analysis of Vessels – Oak Ridge (FAVOR) software analysis of three transients that simulated pressurized thermal shock events in pressurized water reactor (PWR) reactor pressure vessels (RPVs). It was determined that software modifications would be required to complete the probabilistic analyses for the wide range of flaw sizes and locations of interest in the study. Consequently, two software revisions were provided by EPRI to enable ...

2013-08-22T23:59:59.000Z

453

Office of Legacy Management Buildings Included on EMS Reports...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Office of Legacy Management Buildings Included on EMS Reports

454

The Computational Physics Program of the National MFE Computer Center

Science Conference Proceedings (OSTI)

Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, the group is collaborating in an international effort to evaluate the fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.

Mirin, A.A.

1989-01-01T23:59:59.000Z

455

User computer system pilot project  

Science Conference Proceedings (OSTI)

The User Computer System (UCS) is a general-purpose, unclassified, nonproduction system for Mound users. The UCS pilot project was successfully completed, and the system currently has more than 250 users. Over 100 tables were installed on the UCS for use by subscribers, including tables containing data on employees, budgets, and purchasing. In addition, a UCS training course was developed and implemented.

Eimutis, E.C.

1989-09-06T23:59:59.000Z