Sample records for analysis including computer

  1. Energy Consumption of Personal Computing Including Portable

    E-Print Network [OSTI]

    Namboodiri, Vinod

    Energy Consumption of Personal Computing Including Portable Communication Devices. Pavel Somavat ... consumption, questions are being asked about the energy contribution of computing equipment. Although studies have documented the share of energy consumption by this type of equipment over the years, research ...

  2. Including Blind Students in Computer Science Through Access to Graphs

    E-Print Network [OSTI]

    Young, R. Michael

    Including Blind Students in Computer Science Through Access to Graphs Suzanne Balik, Sean Mealin SKetching tool, GSK, to provide blind and sighted people with a means to create, examine, and share graphs (node-link diagrams) in real-time. GSK proved very effective for one blind computer science student

  3. acid analysis including: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Nairn, John A. A bottom-up analysis of including aviation within the EU's Emissions Trading Scheme. Geosciences Websites Summary: A bottom-up analysis of including aviation...

  4. analysis including quantification: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Ausloos 2004-12-31. A bottom-up analysis of including aviation within the EU's Emissions Trading Scheme. Geosciences Websites Summary: A bottom-up analysis of including aviation...

  5. Human-computer interface including haptically controlled interactions

    DOE Patents [OSTI]

    Anderson, Thomas G.

    2005-10-11T23:59:59.000Z

    The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
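
The force-to-scroll-rate mapping the abstract describes can be sketched in a few lines; the threshold, gain, and cap values below are illustrative assumptions, not figures from the patent:

```python
def scroll_rate(force, threshold=0.5, gain=120.0, max_rate=600.0):
    """Map the force applied against a haptic boundary to a scroll rate
    (e.g. pixels/s). Below the threshold the view does not move; above it,
    the rate grows linearly with the excess force, capped at max_rate."""
    excess = force - threshold
    if excess <= 0.0:
        return 0.0
    return min(gain * excess, max_rate)
```

A light press (below the threshold) leaves the view still, while pressing harder scrolls faster, which is the "rate related to the magnitude of applied force" behavior described above.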

  6. #include #include

    E-Print Network [OSTI]

    Campbell, Andrew T.

    #include <sys/types.h>
    #include <unistd.h>
    pid_t pid = fork();
    if (pid < 0) { /* fork() failed */ }
    else if (pid > 0) { /* parent process */ }
    else { /* child process */ }

  7. #include #include

    E-Print Network [OSTI]

    Poinsot, Laurent

    #include <stdio.h>
    #include <unistd.h>
    // Reminders: getpid() returns a process's own pid
    //            getppid() returns the pid of a process's parent
    int main(void) {
        pid_t pid_fils;
        pid_fils = fork();
        if (pid_fils == -1) {
            printf("Error creating the child process\n");
    ...

  8. Computational prediction and analysis of protein structure

    E-Print Network [OSTI]

    Meruelo, Alejandro Daniel

    2012-01-01T23:59:59.000Z

    ... I, and Bowie JU. Kink prediction in membrane proteins. ... Los Angeles. ... Computational prediction and analysis of protein ... OF THE DISSERTATION: Computational prediction and analysis of ...

  9. Reliability analysis of electric power systems including time dependent sources

    E-Print Network [OSTI]

    Kim, Younjong

    1987-01-01T23:59:59.000Z

    ... the PEPS subsystem is computed by: POP_j = NP × PO_j (3.22), where POP_j = power output of the PEPS subsystem during the jth hour and NP = number of PEPS units in the subsystem. The parameters used for designing the PEPS subsystem are taken from I5... and wind velocity data are obtained for output computation. The capacities of the two unconventional subsystems are designed to be equal to each other. 5.1 System Description: Generation System. For the purpose of this study, one of the EPRI reduced scenarios...
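
The subsystem-output relation in Eq. (3.22) above is simple enough to sketch directly; `peps_subsystem_output` is a hypothetical helper name and the hourly per-unit outputs in the example are made-up values:

```python
def peps_subsystem_output(po_unit_hourly, np_units):
    """Eq. (3.22): POP_j = NP * PO_j. The subsystem's output in hour j
    is the per-unit output in that hour times the number of identical
    PEPS units in the subsystem."""
    return [np_units * po for po in po_unit_hourly]
```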

  10. analysis including plasma: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Assembly 2010 Space Plasmas in the Solar System, including Planetary Magnetospheres (D) Solar Variability, Cosmic Rays and Climate (D21) GEOMAGNETIC ACTIVITY AT HIGH-LATITUDE:...

  11. Accounting for the Energy Consumption of Personal Computing Including Portable Devices

    E-Print Network [OSTI]

    Namboodiri, Vinod

    Accounting for the Energy Consumption of Personal Computing Including Portable Devices. Pavel ... U.S.A. vinod.namboodiri@wichita.edu. ABSTRACT: In light of the increased awareness of global energy consumption ... the share of energy consumption due to this equipment over the years, these have rarely characterized ...
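
The accounting these two records describe reduces, at its simplest, to summing power draw times usage hours per device; a minimal sketch, with assumed (not measured) figures in the example:

```python
def annual_energy_kwh(devices):
    """Total annual consumption in kWh for a list of
    (power_in_watts, hours_of_use_per_day) pairs:
    watts * hours/day * 365 days / 1000 W-per-kW."""
    return sum(watts * hours * 365 / 1000.0 for watts, hours in devices)
```

For example, a 100 W desktop used 8 h/day accounts for 292 kWh/year by this estimate.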

  12. Reliability analysis of electric power systems including time dependent sources

    E-Print Network [OSTI]

    Kim, Younjong

    1987-01-01T23:59:59.000Z

    Chairman of Advisory Committee: Chanan Singh. A method for reliability analysis of electric power systems with time dependent sources, such as photovoltaic and wind generation, is introduced. The fluctuating characteristic of unconventional generation... and active solar, wind, geothermal, and hydropower. Of all the renewable energy technologies that have been the focus of encouraging government and private R & D efforts, photovoltaic generation and wind turbine generation appear to be the leading...

  13. Certification plan for reactor analysis computer codes

    SciTech Connect (OSTI)

    Toffer, H.; Crowe, R.D.; Schwinkendorf, K.N. [Westinghouse Hanford Co., Richland, WA (United States); Pevey, R.E. [Westinghouse Savannah River Co., Aiken, SC (United States)

    1990-01-01T23:59:59.000Z

    A certification plan for reactor analysis computer codes used in Technical Specifications development and for other safety and production support calculations has been prepared. An action matrix, checklists, a time schedule, and a resource commitment table have been included in the plan. These items identify what is required to achieve certification of the codes, the timetable on which this will be accomplished, and the resources needed to support such an effort.

  14. Organizational Analysis in Computer Science

    E-Print Network [OSTI]

    Kling, Rob

    1993-01-01T23:59:59.000Z

    trying to develop high performance computing applications ... For example, the High Performance Computing Act will provide ... helping to develop high performance computing applications

  15. Quantitative Analysis of Biofuel Sustainability, Including Land Use Change GHG Emissions

    Broader source: Energy.gov [DOE]

    Plenary V: Biofuels and Sustainability: Acknowledging Challenges and Confronting Misconceptions. Quantitative Analysis of Biofuel Sustainability, Including Land Use Change GHG Emissions. Jennifer B....

  16. Computational analysis of noncoding RNAs

    E-Print Network [OSTI]

    Washietl, Stefan

    Noncoding RNAs have emerged as important key players in the cell. Understanding their surprisingly diverse range of functions is challenging for experimental and computational biology. Here, we review computational methods ...

  17. Computational microscopy for sample analysis

    E-Print Network [OSTI]

    Ikoma, Hayato

    2014-01-01T23:59:59.000Z

    Computational microscopy is an emerging technology which extends the capabilities of optical microscopy with the help of computation. One notable example is super-resolution fluorescence microscopy, which achieves ...

  18. Computational Analysis of Shrouded Wind Turbine Configurations

    E-Print Network [OSTI]

    Alonso, Juan J.

    Computational Analysis of Shrouded Wind Turbine Configurations. Aniket C. Aranake, Vinod K. Lakshminarayan, Karthik Duraisamy. Computational analysis of diffuser-augmented turbines is performed ... simulations of shrouded wind turbines are performed for selected shroud geometries. The results are compared ...

  19. A bottom-up analysis of including aviation within the EU's Emissions Trading Scheme

    E-Print Network [OSTI]

    Watson, Andrew

    A bottom-up analysis of including aviation within the EU's Emissions Trading Scheme. Alice Bows & Kevin Anderson, Tyndall ... the EU's emissions trading scheme. Results indicate that unless the scheme adopts both an early baseline year

  20. Planning, Execution, and Analysis of the Meridian UAS Flight Test Program Including System and Parameter Identification

    E-Print Network [OSTI]

    Tom, Jonathan

    2010-04-27T23:59:59.000Z

    The purpose of this master's thesis is to present the flight test procedures, planning, and analysis, including system identification, parameter identification, and drag calculations, of the Meridian UAS. The system identification is performed using...

  1. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities ...

  2. Computational Analysis of Merchant Marine GPS Data* CASOS Technical Report

    E-Print Network [OSTI]

    Sadeh, Norman M.

    Computational Analysis of Merchant Marine GPS Data*. CASOS Technical Report. George B. Davis. ISRI - Institute for Software Research International; CASOS - Center for Computational Analysis ... Keywords: geospatial analysis, network analysis, clustering. * This work was supported in part ...

  3. MTX data acquisition and analysis computer network

    SciTech Connect (OSTI)

    Butner, D.N.; Casper, T.A.; Brown, M.D.; Drlik, M.; Meyer, W.H.; Moller, J.M. (Lawrence Livermore National Laboratory, University of California, Livermore, CA (USA))

    1990-10-01T23:59:59.000Z

    For the MTX experiment, we use a network of computers for plasma diagnostic data acquisition and analysis. This multivendor network employs VMS, UNIX, and BASIC based computers connected in a local area Ethernet network. Some of the data is acquired directly into a VAX/VMS computer cluster over a fiber-optic serial CAMAC highway. Several HP-Unix workstations and HP-BASIC instrument control computers acquire and analyze data for the more data intensive or specialized diagnostics. The VAX/VMS system is used for global analysis of the data and serves as the central data archiving and retrieval manager. Shot synchronization and control of data flow are implemented by task-to-task message passing using our interprocess communication system. The system has been in operation during our initial MTX tokamak and FEL experiments; it has operated reliably with data rates typically in the range of 5 Mbytes/shot without limiting the experimental shot rate.

  4. Power Flow Analysis Algorithm for Islanded LV Microgrids Including Distributed Generator Units with

    E-Print Network [OSTI]

    Chaudhary, Sanjay

    Power Flow Analysis Algorithm for Islanded LV Microgrids Including Distributed Generator Units. With a larger portion of growing electricity demand being fed through distributed generation (DG) ... power system. Being able to operate in both grid-connected and islanded modes, a microgrid manages

  5. Notes 07. Thermal analysis of finite length journal bearings including fluid inertia

    E-Print Network [OSTI]

    San Andres, Luis

    2009-01-01T23:59:59.000Z

    and holes), multiple pads with mechanical preloads to enhance their load capacity and stability. The analysis includes the evaluation of the film mean temperature field from an energy transport equation. The film temperature affects the viscosity... of typical cylindrical journal bearings comprised of a journal rotating with angular speed (ω) and a bearing with one or more arcuate pads. A film of lubricant fills the gap between the bearing and its journal. Journal center displacements (eX, eY) refer...

  6. A Performance and Cost Analysis of the Amazon Elastic Compute Cloud (EC2) Cluster Compute Instance

    E-Print Network [OSTI]

    Bjørnstad, Ottar Nordal

    A Performance and Cost Analysis of the Amazon Elastic Compute Cloud (EC2) Cluster Compute Instance ... the availability of Elastic Compute Cloud (EC2) Cluster Compute Instances specifically designed for high ... With compute power available on demand, the question arises if cloud computing using an Amazon EC2 HPC

  7. APPENDIX B -GRAPHICS Most computer simulation work produces lots of numerical data. The analysis of

    E-Print Network [OSTI]

    Boal, David

    APPENDIX B - GRAPHICS. Most computer simulation work produces lots of numerical data. The analysis ... In this section, we describe some elements of computer graphics that are appropriate to the Apple PowerPCs of the Computational Physics Lab. Further editions of these notes will include Windows versions of the graphics. Our

  8. Analysis of the Thermonuclear Instability including Low-Power ICRH Minority Heating in IGNITOR

    E-Print Network [OSTI]

    Cardinali, Alessandro

    2014-01-01T23:59:59.000Z

    The nonlinear thermal balance equation for classical plasma in a toroidal geometry is analytically and numerically investigated including ICRH power. The determination of the equilibrium temperature and the analysis of the stability of the solution are performed by solving the energy balance equation that includes the transport relations obtained by the kinetic theory. An estimation of the confinement time is also provided. We show that the ICRH heating in the IGNITOR experiment, among other applications, is expected to stabilize the power of the thermonuclear burning by automatic regulation of the RF coupled power. Here a scenario is considered where IGNITOR is led to operate in a slightly sub-critical regime by adding a small fraction of ${}^3He$ to the nominal 50-50 Deuterium-Tritium mixture. The difference between power lost and alpha heating is compensated by additional ICRH heating, which should be able to increase the global plasma temperature via collisions between ${}^3He$ minority and the background...

  9. automated computer analysis: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... tool for the analysis of software quality. Zhu, Hong. ... Please do not quote. In press, Handbook of Affective Computing. New York, NY: Oxford. ... Automated Face Analysis for ...

  10. RSAC -6 Radiological Safety Analysis Computer Program

    SciTech Connect (OSTI)

    Schrader, Bradley J; Wenzel, Douglas Rudolph

    2001-06-01T23:59:59.000Z

    RSAC-6 is the latest version of the RSAC program. It calculates the consequences of a release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory; decay and in-grow the inventory during transport through processes, facilities, and the environment; model the downwind dispersion of the activity; and calculate doses to downwind individuals. Internal dose from the inhalation and ingestion pathways is calculated. External dose from ground surface and plume gamma pathways is calculated. New and exciting updates to the program include the ability to evaluate a release to an enclosed room, resuspension of deposited activity and evaluation of a release up to 1 meter from the release point. Enhanced tools are included for dry deposition, building wake, occupancy factors, respirable fraction, AMAD adjustment, updated and enhanced radionuclide inventory and inclusion of the dose-conversion factors from FGR 11 and 12.
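
The decay step that a code like RSAC applies to each nuclide in the inventory is ordinary exponential decay; a one-nuclide sketch (the numbers are illustrative, not an RSAC calculation, and in-growth of daughters is omitted):

```python
import math

def decayed_activity(a0, half_life, t):
    """Activity remaining after time t:
    A(t) = A0 * exp(-ln(2) * t / T_half).
    a0 in any activity unit; t and half_life in the same time unit."""
    return a0 * math.exp(-math.log(2.0) * t / half_life)
```

One half-life halves the activity, three half-lives leave one eighth of it.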

  11. BWR transient analysis using neutronic / thermal hydraulic coupled codes including uncertainty quantification

    SciTech Connect (OSTI)

    Hartmann, C.; Sanchez, V. [Karlsruhe Inst. of Technology (KIT), Inst. for Neutron Physics and Reactor Technology INR, Hermann-vom-Helmholtz-Platz-1, D-76344 Eggenstein-Leopoldshafen (Germany); Tietsch, W. [Westinghouse Electric Germany GmbH, Mannheim (Germany); Stieglitz, R. [Karlsruhe Inst. of Technology (KIT), Inst. for Neutron Physics and Reactor Technology INR, Hermann-vom-Helmholtz-Platz-1, D-76344 Eggenstein-Leopoldshafen (Germany)

    2012-07-01T23:59:59.000Z

    The KIT is involved in the development and qualification of best-estimate methodologies for BWR transient analysis in cooperation with industrial partners. The goal is to establish the most advanced thermal hydraulic system codes coupled with 3D reactor dynamics codes to be able to perform a more realistic evaluation of BWR behavior under accident conditions. For this purpose a computational chain based on the lattice code (SCALE6/GenPMAXS), the coupled neutronic/thermal hydraulic code (TRACE/PARCS), and a Monte Carlo based uncertainty and sensitivity package (SUSA) has been established and applied to different kinds of transients of a Boiling Water Reactor (BWR). This paper describes the multidimensional models of the plant elaborated for TRACE and PARCS to perform the investigations mentioned before. For the uncertainty quantification of the coupled code TRACE/PARCS, and specifically to take into account the influence of the kinetics parameters in such studies, the PARCS code has been extended to facilitate the change of model parameters in such a way that the SUSA package can be used in connection with TRACE/PARCS for the uncertainty and sensitivity (U and S) studies. This approach is presented in detail. The results obtained for a rod drop transient with TRACE/PARCS using the SUSA methodology clearly showed the importance of some kinetic parameters on the transient progression, demonstrating that coupling best-estimate coupled codes with uncertainty and sensitivity tools is very promising and of great importance for the safety assessment of nuclear reactors. (authors)
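
The SUSA-style propagation described above amounts to sampling the uncertain inputs and re-running the model many times; a generic sketch (the toy model and the choice of normal distributions are assumptions for illustration, not the TRACE/PARCS setup):

```python
import random

def monte_carlo_uncertainty(model, nominal, rel_sigma, n=2000, seed=1):
    """Sample each input parameter from a normal distribution around its
    nominal value (relative std. dev. rel_sigma), run the model on each
    sample, and return the mean and standard deviation of the output."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        sample = [rng.gauss(x, rel_sigma * abs(x)) for x in nominal]
        outputs.append(model(sample))
    mean = sum(outputs) / n
    std = (sum((y - mean) ** 2 for y in outputs) / (n - 1)) ** 0.5
    return mean, std
```

The output spread directly exposes which input uncertainties matter, which is the role SUSA plays for the kinetics parameters above.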

  12. An Approach for Security Evaluation and Analysis in Cloud Computing

    E-Print Network [OSTI]

    Université de Paris-Sud XI

    An Approach for Security Evaluation and Analysis in Cloud Computing T. Probst1,2 , E. Alata1,3 , M for security evaluation and analysis in cloud computing environments. The objective is to provide an automated way to evaluate the efficiency of security mechanisms aiming at protecting the cloud computing

  13. DNA Computing Complexity Analysis Using DNA/DNA Hybridization Kinetics

    E-Print Network [OSTI]

    DNA Computing Complexity Analysis Using DNA/DNA Hybridization Kinetics. Soo-Yong Shin, Eun Jeong ... the complexity of DNA computing. The complexity of any computational algorithm is typically measured in terms of time and space. In DNA computing, the time complexity can be measured by the total reaction time

  14. DNA Computing Complexity Analysis Using DNA/DNA Hybridization Kinetics

    E-Print Network [OSTI]

    DNA Computing Complexity Analysis Using DNA/DNA Hybridization Kinetics. Soo-Yong Shin, Eun Jeong ... of DNA computing. The complexity of any computational algorithm is typically measured in terms of time and space. In DNA computing, the time complexity can be measured by the total reaction time

  15. Availability Analysis of Repairable Computer Systems and Stationarity Detection

    E-Print Network [OSTI]

    Sericola, Bruno

    Availability Analysis of Repairable Computer Systems and Stationarity Detection. Bruno Sericola. Abstract: Point availability and expected interval availability are dependability measures ... We propose in this paper a new algorithm to compute these two availability measures. This algorithm is based
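
For intuition only, the steady-state limit of the availability measures mentioned above has a simple closed form; the record's algorithm computes the transient point and interval availabilities, which this sketch does not attempt:

```python
def steady_state_availability(mtbf, mttr):
    """Long-run fraction of time a repairable system is up:
    MTBF / (MTBF + MTTR), where MTBF is mean time between failures
    and MTTR is mean time to repair."""
    return mtbf / (mtbf + mttr)
```

A system that runs 99 hours between failures and takes 1 hour to repair is up 99% of the time in the long run.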

  16. Analysis of advanced european nuclear fuel cycle scenarios including transmutation and economical estimates

    SciTech Connect (OSTI)

    Merino Rodriguez, I.; Alvarez-Velarde, F.; Martin-Fuertes, F. [CIEMAT, Avda. Complutense, 40, 28040 Madrid (Spain)

    2013-07-01T23:59:59.000Z

    In this work the transition from the existing Light Water Reactors (LWR) to advanced reactors is analyzed, including Generation III+ reactors in a European framework. Four European fuel cycle scenarios involving transmutation options have been addressed. The first scenario (i.e., reference) is the current fleet using LWR technology and an open fuel cycle. The second scenario assumes a full replacement of the initial fleet with Fast Reactors (FR) burning U-Pu MOX fuel. The third scenario is a modification of the second one, introducing Minor Actinide (MA) transmutation in a fraction of the FR fleet. Finally, in the fourth scenario, the LWR fleet is replaced using FR with MOX fuel as well as Accelerator Driven Systems (ADS) for MA transmutation. All scenarios consider an intermediate period of GEN-III+ LWR deployment and they extend over a period of 200 years, looking for equilibrium mass flows. The simulations were made using the TR-EVOL code, a tool for fuel cycle studies developed by CIEMAT. The results reveal that all scenarios are feasible with respect to nuclear resource demand (U and Pu). Concerning the no-transmutation cases, the second scenario reduces considerably the Pu inventory in repositories compared to the reference scenario, although the MA inventory increases. The transmutation scenarios show that elimination of the LWR MA legacy requires, on one hand, a maximum fraction of 33% (i.e., a peak value of 26 FR units) of the FR fleet dedicated to transmutation (MA in MOX fuel, homogeneous transmutation). On the other hand, a maximum number of ADS plants accounting for 5% of electricity generation is predicted in the fourth scenario (i.e., 35 ADS units). Regarding the economic analysis, the estimations show an increase of LCOE (levelized cost of electricity), averaged over the whole period, with respect to the reference scenario of 21% and 29% for the FR and FR-with-transmutation scenarios respectively, and 34% for the fourth scenario. (authors)
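
The LCOE figures quoted above are, by definition, ratios of discounted lifetime cost to discounted lifetime generation; a generic sketch with invented numbers, not the TR-EVOL calculation:

```python
def lcoe(annual_costs, annual_energy, discount_rate):
    """Levelized cost of electricity: sum of discounted yearly costs
    divided by sum of discounted yearly generation (year 0 undiscounted)."""
    num = sum(c / (1 + discount_rate) ** t
              for t, c in enumerate(annual_costs))
    den = sum(e / (1 + discount_rate) ** t
              for t, e in enumerate(annual_energy))
    return num / den
```

Comparing the LCOE of each scenario against the reference LCOE yields percentage increases like the 21%, 29%, and 34% reported above.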

  17. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    ... found to be adequate and slightly conservative. Application of the Computer Program SASSI for Seismic SSI Analysis for WTP Facilities, Farhang Ostadan & Raman Venkata, October 25,...

  18. Analysis of Solar Passive Techniques and Natural Ventilation Concepts in a Residential Building Including CFD Simulation

    E-Print Network [OSTI]

    Quince, N.; Ordonez, A.; Bruno, J. C.; Coronas, A.

    2010-01-01T23:59:59.000Z

    ... step to increase energy performance in buildings is to use passive strategies, such as orientation, natural ventilation or envelope optimisation. This paper presents an analysis of solar passive techniques and natural ventilation concepts in a case...

  19. Intrinsic and Extrinsic Analysis on Computational Anatomy

    E-Print Network [OSTI]

    Université de Paris-Sud XI

    ... in the anatomy. Mathematical Foundations of Computational Anatomy (MFCA'06).

  20. The Green Computing Observatory: status of acquisition and analysis

    E-Print Network [OSTI]

    Lefèvre, Laurent

    The Green Computing Observatory: status of acquisition and analysis. Cécile Germain-Renaud, Julien ... 1: CNRS, INRIA; 2: Laboratoire de l'Accélérateur Linéaire, CNRS-IN2P3. Previous GreenDays talks: GreenDays@Paris, The Green Computing Observatory: plans and scientific challenges; GreenDays@Lyon, The Green Computing ...

  1. EXPERIMENTAL AND NUMERICAL ANALYSIS OF THE FLOW INSIDE A CONFIGURATION INCLUDING AN AXIAL PUMP AND A TUBULAR

    E-Print Network [OSTI]

    Boyer, Edmond

    EXPERIMENTAL AND NUMERICAL ANALYSIS OF THE FLOW INSIDE A CONFIGURATION INCLUDING AN AXIAL PUMP ... In centrifugal and axial pumps, the flow is characterized by a turbulent and complex behavior ... and also of a configuration that includes an axial pump and a bundle of tubes that mimics the cold source of a heat exchanger

  2. Computational Infrastructure for Systems Genetics Analysis

    E-Print Network [OSTI]

    Yandell, Brian S.

    ... are shared. Benefits of wider access to datasets and models: (1) catalyze new insights on disease & methods; (2) enable biologists & analysts to share tools. UW-Madison: Yandell, Attie, Broman, Kendziorski. Jackson Labs: Churchill. ... BioWeb Server: run analysis and load results. File Storage.

  3. Improved One-dimensional Analysis of CMOS Photodiode Including Epitaxial-Substrate Junction

    E-Print Network [OSTI]

    Hornsey, Richard

    ... one-dimensional analysis of the CMOS photodiode has been derived in which the effect of the substrate, which forms a high ... Following the classical one-dimensional analyses of photodiodes and other photovoltaic devices [1-3], subsequent efforts focused on the effects of lateral diffusion in linear and two-dimensional arrays

  4. Computer methods for structural neurological systems analysis

    E-Print Network [OSTI]

    López, Roberto Eugenio

    1991-01-01T23:59:59.000Z

    of the digital computer has allowed our progress to increase enormously. Unfortunately, this great tool has shown many limitations. In the early 1960's, researchers in the area of artificial intelligence made wild predictions about computers emulating... and neuroscientists together to study the brain itself. In general, the artificial intelligence approach attempts to make computers exhibit "intelligent" behavior, without correlating ... (This thesis follows the style and format of Communications of the ACM...)

  5. Quantum Wavepacket Ab Initio Molecular Dynamics: An Approach for Computing Dynamically Averaged Vibrational Spectra Including Critical Nuclear Quantum Effects

    E-Print Network [OSTI]

    Iyengar, Srinivasan S.

    the precise vibrational signatures that contribute to dynamics in soft-mode hydrogen-bonded systems. Received: June 12, 2007; In Final Form: August 11, 2007. We have introduced a computational methodology ... of hydrogen-bonded systems and hydrogen transfer extends beyond fundamental chemistry and well into the areas

  6. RDI's Wisdom Way Solar Village Final Report: Includes Utility Bill Analysis of Occupied Homes

    SciTech Connect (OSTI)

    Robb Aldrich, Steven Winter Associates

    2011-07-01T23:59:59.000Z

    In 2010, Rural Development, Inc. (RDI) completed construction of Wisdom Way Solar Village (WWSV), a community of ten duplexes (20 homes) in Greenfield, MA. RDI was committed to very low energy use from the beginning of the design process throughout construction. Key features include: 1. Careful site plan so that all homes have solar access (for active and passive); 2. Cellulose insulation providing R-40 walls, R-50 ceiling, and R-40 floors; 3. Triple-pane windows; 4. Airtight construction (~0.1 CFM50/ft2 enclosure area); 5. Solar water heating systems with tankless, gas, auxiliary heaters; 6. PV systems (2.8 or 3.4 kW STC); 7. 2-4 bedrooms, 1,100-1,700 ft2. The design heating loads in the homes were so small that each home is heated with a single, sealed-combustion, natural gas room heater. The cost savings from the simple HVAC systems made possible the tremendous investments in the homes' envelopes. The Consortium for Advanced Residential Buildings (CARB) monitored temperatures and comfort in several homes during the winter of 2009-2010. In the Spring of 2011, CARB obtained utility bill information from 13 occupied homes. Because of efficient lights, appliances, and conscientious home occupants, the energy generated by the solar electric systems exceeded the electric energy used in most homes. Most homes, in fact, had a net credit from the electric utility over the course of a year. On the natural gas side, total gas costs averaged $377 per year (for heating, water heating, cooking, and clothes drying). Total energy costs were even less - $337 per year, including all utility fees. The highest annual energy bill for any home evaluated was $458; the lowest was $171.

  7. Computational prediction and analysis of protein structure

    E-Print Network [OSTI]

    Meruelo, Alejandro Daniel

    2012-01-01T23:59:59.000Z

    Properties studied include packing density, burial fraction, hydrogen ... properties such as surface loop length, interhelical hydrogen ...

  8. Finite element analysis and computed tomography based structural rigidity analysis of rat tibia with simulated lytic defects

    E-Print Network [OSTI]

    Vaziri, Ashkan

    Finite element analysis and computed tomography based structural rigidity analysis of rat tibia ... (Damron et al., 2003; Mirels, 1989). In contrast, Computed Tomography based Structural Rigidity Analysis ...

  9. An analysis of the cloud computing platform

    E-Print Network [OSTI]

    Bhattacharjee, Ratnadeep

    2009-01-01T23:59:59.000Z

    A slew of articles have been written about the fact that computing will eventually go in the direction of electricity. Just as most software users these days also own the hardware that runs the software, electricity users ...

  10. Surface and grain boundary scattering in nanometric Cu thin films: A quantitative analysis including twin boundaries

    SciTech Connect (OSTI)

    Barmak, Katayun [Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027 and Department of Materials Science and Engineering and Materials Research Science and Engineering Center, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213 (United States); Darbal, Amith [Department of Materials Science and Engineering and Materials Research Science and Engineering Center, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213 (United States); Ganesh, Kameswaran J.; Ferreira, Paulo J. [Materials Science and Engineering, The University of Texas at Austin, 1 University Station, Austin, Texas 78712 (United States); Rickman, Jeffrey M. [Department of Materials Science and Engineering and Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States); Sun, Tik; Yao, Bo; Warren, Andrew P.; Coffey, Kevin R., E-mail: kb2612@columbia.edu [Department of Materials Science and Engineering, University of Central Florida, 4000 Central Florida Boulevard, Orlando, Florida 32816 (United States)

    2014-11-01T23:59:59.000Z

    The relative contributions of various defects to the measured resistivity in nanocrystalline Cu were investigated, including a quantitative account of twin-boundary scattering. It has been difficult to quantitatively assess the impact twin boundary scattering has on the classical size effect of electrical resistivity, due to limitations in characterizing twin boundaries in nanocrystalline Cu. In this study, crystal orientation maps of nanocrystalline Cu films were obtained via precession-assisted electron diffraction in the transmission electron microscope. These orientation images were used to characterize grain boundaries and to measure the average grain size of a microstructure, with and without considering twin boundaries. The results of these studies indicate that the contribution from grain-boundary scattering is the dominant factor (as compared to surface scattering) leading to enhanced resistivity. The resistivity data can be well-described by the combined Fuchs-Sondheimer surface scattering model and Mayadas-Shatzkes grain-boundary scattering model using Matthiessen's rule with a surface specularity coefficient of p = 0.48 and a grain-boundary reflection coefficient of R = 0.26.
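
The Mayadas-Shatzkes grain-boundary term used in this record can be sketched directly from its standard closed form; the bulk resistivity and mean free path in the test are rough copper figures used only for illustration, and the Fuchs-Sondheimer surface term is omitted:

```python
import math

def mayadas_shatzkes_resistivity(rho_bulk, mean_free_path, grain_size, R):
    """Film resistivity with grain-boundary scattering only
    (Mayadas-Shatzkes): alpha = (mfp / d) * R / (1 - R), and
    rho_bulk / rho = 3*(1/3 - alpha/2 + alpha**2
                        - alpha**3 * ln(1 + 1/alpha))."""
    alpha = (mean_free_path / grain_size) * R / (1.0 - R)
    ratio = 3.0 * (1.0 / 3.0 - alpha / 2.0 + alpha ** 2
                   - alpha ** 3 * math.log(1.0 + 1.0 / alpha))
    return rho_bulk / ratio
```

As the grain size grows, alpha goes to zero and the resistivity recovers the bulk value, consistent with grain-boundary scattering being a size effect.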

  11. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Broader source: Energy.gov (indexed) [DOE]

    Computer Program SASSI for Seismic SSI Analysis of WTP Facilities Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED) Presented by Lisa Anderson (BNI) US DOE NPH Workshop October...

  12. The Realizability Approach to Computable Analysis and Topology

    E-Print Network [OSTI]

    , of the NSF, NAFSA, or the U.S. government. Keywords: computability, realizability, modest sets, combinatory algebras with subalgebras of computable elements, out of which categories of modest sets and analysis are special cases of the general theory of modest sets. In the first part of the dissertation, I

  13. The Realizability Approach to Computable Analysis and Topology

    E-Print Network [OSTI]

    , of the NSF, NAFSA, or the U.S. government. Keywords: computability, realizability, modest sets categories of modest sets are constructed. The internal logic of these categories is suitable for developing approaches to computable topology and analysis are special cases of the general theory of modest sets

  14. analysis cai computer: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Copyright 2006 Computer Aid, Inc....

  15. analysis computer rsac: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computability & Complexity in...

  16. Process for computing geometric perturbations for probabilistic analysis

    DOE Patents [OSTI]

    Fitch, Simeon H. K. (Charlottesville, VA); Riha, David S. (San Antonio, TX); Thacker, Ben H. (San Antonio, TX)

    2012-04-10T23:59:59.000Z

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.
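
As a sketch of the idea (hypothetical helper names, not the patented implementation): each node in the region of interest carries a precomputed displacement vector, and a perturbed geometry is generated by scaling those vectors with a sampled realization of the uncertainty.

```python
def perturb_nodes(nodes, displacements, scale):
    """Return perturbed node coordinates: node + scale * displacement.

    nodes and displacements are parallel lists of (x, y) tuples; scale is
    the sampled realization of the geometric uncertainty for one analysis run.
    """
    return [(x + scale * dx, y + scale * dy)
            for (x, y), (dx, dy) in zip(nodes, displacements)]

# One realization of the perturbed mesh; in a probabilistic driver, scale
# would be drawn from a probability distribution on each sample.
perturbed = perturb_nodes([(0.0, 0.0), (1.0, 0.0)],
                          [(1.0, 0.0), (0.0, 1.0)], 0.1)
```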

  17. Student Ownership of Work Created in Computer Science Classes and Projects: Ownership of software, including the source code, that students create as part of their MSU education

    E-Print Network [OSTI]

    Dyer, Bill

    Student Ownership of Work Created in Computer Science Classes and Projects. Ownership of software, including the source code, that students create as part of their MSU education activities ... a perpetual royalty-free nonexclusive right to use the source code and make derivative works for educational

  18. Multiscale analysis of nonlinear systems using computational homology

    SciTech Connect (OSTI)

    Konstantin Mischaikow, Rutgers University /Georgia Institute of Technology, Michael Schatz, Georgia Institute of Technology, William Kalies, Florida Atlantic University, Thomas Wanner,George Mason University

    2010-05-19T23:59:59.000Z

    This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists these major directions of research which were pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (1) A clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (2) An investigation of homology as a probe for flow dynamics, and (3) The construction of a new convection apparatus for probing the effects of large-aspect-ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields form complex patterns of 'force chains' that may be amenable to analysis using computational homology.
(5) Microstructure Characterization - We extended our previous work on studying the time evolution of patterns associated with phase separation in conserved concentration fields. (6) Probabilistic Homology Validation - work on microstructure characterization is based on numerically studying the homology of certain sublevel sets of a function, whose evolution is described by deterministic or stochastic evolution equations. (7) Computational Homology and Dynamics - Topological methods can be used to rigorously describe the dynamics of nonlinear systems. We are approaching this problem from several perspectives and through a variety of systems. (8) Stress Networks in Polycrystals - we have characterized stress networks in polycrystals. This part of the project is aimed at developing homological metrics which can aid in distinguishing not only microstructures, but also derived mechanical response fields. (9) Microstructure-Controlled Drug Release - This part of the project is concerned with the development of topological metrics in the context of controlled drug delivery systems, such as drug-eluting stents. We are particularly interested in developing metrics which can be used to link the processing stage to the resulting microstructure, and ultimately to the achieved system response in terms of drug release profiles. (10) Microstructure of Fuel Cells - we have been using our computational homology software to analyze the topological structure of the void, metal and ceramic components of a Solid Oxide Fuel Cell.
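
As an illustration of the underlying computation (a toy sketch, not the project's software): for a 2-D binary pattern, the Betti numbers b0 (connected components) and b1 (holes) of the filled cubical complex follow from a union-find component count plus the Euler characteristic chi = V - E + F.

```python
def betti_2d(mask):
    """Betti numbers (b0 components, b1 holes) of the filled cells of a 2-D mask."""
    cells = {(i, j) for i, row in enumerate(mask)
             for j, v in enumerate(row) if v}
    verts, edges = set(), set()
    for i, j in cells:
        # each cell contributes 4 vertices and 4 edges; shared ones dedupe
        verts.update({(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)})
        edges.update({('h', i, j), ('h', i + 1, j),
                      ('v', i, j), ('v', i, j + 1)})
    # b0 via union-find; cells sharing even a corner vertex are connected.
    parent = {c: c for c in cells}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for i, j in cells:
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if (i + di, j + dj) in cells:
                    parent[find((i + di, j + dj))] = find((i, j))
    b0 = len({find(c) for c in cells})
    chi = len(verts) - len(edges) + len(cells)  # Euler characteristic V - E + F
    return b0, b0 - chi  # b1 = b0 - chi for a planar 2-complex
```

For example, a 3x3 block with the center removed is an annulus: one component, one hole.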

  20. Global sensitivity analysis of computer models with functional inputs

    E-Print Network [OSTI]

    Boyer, Edmond

    -based sensitivity analysis techniques, based on the so-called Sobol' indices, when some input variables computer codes which need a preliminary metamodeling step before performing the sensitivity analysis. The "mean model" allows one to estimate the sensitivity indices of each scalar input variable, while
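
Sobol' first-order indices can be estimated with the standard pick-freeze Monte Carlo scheme; a self-contained Python sketch follows (this is the generic estimator with a hypothetical test function, not the paper's metamodel-based method).

```python
import random

def sobol_first_order(f, dim, n, seed=0):
    """Pick-freeze estimator: S_i = E[f(B) * (f(A_B^i) - f(A))] / Var(f)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    out = []
    for i in range(dim):
        # A_B^i: matrix A with column i replaced by column i of B
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s = sum(fb * (fab - fa) for fb, fa, fab in zip(fB, fA, fABi)) / n
        out.append(s / var)
    return out
```

For a linear function f(x) = 3*x1 + x2 with independent uniform inputs, the exact first-order indices are 0.9 and 0.1, which the estimator recovers to within Monte Carlo error.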

  1. Engineering Analysis of Intermediate Loop and Process Heat Exchanger Requirements to Include Configuration Analysis and Materials Needs

    SciTech Connect (OSTI)

    T.M. Lillo; R.L. Williamson; T.R. Reed; C.B. Davis; D.M. Ginosar

    2005-09-01T23:59:59.000Z

    The need to locate advanced hydrogen production facilities a finite distance away from a nuclear power source necessitates an intermediate heat transport loop (IHTL). This IHTL must not only transport energy efficiently over distances up to 500 meters but must also be capable of operating at high temperatures (>850°C) for many years. High-temperature, long-term operation raises concerns about material strength, creep resistance, and general material stability (corrosion resistance). IHTL design is currently in its initial stages. Many questions remain to be answered before intelligent design can begin. This report begins to look at some of the issues surrounding the main components of an IHTL. Specifically, a stress analysis of a compact heat exchanger design under expected operating conditions is reported. Also presented are the results of a thermal analysis performed on two IHTL pipe configurations for different heat transport fluids. The configurations consist of separate hot supply and cold return legs, as well as an annular design in which the hot fluid is carried in an inner pipe and the cold return fluid travels in the opposite direction in the annular space around the hot pipe. The effects of insulation configurations on pipe performance are also reported. Finally, a simple analysis of two different process heat exchanger designs, one a tube-in-shell type and the other a compact or microchannel reactor, is presented in light of catalyst requirements. Important insights into the critical areas of research and development are gained from these analyses, guiding the direction of future research.

  2. MTX (Microwave Tokamak Experiment) data acquisition and analysis computer network

    SciTech Connect (OSTI)

    Butner, D.N.; Casper, T.A.; Brown, M.D.; Drlik, M.; Meyer, W.H.; Moller, J.M.

    1990-06-01T23:59:59.000Z

    For the MTX experiment, we use a network of computers for plasma diagnostic data acquisition and analysis. This multivendor network employs VMS, UNIX, and BASIC based computers connected in a local area Ethernet network. Some of the data is acquired directly into a VAX/VMS computer cluster over a fiber-optic serial CAMAC highway. Several HP-Unix workstations and HP-BASIC instrument control computers acquire and analyze data for the more data intensive or specialized diagnostics. The VAX/VMS system is used for global analysis of the data and serves as the central data archiving and retrieval manager. Shot synchronization and control of data flow are implemented by task-to-task message passing using our interprocess communication system. The system has been in operation during our initial MTX tokamak and FEL experiments; it has operated reliably with data rates typically in the range of 5 megabytes/shot without limiting the experimental shot rate.

  3. Numerical power balance and free energy loss analysis for solar cells including optical, thermodynamic, and electrical aspects

    SciTech Connect (OSTI)

    Greulich, Johannes, E-mail: johannes.greulich@ise.fraunhofer.de; Höffler, Hannes; Würfel, Uli; Rein, Stefan [Fraunhofer Institute for Solar Energy Systems, Heidenhofstr. 2, D-79110 Freiburg (Germany)]

    2013-11-28T23:59:59.000Z

    A method for analyzing the power losses of solar cells is presented, supplying a complete balance of the incident power, the optical, thermodynamic, and electrical power losses, and the electrical output power. The involved quantities have the dimension of a power density (units: W/m{sup 2}), which permits their direct comparison. In order to avoid the over-representation of losses arising from the ultraviolet part of the solar spectrum, a method for the analysis of the electrical free energy losses is extended to include optical losses. This extended analysis does not focus on the incident solar power of, e.g., 1000 W/m{sup 2} and does not explicitly include the thermalization losses and losses due to the generation of entropy. Instead, the usable power, i.e., the free energy or electro-chemical potential of the electron-hole pairs, is set as the reference value, thereby overcoming the ambiguities of the power balance. Both methods, the power balance and the free energy loss analysis, are carried out exemplarily for a monocrystalline p-type silicon metal wrap through solar cell with passivated emitter and rear (MWT-PERC) based on optical and electrical measurements and numerical modeling. The methods give interesting insights in photovoltaic (PV) energy conversion, provide quantitative analyses of all loss mechanisms, and supply the basis for the systematic technological improvement of the device.
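
The bookkeeping behind such a balance is simple to state; a schematic Python sketch (hypothetical loss categories and numbers, not the paper's measured values):

```python
def power_balance_residual(incident, optical, thermodynamic, electrical, output):
    """Residual of the power balance; all terms in W/m^2.

    A complete balance of the incident power against the three loss channels
    and the electrical output power should return approximately zero.
    """
    return incident - (sum(optical) + sum(thermodynamic)
                       + sum(electrical) + output)

# Illustrative numbers only: 1000 W/m^2 incident, fully accounted for.
residual = power_balance_residual(1000.0, [100.0], [450.0], [250.0], 200.0)
```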

  4. Computational analysis of temperature rise phenomena in electric induction motors

    E-Print Network [OSTI]

    Melnik, Roderick

    machines in general, and induction motors in particular, temperature limits is a key factor affecting. Computational analysis of temperature rise phenomena in electric induction motors Ying Huai Kraftwerkstechnik, Petersenstraße 30, 64287 Darmstadt, Germany; Faculty of Science and Engineering, Mads Clausen

  5. Computer analysis of the two versions of Byzantine chess

    E-Print Network [OSTI]

    Anatole Khalfine; Ed Troyan

    2007-10-04T23:59:59.000Z

    In the Byzantine Empire of the 11th-15th centuries CE, chess was played on a circular board. Two versions were known: REGULAR and SYMMETRIC. The difference between them is simple: the white queen is placed either on a light square (regular) or on a dark square (symmetric). However, computer analysis reveals the consequences of this 'small perturbation'.

  6. Computing support for advanced medical data analysis and imaging

    E-Print Network [OSTI]

    Wiślicki, W; Białas, P; Czerwiński, E; Kapłon, Ł; Kochanowski, A; Korcyl, G; Kowal, J; Kowalski, P; Kozik, T; Krzemień, W; Molenda, M; Moskal, P; Niedźwiecki, S; Pałka, M; Pawlik, M; Raczyński, L; Rudy, Z; Salabura, P; Sharma, N G; Silarski, M; Słomski, A; Smyrski, J; Strzelecki, A; Wieczorek, A; Zieliński, M; Zoń, N

    2014-01-01T23:59:59.000Z

    We discuss computing issues for data analysis and image reconstruction of PET-TOF medical scanner or other medical scanning devices producing large volumes of data. Service architecture based on the grid and cloud concepts for distributed processing is proposed and critically discussed.

  7. Computational and Experimental Analysis of Redundancy in the Central Metabolism

    E-Print Network [OSTI]

    Lovley, Derek

    in the subsurface, and their capacity to harvest electricity from waste organic matter [1-3]. Geobacter. Computational and Experimental Analysis of Redundancy in the Central Metabolism of Geobacter of the metabolic network of Geobacter sulfurreducens suggested the existence of several redundant pathways. Here

  8. Air Ingress Benchmarking with Computational Fluid Dynamics Analysis

    E-Print Network [OSTI]

    Air Ingress Benchmarking with Computational Fluid Dynamics Analysis. Tieliang Zhai, Professor, by the US Nuclear Regulatory Commission. Air Ingress Accident Objectives and Overall Strategy: Depressurization, Pure Diffusion, Natural Convection. Challenging: natural convection, multi-component diffusion (air

  9. Air Ingress Benchmarking with Computational Fluid Dynamics Analysis

    E-Print Network [OSTI]

    Air Ingress Benchmarking with Computational Fluid Dynamics Analysis. Andrew C. Kadak, Department District, Beijing, China, September 22-24, 2004. Abstract: The air ingress accident is a complicated scenario that is compounded by the multiple physical phenomena involved in the air ingress event

  10. analysis computer program: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Elements of mathematics and...

  11. GOCE DATA ANALYSIS: REALIZATION OF THE INVARIANTS APPROACH IN A HIGH PERFORMANCE COMPUTING ENVIRONMENT

    E-Print Network [OSTI]

    Stuttgart, Universität

    GOCE DATA ANALYSIS: REALIZATION OF THE INVARIANTS APPROACH IN A HIGH PERFORMANCE COMPUTING ENVIRONMENT. Implementation of the algorithms on high performance computing platforms. 2. INVARIANTS REPRESENTATION

  12. FREQFIT: Computer program which performs numerical regression and statistical chi-squared goodness of fit analysis

    SciTech Connect (OSTI)

    Hofland, G.S.; Barton, C.C.

    1990-10-01T23:59:59.000Z

    The computer program FREQFIT is designed to perform regression and statistical chi-squared goodness of fit analysis on one-dimensional or two-dimensional data. The program features an interactive user dialogue, numerous help messages, an option for screen or line printer output, and the flexibility to use practically any commercially available graphics package to create plots of the program's results. FREQFIT is written in Microsoft QuickBASIC, for IBM-PC compatible computers. A listing of the QuickBASIC source code for the FREQFIT program, a user manual, and sample input data, output, and plots are included. 6 refs., 1 fig.
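
FREQFIT itself is written in QuickBASIC, but the two computations the abstract names are compact; a Python sketch for illustration (not the FREQFIT source):

```python
def linear_fit(x, y):
    """Ordinary least-squares regression y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def chi_squared(observed, expected):
    """Pearson chi-squared goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

In practice the statistic would then be compared against a chi-squared critical value for the appropriate degrees of freedom.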

  13. Interface design of VSOP'94 computer code for safety analysis

    SciTech Connect (OSTI)

    Natsir, Khairina, E-mail: yenny@batan.go.id; Andiwijayakusuma, D.; Wahanani, Nursinta Adi [Center for Development of Nuclear Informatics - National Nuclear Energy Agency, PUSPIPTEK, Serpong, Tangerang, Banten (Indonesia); Yazid, Putranto Ilham [Center for Nuclear Technology, Material and Radiometry- National Nuclear Energy Agency, Jl. Tamansari No.71, Bandung 40132 (Indonesia)

    2014-09-30T23:59:59.000Z

    Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system for simulating the life history of a nuclear reactor, devoted to education and research. One advantage of the VSOP program is its ability to calculate neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integrals, estimation of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, developed in Fortran 65, and has several usability problems: for example, it runs only on DEC Alpha mainframe platforms and provides text-based output, making it difficult to use, especially in data preparation and interpretation of results. We developed GUI-VSOP, an interface program that facilitates data preparation, runs the VSOP code, and presents the results in a more user-friendly way, usable on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing, and postprocessing. The GUI-based preprocessing interface provides a convenient way to prepare data. The processing interface configures input files and libraries and compiles the VSOP code. The postprocessing interface visualizes the VSOP output in tabular and graphical form. GUI-VSOP is expected to simplify and speed up the process and the analysis of safety aspects.

  14. Heat transfer analysis capabilities of the scale computational system

    SciTech Connect (OSTI)

    Parks, C.V.; Giles, G.E.; Childs, K.W.; Bryan, C.B.

    1986-01-01T23:59:59.000Z

    The heat transfer capabilities within the modular SCALE computational system are centered about the HEATING6 functional module. This paper reviews the features and modeling capabilities of HEATING6, discusses the supportive plotting capabilities of REGPLOT6 and HEATPLOT-S, and finally provides a general description of the Heat Transfer Analysis Sequence No.1 (HTASI) available in SCALE for performing thermal analyses of transport casks via HEATING6. The HTASI control module is an easy-to-use tool that allows an inexperienced HEATING6 user to obtain reliable thermal analysis results. A summary of the recent verification efforts undertaken for HEATING6 is also provided. 16 refs., 14 figs.

  15. ReseaRch at the University of Maryland Bioinformatics: Computational Analysis of Biological Information

    E-Print Network [OSTI]

    Hill, Wendell T.

    ReseaRch at the University of Maryland. Bioinformatics: Computational Analysis of Biological Information. Bioinformatics--the use of advanced computational techniques for biological research. Maryland's Center for Bioinformatics and Computational Biology (CBCB) is at the forefront of bioinformatics research

  16. Coral Extension Rate Analysis Using Computed Axial Tomography

    E-Print Network [OSTI]

    Yudelman, Eleanor Ann

    2014-01-10T23:59:59.000Z

    Coral Extension Rate Analysis Using Computed Axial Tomography. A thesis by Eleanor Ann Yudelman, submitted to the Office of Graduate and Professional Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science. Chair of Committee: Niall Slowey; Committee Members: Deborah Thomas, Benjamin Giese, George P. Schmahl; Head of Department: Deborah Thomas. May 2014. Major Subject: Oceanography.

  17. Uncertainty and sensitivity analysis for long-running computer codes : a critical review

    E-Print Network [OSTI]

    Langewisch, Dustin R

    2010-01-01T23:59:59.000Z

    This thesis presents a critical review of existing methods for performing probabilistic uncertainty and sensitivity analysis for complex, computationally expensive simulation models. Uncertainty analysis (UA) methods ...

  18. Collection and analysis of environmental radiation data using a desktop computer

    SciTech Connect (OSTI)

    Gogolak, C V

    1982-04-01T23:59:59.000Z

    A portable instrumentation system using a Hewlett-Packard HP-9825 desktop computer for the collection and analysis of environmental radiation data is described. Procedures for the transmission of data between the HP-9825 and various nuclear counters are given, together with a description of the necessary hardware and software. Complete programs for the analysis of Ge(Li) and NaI(Tl) gamma-ray spectra, high pressure ionization chamber monitor data, {sup 86}Kr monitor data, and air filter sample alpha particle activity measurements are presented. Some utility programs, intended to increase system flexibility, are included.

  19. Computational chemistry in Argonne`s Reactor Analysis Division

    SciTech Connect (OSTI)

    Gelbard, E.; Agrawal, R.; Fanning, T.

    1997-08-01T23:59:59.000Z

    Roughly 3 years ago, work on Argonne's Integral Fast Reactor ("IFR") was terminated, and at that time ANL funding was redirected to a number of alternative programs. One such alternative was waste management and, since disposal of spent fuel from ANL's EBR-II reactor presents some special problems, this seemed an appropriate area for ANL work. Methods for the treatment and disposal of spent fuel (particularly from EBR-II but also from other sources) are now under very active investigation at ANL. The very large waste form development program is mainly experimental at this point, but within the Reactor Analysis ("RA") Division a small computational chemistry program is underway, designed to supplement the experimental program. One of the most popular proposals for the treatment of much of our high-level waste is vitrification. As noted below, this approach has serious drawbacks for EBR-II spent fuel. ANL has proposed, instead, that spent fuel first be pretreated by a special metallurgical process which produces, as waste, chloride salts of the various fission products; these salts would then be adsorbed in zeolite A, which is subsequently bonded with glass to produce a waste form suitable for disposal. So far, the main mission of RA's computational chemistry program has been to study the process by which leaching occurs when the glass-bonded zeolite waste form is exposed to water. It is the purpose of this paper to describe RA's computational chemistry program, to discuss the computational techniques involved in such a program, and in general to familiarize the M. and C. Division with a computational area which is probably unfamiliar to most of its members. 11 refs., 2 figs.

  20. analysis scientific computing: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    SCI Institute: Scientific Computing and Imaging Institute...

  1. Computational modeling and analysis of thermoelectric properties of nanoporous silicon

    SciTech Connect (OSTI)

    Li, H.; Yu, Y.; Li, G., E-mail: gli@clemson.edu [Department of Mechanical Engineering, Clemson University, Clemson, South Carolina 29634-0921 (United States)

    2014-03-28T23:59:59.000Z

    In this paper, thermoelectric properties of nanoporous silicon are modeled and studied by using a computational approach. The computational approach combines a quantum non-equilibrium Green's function (NEGF) coupled with the Poisson equation for electrical transport analysis, a phonon Boltzmann transport equation (BTE) for phonon thermal transport analysis, and the Wiedemann-Franz law for calculating the electronic thermal conductivity. By solving the NEGF/Poisson equations self-consistently using a finite difference method, the electrical conductivity σ and Seebeck coefficient S of the material are numerically computed. The BTE is solved by using a finite volume method to obtain the phonon thermal conductivity k{sub p}, and the Wiedemann-Franz law is used to obtain the electronic thermal conductivity k{sub e}. The figure of merit of nanoporous silicon is calculated by ZT = S{sup 2}σT/(k{sub p}+k{sub e}). The effects of doping density, porosity, temperature, and nanopore size on thermoelectric properties of nanoporous silicon are investigated. It is confirmed that nanoporous silicon has significantly higher thermoelectric energy conversion efficiency than its nonporous counterpart. Specifically, this study shows that, with an n-type doping density of 10{sup 20} cm{sup -3}, a porosity of 36%, and a nanopore size of 3 nm × 3 nm, the figure of merit ZT can reach 0.32 at 600 K. The results also show that the degradation of the electrical conductivity of nanoporous Si due to the inclusion of nanopores is compensated by the large reduction in the phonon thermal conductivity and the increase in the absolute value of the Seebeck coefficient, resulting in a significantly improved ZT.
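
The figure-of-merit expression is a one-liner to evaluate. The inputs below are illustrative values chosen only so the result matches the quoted ZT ≈ 0.32 at 600 K; they are not taken from the paper.

```python
def figure_of_merit(S, sigma, T, k_phonon, k_electron):
    """ZT = S^2 * sigma * T / (k_p + k_e), all quantities in SI units."""
    return S ** 2 * sigma * T / (k_phonon + k_electron)

# Hypothetical inputs: S = 200 uV/K, sigma = 1e5 S/m,
# k_p = 6.5 and k_e = 1.0 W/(m K), T = 600 K.
zt = figure_of_merit(2e-4, 1e5, 600.0, 6.5, 1.0)  # ~ 0.32
```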

  2. Computational Analysis of Material Flow During Friction Stir Welding of AA5059 Aluminum Alloys

    E-Print Network [OSTI]

    Grujicic, Mica

    Computational Analysis of Material Flow During Friction Stir Welding of AA5059 Aluminum Alloys. The friction stir welding (FSW) process is investigated computationally. Within the numerical model of the FSW process component. The employed coupled Eulerian/Lagrangian computational analysis of the welding process

  3. Analysis of gallium arsenide deposition in a horizontal chemical vapor deposition reactor using massively parallel computations

    SciTech Connect (OSTI)

    Salinger, A.G.; Shadid, J.N.; Hutchinson, S.A. [and others

    1998-01-01T23:59:59.000Z

    A numerical analysis of the deposition of gallium from trimethylgallium (TMG) and arsine in a horizontal CVD reactor with a tilted susceptor and a three-inch-diameter rotating substrate is performed. The three-dimensional model includes complete coupling between fluid mechanics, heat transfer, and species transport, and is solved using an unstructured finite element discretization on a massively parallel computer. The effects of three operating parameters (the disk rotation rate, inlet TMG fraction, and inlet velocity) and two design parameters (the tilt angle of the reactor base and the reactor width) on the growth rate and uniformity are presented. The nonlinear dependence of the growth rate uniformity on the key operating parameters is discussed in detail. Efficient and robust algorithms for massively parallel reacting flow simulations, as incorporated into our analysis code MPSalsa, make detailed analysis of this complicated system feasible.

  4. Using citation analysis techniques for computer-assisted legal research in continental jurisdictions

    E-Print Network [OSTI]

    Geist, Anton

    2009-01-01T23:59:59.000Z

    The following research investigates the use of citation analysis techniques for relevance ranking in computer-assisted legal research systems. Overviews on information retrieval, legal research, computer-assisted legal ...

  5. The Speedup-Test: A Statistical Methodology for Program Speedup Analysis and Computation

    E-Print Network [OSTI]

    Paris-Sud XI, Universit de

    TOUATI, Julien WORMS, Sebastien BRIAIS. May 2012. Abstract: In the area of high performance computing, the Speedup-Test offers an improvement compared to the usual performance analysis method in high performance computing. We explain ...

  6. Stability Analysis of Large-Scale Incompressible Flow Calculations on Massively Parallel Computers 1 Stability Analysis of Large-

    E-Print Network [OSTI]

    Stability Analysis of Large-Scale Incompressible Flow Calculations on Massively Parallel Computers. ... disturbances aligned with the associated eigenvectors will grow. The Cayley transformation, coupled ...

  7. Smoothed Analysis of the k-Means Method DAVID ARTHUR, Stanford University, Department of Computer Science

    E-Print Network [OSTI]

    Al Hanbali, Ahmad

    A Smoothed Analysis of the k-Means Method. DAVID ARTHUR, Stanford University, Department of Computer Science; and coauthors, University of Bonn, Department of Computer Science. The k-means method is one of the most widely used clustering algorithms. The k-means method has been studied in the model of smoothed analysis. But even the smoothed ...
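    The k-means iteration analyzed in this record can be sketched in a few lines. This is an illustrative one-dimensional Lloyd's algorithm on made-up points, not the paper's code:

```python
def kmeans_1d(points, centers, iters=10):
    """One-dimensional Lloyd's algorithm: assign each point to its nearest
    center, then move each center to the mean of its cluster."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Empty clusters keep their old center.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

pts = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]
final = sorted(kmeans_1d(pts, [0.0, 5.0]))  # converges to [1.0, 10.0]
```

    Smoothed analysis asks how the number of such iterations behaves when the input points are randomly perturbed, which is what bridges the method's poor worst-case bounds and its speed in practice.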

  8. Data analysis using the Gnu R system for statistical computation

    SciTech Connect (OSTI)

    Simone, James; /Fermilab

    2011-07-01T23:59:59.000Z

    R is a language system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R are: it has become the standard language for developing statistical techniques, it is being actively developed by a large and growing global user community, it is open source software, it is highly portable (Linux, OS-X and Windows), it has a built-in documentation system, it produces high quality graphics and it is easily extensible with over four thousand extension library packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits for lattice n-pt correlation functions.
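    The chi-square minimization fits mentioned in this abstract can be illustrated with a self-contained weighted least-squares example. A straight-line model is used for brevity; real lattice n-pt correlator fits use nonlinear models and R packages, so this Python sketch is only an analogy:

```python
def chi2_line_fit(x, y, sigma):
    """Weighted least-squares (chi-square) fit of y = a + b*x.

    Minimizes chi2 = sum(((y_i - a - b*x_i) / sigma_i)**2) in closed form.
    Returns (a, b, chi2_min).
    """
    w = [1.0 / s ** 2 for s in sigma]
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    delta = S * Sxx - Sx ** 2
    a = (Sxx * Sy - Sx * Sxy) / delta
    b = (S * Sxy - Sx * Sy) / delta
    chi2 = sum(wi * (yi - a - b * xi) ** 2 for wi, xi, yi in zip(w, x, y))
    return a, b, chi2

# Noiseless synthetic data: the fit must recover a = 1.0, b = 2.0 exactly.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0 + 2.0 * xi for xi in x]
a, b, chi2 = chi2_line_fit(x, y, sigma=[0.1] * len(x))
```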

  9. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    SciTech Connect (OSTI)

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-09-20T23:59:59.000Z

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.

  10. NALDA (Naval Aviation Logistics Data Analysis) CAI (computer aided instruction)

    SciTech Connect (OSTI)

    Handler, B.H. (Oak Ridge K-25 Site, TN (USA)); France, P.A.; Frey, S.C.; Gaubas, N.F.; Hyland, K.J.; Lindsey, A.M.; Manley, D.O. (Oak Ridge Associated Universities, Inc., TN (USA)); Hunnum, W.H. (North Carolina Univ., Chapel Hill, NC (USA)); Smith, D.L. (Memphis State Univ., TN (USA))

    1990-07-01T23:59:59.000Z

    Data Systems Engineering Organization (DSEO) personnel developed a prototype computer-aided instruction (CAI) system for the Naval Aviation Logistics Data Analysis (NALDA) system. The objective of this project was to provide a CAI prototype that could be used as an enhancement to existing NALDA training. The CAI prototype project was performed in phases. The task undertaken in Phase I was to analyze the problem and the alternative solutions and to develop a set of recommendations on how best to proceed. The findings from Phase I are documented in Recommended CAI Approach for the NALDA System (Duncan et al., 1987). In Phase II, a structured design and specifications were developed, and a prototype CAI system was created. A report, NALDA CAI Prototype: Phase II Final Report, was written to record the findings and results of Phase II. NALDA CAI: Recommendations for an Advanced Instructional Model comprises related papers encompassing research on CAI, newly developing training technologies, instructional systems development, and an Advanced Instructional Model. These topics were selected because of their relevance to the CAI needs of NALDA. These papers provide general background information on various aspects of CAI and give a broad overview of new technologies and their impact on the future design and development of training programs. The papers within have been indexed separately elsewhere.

  11. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    SciTech Connect (OSTI)

    Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache, CEA, France)

    2011-06-01T23:59:59.000Z

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. 
At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the experienced user base and the experimental validation base were decaying away quickly.

  12. Computational Challenges for Microbial Genome and Metagenome Analysis (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Mavrommatis, Kostas

    2011-06-08T23:59:59.000Z

    Kostas Mavrommatis of the DOE JGI gives a presentation on "Computational Challenges for Microbial Genome & Metagenome Analysis" at the JGI/Argonne HPC Workshop on January 26, 2010.

  13. Computational analysis, design, and experimental validation of antibody binding affinity improvements beyond in vivo maturation

    E-Print Network [OSTI]

    Lippow, Shaun Matthew

    2007-01-01T23:59:59.000Z

    This thesis presents novel methods for the analysis and design of high-affinity protein interactions using a combination of high-resolution structural data and physics-based molecular models. First, computational analysis ...

  14. BioPig: Developing Cloud Computing Applications for Next-Generation Sequence Analysis

    SciTech Connect (OSTI)

    Bhatia, Karan; Wang, Zhong

    2011-03-22T23:59:59.000Z

    Next-generation sequencing is producing ever larger data sizes, with a growth rate outpacing Moore's Law. The data deluge has made many of the current sequence analysis tools obsolete because they do not scale with data. Here we present BioPig, a collection of cloud computing tools to scale data analysis and management. Pig is a flexible data scripting language that uses Apache's Hadoop data structure and MapReduce framework to process very large data files in parallel and combine the results. BioPig extends Pig with sequence analysis capability. We will show the performance of BioPig on a variety of bioinformatics tasks, including screening sequence contaminants, Illumina QA/QC, and gene discovery from metagenome data sets, using the Rumen metagenome as an example.
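    The map-reduce pattern that BioPig builds on can be sketched outside Hadoop. The mapper/reducer names and the k-mer counting task here are illustrative, not BioPig's API:

```python
from collections import Counter
from itertools import chain

def mapper(read, k=3):
    """Map step: emit (k-mer, 1) pairs from one sequencing read."""
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def reducer(pairs):
    """Reduce step: sum counts per k-mer, as Hadoop does after the shuffle."""
    counts = Counter()
    for kmer, n in pairs:
        counts[kmer] += n
    return dict(counts)

reads = ["GATTACA", "TACAGAT"]
kmer_counts = reducer(chain.from_iterable(mapper(r) for r in reads))
```

    In a real Hadoop/Pig deployment the mapper runs on many nodes over file splits and the framework groups pairs by key before the reduce; the toy version above preserves only the logical structure.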

  15. The Use of Computer Graphics in the Visual Analysis of the Proposed

    E-Print Network [OSTI]

    Standiford, Richard B.

    The Use of Computer Graphics in the Visual Analysis of the Proposed Sunshine Ski Area Expansion. British Columbia, Canada. Abstract: This paper describes the use of computer graphics in designing around, in part, the adverse visual impacts of ski-run development. Computer graphics have proven, in this case ...

  16. Computational analysis of an aortic valve jet with Lagrangian coherent structures1

    E-Print Network [OSTI]

    Boyer, Edmond

    Computational analysis of an aortic valve jet with Lagrangian coherent structures. Shawn C. Shadden. ... heart valves. An important step is making these computational tools useful to clinical practice. This work focuses on flow through the aortic valve and illustrates how the computation of Lagrangian ...

  17. PERTURBATION-BASED ERROR ANALYSIS OF ITERATIVE IMAGE RECONSTRUCTION ALGORITHM FOR X-RAY COMPUTED TOMOGRAPHY

    E-Print Network [OSTI]

    Fessler, Jeffrey A.

    Iterative image reconstruction algorithms for X-ray computed tomography (CT) have been proposed to improve image quality and reduce dose [1]. These methods are analyzed here for the effects of the quantization error in forward-projection and back-projection ...

  18. Quantitative Computed Tomography Analysis of Local Chemotherapy in Liver Tissue After

    E-Print Network [OSTI]

    Gao, Jinming

    Quantitative Computed Tomography Analysis of Local Chemotherapy in Liver Tissue After Radiofrequency Ablation. Keywords: controlled-release drug delivery; radiofrequency ablation; computed tomography. AUR, 2004; Acad Radiol 2004. Authors include David L. Wilson, PhD; John R. Haaga, MD; Jinming Gao, PhD. Rationale and Objectives: Computed ...

  19. Classification and volumetric analysis of temporal bone pneumatization using cone beam computed tomography

    E-Print Network [OSTI]

    Terasaki, Mark

    This study classifies temporal bone pneumatization in adults using cone beam computed tomography (CBCT) scans. Study Design: a total ... (Oral Surg Oral Med Oral Pathol Oral Radiol 2014;117:376-384). The advances in cone beam computed tomography (CBCT) over ...

  20. Trace-Based Analysis and Prediction of Cloud Computing User Behavior Using the Fractal Modeling Technique

    E-Print Network [OSTI]

    Pedram, Massoud

    Trace-Based Analysis and Prediction of Cloud Computing User Behavior Using the Fractal Modeling Technique. In this paper, we investigate the characteristics of the cloud computing requests received, modeled with the alpha-stable distribution. Keywords: cloud computing; alpha-stable distribution; fractional order

  1. Technical support document: Energy conservation standards for consumer products: Dishwashers, clothes washers, and clothes dryers including: Environmental impacts; regulatory impact analysis

    SciTech Connect (OSTI)

    Not Available

    1990-12-01T23:59:59.000Z

    The Energy Policy and Conservation Act as amended (P.L. 94-163) establishes energy conservation standards for 12 of the 13 types of consumer products specifically covered by the Act. The legislation requires the Department of Energy (DOE) to consider new or amended standards for these and other types of products at specified times. This Technical Support Document presents the methodology, data, and results from the analysis of the energy and economic impacts of standards on dishwashers, clothes washers, and clothes dryers. The economic impact analysis is performed in several major areas: an Engineering Analysis, which establishes technical feasibility and product attributes, including costs of design options to improve appliance efficiency; a Consumer Analysis at two levels, national aggregate impacts and impacts on individuals (the national aggregate impacts include forecasts of appliance sales, efficiencies, energy use, and consumer expenditures; the individual impacts are analyzed by Life-Cycle Cost (LCC), Payback Periods, and Cost of Conserved Energy (CCE), which evaluate the savings in operating expenses relative to increases in purchase price); a Manufacturer Analysis, which provides an estimate of manufacturers' response to the proposed standards, quantified by changes in several measures of financial performance for a firm; an Industry Impact Analysis, which shows financial and competitive impacts on the appliance industry; a Utility Analysis, which measures the impacts of the altered energy-consumption patterns on electric utilities; and an Environmental Effects analysis, which estimates changes in emissions of carbon dioxide, sulfur oxides, and nitrogen oxides due to reduced energy consumption in the home and at the power plant. A Regulatory Impact Analysis collects the results of all the analyses into the net benefits and costs from a national perspective. 47 figs., 171 tabs. (JF)
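    The individual-impact metrics named above (Payback Period and Cost of Conserved Energy) follow from simple formulas. The dollar and energy figures in this sketch are illustrative, not from the report:

```python
def crf(rate, years):
    """Capital recovery factor: annualizes an up-front cost over `years`
    at discount rate `rate`."""
    return rate / (1.0 - (1.0 + rate) ** -years)

def payback_years(extra_price, annual_savings):
    """Simple payback: years for operating savings to repay a price increase."""
    return extra_price / annual_savings

def cce(extra_price, annual_kwh_saved, rate=0.07, years=15):
    """Cost of conserved energy ($/kWh): annualized extra purchase cost
    divided by the annual energy saved."""
    return extra_price * crf(rate, years) / annual_kwh_saved

# Illustrative numbers: a $50 price increase saving 100 kWh/yr at $0.10/kWh.
pb = payback_years(50.0, 100 * 0.10)   # 5.0 years
cost_per_kwh = cce(50.0, 100.0)        # roughly $0.055/kWh
```

    A design option is attractive when its CCE falls below the price of the energy it displaces; here $0.055/kWh beats the assumed $0.10/kWh tariff.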

  2. Durability Assessment of an Arch Dam using Inverse Analysis with Neural Networks and High Performance Computing.

    E-Print Network [OSTI]

    Coutinho, Alvaro L. G. A.

    Durability Assessment of an Arch Dam using Inverse Analysis with Neural Networks and High Performance Computing. E. M. R. Fairbairn, E. Goulart, A. L. G. A. Coutinho, N. F. F. Ebecken (COPPE). The inverse analysis identifies the viscoelastic parameters; the 3D FEM analysis uses High Performance Computing (parallel and vector features) to run ...

  3. Computational Fluid Dynamics Analysis of Flexible Duct Junction Box Design

    SciTech Connect (OSTI)

    Beach, R.; Prahl, D.; Lange, R.

    2013-12-01T23:59:59.000Z

    IBACOS explored the relationships between pressure and physical configurations of flexible duct junction boxes by using computational fluid dynamics (CFD) simulations to predict individual box parameters and total system pressure, thereby ensuring improved HVAC performance. Current Air Conditioning Contractors of America (ACCA) guidance (Group 11, Appendix 3, ACCA Manual D, Rutkowski 2009) allows for unconstrained variation in the number of takeoffs, box sizes, and takeoff locations. The only variables currently used in selecting an equivalent length (EL) are velocity of air in the duct and friction rate, given the first takeoff is located at least twice its diameter away from the inlet. This condition does not account for other factors impacting pressure loss across these types of fittings. For each simulation, the IBACOS team converted pressure loss within a box to an EL to compare variation in ACCA Manual D guidance to the simulated variation. IBACOS chose cases to represent flows reasonably correlating to flows typically encountered in the field and analyzed differences in total pressure due to increases in number and location of takeoffs, box dimensions, and velocity of air, and whether an entrance fitting is included. The team also calculated additional balancing losses for all cases due to discrepancies between intended outlet flows and natural flow splits created by the fitting. In certain asymmetrical cases, the balancing losses were significantly higher than symmetrical cases where the natural splits were close to the targets. Thus, IBACOS has shown additional design constraints that can ensure better system performance.
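    The pressure-loss-to-equivalent-length conversion the team performed follows directly from the definition of friction rate (loss per 100 ft of straight duct). The figures below are illustrative, not from the study:

```python
def equivalent_length_ft(dp_inwc, friction_rate_inwc_per_100ft):
    """Convert a fitting's pressure loss (in. w.c.) to an equivalent length:
    the feet of straight duct producing the same loss at the design
    friction rate (in. w.c. per 100 ft)."""
    return 100.0 * dp_inwc / friction_rate_inwc_per_100ft

# Illustrative: a junction box losing 0.04 in. w.c. at a design friction
# rate of 0.08 in. w.c. per 100 ft behaves like 50 ft of straight duct.
el = equivalent_length_ft(0.04, 0.08)
```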

  4. THE SAP3 COMPUTER PROGRAM FOR QUANTITATIVE MULTIELEMENT ANALYSIS BY ENERGY DISPERSIVE X-RAY FLUORESCENCE

    SciTech Connect (OSTI)

    Nielson,, K. K.; Sanders,, R. W.

    1982-04-01T23:59:59.000Z

    SAP3 is a dual-function FORTRAN computer program which performs peak analysis of energy-dispersive x-ray fluorescence spectra and then quantitatively interprets the results of the multielement analysis. It was written for mono- or bi-chromatic excitation as from an isotopic or secondary excitation source, and uses the separate incoherent and coherent backscatter intensities to define the bulk sample matrix composition. This composition is used in performing fundamental-parameter matrix corrections for self-absorption, enhancement, and particle-size effects, obviating the need for specific calibrations for a given sample matrix. The generalized calibration is based on a set of thin-film sensitivities, which are stored in a library disk file and used for all sample matrices and thicknesses. Peak overlap factors are also determined from the thin-film standards, and are stored in the library for calculating peak overlap corrections. A detailed description is given of the algorithms and program logic, and the program listing and flow charts are also provided. An auxiliary program, SPCAL, is also given for use in calibrating the backscatter intensities. SAP3 provides numerous analysis options via seventeen control switches which give flexibility in performing the calculations best suited to the sample and the user needs. User input may be limited to the name of the library, the analysis livetime, and the spectrum filename and location. Output includes all peak analysis information, matrix correction factors, and element concentrations, uncertainties and detection limits. Twenty-four elements are typically determined from a 1024-channel spectrum in one-to-two minutes using a PDP-11/34 computer operating under RSX-11M.

  5. analysis computer code: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Gray, Jeffrey G. ELECTRICAL AND COMPUTER SYSTEMS ENGINEERING SC561 Error Control Codes. Summary: ... of a channel, Shannon's Theorem, Hamming, BCH, MDS and ...

  6. analysis computer codes: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Gray, Jeffrey G. ELECTRICAL AND COMPUTER SYSTEMS ENGINEERING SC561 Error Control Codes. Summary: ... of a channel, Shannon's Theorem, Hamming, BCH, MDS and ...

  7. SECOND RESEARCH DISSERTATION PROJECT Computational Analysis of the

    E-Print Network [OSTI]

    Goldschmidt, Christina

    ... mature enough to influence the design of the laboratory procedure to detect the transcription factor. From the contents: Dataset Construction; TFBSs Prediction: The Computational Pipeline.

  8. Analytics Cloud for Computational Analysis | ornl.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Applications span global security, transportation, and finance. Technical Approach: create a secure, high-end cloud computing environment (192 cores, 360 terabytes of disk, and high-speed data I/O); create ...

  9. IEEE TRANSACTIONS ON COMPUTERS 1 Analysis of Backward Congestion Notification

    E-Print Network [OSTI]

    Stojmenovic, Ivan

    Maintaining separate storage networks over Fibre Channel and High Performance Computing (HPC) networks over InfiniBand not only increases the cost; Ethernet is positioned as the unified switch fabric for all of the TCP/IP traffic, the storage traffic, and the high performance computing traffic in data centers. Backward Congestion Notification (BCN) is the basic mechanism for the end ...

  10. Frequency Interpolation Methods for Accelerating Parallel EMC Analysis Secure Computing Laboratory, Computer System Laboratories, Fujitsu Laboratories Ltd

    E-Print Network [OSTI]

    Strazdins, Peter

    Frequency Interpolation Methods for Accelerating Parallel EMC Analysis. K. Homma, Secure Computing Laboratory, Computer System Laboratories, Fujitsu Laboratories Ltd. Products must satisfy product-specific Electromagnetic Compatibility (EMC) requirements. Hence, minimizing the undesired radiation is necessary, while electromagnetic wave radiation from these devices tends to increase. In such a situation, the estimation of EMC ...

  11. VISUAL WORDS, TEXT ANALYSIS CONCEPTS FOR COMPUTER VISION Wang-Juh Chen, Hoi Tin Kong, Minah Oh,

    E-Print Network [OSTI]

    VISUAL WORDS, TEXT ANALYSIS CONCEPTS FOR COMPUTER VISION. By Wang-Juh Chen, Hoi Tin Kong, Minah Oh.

  12. Uncertainty analysis of river flooding and dam failure risks using local sensitivity computations.

    E-Print Network [OSTI]

    Paris-Sud XI, Universit de

    Uncertainty analysis of river flooding and dam failure risks using local sensitivity computations. Local Sensitivity Analysis (LSA) is used for uncertainty analysis with respect to two major types of risk in river hydrodynamics: flash flood and dam failure. LSA is compared to a Global Uncertainty Analysis (GUA) consisting in running Monte Carlo ...
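    The contrast between a local sensitivity analysis and a Monte Carlo based global uncertainty analysis can be sketched on a toy stage-discharge model. The model form and all numbers are invented for illustration:

```python
import random

def model(q):
    """Toy stage-discharge relation: flood stage (m) vs. inflow q (m^3/s)."""
    return 0.3 * q ** 0.6

def local_sensitivity(f, x, h=1e-6):
    """LSA: central finite-difference derivative at the nominal point."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def monte_carlo_std(f, x, rel_sd=0.1, n=20000, seed=1):
    """GUA-style propagation: sample a 10% (1-sigma) input uncertainty."""
    rng = random.Random(seed)
    vals = [f(x * (1.0 + rng.gauss(0.0, rel_sd))) for _ in range(n)]
    mean = sum(vals) / n
    return (sum((v - mean) ** 2 for v in vals) / n) ** 0.5

q0 = 100.0
# First-order (local) estimate of the output spread vs. the sampled one.
linear_sd = abs(local_sensitivity(model, q0)) * (0.1 * q0)
mc_sd = monte_carlo_std(model, q0)
```

    For a smooth model and modest input uncertainty the two estimates agree closely, which is the regime where cheap local computations can stand in for a full Monte Carlo run.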

  13. OVERVIEW OF DEVELOPMENT OF P-CARES: PROBABILISTIC COMPUTER ANALYSIS FOR RAPID EVALUATION OF STRUCTURES.

    SciTech Connect (OSTI)

    NIE,J.; XU, J.; COSTANTINO, C.; THOMAS, V.

    2007-08-01T23:59:59.000Z

    Brookhaven National Laboratory (BNL) undertook an effort to revise the CARES (Computer Analysis for Rapid Evaluation of Structures) program under the auspices of the US Nuclear Regulatory Commission (NRC). The CARES program provided the NRC staff a capability to quickly check the validity and/or accuracy of the soil-structure interaction (SSI) models and associated data received from various applicants. The aim of the current revision was to implement various probabilistic simulation algorithms in CARES (referred hereinafter as P-CARES [1]) for performing the probabilistic site response and soil-structure interaction (SSI) analyses. This paper provides an overview of the development process of P-CARES, including the various probabilistic simulation techniques used to incorporate the effect of site soil uncertainties into the seismic site response and SSI analyses and an improved graphical user interface (GUI).

  14. for Computer Animation and Robotics Visual Analysis of Biomimetic Motion

    E-Print Network [OSTI]

    Hale, Joshua G.

    Joshua G. Hale & Frank E. Pollick. Motion production algorithms are based on human motor production, with numerically integrated cost functions solved using the Simplex method. A computationally efficient optimal ...

  15. Atomistic computer simulation analysis of nanocrystalline nickel-tungsten alloys

    E-Print Network [OSTI]

    Engwall, Alison Michelle

    2009-01-01T23:59:59.000Z

    Nanocrystalline nickel-tungsten alloys are harder, stronger, more resistant to degradation, and safer to electrodeposit than chromium. Atomistic computer simulations have previously met with success in replicating the ...

  16. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of ...

  17. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Oehmen, Chris [PNNL]

    2011-06-08T23:59:59.000Z

    Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

  18. Coupling of a multizone airflow simulation program with computational fluid dynamics for indoor environmental analysis

    E-Print Network [OSTI]

    Gao, Yang, 1974-

    2002-01-01T23:59:59.000Z

    Current design of building indoor environment comprises macroscopic approaches, such as the CONTAM multizone airflow analysis tool, and microscopic approaches that apply Computational Fluid Dynamics (CFD). Each has certain ...

  19. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

    SciTech Connect (OSTI)

    Oehmen, Chris [PNNL]

    2010-01-25T23:59:59.000Z

    Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

  20. Design of a tricycle chassis using computer-aided design and finite element analysis

    E-Print Network [OSTI]

    Avila, Elliot

    2014-01-01T23:59:59.000Z

    Finite element analysis and computer-aided design are powerful tools for modeling complex systems and their responses to external stimuli. This paper explores how these techniques were employed in a highly iterative design ...

  1. Three-Dimensional Computational Analysis of Transport Phenomena in a PEM Fuel Cell

    E-Print Network [OSTI]

    Victoria, University of

    Three-Dimensional Computational Analysis of Transport Phenomena in a PEM Fuel Cell, by Torsten ... Supervisor: Dr. N. Djilali. Abstract: This work presents a three-dimensional, non-isothermal computational model of a proton exchange membrane fuel cell (PEMFC). The model was developed to improve ...

  2. A Quantitative Analysis of Disk Drive Power Management in Portable Computers

    E-Print Network [OSTI]

    Anderson, Tom

    A Quantitative Analysis of Disk Drive Power Management in Portable Computers. Kester Li, Roger Kumpf. Abstract: With the advent and subsequent popularity of portable computers, power management of system components ... half of the potential benefit of spinning down a disk. Power management has become ...
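    The trade-off studied in this paper, sleeping saves power but each wake-up costs energy, can be sketched with a fixed-threshold spin-down model. The power draws and spin-up penalty below are made-up values, not the paper's measurements:

```python
def spin_down_energy_j(idle_gaps_s, threshold_s, p_idle=0.9, p_sleep=0.2,
                       spinup_penalty_j=6.0):
    """Energy (J) spent over a trace of idle gaps under a fixed-threshold
    spin-down policy: after `threshold_s` of idleness the disk spins down,
    drawing sleep power, and pays a spin-up energy penalty on next access."""
    total = 0.0
    for gap in idle_gaps_s:
        if gap <= threshold_s:
            total += gap * p_idle                   # disk kept spinning
        else:
            total += threshold_s * p_idle           # idle until the threshold
            total += (gap - threshold_s) * p_sleep  # then sleep
            total += spinup_penalty_j               # wake-up cost
    return total

gaps = [2.0, 30.0, 120.0]  # seconds of idleness between disk accesses
always_on = spin_down_energy_j(gaps, threshold_s=float("inf"))
aggressive = spin_down_energy_j(gaps, threshold_s=5.0)
```

    Sweeping `threshold_s` over a real access trace is exactly the kind of quantitative comparison the paper performs: too large a threshold wastes idle power, too small a threshold pays the spin-up penalty too often.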

  3. Computer-aided Diagnosis of Melanoma Using Border and Wavelet-based Texture Analysis

    E-Print Network [OSTI]

    Bailey, James

    Computer-aided Diagnosis of Melanoma Using Border and Wavelet-based Texture Analysis. This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection ... of the melanoma lesion. The texture features are derived using wavelet decomposition; the border features ...

  4. Computational Challenges and Analysis under Increasingly Dynamic and Uncertain

    E-Print Network [OSTI]

    ... grid. New computational methods must be developed based on superior control architectures and new algorithms in the areas of architecture, control and optimization, data management, modeling, and security. New actors are appearing in the grid: microgrids, aggregators, buildings, homes, etc. These new actors are abstracted using a boundary ...

  5. Computational design and analysis of flatback airfoil wind tunnel experiment.

    SciTech Connect (OSTI)

    Mayda, Edward A. (University of California, Davis, CA); van Dam, C.P. (University of California, Davis, CA); Chao, David D. (University of California, Davis, CA); Berg, Dale E.

    2008-03-01T23:59:59.000Z

    A computational fluid dynamics study of thick wind turbine section shapes in the test section of the UC Davis wind tunnel at a chord Reynolds number of one million is presented. The goals of this study are to validate standard wind tunnel wall corrections for high solid blockage conditions and to reaffirm the favorable effect of a blunt trailing edge or flatback on the performance characteristics of a representative thick airfoil shape prior to building the wind tunnel models and conducting the experiment. The numerical simulations prove the standard wind tunnel corrections to be largely valid for the proposed test of 40% maximum thickness to chord ratio airfoils at a solid blockage ratio of 10%. Comparison of the computed lift characteristics of a sharp trailing edge baseline airfoil and derived flatback airfoils reaffirms the earlier observed trend of reduced sensitivity to surface contamination with increasing trailing edge thickness.
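    The solid-blockage correction being validated can be illustrated with a common rough rule of thumb, epsilon approximately A_model / (4 * A_section). This is a simplification for intuition only; the study validates the full standard wall-correction methods:

```python
def solid_blockage_epsilon(model_area_m2, section_area_m2):
    """Rough solid-blockage estimate (rule of thumb, not the full method):
    epsilon ~ A_model / (4 * A_section), the fractional speed-up of the
    effective freestream caused by the model blocking the test section."""
    return model_area_m2 / (4.0 * section_area_m2)

def corrected_speed(v_measured_m_s, epsilon):
    """The effective freestream is faster than the empty-tunnel setting."""
    return v_measured_m_s * (1.0 + epsilon)

eps = solid_blockage_epsilon(0.1, 1.0)  # 10% blockage -> eps = 0.025
v_eff = corrected_speed(40.0, eps)      # 41.0 m/s effective freestream
```

    Even this crude estimate shows why a 10% blockage ratio is considered high: lift and drag coefficients normalized by the uncorrected dynamic pressure would be biased by several percent.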

  6. Digital computer coupling techniques for automatic activation analysis

    E-Print Network [OSTI]

    Breen, Walter Michael

    1961-01-01T23:59:59.000Z

    isotope. Should there be no more than three channels difference between the sample peaks and the corresponding library peaks, an identification of an isotope present will result. The number of isotopes found is increased by unity, and the correct ... The library storage registers AS are initialized to zero, and the seven center channels in each library main photopeak are tabulated in the library storage registers AS. The following constants are computed for each library isotope which has been ...

  7. Inferring Group Processes from Computer-Mediated Affective Text Analysis

    SciTech Connect (OSTI)

    Schryver, Jack C [ORNL; Begoli, Edmon [ORNL; Jose, Ajith [Missouri University of Science and Technology; Griffin, Christopher [Pennsylvania State University

    2011-02-01T23:59:59.000Z

    Political communications in the form of unstructured text convey rich connotative meaning that can reveal underlying group social processes. Previous research has focused on sentiment analysis at the document level, but we extend this analysis to sub-document levels through a detailed analysis of affective relationships between entities extracted from a document. Instead of pure sentiment analysis, which is just positive or negative, we explore nuances of affective meaning in 22 affect categories. Our affect propagation algorithm automatically calculates and displays extracted affective relationships among entities in graphical form in our prototype (TEAMSTER), starting with seed lists of affect terms. Several useful metrics are defined to infer underlying group processes by aggregating affective relationships discovered in a text. Our approach has been validated with annotated documents from the MPQA corpus, achieving a performance gain of 74% over comparable random guessers.
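    Aggregating affective relationships between entity pairs, a simplified stand-in for the affect propagation algorithm described above, can be sketched as follows (the entity names and scores are hypothetical):

```python
from collections import defaultdict

def aggregate_affect(relations):
    """Average repeated affect scores into one value per directed entity
    pair (a simplified stand-in for the paper's affect propagation)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for src, dst, score in relations:
        sums[(src, dst)] += score
        counts[(src, dst)] += 1
    return {pair: sums[pair] / counts[pair] for pair in sums}

# Hypothetical extracted relations: (source entity, target entity, score),
# where the score would come from one of the 22 affect categories.
rels = [("A", "B", 0.8), ("A", "B", 0.4), ("B", "C", -0.5)]
affect = aggregate_affect(rels)
```

    The resulting entity-pair matrix is what graphical tools like the TEAMSTER prototype would render, and per-pair aggregates are the raw material for the group-process metrics the abstract mentions.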

  8. Power System Probabilistic and Security Analysis on Commodity High Performance Computing Systems

    E-Print Network [OSTI]

    Franchetti, Franz

    Power System Probabilistic and Security Analysis on Commodity High Performance Computing Systems. Modern power system infrastructures also require merging of offline security analyses into online operation. This work presents tools for power system probabilistic and security analysis: 1) a high performance Monte Carlo simulation ...

  9. Computable General Equilibrium Models for the Analysis of Energy and Climate Policies

    E-Print Network [OSTI]

    Wing, Ian Sue

    Computable General Equilibrium Models for the Analysis of Energy and Climate Policies. Ian Sue Wing (Joint Program on the Science and Policy of Global Change, MIT). Prepared for the International Handbook of Energy Economics. Abstract: Computable general equilibrium (CGE) models are widely used for the analysis of energy and environmental policies; perhaps the most important of these applications is the analysis of climate policy. This chapter is a simple ...

  10. Routing performance analysis and optimization within a massively parallel computer

    DOE Patents [OSTI]

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16T23:59:59.000Z

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.
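    The selection step described in the abstract can be sketched as nearest-pattern matching: pick, from a stored table, the algorithm whose expected performance pattern lies closest to the desired one. The utilisation vectors and algorithm names below are invented for illustration only.

```python
# Sketch of pattern-based algorithm selection (not the patented method):
# choose the stored routing algorithm whose expected per-link utilisation
# pattern minimises squared distance to the desired pattern.

def choose_algorithm(desired, algorithms):
    """Return the name of the algorithm whose pattern is closest to `desired`."""
    def distance(pattern):
        return sum((p - d) ** 2 for p, d in zip(pattern, desired))
    return min(algorithms, key=lambda name: distance(algorithms[name]))

desired = [0.6, 0.6, 0.6]        # balanced per-link utilisation target
algorithms = {                   # expected pattern per candidate algorithm
    "minimal-path": [0.9, 0.3, 0.8],
    "adaptive":     [0.6, 0.5, 0.6],
    "random":       [0.7, 0.7, 0.7],
}
print(choose_algorithm(desired, algorithms))  # adaptive
```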

  11. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    SciTech Connect (OSTI)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. [Sandia National Labs., Albuquerque, NM (United States)]; Tills, J. [J. Tills and Associates, Inc., Sandia Park, NM (United States)]

    1997-12-01T23:59:59.000Z

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  12. Computer Modeling of Violent Intent: A Content Analysis Approach

    SciTech Connect (OSTI)

    Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

    2014-01-03T23:59:59.000Z

    We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

  13. ELECTRICAL AND COMPUTER ENGINEERING PROGRAM ASSESSMENT PLAN Program Learning Objectives

    E-Print Network [OSTI]

    Cantlon, Jessica F.

    ELECTRICAL AND COMPUTER ENGINEERING PROGRAM ASSESSMENT PLAN. Program learning objectives include the principles underlying electrical and computer engineering analysis and design, and a sufficient foundation in the fundamental areas of electrical and computer engineering.

  14. A computer-aided approach to a sediment budget analysis

    E-Print Network [OSTI]

    Capodice, Anne Marie

    1985-01-01T23:59:59.000Z

    be calculated from charts, surveys and dredging records. This method is good if these historical data are reliable and inexpensive, and if the researcher has a firm knowledge of the study area. The third method, and the one most widely used by researchers today... data on site. These data would include pre- and post-storm surveys for several storms for each fan and offshore wind transport data. This third method is quite costly and time consuming and not necessarily recommended, given the relative amount...

  15. Tuning and Analysis Utilities (TAU) | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  16. Technical support document: Energy efficiency standards for consumer products: Refrigerators, refrigerator-freezers, and freezers including draft environmental assessment, regulatory impact analysis

    SciTech Connect (OSTI)

    NONE

    1995-07-01T23:59:59.000Z

    The Energy Policy and Conservation Act (P.L. 94-163), as amended by the National Appliance Energy Conservation Act of 1987 (P.L. 100-12) and by the National Appliance Energy Conservation Amendments of 1988 (P.L. 100-357), and by the Energy Policy Act of 1992 (P.L. 102-486), provides energy conservation standards for 12 of the 13 types of consumer products covered by the Act, and authorizes the Secretary of Energy to prescribe amended or new energy standards for each type (or class) of covered product. The assessment of the proposed standards for refrigerators, refrigerator-freezers, and freezers presented in this document is designed to evaluate their economic impacts according to the criteria in the Act. It includes an engineering analysis of the cost and performance of design options to improve the efficiency of the products; forecasts of the number and average efficiency of products sold, the amount of energy the products will consume, and their prices and operating expenses; a determination of change in investment, revenues, and costs to manufacturers of the products; a calculation of the costs and benefits to consumers, electric utilities, and the nation as a whole; and an assessment of the environmental impacts of the proposed standards.
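    At its simplest, the consumer cost-benefit arithmetic in such an analysis reduces to comparing life-cycle costs: purchase price plus discounted operating cost over the product's life. A hedged sketch, with invented prices, energy use, tariff, and discount rate:

```python
# Illustrative life-cycle cost comparison of a baseline vs. a more efficient
# refrigerator. All figures are invented; real analyses use detailed
# engineering and market forecasts.

def life_cycle_cost(price, annual_kwh, tariff=0.12, rate=0.05, years=15):
    """Purchase price plus electricity cost discounted over `years`."""
    discounted = sum(annual_kwh * tariff / (1 + rate) ** t
                     for t in range(1, years + 1))
    return price + discounted

base = life_cycle_cost(price=600.0, annual_kwh=700.0)
efficient = life_cycle_cost(price=680.0, annual_kwh=450.0)
print(efficient < base)  # True: the efficiency premium pays back over 15 years
```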

  17. RISKIND: An enhanced computer code for National Environmental Policy Act transportation consequence analysis

    SciTech Connect (OSTI)

    Biwer, B.M.; LePoire, D.J.; Chen, S.Y.

    1996-03-01T23:59:59.000Z

    The RISKIND computer program was developed for the analysis of radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel (SNF) or other radioactive materials. The code is intended to provide scenario-specific analyses when evaluating alternatives for environmental assessment activities, including those for major federal actions involving radioactive material transport as required by the National Environmental Policy Act (NEPA). As such, rigorous procedures have been implemented to enhance the code's credibility and strenuous efforts have been made to enhance ease of use of the code. To increase the code's reliability and credibility, a new version of RISKIND was produced under a quality assurance plan that covered code development and testing, and a peer review process was conducted. During development of the new version, the flexibility and ease of use of RISKIND were enhanced through several major changes: (1) a Windows™ point-and-click interface replaced the old DOS menu system, (2) the remaining model input parameters were added to the interface, (3) databases were updated, (4) the program output was revised, and (5) on-line help has been added. RISKIND has been well received by users and has been established as a key component in radiological transportation risk assessments through its acceptance by the U.S. Department of Energy community in recent environmental impact statements (EISs) and its continued use in the current preparation of several EISs.

  18. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  19. INTELLIGENT COMPUTING SYSTEM FOR RESERVOIR ANALYSIS AND RISK ASSESSMENT OF THE RED RIVER FORMATION

    SciTech Connect (OSTI)

    Mark A. Sippel; William C. Carrigan; Kenneth D. Luff; Lyn Canter

    2003-11-12T23:59:59.000Z

    Integrated software has been written that comprises the tool kit for the Intelligent Computing System (ICS). The software tools in ICS have been developed for characterization of reservoir properties and evaluation of hydrocarbon potential using a combination of inter-disciplinary data sources such as geophysical, geologic and engineering variables. The ICS tools provide a means for logical and consistent reservoir characterization and oil reserve estimates. The tools can be broadly characterized as (1) clustering tools, (2) neural solvers, (3) multiple-linear regression, (4) entrapment-potential calculator and (5) file utility tools. ICS tools are extremely flexible in their approach and use, and applicable to most geologic settings. The tools are primarily designed to correlate relationships between seismic information and engineering and geologic data obtained from wells, and to convert or translate seismic information into engineering and geologic terms or units. It is also possible to apply ICS in a simple framework that may include reservoir characterization using only engineering, seismic, or geologic data in the analysis. ICS tools were developed and tested using geophysical, geologic and engineering data obtained from an exploitation and development project involving the Red River Formation in Bowman County, North Dakota and Harding County, South Dakota. Data obtained from 3D seismic surveys, and 2D seismic lines encompassing nine prospective field areas were used in the analysis. The geologic setting of the Red River Formation in Bowman and Harding counties is that of a shallow-shelf, carbonate system. Present-day depth of the Red River formation is approximately 8000 to 10,000 ft below ground surface. This report summarizes production results from well demonstration activity, results of reservoir characterization of the Red River Formation at demonstration sites, descriptions of ICS tools and strategies for their application.
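    The multiple-linear-regression tool described here fits relationships between seismic attributes and well-measured reservoir properties, then translates seismic information into geologic units away from wells. A hedged one-variable sketch, with invented data and attribute names:

```python
# Toy sketch of the regression idea: fit core-measured porosity against a
# seismic attribute sampled at wells by ordinary least squares, then use
# the fit to predict porosity between wells. All numbers are invented.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

amplitude = [1.0, 2.0, 3.0, 4.0]       # seismic attribute at four wells
porosity  = [0.10, 0.14, 0.18, 0.22]   # core-measured porosity at those wells

slope, intercept = fit_line(amplitude, porosity)
print(round(slope, 3), round(intercept, 3))  # 0.04 0.06
```

    The real ICS tools extend this to multiple attributes, clustering, and neural solvers, but the calibration step is the same in spirit.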

  20. Computer code input for thermal hydraulic analysis of Multi-Function Waste Tank Facility Title II design

    SciTech Connect (OSTI)

    Cramer, E.R.

    1994-10-01T23:59:59.000Z

    The input files to the P/Thermal computer code are documented for the thermal hydraulic analysis of the Multi-Function Waste Tank Facility Title II design.

  1. Computational Analysis and Optimization of a Chemical Vapor Deposition Reactor with

    E-Print Network [OSTI]

    Modifications of reactor configurations and manual control of operating conditions become prohibitively expensive. This work presents computational analysis and optimization for the chemical vapor deposition (CVD) of silicon in a horizontal rotating disk reactor. A three-

  2. Experimental Analysis of Task-based Energy Consumption in Cloud Computing Systems

    E-Print Network [OSTI]

    Schneider, Jean-Guy

    Experimental Analysis of Task-based Energy Consumption in Cloud Computing Systems. Feifei Chen, John ... A key concern is that large cloud data centres consume large amounts of energy and produce significant carbon footprints; scheduling approaches are sought that minimise energy consumption while guaranteeing Service Level Agreements (SLAs). In order to achieve

  3. 286 IEEE TRANSACTIONS ON COMPUTERS, VOL. 44, NO. 2. FEBRUARY 1995 Interval Availability Analysis Using Denumerable

    E-Print Network [OSTI]

    Sericola, Bruno

    286 IEEE TRANSACTIONS ON COMPUTERS, VOL. 44, NO. 2, FEBRUARY 1995. Interval Availability Analysis Using Denumerable Markov Processes. Gerardo Rubino and Bruno Sericola. Abstract: Interval availability is a dependability measure defined ... availability level is high enough. The system is assumed to be modeled as a Markov process with countable state

  4. Computational analysis of the thermal conductivity of the carboncarbon composite materials

    E-Print Network [OSTI]

    Grujicic, Mica

    Computational analysis of the thermal conductivity of the carbon-carbon composite materials. M. Grujicic. Abstract: Experimental data for carbon-carbon constituent materials are combined with a three... and longitudinal thermal conductivities in carbon-carbon composites. Particular attention is given to elucidating

  5. Comments on the use of computer models for merger analysis in the electricity industry

    E-Print Network [OSTI]

    California at Berkeley. University of

    Comments on the use of computer models for merger analysis in the electricity industry. FERC Docket ... The models that the commission is considering: electricity market models, production cost/optimal power flow models, and hybrids ... for market power in electricity markets. These analyses have yielded several insights about the application

  6. A Computational Analysis Framework for Molecular Cell Dynamics: Case-Study of Exocytosis

    E-Print Network [OSTI]

    Gu, Xun

    is that in vivo system regulation is complex. Meanwhile, many kinetic rates are unknown, making global system analysis intractable in practice. In this article, we demonstrate a computational pipeline to help solve regulation) from limited in vitro experimental data, which fit well with the reports by the conventional

  7. Computational analysis of microarray gene expression profiles: clustering, classification, and beyond

    E-Print Network [OSTI]

    Dai, Yang

    Computational analysis of microarray gene expression profiles: clustering, classification, and beyond ... (2) the discovery of gene clusters, and (3) the classification of biological samples. In addition, we discuss how ... inch, and a library of thousands of genes is placed on a single chip to probe the global gene

  8. A versatile computer model for the design and analysis of electric and hybrid vehicles

    E-Print Network [OSTI]

    Stevens, Kenneth Michael

    1996-01-01T23:59:59.000Z

    The primary purpose of the work reported in this thesis was to develop a versatile computer model to facilitate the design and analysis of hybrid vehicle drive-trains. A hybrid vehicle is one in which power for propulsion comes from two distinct...

  9. A Simulation Technique for Performance Analysis of Generic Petri Net Models of Computer Systems

    E-Print Network [OSTI]

    Cintra, Marcelo

    A Simulation Technique for Performance Analysis of Generic Petri Net Models of Computer Systems. Abstract: Many timed extensions for Petri nets have been proposed in the literature, but their analytical solutions impose limitations on the time distributions and the net topology. To overcome these limitations

  10. Heart sound analysis for symptom detection and computer-aided diagnosis

    E-Print Network [OSTI]

    Reed, Nancy E.

    Heart sound analysis for symptom detection and computer-aided diagnosis. Todd R. Reed, Nancy E. Reed. Abstract: Heart auscultation (the interpretation by a physician of heart sounds) is a fundamental component ... for the production of heart sounds, and demonstrate its utility in identifying features useful in diagnosis. We


  12. INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, © 2011 Institute for Scientific Computing and Information

    E-Print Network [OSTI]

    Bürger, Raimund

    A spatially one-dimensional model of sedimentation of suspensions of small solid particles dispersed in a viscous fluid. This accepted spatially one-dimensional sedimentation model [35] gives rise to one scalar, nonlinear hyperbolic

  13. INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, © 2012 Institute for Scientific Computing and Information

    E-Print Network [OSTI]

    Bürger, Raimund

    A spatially one-dimensional model of sedimentation of suspensions of small solid particles dispersed in a viscous fluid. This accepted spatially one-dimensional sedimentation model [35] gives rise to one scalar, nonlinear hyperbolic

  14. Smoothing spline analysis of variance approach for global sensitivity analysis of computer codes

    E-Print Network [OSTI]

    Boyer, Edmond

    ... sensitivity indices. Numerical tests are performed on several analytical examples and scientific applications, such as nuclear safety assessment, meteorology or oil reservoir forecasting. Simulations are performed with complex computer codes that model diverse complex real-world phenomena. Inputs

  15. COBRA-SFS (Spent Fuel Storage): A thermal-hydraulic analysis computer code: Volume 1, Mathematical models and solution method

    SciTech Connect (OSTI)

    Rector, D.R.; Wheeler, C.L.; Lombardo, N.J.

    1986-11-01T23:59:59.000Z

    COBRA-SFS (Spent Fuel Storage) is a general thermal-hydraulic analysis computer code used to predict temperatures and velocities in a wide variety of systems. The code was refined and specialized for spent fuel storage system analyses for the US Department of Energy's Commercial Spent Fuel Management Program. The finite-volume equations governing mass, momentum, and energy conservation are written for an incompressible, single-phase fluid. The flow equations model a wide range of conditions including natural circulation. The energy equations include the effects of solid and fluid conduction, natural convection, and thermal radiation. The COBRA-SFS code is structured to perform both steady-state and transient calculations: however, the transient capability has not yet been validated. This volume describes the finite-volume equations and the method used to solve these equations. It is directed toward the user who is interested in gaining a more complete understanding of these methods.
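    The finite-volume discretization described in the abstract can be illustrated, in heavily simplified form, by steady one-dimensional heat conduction between two fixed wall temperatures. This is a minimal sketch of the idea, not COBRA-SFS: the geometry, iteration scheme, and values are invented.

```python
# Toy finite-volume sketch: steady 1-D conduction across n volumes between
# two fixed boundary temperatures, relaxed to steady state by Jacobi iteration.

def solve_conduction(t_left, t_right, n=5, iterations=5000):
    """Return the converged temperature of each of n equally spaced volumes."""
    temps = [0.0] * n
    for _ in range(iterations):
        temps = [
            t_left if i == 0 else t_right if i == n - 1
            else 0.5 * (temps[i - 1] + temps[i + 1])  # energy balance per volume
            for i in range(n)
        ]
    return temps

profile = solve_conduction(400.0, 300.0)
print([round(t, 1) for t in profile])  # linear: [400.0, 375.0, 350.0, 325.0, 300.0]
```

    A production thermal-hydraulics code couples many such balances (mass, momentum, energy) across thousands of volumes with radiation and convection terms, but the per-volume conservation bookkeeping is the same in principle.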

  16. The use of networking in the DIII-D data acquisition and analysis computer systems

    SciTech Connect (OSTI)

    McHarg, B.B., Jr.

    1987-10-01T23:59:59.000Z

    DIII-D is a large plasma physics and fusion research experiment funded by the Department of Energy. Each shot of the experiment is currently generating nearly 20 megabytes of data which is acquired primarily by MODCOMP Classic computer systems and analyzed by DEC VAX computer systems. Shots are repeated about once every 10 minutes with 40--50 shots per operating day. As the data size and need for data access has grown, the computer systems have evolved from distinct systems to loosely coupled systems to, in some cases, tightly coupled systems. The networking or connectivity of the systems has become an integral and necessary part of the data acquisition and analysis process. The MODCOMP systems utilize networking software to allow one computer to activate tasks on another computer and to transfer data between computers. A Network Systems Hyperchannel link is used for data transfer between MODCOMP and VAX computers. The heaviest use of networking is between the VAX systems which are all connected by DECnet. Networking software has been developed to provide transparent remote user data access, remote printing, the running of tasks on other systems, and the collection of raw and calculated results from other systems. 7 refs., 4 figs.

  17. Towards a systematic analysis of cluster computing log data: the case of IBM BlueGene/Q

    E-Print Network [OSTI]

    Towards a systematic analysis of cluster computing log data: the case of IBM BlueGene/Q. Alina Sîrbu, Ozalp Babaoglu, Department of Computer Science and Engineering, University of Bologna, Mura Anteo Zamboni. The complexity of managing large computing infrastructures has been on the rise. Automating management actions

  18. EMSL - Molecular Science Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Resources and Techniques. Molecular Science Computing: sophisticated and integrated computational capabilities, including scientific consultants, software, Cascade...

  19. TPASS: a gamma-ray spectrum analysis and isotope identification computer code

    SciTech Connect (OSTI)

    Dickens, J.K.

    1981-03-01T23:59:59.000Z

    The gamma-ray spectral data-reduction and analysis computer code TPASS is described. This computer code is used to analyze complex Ge(Li) gamma-ray spectra to obtain peak areas corrected for detector efficiencies, from which are determined gamma-ray yields. These yields are compared with an isotope gamma-ray data file to determine the contributions to the observed spectrum from decay of specific radionuclides. A complete FORTRAN listing of the code and a complex test case are given.
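    The efficiency correction described here, converting a net photopeak area into a gamma-ray emission rate, is simple arithmetic. A hedged sketch of that single step (not TPASS itself), with illustrative values:

```python
# Sketch of the peak-area correction: infer a source activity from a fitted
# net photopeak area using detector efficiency, branching ratio, and live time.
# Values are illustrative only.

def gamma_yield(peak_counts, efficiency, branching_ratio, live_time_s):
    """Decays per second inferred from a net photopeak area."""
    return peak_counts / (efficiency * branching_ratio * live_time_s)

# e.g. 12000 net counts, 2% absolute efficiency, 85% branching, 600 s count
activity_bq = gamma_yield(12000, 0.02, 0.85, 600.0)
print(round(activity_bq, 1))  # 1176.5
```

    TPASS layers peak fitting, isotope libraries, and spectrum decomposition on top of this basic relationship.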

  20. BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis, Version III

    SciTech Connect (OSTI)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W. III.

    1981-06-01T23:59:59.000Z

    This report is a condensed documentation for VERSION III of the BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis. An experienced analyst should be able to use this system routinely for solving problems by referring to this document. Individual reports must be referenced for details. This report covers basic input instructions and describes recent extensions to the modules as well as to the interface data file specifications. Some application considerations are discussed and an elaborate sample problem is used as an instruction aid. Instructions for creating the system on IBM computers are also given.

  1. Computer-aided analysis of formation pressure integrity tests used in oil well drilling

    SciTech Connect (OSTI)

    Almeida, M.A.

    1986-01-01T23:59:59.000Z

    In this study, the development of a computer simulation model for leak-off tests has been accomplished. This model is more realistic than the one currently used, but is sufficiently simple that it can be applied with data normally available during leak-off test operations in the field. The model includes the many factors that affect pressure behavior during the test, and can predict with reasonable accuracy what the pressure curve will look like. In addition, test interpretation using the computer model is easily achieved using a curve matching technique. The first step toward the development of the computer model was to subdivide the leak-off test into four phases: (1) pressure increase due to overall compressibility of the system, (2) fracture initiation, (3) fracture expansion, and (4) pressure decline and fracture closure after the pump is shut-in. The second step was the development of mathematical models for each phase separately. The mathematical model that predicts pressure increase before fracture initiation includes the most important variables affecting overall compressibility of the system. The modeling of fracture initiation is based on the classical elasticity theory. The modeling of fracture expansion and closure is based on the solution of the continuity equation for flow into a vertical-elliptical fracture with constant height. A computer program that predicts the pressure behavior during the leak-off test was written. This computer model was then verified using field data furnished by Tenneco Oil Company.
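    Phase (1) of the model, pressure rise due to overall system compressibility before fracture initiation, follows dP = dV / (c·V). A hedged sketch of that phase alone, with invented parameter values (not the thesis code):

```python
# Sketch of leak-off test phase 1: while pumping into a closed wellbore,
# pressure rises by dP = dV / (c * V) per unit pumped volume.
# All parameter values are invented for illustration.

def pressure_buildup(pump_rate_bbl_min, minutes, system_vol_bbl,
                     compressibility_per_psi, p0=0.0):
    """Pressure (psi) at each whole minute of pumping into a closed system."""
    pressures, p = [], p0
    for _ in range(minutes):
        p += pump_rate_bbl_min * 1.0 / (compressibility_per_psi * system_vol_bbl)
        pressures.append(round(p, 1))
    return pressures

ramp = pressure_buildup(0.5, 4, system_vol_bbl=500.0,
                        compressibility_per_psi=5e-6)
print(ramp)  # [200.0, 400.0, 600.0, 800.0]
```

    In the full model this linear ramp ends at fracture initiation, after which the expansion and closure phases take over.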

  2. UNIVERSITY OF CALIFORNIA, SANTA CRUZ COMPUTER SCIENCE

    E-Print Network [OSTI]

    California at Santa Cruz, University of

    UNIVERSITY OF CALIFORNIA, SANTA CRUZ, COMPUTER SCIENCE. Ongoing Lecturer Pool. The Baskin School seeks temporary instructors for the Computer Science Department. Computer Science includes: algorithms, analysis ... Ph.D., or equivalent, in Computer Science, Digital Arts/Media, New Media, or closely related or relevant field

  3. Preliminary Risk Analysis of Nitrate Contamination in the Salinas Valley and Tulare Lake Basin of California, Including the Implementation of POU Devices in Small Communities

    E-Print Network [OSTI]

    Lund, Jay R.

    Preliminary Risk Analysis of Nitrate Contamination in the Salinas Valley and Tulare Lake Basin of California, Including the Implementation of POU Devices in Small Communities. Nitrate is a drinking water contaminant prevalent in the Salinas Valley and Tulare Lake Basin (the study area), mainly

  4. Coupled computational fluid dynamics and heat transfer analysis of the VHTR lower plenum.

    SciTech Connect (OSTI)

    El-Genk, Mohamed S. (University of New Mexico, Albuquerque, NM); Rodriguez, Salvador B.

    2010-12-01T23:59:59.000Z

    The very high temperature reactor (VHTR) concept is being developed by the US Department of Energy (DOE) and other groups around the world for the future generation of electricity at high thermal efficiency (> 48%) and co-generation of hydrogen and process heat. This Generation-IV reactor would operate at elevated exit temperatures of 1,000-1,273 K, and the fueled core would be cooled by forced convection helium gas. For the prismatic-core VHTR, which is the focus of this analysis, the velocity of the hot helium flow exiting the core into the lower plenum (LP) could be 35-70 m/s. The impingement of the resulting gas jets onto the adiabatic plate at the bottom of the LP could develop hot spots and thermal stratification and inadequate mixing of the gas exiting the vessel to the turbo-machinery for energy conversion. The complex flow field in the LP is further complicated by the presence of large cylindrical graphite posts that support the massive core and inner and outer graphite reflectors. Because there are approximately 276 channels in the VHTR core from which helium exits into the LP and a total of 155 support posts, the flow field in the LP includes cross flow, multiple jet flow interaction, flow stagnation zones, vortex interaction, vortex shedding, entrainment, large variation in Reynolds number (Re), recirculation, and mixing enhancement and suppression regions. For such a complex flow field, experimental results at operating conditions are not currently available. Instead, the objective of this paper is to numerically simulate the flow field in the LP of a prismatic core VHTR using the Sandia National Laboratories Fuego, which is a 3D, massively parallel generalized computational fluid dynamics (CFD) code with numerous turbulence and buoyancy models and simulation capabilities for complex gas flow fields, with and without thermal effects. 
The code predictions for simpler flow fields of single and swirling gas jets, with and without a cross flow, are validated using reported experimental data and theory. The key processes in the LP are identified using phenomena identification and ranking table (PIRT). It may be argued that a CFD code that accurately simulates simplified, single-effect flow fields with increasing complexity is likely to adequately model the complex flow field in the VHTR LP, subject to a future experimental validation. The PIRT process and spatial and temporal discretizations implemented in the present analysis using Fuego established confidence in the validation and verification (V and V) calculations and in the conclusions reached based on the simulation results. The performed calculations included the helicoid vortex swirl model, the dynamic Smagorinsky large eddy simulation (LES) turbulence model, participating media radiation (PMR), and 1D conjugate heat transfer (CHT). The full-scale, half-symmetry LP mesh used in the LP simulation included unstructured hexahedral elements and accounted for the graphite posts, the helium jets, the exterior walls, and the bottom plate with an adiabatic outer surface. Results indicated significant enhancements in heat transfer, flow mixing, and entrainment in the VHTR LP when using swirling inserts at the exit of the helium flow channels into the LP. The impact of using various swirl angles on the flow mixing and heat transfer in the LP is qualified, including the formation of the central recirculation zone (CRZ), and the effect of LP height. Results also showed that in addition to the enhanced mixing, the swirling inserts result in negligible additional pressure losses and are likely to eliminate the formation of hot spots.

  5. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  6. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  7. Methods and apparatuses for information analysis on shared and distributed computing systems

    DOE Patents [OSTI]

    Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

    2011-02-22T23:59:59.000Z

    Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
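    The claim's pattern, local per-partition term statistics merged into a global set, is a classic map/reduce shape. A hedged sketch in plain Python standing in for the distributed runtime (document partitions and terms are invented):

```python
# Sketch of the claimed pattern: each worker computes local term counts for
# its document partition; local statistics are then merged into a global set.
from collections import Counter

def local_term_stats(documents):
    """Per-partition term counts (the 'map' side)."""
    counts = Counter()
    for doc in documents:
        counts.update(doc.lower().split())
    return counts

def merge_stats(local_sets):
    """Combine local statistics into the global set (the 'reduce' side)."""
    total = Counter()
    for stats in local_sets:
        total.update(stats)
    return total

partitions = [["grid energy grid"], ["energy storage", "grid storage"]]
global_stats = merge_stats(local_term_stats(p) for p in partitions)
print(global_stats.most_common(1))  # [('grid', 3)]
```

    In the patented system the partitions run as parallel processes and the merge is a collective operation, but the statistics flow is the same.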

  8. Certification process of safety analysis and risk management computer codes at the Savannah River Site

    SciTech Connect (OSTI)

    Ades, M.J. [Westinghouse Savannah River Co., Aiken, SC (United States); Toffer, H.; Lewis, C.J.; Crowe, R.D. [Westinghouse Hanford Co., Richland, WA (United States)

    1992-05-01T23:59:59.000Z

    The commitment by Westinghouse Savannah River Company (WSRC) to bring safety analysis and risk management codes into compliance with national and sitewide quality assurance requirements necessitated a systematic, structured approach. As a part of this effort, WSRC, in cooperation with the Westinghouse Hanford Company, has developed and implemented a certification process for the development and control of computer software. Safety analysis and risk management computer codes pertinent to reactor analyses were selected for inclusion in the certification process. As a first step, documented plans were developed for implementing verification and validation of the codes, and establishing configuration control. User qualification guidelines were determined. The plans were followed with an extensive assessment of the codes with respect to certification status. Detailed schedules and work plans were thus determined for completing certification of the codes considered. Although the software certification process discussed is specific to the application described, it is sufficiently general to provide useful insights and guidance for certification of other software.


  10. Internship Contract (Includes Practicum)

    E-Print Network [OSTI]

    Thaxton, Christopher S.

    Internship Contract (Includes Practicum). Form fields include: student's name and e-mail; internship agency contact and agency name; address and e-mail; location of internship, if different from agency; copies ...

  11. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    SciTech Connect (OSTI)

    Krstulovich, S.F.

    1986-11-12T23:59:59.000Z

    This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered as a supplement to the Title I Design Report dated March 1986, wherein energy-related issues are discussed pertaining to building envelope and orientation as well as electrical systems design.

  12. Computer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  13. Computational Fluid Dynamics Best Practice Guidelines in the Analysis of Storage Dry Cask

    SciTech Connect (OSTI)

    Zigh, A.; Solis, J. [US Nuclear Regulatory Commission, Rockville, MD MS (United States)

    2008-07-01T23:59:59.000Z

    Computational fluid dynamics (CFD) methods are used to evaluate the thermal performance of a dry cask under long term storage conditions in accordance with NUREG-1536 [NUREG-1536, 1997]. A three-dimensional CFD model was developed and validated using data for a ventilated storage cask (VSC-17) collected by Idaho National Laboratory (INL). The developed Fluent CFD model was validated to minimize the modeling and application uncertainties. To address modeling uncertainties, the paper focused on turbulence modeling of buoyancy-driven air flow. Similarly, in the application uncertainties, the pressure boundary conditions used to model the air inlet and outlet vents were investigated and validated. Different turbulence models were used to reduce the modeling uncertainty in the CFD simulation of the air flow through the annular gap between the overpack and the multi-assembly sealed basket (MSB). Among the chosen turbulence models, the validation showed that the low-Reynolds k-ε and the transitional k-ω turbulence models predicted the measured temperatures closely. To assess the impact of pressure boundary conditions used at the air inlet and outlet channels on the application uncertainties, a sensitivity analysis of operating density was undertaken. For convergence purposes, all available commercial CFD codes include the operating density in the pressure gradient term of the momentum equation. The validation showed that the correct operating density corresponds to the density evaluated at the air inlet condition of pressure and temperature. Next, the validated CFD method was used to predict the thermal performance of an existing dry cask storage system. The evaluation uses two distinct models: a three-dimensional and an axisymmetrical representation of the cask. In the 3-D model, porous media was used to model only the volume occupied by the rodded region that is surrounded by the BWR channel box.
In the axisymmetric model, porous media was used to model the entire region that encompasses the fuel assemblies as well as the gaps in between. Consequently, a larger volume is represented by porous media in the second model; hence, a higher frictional flow resistance is introduced in the momentum equations. The conservatism and the safety margins of these models were compared to assess the applicability and the realism of these two models. The three-dimensional model included fewer geometry simplifications and is recommended as it predicted less conservative fuel cladding temperature values, while still assuring the existence of adequate safety margins. (authors)

  14. A High-Performance Hybrid Computing Approach to Massive Contingency Analysis in the Power Grid

    SciTech Connect (OSTI)

    Gorton, Ian; Huang, Zhenyu; Chen, Yousu; Kalahar, Benson K.; Jin, Shuangshuang; Chavarría-Miranda, Daniel; Baxter, Douglas J.; Feo, John T.

    2009-12-01T23:59:59.000Z

    Operating the electrical power grid to prevent power blackouts is a complex task. An important aspect of this is contingency analysis, which involves understanding and mitigating potential failures in power grid elements such as transmission lines. When taking into account the potential for multiple simultaneous failures (known as the N-x contingency problem), contingency analysis becomes a massive computational task. In this paper we describe a novel hybrid computational approach to contingency analysis. This approach exploits the unique graph processing performance of the Cray XMT in conjunction with a conventional massively parallel compute cluster to identify likely simultaneous failures that could cause widespread cascading power failures with massive economic and social impact on society. The approach has the potential to provide the first practical and scalable solution to the N-x contingency problem. When deployed in power grid operations, it will increase grid operators' ability to deal effectively with outages and failures of power grid components while preserving stable and safe operation of the grid. The paper describes the architecture of our solution and presents preliminary performance results that validate the efficacy of our approach.
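    The N-x contingency problem described above amounts to enumerating combinations of component outages and testing each resulting network. A toy N-2 sketch follows; the 4-bus network is hypothetical, and a simple connectivity check stands in for the full power-flow analysis a real contingency screen would run:

```python
from itertools import combinations

def connected(nodes, edges):
    """Depth-first connectivity check on the surviving network."""
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        for a, b in edges:
            if a == n and b not in seen:
                stack.append(b)
            elif b == n and a not in seen:
                stack.append(a)
    return seen == set(nodes)

def n_minus_2_islanding(nodes, lines):
    """Enumerate all double-line outages that split the grid into islands."""
    bad = []
    for out in combinations(range(len(lines)), 2):
        surviving = [l for i, l in enumerate(lines) if i not in out]
        if not connected(nodes, surviving):
            bad.append(out)
    return bad

# Hypothetical 4-bus network: a ring 1-2-3-4-1 plus a tie line 1-3.
nodes = {1, 2, 3, 4}
lines = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
```

    Only the outage pairs that isolate bus 2 or bus 4 island this network; every other double outage leaves a connected grid, which is exactly the screening question an N-2 analysis asks at scale.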

  15. Pump apparatus including deconsolidator

    DOE Patents [OSTI]

    Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew

    2014-10-07T23:59:59.000Z

    A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.

  16. Evaluating and developing parameter optimization and uncertainty analysis methods for a computationally intensive distributed hydrological model

    E-Print Network [OSTI]

    Zhang, Xuesong

    2009-05-15T23:59:59.000Z

    ... weights for river stage prediction (Chau, 2006). Other evolutionary algorithms, such as Differential Evolution (DE) (Storn and Price, 1997) and Artificial Immune Systems (AIS) (de Castro and Von Zuben, 2002a; de Castro and Von Zuben, 2002b), although... is to structure the hydrologic model as a probability model, so that the confidence interval of model output can be computed (Montanari et al., 1997). Representative methods of this category include Markov Chain Monte Carlo (MCMC) and a Generalized Likelihood...
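    For readers unfamiliar with Differential Evolution, the core mutate-crossover-select loop (DE/rand/1/bin) can be sketched as follows. The sphere objective and every parameter value here are illustrative stand-ins for a real calibration objective, not anything from the thesis:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin sketch for parameter optimization."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct population members.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            j_rand = rng.randrange(dim)
            for j in range(dim):
                # Binomial crossover with guaranteed one mutated coordinate.
                if j == j_rand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            # Greedy selection: keep the trial vector if it is no worse.
            cost = f(trial)
            if cost <= costs[i]:
                pop[i], costs[i] = trial, cost
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# Hypothetical calibration objective: a sum-of-squared-errors surrogate.
sphere = lambda x: sum(v * v for v in x)
```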

  17. The satellite gravimetry measurements from GRACE enable analysis of mass changes on Earth, including recent mass loss from ice sheets and glaciers. However, GRACE

    E-Print Network [OSTI]

    Utrecht, Universiteit

    , including recent mass loss from ice sheets and glaciers. However, GRACE resolution is coarse (~300 km) ... glaciers have lost ~600 Gt of ice since 2003. CAA consists of a northern part (NCAA), where several large ice caps exist, and a southern part (SCAA), with two large ice caps and many smaller glaciers.

  18. Solar energy for wood drying using direct or indirect collection with supplemental heating: a computer analysis. Forest Service research paper

    SciTech Connect (OSTI)

    Tschernitz, J.L.

    1986-01-01T23:59:59.000Z

    In order to judge solar drying on a more quantitative basis, the Forest Products Laboratory has developed a computer analysis for calculating the energy demands in the restricted cases of direct and indirect solar wood dryers using supplemental energy. Calculated energy balances are reported including percent fuel savings compared to the net energy used in conventional dryer operation. Six dryer sizes are considered. Seasonal variation of performance is noted for each of 12 months, in 96 locations throughout the United States. Also discussed is variation of cover thermal properties as these influence the effectiveness of operation. The report attempts to organize these economic elements so that the reader can make reasonable choices for any wood-drying requirements.

  19. Living Expenses (includes approximately

    E-Print Network [OSTI]

    Maroncelli, Mark

    Cost-of-attendance table excerpt: engineering programs; all other programs; Graduate: MBA/INFSY at Erie & Harrisburg (12 credits); Business Guarantee (does not include dependents' costs); Altoona, Berks, Erie, and Harrisburg 12-month estimated ...

  20. Development of a computer-aided fault tree synthesis methodology for quantitative risk analysis in the chemical process industry

    E-Print Network [OSTI]

    Wang, Yanjun

    2005-02-17T23:59:59.000Z

    analysis in the CPI including Safety Review, Checklist Analysis, Relative Ranking, 'What-if' Analysis, Preliminary Hazard Analysis, Hazard and Operability Study (HAZOP), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Event Tree.... The study is carried out systematically by applying appropriate guidewords to each process parameter at each 'study node'. Failure Modes and Effects Analysis (FMEA) FMEA is a systematic procedure in which each equipment failure mode is examined...
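    The Fault Tree Analysis mentioned in this record is commonly reduced to minimal cut sets with the MOCUS algorithm: starting from the top event, OR gates expand into new rows and AND gates into new columns, after which non-minimal sets are discarded. A minimal sketch on a hypothetical two-gate tree:

```python
def mocus(tree, top):
    """Top-down MOCUS-style expansion: OR gates add rows, AND gates add columns."""
    cut_sets = [[top]]
    changed = True
    while changed:
        changed = False
        next_sets = []
        for cs in cut_sets:
            gate = next((e for e in cs if e in tree), None)
            if gate is None:          # only basic events left
                next_sets.append(cs)
                continue
            changed = True
            kind, children = tree[gate]
            rest = [e for e in cs if e != gate]
            if kind == "AND":
                next_sets.append(rest + list(children))
            else:  # OR
                next_sets.extend(rest + [c] for c in children)
        cut_sets = next_sets
    # Minimize: drop duplicates and any set containing another cut set.
    sets = [frozenset(cs) for cs in cut_sets]
    minimal = [s for s in sets if not any(o < s for o in sets)]
    return sorted(set(minimal), key=sorted)

# Hypothetical fault tree: TOP = A AND G1, with G1 = B OR A.
tree = {"TOP": ("AND", ["A", "G1"]), "G1": ("OR", ["B", "A"])}
```

    Here the expansion yields {A, B} and {A}; minimization removes {A, B} because the single-event cut set {A} already fails the top event.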

  1. Analysis of Eiffel Tower (CEE 4404 Computer Analysis of Structures I). Ananda Mehta, Harshil Patel, Li Ma, Tian Gao

    E-Print Network [OSTI]

    Virginia Tech

    Analysis of Eiffel Tower (CEE 4404 Computer Analysis of Structures I), by Ananda Mehta, Harshil Patel, Li Ma, and Tian Gao. Introduction: the Eiffel Tower, the global icon of France, is an iron ... Assumptions ...

  2. Making Computer Vision Computationally Efficient

    E-Print Network [OSTI]

    Sundaram, Narayanan

    2012-01-01T23:59:59.000Z

    Table-of-contents excerpt: Parallelizing Computer Vision (Numerical ...); Pattern analysis of computer vision workloads; Understanding Computer Vision (Patterns and ...)

  3. Computational ligand design and analysis in protein complexes using inverse methods, combinatorial search, and accurate solvation modeling

    E-Print Network [OSTI]

    Altman, Michael Darren

    2006-01-01T23:59:59.000Z

    This thesis presents the development and application of several computational techniques to aid in the design and analysis of small molecules and peptides that bind to protein targets. First, an inverse small-molecule ...

  4. User's manual for MOCUS-BACKFIRE [i.e. MOCUS-BACFIRE] : a computer program for common cause failure analysis

    E-Print Network [OSTI]

    Heising, Carolyn D.

    1981-01-01T23:59:59.000Z

    This report is the user's manual for MOCUS-BACFIRE, a computer programme for qualitative common cause analysis. The MOCUS-BACFIRE package code was developed by coupling the MOCUS code and BACFIRE code. The MOCUS code is a ...

  5. High Performance Computing in the U.S. in 1995: An Analysis on the Basis of the TOP500 List

    E-Print Network [OSTI]

    Dongarra, Jack

    High Performance Computing in the U.S. in 1995: An Analysis on the Basis of the TOP500 List. Jack J. Dongarra, Computer Science Department, University of Tennessee, Knoxville, TN 37996-1301, and Mathematical Science Section, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6367, dongarra@cs.utk.edu, and Horst D. ...

  6. Analysis and selection of optimal function implementations in massively parallel computer

    DOE Patents [OSTI]

    Archer, Charles Jens (Rochester, MN); Peters, Amanda (Rochester, MN); Ratterman, Joseph D. (Rochester, MN)

    2011-05-31T23:59:59.000Z

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
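    The patent's idea of collecting performance data for a set of implementations under varying input parameters, then generating selection code that dispatches to the fastest one, can be illustrated with a small benchmark-and-dispatch sketch. The two implementations and the input-size buckets below are hypothetical:

```python
import timeit

def build_selector(implementations, sample_inputs, trials=200):
    """Benchmark each implementation on a sample input per bucket,
    record the fastest, and return a dispatching function."""
    best_for = {}
    for key, arg in sample_inputs.items():
        timings = {
            name: timeit.timeit(lambda f=f, a=arg: f(a), number=trials)
            for name, f in implementations.items()
        }
        best_for[key] = min(timings, key=timings.get)

    def dispatch(key, arg):
        # Call the implementation that won the benchmark for this bucket.
        return implementations[best_for[key]](arg)

    dispatch.best_for = best_for
    return dispatch

def sum_loop(xs):
    """Naive alternative implementation of summation."""
    total = 0
    for x in xs:
        total += x
    return total

# Two hypothetical implementations of the same function, two input buckets.
impls = {"builtin": sum, "loop": sum_loop}
samples = {"small": list(range(10)), "large": list(range(10_000))}
select = build_selector(impls, samples)
```

    The winning implementation per bucket depends on the machine, which is the point: selection is driven by collected performance data, not by a fixed choice.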

  7. Environmental impact statement/state analysis report. Cedar Bay Cogeneration Project, Jacksonville, Florida (EPA and FDER). Including Technical Appendix. Draft report. [Independent Power Generation

    SciTech Connect (OSTI)

    Not Available

    1990-05-01T23:59:59.000Z

    AES/Cedar Bay, Inc. proposes to construct and operate a cogeneration facility on an existing industrial site within the North District of Duval County, approximately eight miles north of Jacksonville, Florida. The plant will produce 225 megawatts of electricity for sale to Florida Power and Light Company. In addition, steam will be sold to the adjacent Seminole Kraft Corporation paper mill. The document, prepared pursuant to the National Environmental Policy Act, assesses the proposed project and alternatives with respect to impacts on the natural and man-made environments. Potential mitigative measures are also evaluated. The Technical Appendix includes a copy of U.S. EPA's draft National Pollutant Discharge Elimination System permit, FDER's Conditions of Power Plant Siting Certification, as well as other state agency reports pertinent to the proposed project.

  8. Methods, computer readable media, and graphical user interfaces for analysis of frequency selective surfaces

    DOE Patents [OSTI]

    Kotter, Dale K. (Shelley, ID) [Shelley, ID; Rohrbaugh, David T. (Idaho Falls, ID) [Idaho Falls, ID

    2010-09-07T23:59:59.000Z

    A frequency selective surface (FSS) and associated methods for modeling, analyzing and designing the FSS are disclosed. The FSS includes a pattern of conductive material formed on a substrate to form an array of resonance elements. At least one aspect of the frequency selective surface is determined by defining a frequency range including multiple frequency values, determining a frequency dependent permittivity across the frequency range for the substrate, determining a frequency dependent conductivity across the frequency range for the conductive material, and analyzing the frequency selective surface using a method of moments analysis at each of the multiple frequency values for an incident electromagnetic energy impinging on the frequency selective surface. The frequency dependent permittivity and the frequency dependent conductivity are included in the method of moments analysis.

  9. High-Performance Computing for Real-Time Grid Analysis and Operation

    SciTech Connect (OSTI)

    Huang, Zhenyu; Chen, Yousu; Chavarría-Miranda, Daniel

    2013-10-31T23:59:59.000Z

    Power grids worldwide are undergoing an unprecedented transition as grid evolution meets the information revolution. The grid evolution is largely driven by the desire for green energy. Emerging grid technologies such as renewable generation, smart loads, plug-in hybrid vehicles, and distributed generation provide opportunities to generate energy from green sources and to manage energy use for better system efficiency. With utility companies actively deploying these technologies, a high level of penetration of these new technologies is expected in the next 5-10 years, bringing a level of intermittency, uncertainty, and complexity that the grid has not seen and was not designed for. On the other hand, the information infrastructure in the power grid is being revolutionized with large-scale deployment of sensors and meters in both the transmission and distribution networks. The future grid will have two-way flows of both electrons and information. The challenge is how to take advantage of the information revolution: pull the large amount of data in, process it in real time, and put information out to manage grid evolution. Without addressing this challenge, the opportunities in grid evolution will remain unfulfilled. This transition poses grand challenges in grid modeling, simulation, and information presentation. The computational complexity of underlying power grid modeling and simulation will significantly increase in the next decade due to an increased model size and a decreased time window allowed to compute model solutions. High-performance computing is essential to enable this transition. The essential technical barrier is to vastly increase the computational speed so operation response time can be reduced from minutes to seconds and sub-seconds.
    The speed at which key functions such as state estimation and contingency analysis are conducted (typically every 3-5 minutes) needs to be dramatically increased so that the analysis of contingencies is both comprehensive and real time. An even bigger challenge is how to incorporate dynamic information into real-time grid operation. Today's online grid operation is based on a static grid model and can only provide a static snapshot of current system operation status, while dynamic analysis is conducted offline because of low computational efficiency. The offline analysis uses a worst-case scenario to determine transmission limits, resulting in under-utilization of grid assets. This conservative approach does not necessarily lead to reliability: many times, actual power grid scenarios that were never studied push the grid over the edge, resulting in outages and blackouts. This chapter addresses the HPC needs in power grid analysis and operations. Example applications such as state estimation and contingency analysis are given to demonstrate the value of HPC in power grid applications. Future research directions are suggested for high-performance computing applications in power grids to improve their transparency, efficiency, and reliability.

  10. TRUMP-BD: A computer code for the analysis of nuclear fuel assemblies under severe accident conditions

    SciTech Connect (OSTI)

    Lombardo, N.J.; Marseille, T.J.; White, M.D.; Lowery, P.S.

    1990-06-01T23:59:59.000Z

    TRUMP-BD (Boil Down) is an extension of the TRUMP (Edwards 1972) computer program for the analysis of nuclear fuel assemblies under severe accident conditions. This extension allows prediction of the heat transfer rates, metal-water oxidation rates, fission product release rates, steam generation and consumption rates, and temperature distributions for nuclear fuel assemblies under core uncovery conditions. The heat transfer processes include conduction in solid structures, convection across fluid-solid boundaries, and radiation between interacting surfaces. Metal-water reaction kinetics are modeled with empirical relationships to predict the oxidation rates of steam-exposed Zircaloy and uranium metal. The metal-water oxidation models are parabolic in form with an Arrhenius temperature dependence. Uranium oxidation begins when fuel cladding failure occurs; Zircaloy oxidation occurs continuously at temperatures above 1300°F when metal and steam are available. From the metal-water reactions, the hydrogen generation rate, total hydrogen release, and temporal and spatial distribution of oxide formations are computed. Consumption of steam from the oxidation reactions and the effect of hydrogen on the coolant properties are modeled for independent coolant flow channels. Fission product release from exposed uranium metal in Zircaloy-clad fuel is modeled using empirical time and temperature relationships that consider the release to be subject to oxidation and volatilization/diffusion ("bake-out") release mechanisms. Release of the volatile species of iodine (I), tellurium (Te), cesium (Cs), ruthenium (Ru), strontium (Sr), zirconium (Zr), cerium (Ce), and barium (Ba) from uranium metal fuel may be modeled.
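    The parabolic, Arrhenius-type oxidation kinetics described above take the form W² = K·t with K = A·exp(−Q/RT). A sketch with illustrative placeholder constants (A and Q below are hypothetical, not TRUMP-BD's fitted values):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def oxide_gain(t_seconds, temp_k, A=3.3e3, Q=1.4e5):
    """Parabolic rate law with Arrhenius temperature dependence:
        W^2 = K * t,   K = A * exp(-Q / (R * T))
    A and Q are illustrative placeholders, not TRUMP-BD constants.
    Returns the oxide mass gain W (arbitrary units) after time t at T."""
    K = A * math.exp(-Q / (R * temp_k))
    return math.sqrt(K * t_seconds)
```

    The parabolic form captures diffusion-limited growth: quadrupling the exposure time only doubles the oxide gain, while raising the temperature increases the rate constant exponentially.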

  11. THE GREEN BANK TELESCOPE 350 MHz DRIFT-SCAN SURVEY II: DATA ANALYSIS AND THE TIMING OF 10 NEW PULSARS, INCLUDING A RELATIVISTIC BINARY

    SciTech Connect (OSTI)

    Lynch, Ryan S.; Kaspi, Victoria M.; Archibald, Anne M.; Karako-Argaman, Chen [Department of Physics, McGill University, 3600 University Street, Montreal, QC H3A 2T8 (Canada)]; Boyles, Jason; Lorimer, Duncan R.; McLaughlin, Maura A.; Cardoso, Rogerio F. [Department of Physics, West Virginia University, 111 White Hall, Morgantown, WV 26506 (United States)]; Ransom, Scott M. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States)]; Stairs, Ingrid H.; Berndsen, Aaron; Cherry, Angus; McPhee, Christie A. [Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 (Canada)]; Hessels, Jason W. T.; Kondratiev, Vladislav I.; Van Leeuwen, Joeri [ASTRON, The Netherlands Institute for Radio Astronomy, Postbus 2, 7990-AA Dwingeloo (Netherlands)]; Epstein, Courtney R. [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States)]; Pennucci, Tim [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904 (United States)]; Roberts, Mallory S. E. [Eureka Scientific Inc., 2452 Delmer Street, Suite 100, Oakland, CA 94602 (United States)]; Stovall, Kevin, E-mail: rlynch@physics.mcgill.ca [Center for Advanced Radio Astronomy and Department of Physics and Astronomy, University of Texas at Brownsville, Brownsville, TX 78520 (United States)]

    2013-02-15T23:59:59.000Z

    We have completed a 350 MHz Drift-scan Survey using the Robert C. Byrd Green Bank Telescope with the goal of finding new radio pulsars, especially millisecond pulsars that can be timed to high precision. This survey covered ~10,300 deg² and all of the data have now been fully processed. We have discovered a total of 31 new pulsars, 7 of which are recycled pulsars. A companion paper by Boyles et al. describes the survey strategy, sky coverage, and instrumental setup, and presents timing solutions for the first 13 pulsars. Here we describe the data analysis pipeline, survey sensitivity, and follow-up observations of new pulsars, and present timing solutions for 10 other pulsars. We highlight several sources: two interesting nulling pulsars, an isolated millisecond pulsar with a measurement of proper motion, and a partially recycled pulsar, PSR J0348+0432, which has a white dwarf companion in a relativistic orbit. PSR J0348+0432 will enable unprecedented tests of theories of gravity.

  12. A three-dimensional analysis of the flow and heat transfer for the modified chemical vapor deposition process including buoyancy, variable properties, and tube rotation

    SciTech Connect (OSTI)

    Lin, Y.T.; Choi, M.; Greif, R. (Univ. of California, Berkeley (USA))

    1991-05-01T23:59:59.000Z

    A study has been made of the heat transfer, flow, and particle deposition relative to the modified chemical vapor deposition (MCVD) process. The effects of variable properties, buoyancy, and tube rotation have been included in the study. The resulting three-dimensional temperature and velocity fields have been obtained for a range of conditions. The effects of buoyancy result in asymmetric temperature and axial velocity profiles with respect to the tube axis. Variable properties cause significant variations in the axial velocity along the tube and in the secondary flow in the region near the torch. Particle trajectories are shown to be strongly dependent on the tube rotation and are helices for large rotational speeds. The component of secondary flow in the radial direction is compared to the thermophoretic velocity, which is the primary cause of particle deposition in the MCVD process. Over the central portion of the tube the radial component of the secondary flow is most important in determining the motion of the particles.

  13. MHK technologies include current energy conversion

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    research leverages decades of experience in engineering and design and analysis (D&A) of wind power technologies, and its vast research complex, including high-performance...

  14. An Analysis Framework for Investigating the Trade-offs Between System Performance and Energy Consumption in a Heterogeneous Computing Environment

    E-Print Network [OSTI]

    Maciejewski, Anthony A.

    An Analysis Framework for Investigating the Trade-offs Between System Performance and Energy ... of energy and earn different amounts of utility. We demonstrate our analysis framework using real data. Abstract: Rising costs of energy consumption and an ongoing effort for increases in computing performance ...

  15. SAFE: A computer code for the steady-state and transient thermal analysis of LMR fuel elements

    SciTech Connect (OSTI)

    Hayes, S.L.

    1993-12-01T23:59:59.000Z

    SAFE is a computer code developed for both the steady-state and transient thermal analysis of single LMR fuel elements. The code employs a two-dimensional control-volume based finite difference methodology with fully implicit time marching to calculate the temperatures throughout a fuel element and its associated coolant channel for both the steady-state and transient events. The code makes no structural calculations or predictions whatsoever. It does, however, accept as input structural parameters within the fuel such as the distributions of porosity and fuel composition, as well as heat generation, to allow a thermal analysis to be performed on a user-specified fuel structure. The code was developed with ease of use in mind. An interactive input file generator and material property correlations internal to the code are available to expedite analyses using SAFE. This report serves as a complete design description of the code as well as a user's manual. A sample calculation made with SAFE is included to highlight some of the code's features. Complete input and output files for the sample problem are provided.
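    The fully implicit time marching that SAFE employs can be illustrated in one dimension: backward Euler turns each heat-conduction step into a tridiagonal linear system, solved here by the standard Thomas algorithm. This is a generic sketch, not SAFE's actual two-dimensional control-volume discretization:

```python
def implicit_step(T, dt, dx, alpha, t_left, t_right):
    """One backward-Euler step of 1-D heat conduction with fixed-temperature
    boundaries: -r*T[i-1] + (1+2r)*T[i] - r*T[i+1] = T_old[i], r = a*dt/dx^2.
    Solved with the Thomas algorithm for tridiagonal systems."""
    n = len(T)
    r = alpha * dt / dx ** 2
    a = [-r] * n          # sub-diagonal (a[0] unused)
    b = [1 + 2 * r] * n   # main diagonal
    c = [-r] * n          # super-diagonal (c[-1] unused)
    d = list(T)
    d[0] += r * t_left    # fold boundary temperatures into the RHS
    d[-1] += r * t_right
    # Forward elimination.
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # Back substitution.
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x
```

    Because the scheme is fully implicit, it remains stable for any time step, which is why codes of this kind can march transients with large steps; marching to steady state recovers the expected linear temperature profile between the boundaries.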

  16. VAXGAP: A code for the routine analysis of gamma-ray pulse-height spectra on a VAX computer

    SciTech Connect (OSTI)

    Killian, E.W.; Hartwell, J.K.

    1988-05-01T23:59:59.000Z

    This report describes the analysis algorithms and techniques used in the VAX Gamma-Ray Analysis Program (VAXGAP) at the Idaho National Engineering Laboratory (INEL). VAXGAP is a collection of computer programs for the analysis of gamma-ray pulse-height spectra. It operates on a Digital Equipment Corporation VAX computer, using the VMS operating system. Linear and nonlinear peak fitting techniques are used to calculate photopeak areas, which in turn are used to determine the concentration of gamma-ray emitting radionuclides in many different types of samples--from "hot" samples, collected from the primary coolant water of operating reactors, to very low-level environmental samples. VAXGAP is user friendly and provides laboratory technicians and spectroscopists with analysis results in a rapid and accurate manner with a minimal investment in computer hardware. On a VAX-750 computer, VAXGAP takes less than a minute to perform a typical spectrum analysis. VAXGAP programs are menu-driven and the user interface to the analysis functions is simple. Use of VAXGAP does not require a detailed knowledge of the computer operating system or gamma-ray spectroscopy. 3 refs., 5 figs., 4 tabs.
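    A much simpler cousin of VAXGAP's photopeak-area calculation is the classic net-area estimate, which subtracts a linear background interpolated from channels flanking the peak. This is not VAXGAP's nonlinear fitting, and the spectrum below is synthetic:

```python
def net_peak_area(counts, lo, hi, bg_width=3):
    """Net photopeak area over channels [lo, hi]: gross counts minus a
    linear background estimated from bg_width channels on each side."""
    left = counts[lo - bg_width:lo]
    right = counts[hi + 1:hi + 1 + bg_width]
    bg_per_channel = (sum(left) / len(left) + sum(right) / len(right)) / 2
    gross = sum(counts[lo:hi + 1])
    n_channels = hi - lo + 1
    return gross - n_channels * bg_per_channel

# Synthetic spectrum: flat background of 10 counts/channel with a
# 100-count peak spread over channels 10-14.
spectrum = [10] * 25
for ch, extra in zip(range(10, 15), [10, 25, 30, 25, 10]):
    spectrum[ch] += extra
```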

  17. Use of model calibration to achieve high accuracy in analysis of computer networks

    DOE Patents [OSTI]

    Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

    2004-05-11T23:59:59.000Z

    A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
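    The "probabilistic representations of load data" mentioned above can be sketched as an empirical distribution of measured delays, from which percentiles characterize the statistical persistence of performance variability. This is an illustration of the general idea, not the patented calibration method; the sample data and percentile choices are invented.

```python
# Sketch: characterize measured network delays probabilistically and
# use percentiles for prediction. Illustrative only -- not the patented
# model-calibration method; the delay data are synthetic.

import random

def empirical_percentile(samples, q):
    """Return the q-th percentile (0..1) of the measured samples,
    a simple probabilistic characterization of delay variability."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(q * len(s)))
    return s[idx]

random.seed(42)
# Simulated background-load delay measurements (ms): mostly fast,
# with an exponential tail of occasional congestion.
delays = [random.expovariate(1 / 20.0) for _ in range(1000)]
median = empirical_percentile(delays, 0.50)
p95 = empirical_percentile(delays, 0.95)
```

    A prediction model calibrated this way would report, for example, the 95th-percentile delay as a conservative bound for transaction-level events.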

  18. Computational analysis of an autophagy/translation switch based on mutual inhibition of MTORC1 and ULK1

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Szymańska, Paulina; Martin, Katie R.; MacKeigan, Jeffrey P.; Hlavacek, William S.; Lipniacki, Tomasz

    2015-03-11T23:59:59.000Z

    We constructed a mechanistic, computational model for regulation of (macro)autophagy and protein synthesis (at the level of translation). The model was formulated to study the system-level consequences of interactions among the following proteins: two key components of MTOR complex 1 (MTORC1), namely the protein kinase MTOR (mechanistic target of rapamycin) and the scaffold protein RPTOR; the autophagy-initiating protein kinase ULK1; and the multimeric energy-sensing AMP-activated protein kinase (AMPK). Inputs of the model include intrinsic AMPK kinase activity, which is taken as an adjustable surrogate parameter for cellular energy level or AMP:ATP ratio, and rapamycin dose, which controls MTORC1 activity. Outputs of the model include the phosphorylation level of the translational repressor EIF4EBP1, a substrate of MTORC1, and the phosphorylation level of AMBRA1 (activating molecule in BECN1-regulated autophagy), a substrate of ULK1 critical for autophagosome formation. The model incorporates reciprocal regulation of MTORC1 and ULK1 by AMPK, mutual inhibition of MTORC1 and ULK1, and ULK1-mediated negative feedback regulation of AMPK. Through analysis of the model, we find that these processes may be responsible, depending on conditions, for graded responses to stress inputs, for bistable switching between autophagy and protein synthesis, or for relaxation oscillations, comprising alternating periods of autophagy and protein synthesis. A sensitivity analysis indicates that the prediction of oscillatory behavior is robust to changes of the parameter values of the model. The model provides testable predictions about the behavior of the AMPK-MTORC1-ULK1 network, which plays a central role in maintaining cellular energy and nutrient homeostasis.
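    The bistable switching that the abstract attributes to mutual inhibition can be demonstrated with a generic two-variable toggle. This is NOT the published MTORC1/ULK1 model: it is a textbook mutual-inhibition motif (Hill-type repression, forward-Euler integration) with invented parameter values, shown only to illustrate how two initial conditions settle into opposite steady states.

```python
# Toy mutual-inhibition switch illustrating bistability of the kind the
# abstract describes. Not the published model: generic toggle equations
# with invented parameters, integrated by forward Euler.

def integrate(x0, y0, a=4.0, n=2, dt=0.01, steps=20000):
    """Euler-integrate dx/dt = a/(1+y^n) - x, dy/dt = a/(1+x^n) - y."""
    x, y = x0, y0
    for _ in range(steps):
        dx = a / (1 + y**n) - x
        dy = a / (1 + x**n) - y
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Two initial conditions settle into opposite steady states:
x1, y1 = integrate(2.0, 0.1)   # "x wins" (loosely: protein synthesis on)
x2, y2 = integrate(0.1, 2.0)   # "y wins" (loosely: autophagy on)
```

    With weaker inhibition (smaller `a`) the same equations have a single steady state, which mirrors the abstract's point that the network's behavior (graded, bistable, or oscillatory) depends on conditions.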

  19. Second-order adjoint sensitivity analysis procedure (SO-ASAP) for computing exactly and efficiently first- and second-order sensitivities in large-scale linear systems: I. Computational methodology

    E-Print Network [OSTI]

    Dan G. Cacuci

    2014-11-22T23:59:59.000Z

    This work presents the second-order forward and adjoint sensitivity analysis procedures (SO-FSAP and SO-ASAP) for computing exactly and efficiently the second-order functional derivatives of physical (engineering, biological, etc.) system responses to the system's model parameters. The definition of system parameters used in this work includes all computational input data, correlations, initial and/or boundary conditions, etc. For a physical system comprising N parameters and M responses, we note that the SO-FSAP requires a total of 0.5*N**2+1.5*N large-scale computations for obtaining all of the first- and second-order sensitivities, for all M system responses. On the other hand, the SO-ASAP requires a total of 2*N+1 large-scale computations for obtaining all of the first- and second-order sensitivities, for one functional-type system response. Therefore, the SO-FSAP should be used when M is much larger than N, while the SO-ASAP should be used when N is much larger than M. The original SO-ASAP presented in this work should enable the hitherto very difficult, if not intractable, exact computation of all of the second-order response sensitivities (i.e., functional Gateaux-derivatives) for large systems involving many parameters, as usually encountered in practice. Very importantly, the implementation of the SO-ASAP requires very little additional effort beyond the construction of the adjoint sensitivity system needed for computing the first-order sensitivities.
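    The operation counts stated in the abstract translate directly into a simple cost comparison: the forward procedure's cost is independent of the number of responses M, while the adjoint procedure's cost scales linearly with M. A small helper makes the trade-off concrete (the function names are mine, the formulas are the abstract's):

```python
# Operation counts transcribed from the abstract: SO-FSAP needs
# 0.5*N**2 + 1.5*N large-scale computations for ALL M responses, while
# SO-ASAP needs 2*N + 1 PER response.

def fsap_cost(n):
    return 0.5 * n**2 + 1.5 * n          # independent of M

def asap_cost(n, m):
    return m * (2 * n + 1)               # scales linearly with M

def cheaper_procedure(n, m):
    return "SO-ASAP" if asap_cost(n, m) < fsap_cost(n) else "SO-FSAP"
```

    For example, with N = 1000 parameters and a single response, the adjoint route needs 2001 large-scale computations versus 501500 for the forward route; with N = 10 and M = 1000 the forward route wins.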

  20. National cyber defense high performance computing and analysis : concepts, planning and roadmap.

    SciTech Connect (OSTI)

    Hamlet, Jason R.; Keliiaa, Curtis M.

    2010-09-01T23:59:59.000Z

    There is a national cyber dilemma that threatens the very fabric of government, commercial and private use operations worldwide. Much is written about 'what' the problem is, and though the basis for this paper is an assessment of the problem space, we target the 'how' solution space of the wide-area national information infrastructure through the advancement of science, technology, evaluation and analysis with actionable results intended to produce a more secure national information infrastructure and a comprehensive national cyber defense capability. This cybersecurity High Performance Computing (HPC) analysis concepts, planning and roadmap activity was conducted as an assessment of cybersecurity analysis as a fertile area of research and investment for high value cybersecurity wide-area solutions. This report and a related SAND2010-4765 Assessment of Current Cybersecurity Practices in the Public Domain: Cyber Indications and Warnings Domain report are intended to provoke discussion throughout a broad audience about developing a cohesive HPC centric solution to wide-area cybersecurity problems.

  1. Multimedia Environmental Pollutant Assessment System (MEPAS) sensitivity analysis of computer codes

    SciTech Connect (OSTI)

    Doctor, P.G.; Miley, T.B.; Cowan, C.E.

    1990-04-01T23:59:59.000Z

    The Multimedia Environmental Pollutant Assessment System (MEPAS) is a computer-based methodology developed by the Pacific Northwest Laboratory (PNL) for the US Department of Energy (DOE) to estimate health impacts from the release of hazardous chemicals and radioactive materials. The health impacts are estimated from the environmental inventory and release or emission rate, constituent transport, constituent uptake and toxicity, and exposure route parameters. As part of MEPAS development and evaluation, PNL performed a formal parametric sensitivity analysis to determine the sensitivity of the model output to the input parameters, and to provide a systematic and objective method for determining the relative importance of the input parameters. The sensitivity analysis determined the sensitivity of the Hazard Potential Index (HPI) values to combinations of transport pathway and exposure routes important to evaluating environmental problems at DOE sites. Two combinations of transport pathways and exposure routes were evaluated. The sensitivity analysis focused on evaluating the effect of variation in user-specified parameters, such as constituent inventory, release and emission rates, and parameters describing the transport and exposure routes. The constituents used were strontium-90, yttrium-90, tritium, arsenic, mercury, polychlorinated biphenyls, toluene, and perchloroethylene. 28 refs., 3 figs., 46 tabs.
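    The basic mechanics of a parametric sensitivity analysis like the one described above can be sketched as one-at-a-time perturbation: vary each user-specified input, measure the change in the output index, and rank the parameters. The MEPAS study used a formal, systematic design; the toy "hazard index" model and parameter names below are invented for illustration.

```python
# One-at-a-time parametric sensitivity sketch. The MEPAS analysis was
# more elaborate; this minimal version perturbs each input by +10% and
# ranks parameters by relative output change. The surrogate model and
# parameter names are invented.

def hazard_index(p):
    # Invented surrogate: index grows with inventory and release rate,
    # shrinks with distance to the receptor.
    return p["inventory"] * p["release_rate"] / p["distance"] ** 2

def rank_sensitivities(model, base, delta=0.10):
    """Return parameter names sorted by |relative output change| for a
    fractional perturbation delta applied to each parameter in turn."""
    y0 = model(base)
    scores = {}
    for name in base:
        perturbed = dict(base)
        perturbed[name] *= 1 + delta
        scores[name] = abs(model(perturbed) - y0) / abs(y0)
    return sorted(scores, key=scores.get, reverse=True)

base = {"inventory": 100.0, "release_rate": 0.05, "distance": 500.0}
ranking = rank_sensitivities(hazard_index, base)
```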

  2. An introduction to computer viruses

    SciTech Connect (OSTI)

    Brown, D.R.

    1992-03-01T23:59:59.000Z

    This report on computer viruses is based upon a thesis written for the Master of Science degree in Computer Science from the University of Tennessee in December 1989 by David R. Brown. This thesis is entitled An Analysis of Computer Virus Construction, Proliferation, and Control and is available through the University of Tennessee Library. This paper contains an overview of the computer virus arena that can help the reader to evaluate the threat that computer viruses pose. The extent of this threat can only be determined by evaluating many different factors. These factors include the relative ease with which a computer virus can be written, the motivation involved in writing a computer virus, the damage and overhead incurred by infected systems, and the legal implications of computer viruses, among others. Based upon the research, the development of a computer virus seems to require more persistence than technical expertise. This is a frightening proclamation to the computing community. The education of computer professionals to the dangers that viruses pose to the welfare of the computing industry as a whole is stressed as a means of inhibiting the current proliferation of computer virus programs. Recommendations are made to assist computer users in preventing infection by computer viruses. These recommendations support solid general computer security practices as a means of combating computer viruses.

  3. Computational modeling and analysis of airflow in a tritium storage room

    SciTech Connect (OSTI)

    Chen, Z. (Zukun); Konecni, S. (Snezana); Whicker, J. J. (Jeffrey J.)

    2003-01-01T23:59:59.000Z

    In this study, a commercial computational fluid dynamics (CFD) code, CFX-5.5, was utilized to assess flow field characteristics, and to simulate tritium gas releases and subsequent transport in a storage room in the tritium handling facility at Los Alamos. The study was performed with mesh refinement, and the results were compared. The results show a complex, ventilation-induced flow field with vortices, velocity gradients, and stagnant air pockets. This paper also explains the time-dependent gas dispersion results. The numerical analysis method used in this study provides important information that can be validated experimentally with an aerosol tracer measurement method frequently used at Los Alamos. Application of CFD can have a favorable impact on the design of ventilation systems and worker safety with consideration to facility costs.

  4. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing: the PRIMA Project

    SciTech Connect (OSTI)

    Malony, Allen D. [Department of Computer and Information Science, University of Oregon] [Department of Computer and Information Science, University of Oregon; Wolf, Felix G. [Juelich Supercomputing Centre, Forschungszentrum Juelich] [Juelich Supercomputing Centre, Forschungszentrum Juelich

    2014-01-31T23:59:59.000Z

    The growing number of cores provided by today's high-end computing systems presents substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. 
The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish these objectives: (1) refactor TAU and Scalasca performance system components for core code sharing and (2) integrate TAU and Scalasca functionality through data interfaces, formats, and utilities. As presented in this report, the project has completed these goals. In addition to shared technical advances, the groups have worked to engage with users through application performance engineering and tools training. In this regard, the project benefits from the close interactions the teams have with national laboratories in the United States and Germany. We have also sought to enhance our interactions through joint tutorials and outreach. UO has become a member of the Virtual Institute of High-Productivity Supercomputing (VI-HPS) established by the Helmholtz Association of German Research Centres as a center of excellence, focusing on HPC tools for diagnosing programming errors and optimizing performance. UO and FZJ have conducted several VI-HPS training activities together within the past three years.

  5. Computational mechanics

    SciTech Connect (OSTI)

    Raboin, P J

    1998-01-01T23:59:59.000Z

    The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

  6. Analysis of a Computational Biology Simulation Technique on Emerging Processing Architectures

    SciTech Connect (OSTI)

    Vetter, Jeffrey S [ORNL; Meredith, Jeremy S [ORNL; Alam, Sadaf R [ORNL

    2007-01-01T23:59:59.000Z

    Multi-paradigm, multi-threaded and multi-core computing devices available today provide several orders of magnitude performance improvement over mainstream microprocessors. These devices include the STI Cell Broadband Engine, Graphical Processing Units (GPU) and the Cray massively-multithreaded processors, available in desktop computing systems as well as proposed for supercomputing platforms. The main challenge in utilizing these powerful devices is their unique programming paradigms. GPUs and the Cell systems require code developers to manage code and data explicitly, while the Cray multithreaded architecture requires them to generate a very large number of threads or independent tasks concurrently. In this paper, we explain strategies for optimizing a molecular dynamics (MD) calculation that is used in bio-molecular simulations on three devices: Cell, GPU and MTA-2. We show that the Cray MTA-2 system requires minimal code modification and does not outperform the microprocessor runs; but it demonstrates an improved workload scaling behavior over the microprocessor implementation. On the other hand, substantial porting and optimization efforts on the Cell and the GPU systems result in a 5x to 6x improvement, respectively, over a 2.2 GHz Opteron system.

  7. Computational Fluid Dynamic Analysis of the VHTR Lower Plenum Standard Problem

    SciTech Connect (OSTI)

    Richard W. Johnson; Richard R. Schultz

    2009-07-01T23:59:59.000Z

    The United States Department of Energy is promoting the resurgence of nuclear power in the U. S. for both electrical power generation and production of process heat required for industrial processes such as the manufacture of hydrogen for use as a fuel in automobiles. The DOE project is called the next generation nuclear plant (NGNP) and is based on a Generation IV reactor concept called the very high temperature reactor (VHTR), which will use helium as the coolant at temperatures ranging from 450 C to perhaps 1000 C. While computational fluid dynamics (CFD) has not been used for past safety analysis for nuclear reactors in the U. S., it is being considered for safety analysis for existing and future reactors. It is fully recognized that CFD simulation codes will have to be validated for flow physics reasonably close to actual fluid dynamic conditions expected in normal and accident operational situations. To this end, experimental data have been obtained in a scaled model of a narrow slice of the lower plenum of a prismatic VHTR. The present report presents results of CFD examinations of these data to explore potential issues with the geometry, the initial conditions, the flow dynamics and the data needed to fully specify the inlet and boundary conditions; results for several turbulence models are examined. Issues are addressed and recommendations about the data are made.

  8. Present and Future Computational Requirements General Plasma Physics Center for Integrated Computation and Analysis of Reconnection and Turbulence (CICART)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  9. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    SciTech Connect (OSTI)

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael; Wu, Kesheng

    2010-09-30T23:59:59.000Z

    Extensive computing power has been used to tackle issues such as climate changes, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure: as long as a single VM is running, it can make progress, whereas the whole analysis job fails as soon as one MPI node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.
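    The resilience argument above can be sketched as a task farm in which each worker processes one chunk independently and a single failure does not abort the rest, unlike a monolithic MPI job. Thread workers stand in for the paper's VMs here; the task data and failure mode are invented for illustration.

```python
# Sketch of the failure-resilience point: independent workers, partial
# results survive one worker's crash. Threads stand in for the paper's
# VMs; the chunk data and simulated failure are invented.

from concurrent.futures import ThreadPoolExecutor

def analyze_chunk(chunk_id):
    if chunk_id == 3:                       # simulate one crashed worker/VM
        raise RuntimeError("VM %d crashed" % chunk_id)
    return (chunk_id, "cyclone count for chunk %d" % chunk_id)

results, failures = [], []
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(analyze_chunk, i): i for i in range(8)}
    for fut, chunk_id in futures.items():
        try:
            results.append(fut.result())
        except RuntimeError:
            failures.append(chunk_id)
```

    Seven of the eight chunks still complete; in an MPI job without checkpointing, the equivalent node failure would abort the entire analysis.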

  10. User manual for INVICE 0.1-beta : a computer code for inverse analysis of isentropic compression experiments.

    SciTech Connect (OSTI)

    Davis, Jean-Paul

    2005-03-01T23:59:59.000Z

    INVICE (INVerse analysis of Isentropic Compression Experiments) is a FORTRAN computer code that implements the inverse finite-difference method to analyze velocity data from isentropic compression experiments. This report gives a brief description of the methods used and the options available in the first beta version of the code, as well as instructions for using the code.

  11. Experimental and computational analysis of toughness anisotropy in an AA2139 Al-alloy for aerospace applications

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Experimental and computational analysis of toughness anisotropy in an AA2139 Al-alloy for aerospace applications. T.F. Morgeneyer, J. Besson, H. Proudhon, M.J. Starink and I. Sinclair.

  12. Journal of Computational Geometry jocg.org WORST-CASE AND SMOOTHED ANALYSIS OF K-MEANS CLUSTERING

    E-Print Network [OSTI]

    Al Hanbali, Ahmad

    Journal of Computational Geometry jocg.org WORST-CASE AND SMOOTHED ANALYSIS OF K-MEANS CLUSTERING WITH BREGMAN DIVERGENCES Bodo Manthey and Heiko Röglin Abstract. The k-means method is the method of choice for clustering, but it has a poor worst-case running-time. To narrow the gap between theory and practice, k-means has been studied in the semi-random
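    The k-means method whose worst-case and smoothed running time the paper analyzes is plain Lloyd's algorithm: alternate between assigning points to their nearest center and moving each center to its cluster mean. A minimal pure-Python version, with an invented 1-D data set:

```python
# Plain Lloyd's algorithm -- the "k-means method" the paper analyzes.
# Pure Python on 1-D points; the data set and starting centers are
# invented for illustration.

def kmeans(points, centers, iters=50):
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to its cluster mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two well-separated groups around 0 and 10.
data = [0.1, -0.2, 0.3, 0.0, 9.8, 10.1, 10.3, 9.9]
centers = kmeans(data, centers=[0.5, 5.0])
```

    On benign inputs like this, Lloyd's algorithm converges in a couple of iterations; the paper's point is that adversarial (worst-case) inputs can force very many iterations, while small random perturbations (smoothed analysis) rule such inputs out.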

  13. Dartmouth Computer Science Technical Report TR2011-689 802.15.4/ZigBee Analysis and Security

    E-Print Network [OSTI]

    Ricky A. Melgares, Thesis Report. 1 Introduction: For the last decade or so, we have seen ... of devices for executing physical attacks against the onboard hardware. Attacks against the PHY and MAC ...

  14. Comparative analysis of 11 different radioisotopes for palliative treatment of bone metastases by computational methods

    SciTech Connect (OSTI)

    Guerra Liberal, Francisco D. C., E-mail: meb12020@fe.up.pt, E-mail: adriana-tavares@msn.com; Tavares, Adriana Alexandre S., E-mail: meb12020@fe.up.pt, E-mail: adriana-tavares@msn.com; Tavares, João Manuel R. S., E-mail: tavares@fe.up.pt [Instituto de Engenharia Mecânica e Gestão Industrial, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias s/n, Porto 4200-465 (Portugal)]

    2014-11-01T23:59:59.000Z

    Purpose: Throughout the years, the palliative treatment of bone metastases using bone seeking radiotracers has been part of the therapeutic resources used in oncology, but the choice of which bone seeking agent to use is not consensual across sites and limited data are available comparing the characteristics of each radioisotope. Computational simulation is a simple and practical method to study and to compare a variety of radioisotopes for different medical applications, including the palliative treatment of bone metastases. This study aims to evaluate and compare 11 different radioisotopes currently in use or under research for the palliative treatment of bone metastases using computational methods. Methods: Computational models were used to estimate the percentage of deoxyribonucleic acid (DNA) damage (fast Monte Carlo damage algorithm), the probability of correct DNA repair (Monte Carlo excision repair algorithm), and the radiation-induced cellular effects (virtual cell radiobiology algorithm) post-irradiation with selected particles emitted by phosphorus-32 ({sup 32}P), strontium-89 ({sup 89}Sr), yttrium-90 ({sup 90}Y), tin-117 ({sup 117m}Sn), samarium-153 ({sup 153}Sm), holmium-166 ({sup 166}Ho), thulium-170 ({sup 170}Tm), lutetium-177 ({sup 177}Lu), rhenium-186 ({sup 186}Re), rhenium-188 ({sup 188}Re), and radium-223 ({sup 223}Ra). Results: {sup 223}Ra alpha particles, {sup 177}Lu beta minus particles, and {sup 170}Tm beta minus particles induced the highest cell death of all investigated particles and radioisotopes. The cell survival fraction measured post-irradiation with beta minus particles emitted by {sup 89}Sr and {sup 153}Sm, two of the most frequently used radionuclides in routine clinical practice for the palliative treatment of bone metastases, was higher than {sup 177}Lu beta minus particles and {sup 223}Ra alpha particles. 
Conclusions: {sup 223}Ra and {sup 177}Lu hold the highest potential for palliative treatment of bone metastases of all radioisotopes compared in this study. Data reported here may prompt future in vitro and in vivo experiments comparing different radionuclides for palliative treatment of bone metastases, raise the need for the careful rethinking of the current widespread clinical use of {sup 89}Sr and {sup 153}Sm, and perhaps strengthen the use of {sup 223}Ra and {sup 177}Lu in the palliative treatment of bone metastases.

  15. ICSE Workshop on Green and Sustainable Software Engineering, Zurich, Switzerland, 3rd June 2012 An Energy Consumption Model and Analysis Tool for Cloud Computing

    E-Print Network [OSTI]

    Schneider, Jean-Guy

    system-level optimisation. Keywords: green computing; cloud computing; energy consumption; performance ...

  16. Strategies and a computer aided package for design and analysis of induction machines for inverter-driven variable speed systems

    SciTech Connect (OSTI)

    Zhao, Z.; Xu, L. [Ohio State Univ., Columbus, OH (United States). Dept. of Electrical Engineering; El-Antably, A. [Delphi Power Propulsion Systems, Anderson, IN (United States)

    1995-12-31T23:59:59.000Z

    Induction machines designed for inverter-driven variable speed systems are different from those powered directly from utility power lines. In this paper, the design strategies of inverter-driven induction machines are discussed. This is followed by a description of a computer aided design and analysis package specifically for this purpose. The program package permits integrated design of machines with inverters, comprehensive performance analysis, and system optimization, resulting in 20--30% more power density for the induction machine than machines designed for direct utility power supply by conventional methods. Design and performance analysis results are presented to substantiate the conclusions.

  17. Analysis of alternatives for computing backwater at bridges for free-surface, subcritical flow conditions

    E-Print Network [OSTI]

    Kaatz, Kelly Jay

    1993-01-01T23:59:59.000Z

    Contents include: Downsville, Louisiana profile computation results; Flagon Bayou near Libuse, Louisiana site characteristics and profile computation results; Alexander Creek near St. Francisville, Louisiana site characteristics and flood of September 17, 1971 profile computation results; Tenmile Creek near Elizabeth, Louisiana site characteristics.

  18. Distributed computing systems programme

    SciTech Connect (OSTI)

    Duce, D.

    1984-01-01T23:59:59.000Z

    Publication of this volume coincides with the completion of the U.K. Science and Engineering Research Council's coordinated programme of research in Distributed Computing Systems (DCS) which ran from 1977 to 1984. The volume is based on presentations made at the programme's final conference. The first chapter explains the origins and history of DCS and gives an overview of the programme and its achievements. The remaining sixteen chapters review particular research themes (including imperative and declarative languages, and performance modelling), and describe particular research projects in technical areas including local area networks, design, development and analysis of concurrent systems, parallel algorithm design, functional programming and non-von Neumann computer architectures.

  19. Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors

    SciTech Connect (OSTI)

    John D. Bess; Keith C. Bledsoe; Bradley T. Rearden

    2011-02-01T23:59:59.000Z

    An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

  20. Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors

    SciTech Connect (OSTI)

    Bess, John [Idaho National Laboratory (INL); Bledsoe, Keith C [ORNL; Rearden, Bradley T [ORNL

    2011-01-01T23:59:59.000Z

    An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

  1. Computer-assisted comparison of analysis and test results in transportation experiments

    SciTech Connect (OSTI)

    Knight, R.D. [Gram, Inc., Albuquerque, NM (United States); Ammerman, D.J.; Koski, J.A. [Sandia National Labs., Albuquerque, NM (United States)

    1998-05-10T23:59:59.000Z

As a part of its ongoing research efforts, Sandia National Laboratories' Transportation Surety Center investigates the integrity of various containment methods for hazardous materials transport, subject to anomalous structural and thermal events such as free-fall impacts, collisions, and fires in both open and confined areas. Since it is not possible to conduct field experiments for every set of possible conditions under which an actual transportation accident might occur, accurate modeling methods must be developed which will yield reliable simulations of the effects of accident events under various scenarios. This requires computer software which is capable of assimilating and processing data from experiments performed as benchmarks, as well as data obtained from numerical models that simulate the experiment. Software tools which can present all of these results in a meaningful and useful way to the analyst are a critical aspect of this process. The purpose of this work is to provide software resources on a long term basis, and to ensure that the data visualization capabilities of the Center keep pace with advancing technology. This will provide leverage for its modeling and analysis abilities in a rapidly evolving hardware/software environment.

  2. Radiological Safety Analysis Computer (RSAC) Program Version 7.2 Users Manual

    SciTech Connect (OSTI)

    Dr. Bradley J Schrader

    2010-10-01T23:59:59.000Z

    The Radiological Safety Analysis Computer (RSAC) Program Version 7.2 (RSAC-7) is the newest version of the RSAC legacy code. It calculates the consequences of a release of radionuclides to the atmosphere. A user can generate a fission product inventory from either reactor operating history or a nuclear criticality event. RSAC-7 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates the decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated for inhalation, air immersion, ground surface, ingestion, and cloud gamma pathways. RSAC-7 can be used as a tool to evaluate accident conditions in emergency response scenarios, radiological sabotage events and to evaluate safety basis accident consequences. This users manual contains the mathematical models and operating instructions for RSAC-7. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-7. This program was designed for users who are familiar with radiological dose assessment methods.
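The decay and ingrowth that RSAC-7 tracks during transport follow the Bateman equations. A minimal two-member sketch with made-up half-lives and a hypothetical `bateman_pair` helper (RSAC-7's actual chains and nuclide libraries are far more extensive):

```python
import math

def bateman_pair(n1_0, n2_0, lam1, lam2, t):
    """Atoms of parent (1) and daughter (2) after time t [s],
    for a simple parent -> daughter -> (gone) chain, lam1 != lam2."""
    n1 = n1_0 * math.exp(-lam1 * t)
    ingrowth = n1_0 * lam1 / (lam2 - lam1) * (
        math.exp(-lam1 * t) - math.exp(-lam2 * t))
    n2 = n2_0 * math.exp(-lam2 * t) + ingrowth
    return n1, n2

# Hypothetical pair: parent half-life 8 d, daughter 2 d
lam1 = math.log(2) / (8 * 86400)
lam2 = math.log(2) / (2 * 86400)
n1, n2 = bateman_pair(1.0e20, 0.0, lam1, lam2, 8 * 86400)
print(n1, n2)   # parent halved after one half-life; daughter grown in
```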

  3. Radiological Safety Analysis Computer (RSAC) Program Version 7.0 Users Manual

    SciTech Connect (OSTI)

    Dr. Bradley J Schrader

    2009-03-01T23:59:59.000Z

    The Radiological Safety Analysis Computer (RSAC) Program Version 7.0 (RSAC-7) is the newest version of the RSAC legacy code. It calculates the consequences of a release of radionuclides to the atmosphere. A user can generate a fission product inventory from either reactor operating history or a nuclear criticality event. RSAC-7 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates the decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated for inhalation, air immersion, ground surface, ingestion, and cloud gamma pathways. RSAC-7 can be used as a tool to evaluate accident conditions in emergency response scenarios, radiological sabotage events and to evaluate safety basis accident consequences. This users manual contains the mathematical models and operating instructions for RSAC-7. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-7. This program was designed for users who are familiar with radiological dose assessment methods.

  4. Computational Analysis of Factors Influencing Enhancement of Thermal Conductivity of Nanofluids

    E-Print Network [OSTI]

    Okeke, George; Antony, Joseph; Ding, Yulong; 10.1007/s11051-011-0389-9

    2012-01-01T23:59:59.000Z

Numerical investigations are conducted to study the effect of factors such as particle clustering and interfacial layer thickness on thermal conductivity of nanofluids. Based on this, parameters including Kapitza radius, and fractal and chemical dimension which have received little attention by previous research are rigorously investigated. The degree of thermal enhancement is analysed for increasing aggregate size, particle concentration, interfacial thermal resistance, and fractal and chemical dimensions. This analysis is conducted for water-based nanofluids of Alumina (Al2O3), CuO and Titania (TiO2) nanoparticles where the particle concentrations are varied up to 4vol%. Results from the numerical work are validated using available experimental data. For the case of aggregate size, particle concentration and interfacial thermal resistance; the aspect ratio (ratio of radius of gyration of aggregate to radius of primary particle, Rg/a) is varied between 2 to 60. It was found that the enhancement decreases with interfacial layer thickness.

  5. Data file management in the DIII-D data acquisition and analysis computer systems

    SciTech Connect (OSTI)

    McHarg, B.B. Jr.

    1989-11-01T23:59:59.000Z

    DIII-D is a large tokamak plasma physics and fusion energy research experiment funded by the Department of Energy. Each shot of the experiment results in data files containing between 20 and 30 Mbytes of data. These shots occur about once every 10 minutes with 40 to 50 shots per operating day. Over 1.2 gigabytes have been acquired in one daily session. Most of this data is acquired by MODCOMP Classic computers and is transferred via a Network Systems Hyperchannel to the DIII-D DEC VAX cluster system which is connected via Ether-net to the User Service Center DEC VAX cluster system. Some other data is acquired by local MicroVAX based plasma diagnostic systems and is transferred via DECnet to the DIII-D cluster. A substantial part of these VAX cluster systems is devoted to handling the large data files so as to maintain availability of the data for users, provide for shot archiving and shot restoration capabilities, and at the same time allow for new data to be received into the systems. Many of these tasks are carried out in near real time in sequence with a tokamak shot while other tasks are performed periodically throughout operations or during off hours. These tasks include disk space management, data archiving to 6250 and/or 8 mm tape drives, data file migration from the DIII-D cluster to the User Service Center cluster, data file compression, and network wide data file access. 11 refs., 2 figs.

  6. Computational Fluid Dynamics Analysis of Very High Temperature Gas-Cooled Reactor Cavity Cooling System

    SciTech Connect (OSTI)

    Angelo Frisani; Yassin A. Hassan; Victor M. Ugaz

    2010-11-02T23:59:59.000Z

The design of passive heat removal systems is one of the main concerns for the modular very high temperature gas-cooled reactors (VHTR) vessel cavity. The reactor cavity cooling system (RCCS) is a key heat removal system during normal and off-normal conditions. The design and validation of the RCCS is necessary to demonstrate that VHTRs can survive the postulated accidents. The computational fluid dynamics (CFD) STAR-CCM+/V3.06.006 code was used for three-dimensional system modeling and analysis of the RCCS. A CFD model was developed to analyze heat exchange in the RCCS. The model incorporates a 180-deg section resembling the VHTR RCCS experimentally reproduced in a laboratory-scale test facility at Texas A&M University. All the key features of the experimental facility were taken into account during the numerical simulations. The objective of the present work was to benchmark CFD tools against experimental data addressing the behavior of the RCCS following accident conditions. Two cooling fluids (i.e., water and air) were considered to test the capability of maintaining the RCCS concrete walls' temperature below design limits. Different temperature profiles at the reactor pressure vessel (RPV) wall obtained from the experimental facility were used as boundary conditions in the numerical analyses to simulate VHTR transient evolution during accident scenarios. Mesh convergence was achieved with an intensive parametric study of the two different cooling configurations and selected boundary conditions. To test the effect of turbulence modeling on the RCCS heat exchange, predictions using several different turbulence models and near-wall treatments were evaluated and compared. The comparison among the different turbulence models analyzed showed satisfactory agreement for the temperature distribution inside the RCCS cavity medium and at the standpipe walls.
For such a complicated geometry and flow conditions, the tested turbulence models demonstrated that the realizable k-epsilon model with two-layer all y+ wall treatment performs better than the other k-epsilon and k-omega turbulence models when compared to the experimental results and the Reynolds stress transport turbulence model results. A scaling analysis was developed to address the distortions introduced by the CFD model in simulating the physical phenomena inside the RCCS system with respect to the full plant configuration. The scaling analysis demonstrated that both the experimental facility and the CFD model achieve a satisfactory resemblance of the main flow characteristics inside the RCCS cavity region, and convection and radiation heat exchange phenomena are properly scaled from the actual plant.

  7. The Computational Sciences. Research

    E-Print Network [OSTI]

    Christensen, Dan

The Computational Sciences. Research activities range from the theoretical foundations ... The teaching mission of the computational sciences includes almost every student in the University ... computational hardware and software. The computational sciences are undergoing explosive growth worldwide.

  8. Fracture Analysis of Vessels Oak Ridge FAVOR, v06.1, Computer Code: Theory and Implementation of Algorithms, Methods, and Correlations

    SciTech Connect (OSTI)

    Williams, P. T. [ORNL; Dickson, T. L. [ORNL; Yin, S. [ORNL

    2007-12-01T23:59:59.000Z

The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.

  9. Distributed scalar quantization for computing: High-resolution analysis and extensions

    E-Print Network [OSTI]

    Misra, Vinith

    Communication of quantized information is frequently followed by a computation. We consider situations of distributed functional scalar quantization: distributed scalar quantization of (possibly correlated) sources followed ...

  10. Why Computational Science and Engineering?

    E-Print Network [OSTI]

    Cengarle, Mara Victoria

Why Computational Science and Engineering? Computational Science and Engineering (CSE) is the multi... the CSE program is based on three pillars: applied mathematics (especially numerical analysis), computer... numerical, visualization, and data analysis methods. Traditional programs in computer science, mathematics...

  11. SMACS: a system of computer programs for probabilistic seismic analysis of structures and subsystems. Volume I. User's manual

    SciTech Connect (OSTI)

    Maslenikov, O.R.; Johnson, J.J.; Tiong, L.W.; Mraz, M.J.; Bumpus, S.; Gerhard, M.A.

    1985-03-01T23:59:59.000Z

    The SMACS (Seismic Methodology Analysis Chain with Statistics) system of computer programs, one of the major computational tools of the Seismic Safety Margins Research Program (SSMRP), links the seismic input with the calculation of soil-structure interaction, major structure response, and subsystem response. The seismic input is defined by ensembles of acceleration time histories in three orthogonal directions. Soil-structure interaction and detailed structural response are then determined simultaneously, using the substructure approach to SSI as implemented in the CLASSI family of computer programs. The modus operandi of SMACS is to perform repeated deterministic analyses, each analysis simulating an earthquake occurrence. Parameter values for each simulation are sampled from assumed probability distributions according to a Latin hypercube experimental design. The user may specify values of the coefficients of variation (COV) for the distributions of the input variables. At the heart of the SMACS system is the computer program SMAX, which performs the repeated SSI response calculations for major structure and subsystem response. This report describes SMAX and the pre- and post-processor codes, used in conjunction with it, that comprise the SMACS system. (ACR)
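The Latin hypercube design used by SMACS can be sketched as follows. In this standalone illustration, for each input variable the [0,1) range is cut into n equal strata and each simulated earthquake occurrence draws from a different stratum, so n runs cover every stratum of every variable exactly once; SMACS then maps such samples through the assumed probability distributions with the user-specified COVs. The `latin_hypercube` helper is hypothetical.

```python
import random

def latin_hypercube(n_samples, n_vars, rng=random.Random(42)):
    """One stratified, shuffled column of samples per input variable."""
    design = []
    for _ in range(n_vars):
        # one draw inside each of the n_samples equal strata of [0, 1)
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        design.append(strata)
    # transpose: one row per simulated earthquake occurrence
    return list(zip(*design))

samples = latin_hypercube(5, 3)
for row in samples:
    print(row)
```

Compared with plain Monte Carlo, this stratification guarantees that even a small number of repeated deterministic analyses spans the full range of every input parameter.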

12. BPO crude oil analysis data base user's guide: Methods, publications, computer access correlations, uses, availability

    SciTech Connect (OSTI)

    Sellers, C.; Fox, B.; Paulz, J.

    1996-03-01T23:59:59.000Z

    The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.

  13. Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA

    SciTech Connect (OSTI)

    Carbajo, Juan J [ORNL; Qualls, A L [ORNL

    2008-01-01T23:59:59.000Z

    The transient analysis 3-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a space reactor of 180 kW(thermal), 40 kW (net, electrical) with eight Stirling engines (SEs). Each SE will generate over 6 kWe; the excess power will be needed for the pumps and other power management devices. The reactor will be cooled by NaK (a eutectic mixture of sodium and potassium which is liquid at ambient temperature). This space reactor is intended to be deployed over the surface of the Moon or Mars. The reactor operating life will be 8 to 10 years. The RELAP5-3D/ATHENA code is being developed and maintained by Idaho National Laboratory. The code can employ a variety of coolants in addition to water, the original coolant employed with early versions of the code. The code can also use 3-D volumes and 3-D junctions, thus allowing for more realistic representation of complex geometries. A combination of 3-D and 1-D volumes is employed in this study. The space reactor model consists of a primary loop and two secondary loops connected by two heat exchangers (HXs). Each secondary loop provides heat to four SEs. The primary loop includes the nuclear reactor with the lower and upper plena, the core with 85 fuel pins, and two vertical heat exchangers (HX). The maximum coolant temperature of the primary loop is 900 K. The secondary loops also employ NaK as a coolant at a maximum temperature of 877 K. The SEs heads are at a temperature of 800 K and the cold sinks are at a temperature of ~400 K. Two radiators will be employed to remove heat from the SEs. The SE HXs surrounding the SE heads are of annular design and have been modeled using 3-D volumes. These 3-D models have been used to improve the HX design by optimizing the flows of coolant and maximizing the heat transferred to the SE heads. 
The transients analyzed include failure of one or more Stirling engines, trip of the reactor pump, and trips of the secondary loop pumps feeding the HXs of the Stirling engines. Loss of one radiator sink has also been simulated. The effects of reduced gravity on the transients have also been investigated. The transients studied have been used to demonstrate the safety and the operability of the system. The results of the transients will be used to evaluate which transients the system can survive without damage and can continue operating at nominal or reduced power levels for the intended lifetime of the reactor.

  14. Computational Analysis of Factors Influencing Enhancement of Thermal Conductivity of Nanofluids

    E-Print Network [OSTI]

    George Okeke; Sanjeeva Witharana; Joseph Antony; Yulong Ding

    2012-05-09T23:59:59.000Z

    Numerical investigations are conducted to study the effect of factors such as particle clustering and interfacial layer thickness on thermal conductivity of nanofluids. Based on this, parameters including Kapitza radius, and fractal and chemical dimension which have received little attention by previous research are rigorously investigated. The degree of thermal enhancement is analysed for increasing aggregate size, particle concentration, interfacial thermal resistance, and fractal and chemical dimensions. This analysis is conducted for water-based nanofluids of Alumina (Al2O3), CuO and Titania (TiO2) nanoparticles where the particle concentrations are varied up to 4vol%. Results from the numerical work are validated using available experimental data. For the case of aggregate size, particle concentration and interfacial thermal resistance; the aspect ratio (ratio of radius of gyration of aggregate to radius of primary particle, Rg/a) is varied between 2 to 60. It was found that the enhancement decreases with interfacial layer thickness. Also the rate of decrease is more significant after a given aggregate size. For a given interfacial resistance, the enhancement is mostly sensitive to Rg/a <20 indicated by the steep gradients of data plots. Predicted and experimental data for thermal conductivity enhancement are in good agreement.
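As context for the reported enhancements, the classical Maxwell (1881) effective-medium model gives the baseline conductivity of a dilute suspension before aggregation and interfacial effects are added; cluster models of the kind used in this study refine that baseline. The fluid and particle conductivities below are typical literature values used purely for illustration, not numbers from the paper.

```python
def maxwell_keff(k_f, k_p, phi):
    """Effective thermal conductivity of a dilute suspension (Maxwell model).
    k_f: base fluid conductivity, k_p: particle conductivity,
    phi: particle volume fraction."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

k_water, k_alumina = 0.60, 40.0        # W/m-K, illustrative values
for phi in (0.01, 0.02, 0.04):         # up to 4 vol%, as in the study
    k = maxwell_keff(k_water, k_alumina, phi)
    print(phi, round(100 * (k / k_water - 1), 2))  # % enhancement
```

For highly conducting particles the model predicts an enhancement of roughly 3*phi, about 12% at 4 vol%, which is why measured enhancements well above this level motivate aggregation-based explanations.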

  15. Journal of Machine Learning Research 7 (2006) 11831204 Submitted 12/05; Revised 3/06; Published 7/06 Computational and Theoretical Analysis of Null Space and

    E-Print Network [OSTI]

    Ye, Jieping

    2006-01-01T23:59:59.000Z

Computational and Theoretical Analysis of Null Space and Orthogonal Linear Discriminant Analysis, Jieping Ye. Linear discriminant analysis (LDA) is a classical statistical approach for supervised dimensionality reduction, used as a pre-processing step in many applications. It aims to maximize the ratio of the between...

  16. Cogeneration: Economic and technical analysis. (Latest citations from the INSPEC - The Database for Physics, Electronics, and Computing). Published Search

    SciTech Connect (OSTI)

    Not Available

    1993-11-01T23:59:59.000Z

    The bibliography contains citations concerning economic and technical analyses of cogeneration systems. Topics include electric power generation, industrial cogeneration, use by utilities, and fuel cell cogeneration. The citations explore steam power station, gas turbine and steam turbine technology, district heating, refuse derived fuels, environmental effects and regulations, bioenergy and solar energy conversion, waste heat and waste product recycling, and performance analysis. (Contains a minimum of 104 citations and includes a subject term index and title list.)

  17. Computational simulation is becoming a primary means of analysis and decision making in national laboratories and

    E-Print Network [OSTI]

    Fainman, Yeshaiahu

efficiency of wind turbine blades to evaluating the surgical design for a bypass graft, complex engineering ... leading experts in finite element methods, high performance computing, and material mechanics. Master

  18. GAMANL : a computer program applying Fourier transforms to the analysis of gamma spectral data

    E-Print Network [OSTI]

    Harper, Thomas Lawrence

    1968-01-01T23:59:59.000Z

GAMANL, a computer code for automatically identifying the peaks in complex spectra and determining their centers and areas, is described. The principal feature of the method is a data smoothing technique employing Fourier ...
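The Fourier smoothing idea can be illustrated as a low-pass filter followed by a peak search: transform the spectrum, attenuate the high-frequency components that carry mostly statistical noise, transform back, and hunt for maxima. This is a simplified sketch on a synthetic spectrum, not GAMANL's actual algorithm; the cutoff harmonic and the synthetic line shape are arbitrary choices.

```python
import cmath, math

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (fine for a sketch)."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * 2j * math.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def smooth(spectrum, keep):
    """Zero all Fourier components above harmonic `keep` (low-pass)."""
    coeffs = dft([complex(v) for v in spectrum])
    n = len(coeffs)
    filtered = [c if min(j, n - j) <= keep else 0
                for j, c in enumerate(coeffs)]
    return [v.real for v in dft(filtered, inverse=True)]

# synthetic gamma line at channel 32 on a flat, "noisy" background
spectrum = [10 + 3 * math.sin(2.0 * i) + 50 * math.exp(-((i - 32) ** 2) / 8)
            for i in range(64)]
sm = smooth(spectrum, keep=8)
peak = max(range(1, 63), key=lambda i: sm[i])
print(peak)  # near channel 32
```

Because the filter keeps the DC component, the smoothed spectrum preserves the total counts (and hence peak areas) to good accuracy while suppressing channel-to-channel noise.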

  19. Volume Analysis Using Multimodal Surface Similarity Martin Haidacher, Stefan Bruckner, Member, IEEE Computer Society,

    E-Print Network [OSTI]

    for similarity-based classification of a dual energy CT (DECT) angiography data set. The individual steps to robustly extract features in applications such as dual energy computed tomography of parts in industrial

  20. Analysis of on-premise to cloud computing migration strategies for enterprises

    E-Print Network [OSTI]

    Dhiman, Ashok

    2011-01-01T23:59:59.000Z

    In recent years offering and maturity in Cloud Computing space has gained significant momentum. CIOs are looking at Cloud seriously because of bottom line savings and scalability advantages. According to Gartner's survey ...

  1. Functional and computational analysis of RNA-binding proteins and their roles in cancer

    E-Print Network [OSTI]

    Katz, Yarden

    2014-01-01T23:59:59.000Z

    This work is concerned with mRNA processing in mammalian cells and proceeds in two parts. In the first part, I introduce a computational framework for inferring the abundances of mRNA isoforms using high-throughput RNA ...

  2. An analysis of font styles on the screen of a computer terminal

    E-Print Network [OSTI]

    Eason, Joyce Leona

    1988-01-01T23:59:59.000Z

    (Williams, 1980). Much research has been done in the last ten years in the area of visual displays, and more specifically utilizing the cathode ray tube (CRT) computer terminals. One of the first human factors concerns was the feasibility of reading... in the Human Factors Engineering Laboratory located at Texas A&M University. The computer terminal used was an Apple Macintosh SE, and was brought to the Laboratory for this study. The reading material was configured utilizing Microsoft Word, a word...

  3. Molecular Science Computing | EMSL

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Molecular Science Computing Overview Cell Isolation and Systems Analysis Deposition and Microfabrication Mass Spectrometry Microscopy Molecular Science Computing NMR and EPR...

  4. Interfacing computer-assisted drafting and design with the building loads analysis and system thermodynamics (BLAST) program. Final report

    SciTech Connect (OSTI)

    Morton, J.D.; Pyo, C.; Choi, B.

    1992-10-01T23:59:59.000Z

Energy efficient building design requires in-depth thermal analysis. Existing Computer Aided Design and Drafting (CADD) software packages already enhance the productivity and quality of design. Thermal analysis tools use much the same information as that contained in CADD drawings to determine the most energy efficient design configuration during the design process. To use these analysis tools, data already contained in the CADD system must be re-keyed into the analysis packages. This project created an interface to automate the migration of data from CADD to the Building Loads Analysis and System Thermodynamics (BLAST) analysis program, which is an Army-standard system for evaluating building energy performance. Two interfaces were developed, one batch-oriented (IN2BLAST) and one interactive (the Drawing Navigator). Lessons learned from the development of IN2BLAST were carried into the development of the Drawing Navigator, and the Drawing Navigator was field tested. Feedback indicated that useful automation of the data migration is possible, and that proper application of such automation can increase productivity. Keywords: BLAST, CADD, interface, IN2BLAST, Drawing Navigator.

  5. Towards Real-Time High Performance Computing For Power Grid Analysis

    SciTech Connect (OSTI)

    Hui, Peter SY; Lee, Barry; Chikkagoudar, Satish

    2012-11-16T23:59:59.000Z

    Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems, and indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance, cluster-computing systems, remains relatively unexplored. Imposing real-time constraints on a parallel (cluster) computing environment introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we give a motivating example to demonstrate the need for such a system--- an application to estimate the electromechanical states of the power grid--- and we introduce a formal method for performing verification of certain temporal properties within a system of parallel processes. We describe our work towards a full real-time implementation of the target application--- namely, our progress towards extracting a key mathematical kernel from the application, the formal process by which we analyze the intricate timing behavior of the processes on the cluster, as well as timing measurements taken on our test cluster to demonstrate use of these concepts.

  6. MATADOR: a computer code for the analysis of radionuclide behavior during degraded core accidents in light water reactors

    SciTech Connect (OSTI)

    Baybutt, P.; Raghuram, S.; Avci, H.I.

    1985-04-01T23:59:59.000Z

    A new computer code called MATADOR (Methods for the Analysis of Transport And Deposition Of Radionuclides) has been developed to replace the CORRAL computer code which was written for the Reactor Safety Study (WASH-1400). This report contains a detailed description of the models used in MATADOR. MATADOR is intended for use in system risk studies to analyze radionuclide transport and deposition in reactor containments. The principal output of the code is information on the timing and magnitude of radionuclide releases to the environment as a result of severely degraded core accidents. MATADOR considers the transport of radionuclides through the containment and their removal by natural deposition and the operation of engineered safety systems such as sprays. The code requires input data on the source term from the primary system, the geometry of the containment, and the thermal-hydraulic conditions in the containment.
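A single well-mixed compartment conveys the kind of balance MATADOR solves: airborne activity is depleted by natural deposition and engineered sprays while a small leak rate carries activity to the environment. The rate constants and `containment_release` helper below are illustrative assumptions, not data or routines from the code.

```python
import math

def containment_release(a0, lam_dep, lam_spray, lam_leak, t):
    """One-compartment containment balance (sketch).
    Returns (airborne, cumulative leaked) activity at time t [h],
    with first-order removal rates in 1/h."""
    lam_tot = lam_dep + lam_spray + lam_leak
    airborne = a0 * math.exp(-lam_tot * t)
    # fraction of all removal that goes out the leak path
    leaked = a0 * (lam_leak / lam_tot) * (1 - math.exp(-lam_tot * t))
    return airborne, leaked

a0 = 1.0e6                       # initial airborne activity (arbitrary units)
air, leaked = containment_release(a0, lam_dep=0.5, lam_spray=2.0,
                                  lam_leak=0.001, t=24.0)
print(air, leaked)               # nearly everything deposits; little escapes
```

The ratio lam_leak / lam_tot shows directly why sprays matter for the source term: increasing the spray removal rate shrinks the fraction of the inventory that ever reaches the environment.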

  7. An Analysis Framework for Investigating the Trade-offs Between System Performance and Energy Consumption in a Heterogeneous Computing Environment

    SciTech Connect (OSTI)

    Friese, Ryan [Colorado State University, Fort Collins; Khemka, Bhavesh [Colorado State University, Fort Collins; Maciejewski, Anthony A [Colorado State University, Fort Collins; Siegel, Howard Jay [Colorado State University, Fort Collins; Koenig, Gregory A [ORNL; Powers, Sarah S [ORNL; Hilton, Marcia M [ORNL; Rambharos, Rajendra [ORNL; Okonski, Gene D [ORNL; Poole, Stephen W [ORNL

    2013-01-01T23:59:59.000Z

    Rising costs of energy consumption and an ongoing effort for increases in computing performance are leading to a significant need for energy-efficient computing. Before systems such as supercomputers, servers, and datacenters can begin operating in an energy-efficient manner, the energy consumption and performance characteristics of the system must be analyzed. In this paper, we provide an analysis framework that will allow a system administrator to investigate the tradeoffs between system energy consumption and utility earned by a system (as a measure of system performance). We model these trade-offs as a bi-objective resource allocation problem. We use a popular multi-objective genetic algorithm to construct Pareto fronts to illustrate how different resource allocations can cause a system to consume significantly different amounts of energy and earn different amounts of utility. We demonstrate our analysis framework using real data collected from online benchmarks, and further provide a method to create larger data sets that exhibit similar heterogeneity characteristics to real data sets. This analysis framework can provide system administrators with insight to make intelligent scheduling decisions based on the energy and utility needs of their systems.
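The Pareto fronts described above rest on a non-domination test: a resource allocation is kept only if no other allocation consumes no more energy while earning at least as much utility. A minimal sketch with made-up sample points (the paper constructs its fronts with a multi-objective genetic algorithm; this shows only the final filtering step):

```python
def pareto_front(points):
    """points: list of (energy, utility); minimize energy, maximize utility.
    Returns the non-dominated allocations, sorted by energy."""
    front = []
    for e, u in points:
        dominated = any(e2 <= e and u2 >= u and (e2, u2) != (e, u)
                        for e2, u2 in points)
        if not dominated:
            front.append((e, u))
    return sorted(front)

# hypothetical (energy consumed, utility earned) allocations
allocations = [(100, 50), (120, 80), (90, 30), (110, 80), (150, 90), (95, 30)]
print(pareto_front(allocations))
# -> [(90, 30), (100, 50), (110, 80), (150, 90)]
```

Each point on the front is a defensible scheduling choice; the administrator trades energy for utility by moving along it rather than comparing arbitrary allocations.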

  8. COMPUTATIONAL SCIENCE CENTER

    SciTech Connect (OSTI)

    DAVENPORT, J.

    2006-11-01T23:59:59.000Z

Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. 
The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long range plans to provide Leadership class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.

  9. INTERNATIONAL IEEE CONFERENCE ON COMPUTER SCIENCES -RIVF'06 1 Analysis of Nasopharyngeal Carcinoma Data with a

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    -based medical systems. These advances have resulted in noticeable improvements in medical care, support ... profile of the population under study. Index Terms--Bayesian networks, medical decision support systems, epidemiology. I. INTRODUCTION: The last twenty years have brought considerable advances in the field of computer

  10. Improvement of the computer methods for grounding analysis in layered soils by using

    E-Print Network [OSTI]

    Colominas, Ignasi

    that currently allows real grounding grids to be analyzed in real time on personal computers. The extension ... the grounding grid usually consists of a mesh of interconnected cylindrical conductors buried to a certain depth ... surface that can be contacted by a person must be kept under certain maximum safe limits (step, touch

  11. Cross gender-age trabecular texture analysis in cone beam computed tomography

    E-Print Network [OSTI]

    Ling, Haibin

    osteoporosis screening tools in the jaws. Keywords: cone-beam computed tomography, osteoporosis, radiology ... Osteoporosis afflicts 55% of Americans aged 50 and above. Early diagnosis of osteoporosis is very important to prevent more serious complications such as hip fracture. The current gold standard for osteoporosis

  12. An analysis of incubation effects in problem solving using a computer-administered assessment tool

    E-Print Network [OSTI]

    Yoo, Sung Ae

    2009-05-15T23:59:59.000Z

    as an incubation period. The present study examines the effect of such activities that are provided as an incubation period in computer-based problem solving tasks. In addition, this study explores the potential interaction between the type of problems and the type...

  13. International Conference on Ocean Energy, 6 October, Bilbao Computational Analysis of Ducted Turbine Performance

    E-Print Network [OSTI]

    Pedersen, Tom

    Turbine Performance. M. Shives and C. Crawford, Dept. of Mechanical Engineering, University of Victoria ... turbine designs using computational fluid dynamics (CFD) simulation. Analytical model coefficients ... is proposed for the base pressure coefficient. Keywords: base-pressure, CFD, diffuser-augmented turbine, tidal

  14. Preprint of the paper "High performance computing for the analysis and postprocessing of earthing

    E-Print Network [OSTI]

    Colominas, Ignasi

    in certain places of the substation site. Its main objective is the transport and dissipation of electrical ... been systematically reported, such as the large computational costs required in the analysis of real ... the construction of the substation produces a stratified soil, or as a consequence of a chemical treatment

  15. Computability and Logic Selmer Bringsjord

    E-Print Network [OSTI]

    Bringsjord, Selmer

    logic (IML). IML includes basic computability theory (Turing machines and other simple automata) ... computability (including Turing machines, register machines, Church's Thesis); 9. uncomput

  16. IEEE TRANSACTIONS ON COMPUTERS, VOL. 46, NO. 4, APRIL 1997 425 Parallel Signature Analysis Design

    E-Print Network [OSTI]

    Stanford University

    Signature analysis, aliasing probability bounds, random testing, linear feedback shift registers, parallel signature ... is derived from pseudorandom test pattern generators. A data compaction circuit using a linear feedback ... the response from multiple circuit outputs [2], [3]; and the term serial signature analysis is used for LFSR
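
The serial signature analysis this record describes can be sketched in software: a linear feedback shift register (LFSR) compacts a circuit's response bit stream into a short signature, and a faulty response almost always yields a different signature. The register width and tap polynomial below are illustrative assumptions, not taken from the paper.

```python
def lfsr_signature(bits, width=16, taps=(16, 12, 3, 1)):
    """Compact a response bit stream into a signature with a
    Fibonacci-style LFSR. `taps` is an illustrative polynomial,
    not the one used in the cited paper."""
    state = 0
    for bit in bits:
        feedback = bit
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1  # XOR tapped stages
        state = ((state << 1) | feedback) & ((1 << width) - 1)
    return state

good = [1, 0, 1, 1, 0, 0, 1, 0] * 4
faulty = list(good)
faulty[5] ^= 1  # a single-bit error in the response stream
```

Because the compactor is linear, a single-bit error can never alias to the fault-free signature; aliasing only becomes possible for multi-bit error patterns, which motivates the aliasing-probability bounds the paper analyzes.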

  17. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    E-Print Network [OSTI]

    Hasenkamp, Daren

    2011-01-01T23:59:59.000Z

    M. Cusumano. Cloud computing and SaaS as new computing ...; Market-Oriented Cloud Computing: Vision, Hype, and Reality ...; Open-Source Cloud-Computing System. In Proceedings of the ...

  18. Development of a Computer Heating Monitoring System and Its Applications

    E-Print Network [OSTI]

    Chen, H.; Li, D.; Shen, L.

    2006-01-01T23:59:59.000Z

    This paper develops a computer heating monitoring system, introduces the components and principles of the monitoring system, and provides a study on its application to residential building heating including analysis of indoor and outdoor air...

  19. CORCON-MOD3: An integrated computer model for analysis of molten core-concrete interactions. User's manual

    SciTech Connect (OSTI)

    Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O. [Sandia National Labs., Albuquerque, NM (United States)

    1993-10-01T23:59:59.000Z

    The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.

  20. In Proceedings of APSEC 2010 Cloud Workshop, Sydney, Australia, 30th An Analysis of The Cloud Computing Security Problem

    E-Print Network [OSTI]

    Grundy, John

    of The Cloud Computing Security Problem. Mohamed Al Morsy, John Grundy and Ingo Müller, Computer Science ... to adopt IT without upfront investment. Despite the potential gains achieved from the cloud computing solution. Keywords: cloud computing; cloud computing security; cloud computing security management. I

  1. The Proceedings of the 17th Symposium on Computer Arithmetic are dedicated to William M. Kahan for his lifetime contributions to Computational Mathematics, Numerical Analysis, and Standardization

    E-Print Network [OSTI]

    California at Davis, University of

    Dedication: W. Kahan. The Proceedings of the 17th Symposium on Computer Arithmetic are dedicated ... Kahan is a professor of mathematics, computer science, and electrical engineering at the University of California ... was soon supplanted by the IBM 360 series. Kahan first began to restore to computer arithmetic ...

  2. Experimental and computational analysis of laser melting of thin silicon films

    SciTech Connect (OSTI)

    Grigoropoulos, C.P.; Dutcher, W.E. Jr.; Emery, A.F. (Univ. of Washington, Seattle (USA))

    1991-02-01T23:59:59.000Z

    Recrystallization of thin semiconductor films can yield improved electrical and crystalline properties. The recrystallization is often effected by using a laser source to melt the semiconductor that has been deposited on an amorphous insulating substrate. This paper describes detailed experimental observations of the associated phase-change process. A computational conductive heat transfer model is presented. The numerical predictions are compared to the experimental results and good agreement is obtained.
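
The conductive heat transfer model this abstract refers to can be illustrated with a minimal explicit finite-difference solver for 1D transient conduction under a constant absorbed surface flux, a crude stand-in for laser heating. The material properties, flux, and grid values below are placeholders, not those of the paper, and no phase change is modeled.

```python
def heat_1d(n=50, steps=200, alpha=1e-5, dx=1e-6, dt=2e-8,
            q_flux=1e9, k=30.0, t0=300.0):
    """FTCS (forward-time, centered-space) solution of 1D transient
    conduction with a constant absorbed heat flux at the surface node
    and the far boundary held at ambient. All values are placeholders."""
    r = alpha * dt / dx**2
    assert r <= 0.5  # explicit-scheme stability limit
    T = [t0] * n
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        # surface node: absorbed flux enters via a ghost-node boundary,
        # from -k dT/dx = q_flux at x = 0
        Tn[0] = T[0] + r * (2 * T[1] - 2 * T[0] + 2 * dx * q_flux / k)
        Tn[-1] = t0  # far boundary held at ambient
        T = Tn
    return T
```

A real laser-melting model would add latent heat and a moving solid/liquid interface; this sketch only shows the conduction backbone such a model rests on.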

  3. WASTE-ACC: A computer model for analysis of waste management accidents

    SciTech Connect (OSTI)

    Nabelssi, B.K.; Folga, S.; Kohout, E.J.; Mueller, C.J.; Roglans-Ribas, J.

    1996-12-01T23:59:59.000Z

    In support of the U.S. Department of Energy's (DOE's) Waste Management Programmatic Environmental Impact Statement, Argonne National Laboratory has developed WASTE-ACC, a computational framework and integrated PC-based database system, to assess atmospheric releases from facility accidents. WASTE-ACC facilitates the many calculations for the accident analyses necessitated by the numerous combinations of waste types, waste management process technologies, facility locations, and site consolidation strategies in the waste management alternatives across the DOE complex. WASTE-ACC is a comprehensive tool that can effectively test future DOE waste management alternatives and assumptions. The computational framework can access several relational databases to calculate atmospheric releases. The databases contain throughput volumes, waste profiles, treatment process parameters, and accident data such as frequencies of initiators, conditional probabilities of subsequent events, and source term release parameters of the various waste forms under accident stresses. This report describes the computational framework and supporting databases used to conduct accident analyses and to develop source terms to assess potential health impacts that may affect on-site workers and off-site members of the public under various DOE waste management alternatives.

  4. Waste-ACC: A computer model for radiological analysis of waste management

    SciTech Connect (OSTI)

    Nabelssi, B.K.; Folga, S.; Kohout, E. [Argonne National Laboratory, IL (United States)] [and others

    1996-06-01T23:59:59.000Z

    WASTE-ACC, a computational framework and integrated PC-based database system, has been developed by Argonne National Laboratory to assess radiological atmospheric releases from facility accidents in support of the U.S. Department of Energy's (DOE's) Waste Management (WM) Programmatic Environmental Impact Statement (PEIS). WASTE-ACC facilitates the many calculations required in the accident analyses by the numerous combinations of waste types, treatment technologies, facility locations, and site consolidation strategies in the WM PEIS alternatives for each waste type across the DOE complex. This paper focuses on the computational framework used to assess atmospheric releases and health risk impacts from potential waste management accidents that may affect on-site workers and off-site members of the public. The computational framework accesses several relational databases as needed to calculate radiological releases for the risk-dominant accidents. The databases contain throughput volumes, treatment process parameters, radionuclide characteristics, radiological profiles of the waste, site-specific dose conversion factors, and accident data such as frequencies of initiators, conditional probabilities of subsequent events, and source term release parameters of the various waste forms under accident stresses.

  5. Energy Analysis and Diagnostics: A Computer Based Tool for Industrial Self Assessment

    E-Print Network [OSTI]

    Gopalakrishnan, B.; Plummer, R. W.; Nagarajan, S.; Kolluri, R.

    of recommending ECOs in areas such as boilers, motor selection, analysis of belt driven systems, destratification, insulation of heated surfaces, and air compressor operation. The system has been designed so as to query the industrial user on aspects related...

  6. Numerical Simulation/Analysis and Computer Aided Engineering for Virtual Protyping of Heavy Ground Vehicle

    E-Print Network [OSTI]

    Abd. Rahim, Mohd. Razi

    2010-08-26T23:59:59.000Z

    This doctoral project dissertation deals with the investigation of simulation/analysis in the product development process of specialized heavy ground vehicle engineering which posts some of the most challenging engineering ...

  7. Beyond analysis and representation in CAD : a new computational approach to design education

    E-Print Network [OSTI]

    Celani, Maria Gabriela Caffarena

    2002-01-01T23:59:59.000Z

    This thesis aims at changing students' attitude towards the use of computer-aided design (CAD) in architecture. It starts from the premise that CAD is used mostly for analysis and representation, and not as a real design ...

  8. Countries Gasoline Prices Including Taxes

    Gasoline and Diesel Fuel Update (EIA)

    Selected Countries (U.S. dollars per gallon, including taxes) Date Belgium France Germany Italy Netherlands UK US 5/11/15 6.15 6.08 6.28 6.83 6.96 6.75 3.06 5/4/15 6.14 6.06...

  9. Sponsorship includes: Agriculture in the

    E-Print Network [OSTI]

    Nebraska-Lincoln, University of

    Sponsorship includes: · Agriculture in the Classroom · Douglas County Farm Bureau · Gifford Farm · University of Nebraska Agricultural Research and Development Center · University of Nebraska-Lincoln ... Awareness Coalition is to help youth, primarily from urban communities, become aware of agriculture

  10. Molecular Science Computing | EMSL

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  11. The Application of Causal Analysis Techniques for Computer-Related Chris Johnson,

    E-Print Network [OSTI]

    Johnson, Chris

    complex. The plant receives crude oil, which is then separated by fractional distillation ... distillation unit within the plant. This led to a number of knock-on effects, including power disruption, which

  12. Experimental and computational analysis of epidermal growth factor receptor pathway phosphorylation dynamics

    E-Print Network [OSTI]

    Kleiman, Laura B

    2010-01-01T23:59:59.000Z

    The epidermal growth factor receptor (EGFR, also known as ErbB 1) is a prototypical receptor tyrosine kinase (RTK) that activates multi-kinase phosphorylation cascades to regulate diverse cellular processes, including ...

  13. The MELTSPREAD-1 computer code for the analysis of transient spreading in containments

    SciTech Connect (OSTI)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.

    1990-01-01T23:59:59.000Z

    A one-dimensional, multicell, Eulerian finite difference computer code (MELTSPREAD-1) has been developed to provide an improved prediction of the gravity-driven spreading and thermal interactions of molten corium flowing over a concrete or steel surface. In this paper, the modeling incorporated into the code is described and the spreading models are benchmarked against a simple "dam break" problem as well as water simulant spreading data obtained in a scaled apparatus of the Mk I containment. Results are also presented for a scoping calculation of the spreading behavior and shell thermal response in the full scale Mk I system following vessel meltthrough. 24 refs., 15 figs.

  14. FELIX experiments and computational needs for eddy current analysis of fusion reactors

    SciTech Connect (OSTI)

    Turner, L.R.

    1984-01-01T23:59:59.000Z

    In a fusion reactor, changing magnetic fields are closely coupled to the electrically-conducting metal structure. This coupling is particularly pronounced in a tokamak reactor in which magnetic fields are used to confine, stabilize, drive, and heat the plasma. Electromagnetic effects in future fusion reactors will have far-reaching implications in the configuration, operation, and maintenance of the reactors. This paper describes the impact of eddy-current effects on future reactors, the requirements of computer codes for analyzing those effects, and the FELIX experiments which will provide needed data for code validation.

  15. Real-time computer analysis for nuclear material detection. Part 3. Nuclear instrumentation interface

    SciTech Connect (OSTI)

    Gosnell, T.B.; Wood, R.E.; Anzelon, G.A.

    1981-12-01T23:59:59.000Z

    An electronic interface between a Digital Equipment Corporation (DEC) LSI-11 microcomputer and a LeCroy Research Systems model 3001 qVt multichannel analyzer is described in detail. This interface provides for 16-bit parallel data transfer from the memory of the analyzer to the memory of the computer. An unusual feature of the interface is a provision that allows storage of counts of logic pulses (e.g., from radiation detector discriminators) in the first 16 channels of the analyzer's memory. A further provision allows use of a LeCroy printer and display interface that is designed specifically as a companion module to the qVt analyzer.

  16. Resource-Competitive Analysis: A New Perspective on Attack-Resistant Distributed Computing

    E-Print Network [OSTI]

    Saia, Jared

    with the sight of four colossal siege towers, laboriously constructed by Ostrogothic engineers and mobilized ... Resource-competitive analysis is concerned with the worst-case ratio of the cost incurred by an algorithm to the cost incurred by any adversarial strategy. Here, the notion of cost corresponds to any network resource

  17. Computations of cosmic ray propagation in the Earth's atmosphere, towards a GLE analysis

    E-Print Network [OSTI]

    Usoskin, Ilya G.

    -90014, Oulu, Finland; Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences ... solar energetic particle propagation in the magnetosphere and atmosphere of the Earth is very important for ground level enhancement analysis. Detailed simulations of solar energetic particle events starting from

  18. COMPUTER PROGRAM FOR ANALYSIS OF THE HOMOGENEITY AND GOODNESS OF FIT OF

    E-Print Network [OSTI]

    from the author. Literature Cited: LI, J. C. R. 1959. Introduction to statistical inference. Edward Bros ... of statistics in biological research. W. H. Freeman and Co., San Franc., 776 p. YANG, M. Y. Y., AND R. A ... A COMPUTER PROGRAM FOR ANALYSIS OF THE HOMOGENEITY AND GOODNESS OF FIT OF FREQUENCY DISTRIBUTIONS

  19. Performance Metric Sensitivity Computation for Optimization and Trade-off Analysis in Wireless Networks

    E-Print Network [OSTI]

    Baras, John S.

    for multi-hop wireless networks, including MANETs. We introduce an approximate (throughput) loss model ... is the different nature of wired and wireless networks rendering the use of wired network techniques inappropriate for the case of wireless networks. Key quantities, such as the link capacity, that remain constant in a wired

  20. The PARSEC computer code for analysis of direct containment heating by dispersed debris

    SciTech Connect (OSTI)

    Sienicki, J.J.; Spencer, B.W.

    1987-01-01T23:59:59.000Z

    A multiphase flow and heat transfer, coupled Lagrangian and Eulerian computer program, PARSEC, has been developed to predict the heatup of a gas atmosphere resulting from the gas-driven dispersal of high temperature debris droplets/particles as well as the associated formation of aerosol by the oxidation enhanced vaporization of metal from the surfaces of the droplets, oxidation of reactive debris constituents, and generation of hydrogen. Predictions of the code and the fundamental modeling incorporated therein are in good agreement with available data on the essentially unimpeded dispersal of high temperature melts involving reactor materials in the Argonne CWTI-13 and CWTI-14 experiments as well as iron-alumina thermite in the Sandia DCH-1 test.

  1. The MELTSPREAD-1 computer code for the analysis of transient spreading in containments

    SciTech Connect (OSTI)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.

    1990-01-01T23:59:59.000Z

    Transient spreading of molten core materials is important in the assessment of severe-accident sequences for Mk-I boiling water reactors (BWRs). Of interest is whether core materials are able to spread over the pedestal and drywell floors to contact the containment shell and cause thermally induced shell failure, or whether heat transfer to underlying concrete and overlying water will freeze the melt short of the shell. The development of a computational capability for the assessment of this problem was initiated by Sienicki et al. in the form of the MELTSPREAD-0 code. Development is continuing in the form of the MELTSPREAD-1 code, which contains new models for phenomena that were ignored in the earlier code. This paper summarizes these new models, provides benchmarking calculations of the relocation model against an analytical solution as well as simulant spreading data, and summarizes the results of a scoping calculation for the full Mk-I system.

  2. Online Social Network Analysis: A Survey of Research Applications in Computer Science

    E-Print Network [OSTI]

    Kurka, David Burth; Von Zuben, Fernando J

    2015-01-01T23:59:59.000Z

    The emergence and popularization of online social networks suddenly made available a large amount of data on social organization, interaction and human behaviour. All this information opens new perspectives and challenges to the study of social systems, being of interest to many fields. Although most online social networks are recent (less than fifteen years old), a vast number of scientific papers has already been published on this topic, dealing with a broad range of analytical methods and applications. This work describes how computational research has approached this subject and the methods used to analyse such systems. Founded on a wide though non-exhaustive review of the literature, a taxonomy is proposed to classify and describe different categories of research. Each research category is described and the main works, discoveries and perspectives are highlighted.

  3. Intelligent Computing System for Reservoir Analysis and Risk Assessment of Red River Formation, Class Revisit

    SciTech Connect (OSTI)

    Sippel, Mark A.

    2002-09-24T23:59:59.000Z

    Integrated software was written that comprised the tool kit for the Intelligent Computing System (ICS). The software tools in ICS are for evaluating reservoir and hydrocarbon potential from various seismic, geologic and engineering data sets. The ICS tools provided a means for logical and consistent reservoir characterization. The tools can be broadly characterized as (1) clustering tools, (2) neural solvers, (3) multiple-linear regression, (4) entrapment-potential calculator and (5) combining tools. A flexible approach can be used with the ICS tools. They can be used separately or in a series to make predictions about a desired reservoir objective. The tools in ICS are primarily designed to correlate relationships between seismic information and data obtained from wells; however, it is possible to work with well data alone.

  4. Computer Systems and Network Manager

    E-Print Network [OSTI]

    Computer Systems and Network Manager, Fort Collins, Colorado. POSITION: A Computer Systems ... activities. RESPONSIBILITIES: The successful candidate will perform computer systems and network administration, including computer hardware, systems software, applications software, and all configurations

  5. Rapid Detection of Biological and Chemical Threat Agents Using Physical Chemistry, Active Detection, and Computational Analysis

    SciTech Connect (OSTI)

    Chung, Myung; Dong, Li; Fu, Rong; Liotta, Lance; Narayanan, Aarthi; Petricoin, Emanuel; Ross, Mark; Russo, Paul; Zhou, Weidong; Luchini, Alessandra; Manes, Nathan; Chertow, Jessica; Han, Suhua; Kidd, Jessica; Senina, Svetlana; Groves, Stephanie

    2007-01-01T23:59:59.000Z

    Basic technologies have been successfully developed within this project: rapid collection of aerosols and a rapid ultra-sensitive immunoassay technique. Water-soluble, humidity-resistant polyacrylamide nano-filters were shown to (1) capture aerosol particles as small as 20 nm, (2) work in humid air, and (3) completely liberate their captured particles in an aqueous solution compatible with the immunoassay technique. The immunoassay technology developed within this project combines electrophoretic capture with magnetic bead detection. It allows detection of as few as 150-600 analyte molecules or viruses in only three minutes, something no other known method can duplicate. The technology can be used in a variety of applications where speed of analysis and/or extremely low detection limits are of great importance: in rapid analysis of donor blood for hepatitis, HIV and other blood-borne infections in emergency blood transfusions, in trace analysis of pollutants, or in search of biomarkers in biological fluids. Combined in a single device, the water-soluble filter and ultra-sensitive immunoassay technique may solve the problem of early "warning type" detection of aerosolized pathogens. These two technologies are protected with five patent applications and are ready for commercialization.

  6. Computer hardware fault administration

    DOE Patents [OSTI]

    Archer, Charles J. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

    2010-09-14T23:59:59.000Z

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
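
The fault-administration idea in this record, identifying a defective link and routing communications data around it, can be sketched as a path search that excludes the failed edge. The toy four-node ring below is purely illustrative; the patent concerns torus-style networks on a parallel computer, not this topology.

```python
from collections import deque

def route_avoiding(adj, src, dst, bad_edge):
    """BFS route from src to dst in one data communications network,
    skipping the defective (undirected) link `bad_edge`.
    Returns the node path, or None if no route exists."""
    bad = {bad_edge, bad_edge[::-1]}
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent and (u, v) not in bad:
                parent[v] = u
                q.append(v)
    return None

# Illustrative 4-node ring network (adjacency lists)
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```

With the link between nodes 0 and 1 marked defective, traffic from 0 to 1 is rerouted the long way around the ring.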

  7. The Use Of Computational Human Performance Modeling As Task Analysis Tool

    SciTech Connect (OSTI)

    Jacques Hugo; David Gertman

    2012-07-01T23:59:59.000Z

    During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
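
The task network analysis this abstract describes, serial task steps with stochastic durations and per-step error probabilities, can be sketched as a small Monte Carlo simulation. The task names, mean times, error probabilities, and the exponential duration model are all invented for illustration; they are not the study's data or its discrete-event tool.

```python
import random

# Hypothetical serial fuel-handling task network:
# (task name, mean duration in seconds, per-step error probability)
TASKS = [("attach tool", 30, 0.001),
         ("lift element", 60, 0.002),
         ("transfer", 120, 0.005),
         ("inspect", 90, 0.003)]

def simulate(trials=10_000, seed=1):
    """Monte Carlo over the serial task network: returns the mean
    completion time and the probability that at least one step fails."""
    rng = random.Random(seed)
    total_t, failures = 0.0, 0
    for _ in range(trials):
        t, failed = 0.0, False
        for _name, mean, p_err in TASKS:
            t += rng.expovariate(1.0 / mean)  # exponential duration model
            failed = failed or (rng.random() < p_err)
        total_t += t
        failures += failed
    return total_t / trials, failures / trials
```

A full task-network tool would add branching, concurrency, and workload measures; this sketch only shows how duration and error estimates fall out of repeated sampling of a task sequence.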

  8. A study of a method of cluster analysis applicable to a digital computer

    E-Print Network [OSTI]

    Hedderman, Chester William

    1966-01-01T23:59:59.000Z

    for scientific relationships in a large set of data is fundamental to basic research, and it is time-consuming as well as difficult. As an example, Darwin spent twenty-two years studying data he collected on a trip around the world before he published his... in astronomy, oceanography, and even Census Bureau data. Brief Literature Survey: Tanimoto introduced the concept of clustering based upon the similarity of elements. Edwards and Sforza described a method of clustering based upon an analysis...
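
The Tanimoto-style similarity clustering the survey mentions can be sketched with the Tanimoto (Jaccard) coefficient and a single-pass "leader" clustering scheme; the threshold, toy feature sets, and the leader heuristic itself are illustrative assumptions, not the thesis's method.

```python
def tanimoto(a, b):
    """Tanimoto coefficient of two feature sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def leader_cluster(items, threshold=0.5):
    """Single-pass leader clustering: join an item to the first cluster
    whose leader (first member) is at least `threshold` similar,
    otherwise start a new cluster."""
    clusters = []
    for item in items:
        for c in clusters:
            if tanimoto(item, c[0]) >= threshold:
                c.append(item)
                break
        else:
            clusters.append([item])
    return clusters

# Toy feature sets: two natural groups
data = [{"a", "b", "c"}, {"a", "b"}, {"x", "y"}, {"x", "y", "z"}]
```

On the toy data the pass yields two clusters, one around each leader; the result depends on item order, which is one reason later clustering methods moved beyond single-pass schemes.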

  9. Development of one-dimensional computational fluid dynamics code 'GFLOW' for groundwater flow and contaminant transport analysis

    SciTech Connect (OSTI)

    Rahatgaonkar, P. S.; Datta, D.; Malhotra, P. K.; Ghadge, S. G. [Nuclear Power Corporation of India Ltd., R-2, Ent. Block, Nabhikiya Urja Bhavan, Anushakti Nagar, Mumbai - 400 094 (India)

    2012-07-01T23:59:59.000Z

    Prediction of groundwater movement and contaminant transport in soil is an important problem in many branches of science and engineering. This includes groundwater hydrology, environmental engineering, soil science, agricultural engineering and also nuclear engineering. Specifically, in nuclear engineering it is applicable in the design of spent fuel storage pools and waste management sites in nuclear power plants. Groundwater modeling involves the simulation of flow and contaminant transport by groundwater flow. In the context of contaminated soil and groundwater systems, numerical simulations are typically used to demonstrate compliance with regulatory standards. A one-dimensional computational fluid dynamics code, GFLOW, has been developed based on the finite difference method for simulating groundwater flow and contaminant transport through saturated and unsaturated soil. The code is validated with the analytical model and the benchmarking cases available in the literature. (authors)
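
A one-dimensional finite-difference treatment of contaminant transport of the kind a code like GFLOW implements can be sketched as an explicit upwind step for the advection-dispersion equation. The grid spacing, velocity, and dispersion coefficient below are placeholder values, and this is not GFLOW's actual discretization.

```python
def transport_1d(n=100, steps=500, v=1e-5, D=1e-7, dx=0.01, dt=50.0, c_in=1.0):
    """Explicit upwind finite-difference solution of the 1D
    advection-dispersion equation dc/dt = D d2c/dx2 - v dc/dx
    with a constant-concentration inlet. Placeholder parameters."""
    # stability checks: Courant number and diffusion number
    assert v * dt / dx <= 1.0 and D * dt / dx**2 <= 0.5
    c = [0.0] * n
    c[0] = c_in
    for _ in range(steps):
        cn = c[:]
        for i in range(1, n - 1):
            adv = -v * (c[i] - c[i - 1]) / dx               # upwind advection
            disp = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2  # dispersion
            cn[i] = c[i] + dt * (adv + disp)
        cn[0], cn[-1] = c_in, cn[-2]  # fixed inlet, zero-gradient outlet
        c = cn
    return c
```

After 500 steps the concentration front has advected to roughly v·t = 0.25 m and been smeared by dispersion, the qualitative behavior such a code is validated against analytically.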

  10. SIMMER-II: A computer program for LMFBR disrupted core analysis

    SciTech Connect (OSTI)

    Bohl, W.R.; Luck, L.B.

    1990-06-01T23:59:59.000Z

    SIMMER-2 (Version 12) is a computer program to predict the coupled neutronic and fluid-dynamics behavior of liquid-metal fast reactors during core-disruptive accident transients. The modeling philosophy is based on the use of general, but approximate, physics to represent interactions of accident phenomena and regimes rather than a detailed representation of specialized situations. Reactor neutronic behavior is predicted by solving space (r,z), energy, and time-dependent neutron conservation equations (discrete ordinates transport or diffusion). The neutronics and the fluid dynamics are coupled via temperature- and background-dependent cross sections and the reactor power distribution. The fluid-dynamics calculation solves multicomponent, multiphase, multifield equations for mass, momentum, and energy conservation in (r,z) or (x,y) geometry. A structure field with nine density and five energy components; a liquid field with eight density and six energy components; and a vapor field with six density and one energy component are coupled by exchange functions representing a modified-dispersed flow regime with a zero-dimensional intra-cell structure model.

  11. Analysis of BIOMOVS II Uranium Mill Tailings scenario 1.07 with the RESRAD computer code

    SciTech Connect (OSTI)

    Gnanapragasam, E.K.; Yu, C.

    1997-08-01T23:59:59.000Z

    The residual radioactive material guidelines (RESRAD) computer code developed at Argonne National Laboratory was selected for participation in the model intercomparison test scenario, version 1.07, conducted by the Uranium Mill Tailings Working Group in the second phase of the international Biospheric Model Validation Study. The RESRAD code was enhanced to provide an output attributing radiological dose to the nuclide at the point of exposure, in addition to the existing output attributing radiological dose to the nuclide in the contaminated zone. A conceptual model to account for off-site accumulation following atmospheric deposition was developed and showed the importance of considering this process for this off-site scenario. The RESRAD predictions for the atmospheric release compared well with most of the other models. The peak and steady-state doses and concentrations predicted by RESRAD for the groundwater release also agreed well with most of the other models participating in the study; however, the RESRAD plots show a later breakthrough time and sharp changes compared with the plots of the predictions of other models. These differences were due to differences in the formulation for the retardation factor and to not considering the effects of longitudinal dispersion.
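
The retardation factor whose formulation the abstract notes differs between models is conventionally written, for linear sorption, as R = 1 + ρ_b·K_d/θ, which delays the contaminant relative to the groundwater and hence shifts breakthrough times. A minimal check of one common formulation, with placeholder soil parameters (not RESRAD's inputs):

```python
def retardation_factor(bulk_density, kd, moisture_content):
    """One common linear-sorption retardation factor,
    R = 1 + (rho_b * Kd) / theta; the contaminant then moves
    at v / R relative to the groundwater velocity v.
    Parameter values below are placeholders."""
    return 1.0 + bulk_density * kd / moisture_content

# e.g. rho_b = 1.6 g/cm^3, Kd = 2.0 mL/g, theta = 0.4
R = retardation_factor(1.6, 2.0, 0.4)
groundwater_velocity = 0.1            # m/day, placeholder
contaminant_velocity = groundwater_velocity / R
```

With these placeholder values R = 9, so breakthrough occurs nine times later than for a non-sorbing tracer, which is exactly the kind of quantity a differing formulation would shift between models.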

  12. Computational Study and Analysis of Structural Imperfections in 1D and 2D Photonic Crystals

    SciTech Connect (OSTI)

    K.R. Maskaly

    2005-06-01T23:59:59.000Z

    Dielectric reflectors that are periodic in one or two dimensions, also known as 1D and 2D photonic crystals, have been widely studied for many potential applications due to the presence of wavelength-tunable photonic bandgaps. However, the unique optical behavior of photonic crystals is based on theoretical models of perfectly periodic structures. Little is known about the practical effects of dielectric imperfections on their technologically useful optical properties. In order to address this issue, a finite-difference time-domain (FDTD) code is employed to study the effect of three specific dielectric imperfections in 1D and 2D photonic crystals. The first imperfection investigated is dielectric interfacial roughness in quarter-wave tuned 1D photonic crystals at normal incidence. This study reveals that the reflectivity of some roughened photonic crystal configurations can change by up to 50% at the center of the bandgap for RMS roughness values around 20% of the characteristic periodicity of the crystal. However, this reflectivity change can be mitigated by increasing the index contrast and/or the number of bilayers in the crystal. In order to explain these results, the homogenization approximation, which is usually applied to single rough surfaces, is applied to the quarter-wave stacks. The results of the homogenization approximation match the FDTD results extremely well, suggesting that the main role of the roughness features is to grade the refractive index profile of the interfaces in the photonic crystal rather than diffusely scatter the incoming light. This result also implies that the amount of incoherent reflection from the roughened quarter-wave stacks is extremely small. This is confirmed through direct extraction of the amount of incoherent power from the FDTD calculations. Further FDTD studies are done on the entire normal incidence bandgap of roughened 1D photonic crystals. These results reveal a narrowing and red-shifting of the normal incidence bandgap with increasing RMS roughness. Again, the homogenization approximation is able to predict these results. The problem of surface scratches on 1D photonic crystals is also addressed. Although the reflectivity decreases are lower in this study, up to a 15% change in reflectivity is observed in certain scratched photonic crystal structures. However, this reflectivity change can be significantly decreased by adding a low index protective coating to the surface of the photonic crystal. Again, application of homogenization theory to these structures confirms its predictive power for this type of imperfection as well. Additionally, the problem of circular pores in 2D photonic crystals is investigated, showing that almost a 50% change in reflectivity can occur for some structures. Furthermore, this study reveals trends that are consistent with the 1D simulations: parameter changes that increase the absolute reflectivity of the photonic crystal will also increase its tolerance to structural imperfections. Finally, experimental reflectance spectra from roughened 1D photonic crystals are compared to the results predicted computationally in this thesis. Both the computed and experimental spectra correlate favorably, validating the findings presented herein.
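    For the ideal, unroughened baseline that these studies perturb, the normal-incidence reflectivity of a quarter-wave stack can be computed with the standard characteristic-matrix (transfer-matrix) method. The sketch below uses assumed indices and is far cruder than the FDTD calculations in the thesis; it only reproduces the perfect-crystal reference point.

```python
import cmath

def quarter_wave_reflectivity(n_hi, n_lo, n_sub, n_bilayers, wavelength_ratio=1.0):
    """Normal-incidence reflectivity of an ideal (HL)^N quarter-wave stack in air,
    via the characteristic-matrix method.
    wavelength_ratio = lambda0 / lambda, so 1.0 probes the bandgap center."""
    n0 = 1.0                                        # incident medium: air
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0        # identity matrix
    for n in [n_hi, n_lo] * n_bilayers:
        d = (cmath.pi / 2) * wavelength_ratio       # quarter-wave phase thickness
        c, s = cmath.cos(d), cmath.sin(d)
        l11, l12, l21, l22 = c, 1j * s / n, 1j * n * s, c
        m11, m12, m21, m22 = (m11 * l11 + m12 * l21, m11 * l12 + m12 * l22,
                              m21 * l11 + m22 * l21, m21 * l12 + m22 * l22)
    b = m11 + m12 * n_sub
    c = m21 + m22 * n_sub
    r = (n0 * b - c) / (n0 * b + c)                 # amplitude reflection coefficient
    return abs(r) ** 2

# 6 bilayers of n=2.3 / n=1.5 on a glass substrate (assumed indices)
print(round(quarter_wave_reflectivity(2.3, 1.5, 1.5, 6), 3))  # -> 0.984
```

    Increasing the index contrast or the bilayer count pushes this value toward 1, which is consistent with the abstract's observation that the same changes also increase tolerance to roughness.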

  13. An Analysis of Computational Workloads on the Jaguar Cray XT System

    SciTech Connect (OSTI)

    Joubert, Wayne [ORNL; Su, Shi-Quan [ORNL

    2012-01-01T23:59:59.000Z

    This study presents an analysis of science application workloads for the Jaguar Cray XT5 system during its tenure as a 2.3 petaflop supercomputer at Oak Ridge National Laboratory. Jaguar was the first petascale system to be deployed for open science and has been one of the world's top three supercomputers for six releases of the TOP500 list. Its workload is investigated here as a representative of the growing worldwide install base of petascale systems and also as a foreshadowing of science workloads to be expected for future systems. The study considers characteristics of the Jaguar workload such as resource utilization, typical job characteristics, most heavily used applications, application scaling and application usage patterns. Implications of these findings are considered for current petascale workflows and for exascale systems to be deployed later this decade.

  14. Computer resources Computer resources

    E-Print Network [OSTI]

    Yang, Zong-Liang

    Computer resources available to the LEAD group, Cédric David, 30 September 2009. Outline: UT computer resources and services; JSG computer resources and services; LEAD computers. UT Austin services: UT EID and Password. https://utdirect.utexas.edu

  15. Math & Computational Sciences Division: High Performance Computing and Visualization

    E-Print Network [OSTI]

    Perkins, Richard A.

    Math & Computational Sciences Division, High Performance Computing and Visualization. Research and Development in Visual Analysis: Judith Devaney, Terrence Griffin, John

  16. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

    SciTech Connect (OSTI)

    Saffer, Shelley (Sam) I.

    2014-12-01T23:59:59.000Z

    This is the final report of DOE award DE-SC0001132, Advanced Artificial Science: the development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievement of the project goals and the resulting research made possible by this award.

  17. Study and Analysis 100-car Naturalistic Driving Data Amanda Justiniano (Dr. Eliza Y. Du), Department of Electrical and Computer Engineering, Purdue

    E-Print Network [OSTI]

    Zhou, Yaoqi

    Study and Analysis 100-car Naturalistic Driving Data Amanda Justiniano (Dr. Eliza Y. Du), Department of Electrical and Computer Engineering, Purdue School of Engineering, Indianapolis, IN 46202 Every uses facilities such as car simulators, Drive Safety DS-600c, directed towards the research

  18. Computer Simulated Transient Analysis of a Polyimide V-Groove Leg Actuator with Serpentine Heater for a Walking Micro-Robot

    E-Print Network [OSTI]

    Kandlikar, Satish

    Computer Simulated Transient Analysis of a Polyimide V-Groove Leg Actuator with Serpentine Heater simulations done on a polyimide V-groove leg actuator for a walking micro-robot, to investigate the optimal of Technology, Sweden. The principle of actuation is based on the thermal expansion of polyimide inside a V

  19. Title: From data analysis to network modeling, with applications in systems biology Author: Fabian J. Theis, Computational Modeling in Biology, Institute of Bioinformatics and

    E-Print Network [OSTI]

    Gerkmann, Ralf

    J. Theis, Computational Modeling in Biology, Institute of Bioinformatics and Systems BiologyTitle: From data analysis to network modeling, with applications in systems biology Author: Fabian at detailed models of the system of interest. Our application focus are biological networks, namely gene

  20. Argonne's Laboratory computing center - 2007 annual report.

    SciTech Connect (OSTI)

    Bair, R.; Pieper, G. W.

    2008-05-28T23:59:59.000Z

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

  1. Parallel computation safety analysis irradiation targets fission product molybdenum in neutronic aspect using the successive over-relaxation algorithm

    SciTech Connect (OSTI)

    Susmikanti, Mike, E-mail: mike@batan.go.id [Center for Development of Nuclear Informatics, National Nuclear Energy Agency, PUSPIPTEK, Tangerang (Indonesia); Dewayatna, Winter, E-mail: winter@batan.go.id [Center for Nuclear Fuel Technology, National Nuclear Energy Agency, PUSPIPTEK, Tangerang (Indonesia); Sulistyo, Yos, E-mail: soj@batan.go.id [Center for Nuclear Equipment and Engineering, National Nuclear Energy Agency, PUSPIPTEK, Tangerang (Indonesia)

    2014-09-30T23:59:59.000Z

    One of the research activities in support of the commercial radioisotope production program is safety research on target FPM (Fission Product Molybdenum) irradiation. An FPM target is a tube made of stainless steel which contains nuclear-grade high-enrichment uranium; the tube is irradiated to obtain fission products. Fission products such as Mo{sup 99} are widely used in the form of kits in the medical world. Mo{sup 99} has a relatively long half-life of about 3 days (66 hours), so delivery of the radioisotope to consumer centers and storage is possible, though still limited, and its production potentially gives significant economic value. The neutronics problem is solved using first-order perturbation theory derived from the four-group diffusion equation. The criticality and flux in the multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation with a large and sparse matrix system, and several parallel algorithms have been developed for the solution of such systems. In this paper, a successive over-relaxation (SOR) algorithm was implemented for the calculation of reactivity coefficients, which can be done in parallel; previous work performed the reactivity calculations serially with Gauss-Seidel iteration. The parallel method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficients. In this research a computer code was developed to exploit parallel processing for the reactivity calculations used in safety analysis; running on a multicore computer system allows the calculation to be performed more quickly. This code was applied to the safety-limit calculation for irradiated FPM targets containing highly enriched uranium. The neutronics results show that for uranium contents of 1.7676 g and 6.1866 g ( 10{sup 6} cm{sup -1}) in a tube, the delta reactivities are still within safety limits; however, for 7.9542 g and 8.838 g ( 10{sup 6} cm{sup -1}) the limits were exceeded.
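    The SOR iteration itself is compact. Below is a minimal serial sketch on a small diagonally dominant system standing in for the multigroup diffusion matrix; the red-black ordering that makes SOR parallelizable in practice is omitted.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b (A with nonzero diagonal).
    omega in (0, 2); omega = 1 reduces to Gauss-Seidel."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            # Use already-updated entries below i, old entries above i
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Example: small diagonally dominant system (a toy stand-in for the
# discretized diffusion operator)
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = sor_solve(A, b)   # x ~ [1, 2, 3]
```

    With omega = 1 this is exactly the Gauss-Seidel scheme the abstract says was used previously; the over-relaxation parameter trades stability for faster convergence on matrices like this one.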

  2. Polymorphous computing fabric

    DOE Patents [OSTI]

    Wolinski, Christophe Czeslaw (Los Alamos, NM); Gokhale, Maya B. (Los Alamos, NM); McCabe, Kevin Peter (Los Alamos, NM)

    2011-01-18T23:59:59.000Z

    Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

  3. COMPUTATIONAL ECONOMICS AT THE COMPUTATION INSTITUTE

    E-Print Network [OSTI]

    discussed the modern economic theory of incentives. Asymmetric information is common in economic relationsCOMPUTATIONAL ECONOMICS AT THE COMPUTATION INSTITUTE Summary of 3-D Discussions Prepared by Ken and economists to discuss a variety of topics on how computational methods can advance economic analysis

  4. A Rationale for Using Computers in Science Education

    E-Print Network [OSTI]

    Ellis, James D.

    1984-01-01T23:59:59.000Z

    tool or tutee is more interesting to science educators. Use of the computer as a tool includes computer managed instruction (CMI), computer based testing (CBT), word processing, information retrieval, data gathering, and data analysis. Ragsdale... procedures. Data will be summarized and analyzed with statistical programs, displayed with graphing programs, and reported with word processing programs. Activities will include individualized, small group, and large group work and will be based upon...

  5. Scientific computations section monthly report, November 1993

    SciTech Connect (OSTI)

    Buckner, M.R.

    1993-12-30T23:59:59.000Z

    This progress report from the Savannah River Technology Center contains abstracts from papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include: engineering modeling and process simulation, criticality methods and analysis, plutonium disposition.

  6. Text analysis methods, text analysis apparatuses, and articles of manufacture

    DOE Patents [OSTI]

    Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M

    2014-10-28T23:59:59.000Z

    Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.

  7. Models of Procyon A including seismic constraints

    E-Print Network [OSTI]

    P. Eggenberger; F. Carrier; F. Bouchy

    2005-01-14T23:59:59.000Z

    Detailed models of Procyon A based on new asteroseismic measurements by Eggenberger et al (2004) have been computed using the Geneva evolution code including shellular rotation and atomic diffusion. By combining all non-asteroseismic observables now available for Procyon A with these seismological data, we find that the observed mean large spacing of 55.5 +- 0.5 uHz favours a mass of 1.497 M_sol for Procyon A. We also determine the following global parameters of Procyon A: an age of t=1.72 +- 0.30 Gyr, an initial helium mass fraction Y_i=0.290 +- 0.010, a nearly solar initial metallicity (Z/X)_i=0.0234 +- 0.0015 and a mixing-length parameter alpha=1.75 +- 0.40. Moreover, we show that the effects of rotation on the inner structure of the star may be revealed by asteroseismic observations if frequencies can be determined with a high precision. Existing seismological data of Procyon A are unfortunately not accurate enough to really test these differences in the input physics of our models.
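    The mean large spacing constrains the mass because it scales with the square root of the stellar mean density. The back-of-the-envelope check below assumes a solar reference spacing and an interferometric radius for Procyon; neither number is taken from the paper.

```python
import math

# Asteroseismic scaling of the mean large frequency spacing:
#   Dnu ~ sqrt(M / R^3)   (proportional to sqrt of the mean density)
DNU_SUN_UHZ = 134.9   # assumed solar large spacing, micro-Hz

def large_spacing(mass_msun, radius_rsun):
    """Mean large spacing in micro-Hz from the homology scaling relation."""
    return DNU_SUN_UHZ * math.sqrt(mass_msun / radius_rsun**3)

# M = 1.497 M_sun (the paper's favoured mass); R ~ 2.05 R_sun (assumed)
print(round(large_spacing(1.497, 2.05), 1))  # -> 56.2, near the observed 55.5
```

    The closeness of this crude estimate to the measured 55.5 +- 0.5 uHz illustrates why the large spacing pins down the mass once the radius is known independently.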

  8. Typologies of Computation and Computational Models

    E-Print Network [OSTI]

    Mark Burgin; Gordana Dodig-Crnkovic

    2013-12-09T23:59:59.000Z

    We need much better understanding of information processing and computation as its primary form. Future progress of new computational devices capable of dealing with problems of big data, internet of things, semantic web, cognitive robotics and neuroinformatics depends on adequate models of computation. In this article we first present the current state of the art through systematization of existing models and mechanisms, and outline a basic structural framework of computation. We argue that, defining computation as information processing, and given that there is no information without (physical) representation, the dynamics of information on the fundamental level is physical/intrinsic/natural computation. As a special case, intrinsic computation is used for designed computation in computing machinery. Intrinsic natural computation occurs on a variety of levels of physical processes, including the levels of computation of living organisms (among them highly intelligent animals) as well as designed computational devices. The present article offers a typology of current models of computation and indicates future paths for the advancement of the field, both through the development of new computational models and by learning from nature how to better compute using different mechanisms of intrinsic computation.

  9. Multi-processor including data flow accelerator module

    DOE Patents [OSTI]

    Davidson, George S. (Albuquerque, NM); Pierce, Paul E. (Albuquerque, NM)

    1990-01-01T23:59:59.000Z

    An accelerator module for a data flow computer includes an intelligent memory. The module is added to a multiprocessor arrangement and uses a shared tagged memory architecture in the data flow computer. The intelligent memory module assigns locations for holding data values in correspondence with arcs leading to a node in a data dependency graph. Each primitive computation is associated with a corresponding memory cell, including a number of slots for operands needed to execute a primitive computation, a primitive identifying pointer, and linking slots for distributing the result of the cell computation to other cells requiring that result as an operand. Circuitry is provided for utilizing tag bits to determine automatically when all operands required by a processor are available and for scheduling the primitive for execution in a queue. Each memory cell of the module may be associated with any of the primitives, and the particular primitive to be executed by the processor associated with the cell is identified by providing an index, such as the cell number for the primitive, to the primitive lookup table of starting addresses. The module thus serves to perform functions previously performed by a number of sections of data flow architectures and coexists with conventional shared memory therein. A multiprocessing system including the module operates in a hybrid mode, wherein the same processing modules are used to perform some processing in a sequential mode, under immediate control of an operating system, while performing other processing in a data flow mode.
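    The firing rule described above (a cell becomes schedulable once its tag bits show every operand slot filled) can be sketched in software. This is a hypothetical toy model for illustration, not the patented hardware design; the primitive table and queue stand in for the lookup table and scheduling circuitry.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    """Toy tagged-memory cell: fires when every operand slot is full."""
    primitive: str                             # key into a primitive lookup table
    n_operands: int
    dest: list = field(default_factory=list)   # (cell_id, slot) pairs for the result
    slots: dict = field(default_factory=dict)  # slot -> value; presence acts as tag bit

    def deliver(self, slot, value):
        self.slots[slot] = value
        return len(self.slots) == self.n_operands   # ready to schedule?

PRIMITIVES = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def run(cells, initial):
    """Execute any cell whose operands are all present, distributing results."""
    queue = [cid for cid, slot, v in initial if cells[cid].deliver(slot, v)]
    results = {}
    while queue:
        cid = queue.pop(0)
        cell = cells[cid]
        out_val = PRIMITIVES[cell.primitive](cell.slots[0], cell.slots[1])
        results[cid] = out_val
        for dest_id, dest_slot in cell.dest:    # linking slots distribute the result
            if cells[dest_id].deliver(dest_slot, out_val):
                queue.append(dest_id)
    return results

# (2 + 3) * 4 as a two-node data dependency graph
cells = {0: Cell("add", 2, dest=[(1, 0)]), 1: Cell("mul", 2)}
out = run(cells, [(0, 0, 2), (0, 1, 3), (1, 1, 4)])  # out[1] == 20
```

    The example mirrors the patent's description at a high level: operand arrival sets a tag, a full set of tags enqueues the primitive for execution, and linking entries forward the result to downstream cells.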

  10. International Journal of Computer Science

    E-Print Network [OSTI]

    Paris-Sud XI, Universit de

    International Journal of Computer Science & Information Security IJCSIS PUBLICATION 2010 IJCSIS Journal of Computer Science and Information Security (IJCSIS) provides a major venue for rapid publication of high quality computer science research, including multimedia, information science, security, mobile

  11. Transmutation Performance Analysis for Inert Matrix Fuels in Light Water Reactors and Computational Neutronics Methods Capabilities at INL

    SciTech Connect (OSTI)

    Michael A. Pope; Samuel E. Bays; S. Piet; R. Ferrer; Mehdi Asgari; Benoit Forget

    2009-05-01T23:59:59.000Z

    The urgency for addressing repository impacts has grown in the past few years as a result of Spent Nuclear Fuel (SNF) accumulation from commercial nuclear power plants. One path that has been explored by many is to eliminate the transuranic (TRU) inventory from the SNF, thus reducing the need for additional long-term repository storage sites. One strategy for achieving this is to burn the separated TRU elements in the currently operating U.S. Light Water Reactor (LWR) fleet. Many studies have explored the viability of this strategy by loading a percentage of LWR cores with TRU in the form of either Mixed Oxide (MOX) fuels or Inert Matrix Fuels (IMF). A task was undertaken at INL to establish specific technical capabilities to perform neutronics analyses in order to further assess several key issues related to the viability of thermal recycling. The initial computational study reported here is focused on direct thermal recycling of IMF fuels in a heterogeneous Pressurized Water Reactor (PWR) bundle design containing plutonium, neptunium, americium, and curium (IMF-PuNpAmCm) in a multi-pass strategy using legacy 5-year-cooled LWR SNF. In addition to this initial high-priority analysis, three other alternate analyses with different TRU vectors in IMF pins were performed. These analyses provide comparison of direct thermal recycling of PuNpAmCmCf, PuNpAm, PuNp, and Pu. The results of this infinite-lattice, assembly-wise study using SCALE 5.1 indicate that it may be feasible to recycle TRU in this manner using an otherwise typical PWR assembly without violating peaking factor limits.

  12. Quantum Computing Computer Scientists

    E-Print Network [OSTI]

    Yanofsky, Noson S.

    Contents excerpt: ...of Vector Spaces; 3. The Leap From Classical to Quantum; 3.1 Classical Deterministic Systems; 3.2 Classical... Quantum Computing for Computer Scientists, Noson S. Yanofsky and Mirco A. Mannucci, May 2007.

  13. Area of cooperation includes: Joint research and development on

    E-Print Network [OSTI]

    Buyya, Rajkumar

    Technologies. August 2, 2006: HCL Technologies Ltd (HCL), India's leading global IT services company, has signed... Projects currently using this technology include BioGrid in Japan and the National Grid Service in the UK. Area of cooperation includes: joint research and development on Grid computing technologies

  14. DO NOT INCLUDE: flatten cardboard staples, tape & envelope windows ok

    E-Print Network [OSTI]

    Wolfe, Patrick J.

    / bottles Metal items other than cans/foil Napkins Paper towels Plastic bags Plastic films Plastic utensilsDO NOT INCLUDE: flatten cardboard staples, tape & envelope windows ok Aerosol cans Books Bottle, PDAs, inkjet cartridges, CFL bulbs (cushioned, sealed in plastic) computers, printers, printer

  15. Analysis of Automotive Turbocharger Nonlinear Response Including Bifurcations

    E-Print Network [OSTI]

    Vistamehr, Arian

    2010-10-12T23:59:59.000Z

    Automotive turbochargers (TCs) increase internal combustion engine power and efficiency in passenger and commercial vehicles. TC rotors are usually supported on floating ring bearings (FRBs) or semi-floating ring bearings (SFRBs), both of which...

  16. Analysis of Automotive Turbocharger Nonlinear Response Including Bifurcations

    E-Print Network [OSTI]

    Vistamehr, Arian

    2010-10-12T23:59:59.000Z

    List-of-figures excerpt: Figure 2, Waterfalls of TC center housing acceleration for 30°C oil inlet temperature and 4 bar oil inlet pressure; Figure 3, Waterfalls of TC center housing...; Figure 6, Waterfalls of TC center housing acceleration for 30°C oil inlet temperature and 4 bar oil inlet pressure, (a) low imbalance amount, (b) high...

  17. Search for Earth-like planets includes LANL star analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  18. Engineering Analysis | ornl.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ORNL: Science & Discovery, Supercomputing and Computation, Research Areas, Engineering Analysis. Engineering analysis involves the application...

  19. ELECTRICAL & COMPUTER ENGINEERING

    E-Print Network [OSTI]

    ELECTRICAL & COMPUTER ENGINEERING SEMINAR "Agile Sensing Systems: Analysis, Design and Implementation" by Prof. Jun (Jason) Zhang Electrical and Computer Engineering University of Denver Tuesday of Electrical and Computer Engineering at the University of Denver. He was with the School of Electrical

  20. Computer Science Undergraduate Programs 1 02/18/13 COMPUTER SCIENCE

    E-Print Network [OSTI]

    Thomas, Andrew

    Computer Science Undergraduate Programs, 02/18/13. COMPUTER SCIENCE at the UNIVERSITY OF MAINE, http://www.umcs.maine.edu. BACHELOR OF SCIENCE DEGREE IN COMPUTER SCIENCE; BACHELOR OF ARTS DEGREE IN COMPUTER SCIENCE. Areas of computer science include databases, high-performance computing, artificial intelligence, computer networks

  1. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    E-Print Network [OSTI]

    Laboratory, USA {dhasenkamp, asim, mfwehner, kwu}@lbl.gov ABSTRACT Extensive computing power has been used temperatures. The National Oceanographic Data Center, for example, compiled data from a survey buoy in the Gulf

  2. Analysis of the use of open archives in the fields of mathematics and computer science Anna Wojciechowska*

    E-Print Network [OSTI]

    Paris-Sud XI, Universit de

    Wojciechowska*. *GERSIC: LVIC, University Aix-Marseille 3, France, annaw@cmi.univ-mrs.fr. Open access... these fields are of direct concern for the author of this report, a librarian in a mathematics and computer

  3. These charges include students that have

    E-Print Network [OSTI]

    Behmer, Spencer T.

    Access/EIS Fee $4.00 $8.00 $12.00 $16.00 $20.00 $24.00 $28.00 $32.00 $36.00 Computer Access/Inst Tech $4.25 $71.25 $71.25 $71.25 $71.25 $71.25 $71.25 Library Access Fee $158.00 $173.80 $237.00 $237.00 $237.00 Computer Access Fee $163.30 $179.63 $244.95 $244.95 $244.95 $244.95 $244.95 $244.95 $244.95 Computer Access/EIS

  4. The fifth generation computer

    SciTech Connect (OSTI)

    Moto-Oka, T.; Kitsuregawa, M.

    1985-01-01T23:59:59.000Z

    The leader of Japan's Fifth Generation computer project, known as the 'Apollo' project, and a young computer scientist elucidate in this book the process of how the idea came about, international reactions, the basic technology, prospects for realization, and the abilities of the Fifth Generation computer. Topics considered included forecasting, research programs, planning, and technology impacts.

  5. High Throughput Computing Impact on Meta Genomics (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    SciTech Connect (OSTI)

    Gore, Brooklin [Morgridge Institute for Research] [Morgridge Institute for Research

    2011-10-12T23:59:59.000Z

    This presentation includes a brief background on High Throughput Computing, correlating gene transcription factors, optical mapping, genotype to phenotype mapping via QTL analysis, and current work on next gen sequencing.

  6. High Throughput Computing Impact on Meta Genomics (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema (OSTI)

    Gore, Brooklin [Morgridge Institute for Research

    2013-01-22T23:59:59.000Z

    This presentation includes a brief background on High Throughput Computing, correlating gene transcription factors, optical mapping, genotype to phenotype mapping via QTL analysis, and current work on next gen sequencing.

  7. Chem. Eng. 4E03/6E03 Page 1 CHEM. ENGINEERING 4E03/6E03: DIGITAL COMPUTER PROCESS CONTROL

    E-Print Network [OSTI]

    Thompson, Michael

    CHEM. ENGINEERING 4E03/6E03: DIGITAL COMPUTER PROCESS CONTROL. COURSE: Digital Computer Process Control. Dynamic Models, Analysis Tools and Control Algorithms... suitable for implementation on digital computers. Material covered will include continuous- and discrete

  8. COMPUTATIONAL SCIENCE CENTER

    SciTech Connect (OSTI)

    DAVENPORT,J.

    2004-11-01T23:59:59.000Z

    The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security.

  9. COMMIX-1AR/P: A three-dimensional transient single-phase computer program for thermal hydraulic analysis of single and multicomponent systems

    SciTech Connect (OSTI)

    Garner, P.L.; Blomquist, R.N.; Gelbard, E.M.

    1992-09-01T23:59:59.000Z

    The COMMIX-1AR/P computer program is designed for analyzing the steady-state and transient aspects of single-phase fluid flow and heat transfer in three spatial dimensions. This version is an extension of the modeling in COMMIX-1A to include multiple fluids in physically separate regions of the computational domain, modeling descriptions for pumps, radiation heat transfer between surfaces of the solids which are embedded in or surround the fluid, a k-[var epsilon] model for fluid turbulence, and improved numerical techniques. The porous-medium formulation in COMMIX allows the program to be applied to a wide range of problems involving both simple and complex geometrical arrangements. The internal aspects of the COMMIX-1AR/P program are presented, covering descriptions of subprograms, variables, and files.

  10. COMMIX-1AR/P: A three-dimensional transient single-phase computer program for thermal hydraulic analysis of single and multicomponent systems

    SciTech Connect (OSTI)

    Garner, P.L.; Blomquist, R.N.; Gelbard, E.M.

    1992-09-01T23:59:59.000Z

    The COMMIX-1AR/P computer program is designed for analyzing the steady-state and transient aspects of single-phase fluid flow and heat transfer in three spatial dimensions. This version is an extension of the modeling in COMMIX-1A to include multiple fluids in physically separate regions of the computational domain, modeling descriptions for pumps, radiation heat transfer between surfaces of the solids which are embedded in or surround the fluid, a k-[var epsilon] model for fluid turbulence, and improved numerical techniques. The porous-medium formulation in COMMIX allows the program to be applied to a wide range of problems involving both simple and complex geometrical arrangements. The input preparation and execution procedures are presented for the COMMIX-1AR/P program and several postprocessor programs which produce graphical displays of the calculated results.

  11. These charges include students that have

    E-Print Network [OSTI]

    Behmer, Spencer T.

    Fee $16.33 $32.66 $48.99 $65.32 $81.65 $97.98 $114.31 $130.64 $146.97 Computer Access/EIS Fee $4.00 $8.00 $25.00 $25.00 Library Access Fee $158.00 $173.80 $237.00 $237.00 $237.00 $237.00 $237.00 $237.00 $237.95 $244.95 $244.95 $244.95 $244.95 Computer Access/EIS Fee $40.00 $44.00 $60.00 $60.00 $60.00 $60.00 $60

  12. Dale P. Bentz' and Paul E. Stutzmanl SEM ANALYSIS AND COMPUTER MODELLING OF HYDRATION OF PORTLAND CEMENT

    E-Print Network [OSTI]

    Bentz, Dale P.

CEMENT PARTICLES REFERENCE: Bentz, D. P. and Stutzman, P. E., "SEM Analysis and Computer Modelling of Hydration of Portland Cement Particles," Sharon M. DeHayes and David Stark, Eds., American Society for Testing and Materials, Philadelphia, 1994. ABSTRACT: Characterization of cement

  13. PVP-Vol.246/AMD-Vol. 143,New Methods in Transient Analysis COMPUTATION OF UNSTEADY INCOMPRESSIBLE FLOWS

    E-Print Network [OSTI]

    Mittal, Sanjay

element computation of unsteady incompressible flows, with emphasis on the space-time formulations. In the SST approach the frequency of remeshing is minimized to reduce the projection errors involved in remeshing fluid-structure interaction problems such as vortex-induced oscillations of a cylinder and flow past

  14. Procedia Computer Science 00 (2010) 18 Procedia Computer

    E-Print Network [OSTI]

    Geddes, Cameron Guy Robinson

Procedia Computer Science 00 (2010) 1-8 Procedia Computer Science International Conference on Computational Science, ICCS 2010 Coupling visualization and data analysis for knowledge discovery from multi , Gunther H. Weber, Kesheng Wu, Computational Research Division, Lawrence Berkeley National Laboratory

  15. Cloud Computing and Validation of Expandable In Silico Livers

    E-Print Network [OSTI]

    Ropella, Glen EP; Hunt, C Anthony

    2010-01-01T23:59:59.000Z

    benefit analysis of cloud computing versus desktop grids.as: Ropella and Hunt: Cloud computing and validation ofCloud computing and validation of expandable in silico

  16. Real-time interactions between cortical neurons and computational models : synaptic conductance analysis and digital compensation of electrode artifacts.

    E-Print Network [OSTI]

    Destexhe, Alain

    of the electrode, considered as an arbitrary linear circuit. This circuit's impulse response is first established analysis and digital compensation of electrode artifacts. Zuzanna Piwkowska PhD thesis defended at the UNIC conductance, each generated by a stochastic process. We used this model as a basis for analysis tools allowing

  17. Research topics will include Faculty Leader

    E-Print Network [OSTI]

    Scannell, Kevin Patrick

    Engineering on-site, while participating in computer-related service learning projects. The software Study Abroad Coordinator johnsonl13@xavier.edu Colombia: Service Learning and Software Engineering {credit in software engineering and Spanish language} Study Abroad with Xavier University Pre

  18. 2nd Workshop on Computations in Nanotechnology

    E-Print Network [OSTI]

    Adler, Joan

    2nd Workshop on Computations in Nanotechnology Keynote Speakers: Mark J. Biggs (Adelaide), Mark nanotechnology researchers Goal: Exposing computational analysis experts and nanotechnology experimentalists

  19. Faculty of Science Computer Science

    E-Print Network [OSTI]

Faculty of Science Computer Science Computer software engineering, network and system analysis. uwindsor.ca/computerscience The University of Windsor offers a variety of computer science programs to prepare students for a career in the technology industry or in research and academia. A computer science degree provides an in-depth understanding

  20. Argonne's Laboratory computing resource center : 2006 annual report.

    SciTech Connect (OSTI)

    Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

    2007-05-31T23:59:59.000Z

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. 
Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

  1. An analysis of the trade-off between spatial and temporal resources for measurement-based quantum computation

    E-Print Network [OSTI]

Jisho Miyazaki; Michal Hajdušek; Mio Murao

    2014-09-29T23:59:59.000Z

In measurement-based quantum computation (MBQC), elementary quantum operations can be parallelized more than in the quantum circuit model by employing a larger Hilbert space of graph states used as the resource. Thus MBQC can be regarded as a method of quantum computation where the temporal resource described by the depth of quantum operations can be reduced compared to the quantum circuit model by using the extra spatial resource described by graph states. To analyze the trade-off relationship of the spatial and temporal resources, we consider a method to obtain quantum circuit decompositions of general unitary transformations represented by MBQC on graph states with a certain underlying geometry called generalized flow. We present a method to translate any MBQC with generalized flow into quantum circuits without extra spatial resource. We also show an explicit way to unravel acausal gates that appear in the quantum circuit decomposition derived by a translation method presented in [V. Danos and E. Kashefi, Phys. Rev. A {\bf 74}, 052310 (2006)] and that represent an effect of the reduction of the temporal resource in MBQC. Finally, by considering a way to deterministically simulate these acausal gates, we investigate a general framework to analyze the trade-off between the spatial and temporal resources for quantum computation.
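The space-time trade-off discussed in this abstract comes down to bookkeeping: circuit depth (the temporal resource) falls when gates act on disjoint qubits and can share a layer, at the cost of more qubits (the spatial resource). A minimal sketch of that depth accounting, with an illustrative `circuit_depth` helper that is not taken from the paper:

```python
# Greedy layering of two-qubit gates: each gate is placed in the
# earliest layer after the last gate touching any of its qubits.
# Depth = number of layers (temporal resource); width = qubit count
# (spatial resource).

def circuit_depth(gates, n_qubits):
    last_layer = [0] * n_qubits  # last layer in which each qubit was used
    depth = 0
    for qubits in gates:
        layer = max(last_layer[q] for q in qubits) + 1
        for q in qubits:
            last_layer[q] = layer
        depth = max(depth, layer)
    return depth

# A chain of entangling gates on 4 qubits is sequential (depth 3)...
print(circuit_depth([(0, 1), (1, 2), (2, 3)], 4))   # 3
# ...while gates on disjoint pairs of a wider register parallelize (depth 1).
print(circuit_depth([(0, 1), (2, 3), (4, 5)], 6))   # 1
```

The second call uses two extra qubits but runs in a third of the depth, which is the shape of the trade-off the paper analyzes in the MBQC setting.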

2. Arinaminpathy, Y. et al. Computational analysis of membrane proteins: the largest class of drug targets, Drug Discov Today (2009), doi:10.1016/

    E-Print Network [OSTI]

    Gerstein, Mark

    2009-01-01T23:59:59.000Z

analysis of membrane proteins: the largest class of drug targets, Drug Discov Today (2009), doi:10.1016/j.drudis.2009.08.006 Drug Discovery Today Volume 00, Number 00 September 2006 REVIEWS Computational analysis disorders, such as hypertension, congestive heart failure, stroke and cancer. On a similar scale, genetic

  3. M. Toubin, C. Dumont, E. P. Verrechia, O. Lalligant, A. Diou, F. Truchetet, and M. A. Abidi, "A Multi-scale Analysis of shell growth increments using wavelet transform," Computers & Geosciences, Journal of the International Association for Mathematical Ge

    E-Print Network [OSTI]

    Abidi, Mongi A.

    been tried (Dolman, 1975) using a Fourier transform. This method, based on power spectra analysis Multi-scale Analysis of shell growth increments using wavelet transform," Computers & Geosciences of environments). The search for these two types of information inside accretionary shells of living or fossil

  4. Optical computing Damien Woods a

    E-Print Network [OSTI]

    Woods, Damien

Optical computing Damien Woods a aDepartment of Computer Science and Artificial Intelligence Institute, Vierimaantie 5, 84100 Ylivieska, Finland Abstract In this survey we consider optical computers of such optical computing architectures, including descriptions of the type of hardware commonly used in optical

  5. Dedicated heterogeneous node scheduling including backfill scheduling

    DOE Patents [OSTI]

    Wood, Robert R. (Livermore, CA); Eckert, Philip D. (Livermore, CA); Hommes, Gregg (Pleasanton, CA)

    2006-07-25T23:59:59.000Z

A method and system for job backfill scheduling dedicated heterogeneous nodes in a multi-node computing environment. Heterogeneous nodes are grouped into homogeneous node sub-pools. For each sub-pool, a free node schedule (FNS) is created to chart the number of free nodes over time. For each prioritized job, the FNS of each sub-pool having nodes usable by the job is used to determine the earliest time range (ETR) capable of running the job. Once determined for a particular job, the job is scheduled to run in that ETR. If the ETR determined for a lower priority job (LPJ) has a start time earlier than a higher priority job (HPJ), then the LPJ is scheduled in that ETR if it would not disturb the anticipated start times of any HPJ previously scheduled for a future time. Thus, efficient utilization and throughput of such computing environments may be increased by utilizing resources otherwise remaining idle.
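A minimal sketch of the free-node-schedule/earliest-time-range idea from this patent record, reduced to a single homogeneous sub-pool with discrete time steps (function and variable names are illustrative, not from the patent):

```python
# Backfill scheduling sketch: one sub-pool, a free-node schedule (FNS)
# as a per-time-step array of free node counts, jobs placed in
# priority order into their earliest time range (ETR).

def earliest_time_range(free, nodes, duration):
    """First start index with `nodes` nodes free for `duration`
    consecutive steps, or None if the job cannot fit."""
    for start in range(len(free) - duration + 1):
        if all(free[t] >= nodes for t in range(start, start + duration)):
            return start
    return None

def schedule(jobs, horizon, total_nodes):
    """jobs: list of (name, nodes, duration), highest priority first.
    Returns {name: start_step}."""
    free = [total_nodes] * horizon
    starts = {}
    for name, nodes, duration in jobs:
        start = earliest_time_range(free, nodes, duration)
        if start is None:
            continue  # does not fit inside the scheduling horizon
        starts[name] = start
        for t in range(start, start + duration):
            free[t] -= nodes
    return starts

jobs = [("big", 3, 2), ("wide", 4, 2), ("small", 1, 2)]
print(schedule(jobs, horizon=8, total_nodes=4))
# → {'big': 0, 'wide': 2, 'small': 0}
```

Because jobs are placed in priority order against the remaining free-node schedule, the low-priority "small" job backfills at t=0 beside "big" without delaying the anticipated start of the higher-priority "wide" job, which is the essence of backfill.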

  6. SAFETY AND HEALTH PROGRAM Including the Chemical Hygiene Plan

    E-Print Network [OSTI]

    Evans, Paul G.

    SAFETY AND HEALTH PROGRAM Including the Chemical Hygiene Plan Wisconsin Center for Applied, Technical Staff & Chemical Hygiene Officer kakupcho@wisc.edu 262-2982 Lab Facility Website http..........................................................................................................3 CHEMICAL HYGIENE PLAN III. Work-site Analysis and Hazard Identification 3.1 Hazardous Chemical

  7. Individual patient data meta-analysis for the clinical assessment of coronary computed tomography angiography: protocol of the Collaborative Meta-Analysis of Cardiac CT (CoMe-CCT)

    E-Print Network [OSTI]

    2013-01-01T23:59:59.000Z

Feyter PJ: 64-slice computed tomography coronary angiography of 64-slice computed tomography coronary angiography in multislice spiral computed tomography: impact of heart rate.

  8. Research Profile Our research activities include

    E-Print Network [OSTI]

    Sandoghdar, Vahid

and biological soil interaction analysis Geomechanics and Environmental Geotechnics CONTACT Prof. Dr. Alexander Zurich Institute for Geotechnical Engineering Geomechanics Wolfgang-Pauli-Str. 15 CH-8093 Zürich www.geomechanics

  9. Computer, Computational, and Statistical Sciences Division

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing CCS Division Computer, Computational, and Statistical Sciences Division Computational physics, computer science, applied mathematics, statistics and the integration of...

  10. Special Issue on Human Computing

    E-Print Network [OSTI]

    Nijholt, Anton

    The seven articles in this special issue focus on human computing. Most focus on two challenging issues in human computing, namely, machine analysis of human behavior in group interactions and context-sensitive modeling.

  11. EE Regional Technology Roadmap Includes comparison

    E-Print Network [OSTI]

    EE Regional Technology Roadmap Includes comparison against 6th Power Plan (Update cyclically Data Clearinghouse BPA/RTF NEEA/Regional Programs Group Update Regional EE Technology Roadmap Lighting

  12. DIDACTICAL HOLOGRAPHIC EXHIBIT INCLUDING (HOLOGRAPHIC TELEVISION)

    E-Print Network [OSTI]

    de Aguiar, Marcus A. M.

DIDACTICAL HOLOGRAPHIC EXHIBIT INCLUDING HoloTV (HOLOGRAPHIC TELEVISION) José J. Lunazzi, Daniel, Campinas-SP-Brasil Abstract: Our Institute of Physics has presented since 1980 didactical exhibitions of holography in Brazil where

  13. Sessions include: Beginning Farmer and Rancher

    E-Print Network [OSTI]

    Watson, Craig A.

    Sessions include: Beginning Farmer and Rancher New Markets and Regulations Food Safety Good Bug, Bad Bug ID Horticulture Hydroponics Livestock and Pastured Poultry Mushrooms Organic Live animal exhibits Saturday evening social, and Local foods Florida Small Farms and Alternative

  14. Status of the MELTSPREAD-1 computer code for the analysis of transient spreading of core debris melts

    SciTech Connect (OSTI)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.; Chu, C.C.

    1992-01-01T23:59:59.000Z

    A transient, one dimensional, finite difference computer code (MELTSPREAD-1) has been developed to predict spreading behavior of high temperature melts flowing over concrete and/or steel surfaces submerged in water, or without the effects of water if the surface is initially dry. This paper provides a summary overview of models and correlations currently implemented in the code, code validation activities completed thus far, LWR spreading-related safety issues for which the code has been applied, and the status of documentation for the code.

  15. Status of the MELTSPREAD-1 computer code for the analysis of transient spreading of core debris melts

    SciTech Connect (OSTI)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.; Chu, C.C.

    1992-04-01T23:59:59.000Z

    A transient, one dimensional, finite difference computer code (MELTSPREAD-1) has been developed to predict spreading behavior of high temperature melts flowing over concrete and/or steel surfaces submerged in water, or without the effects of water if the surface is initially dry. This paper provides a summary overview of models and correlations currently implemented in the code, code validation activities completed thus far, LWR spreading-related safety issues for which the code has been applied, and the status of documentation for the code.

  16. Gas storage materials, including hydrogen storage materials

    DOE Patents [OSTI]

    Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

    2014-11-25T23:59:59.000Z

    A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material, such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

  17. Gas storage materials, including hydrogen storage materials

    DOE Patents [OSTI]

    Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

    2013-02-19T23:59:59.000Z

    A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

  18. Engineering and Computing Undergraduate Courses

    E-Print Network [OSTI]

    Low, Robert

Faculty of Engineering and Computing Undergraduate Courses (including Architecture, Aerospace, Building, Civil Engineering and Mathematics) Contents Coventry University 4 About Coventry 5 Facilities 6 Department of Computing 8 Department of Mathematics and Control Engineering 15 Department

  19. Computability and Logic Selmer Bringsjord

    E-Print Network [OSTI]

    Bringsjord, Selmer

as intermediate mathematical logic (IML). IML includes basic computability theory (Turing Machines and other / Readings: o Computability & Logic (CL), Boolos and Jeffrey (3rd ed.) o Turing's World 3.0 (TW), Barwise

  20. Computational analysis of whole body CT documents a bone structure alteration in adult advanced chronic lymphocytic leukemia

    E-Print Network [OSTI]

    Piana, Michele

    progression. PET/CT images were analyzed using dedicated software, able to recognize an external 2-pixel bone ring whose Hounsfield coefficient served as cut off to recognize trabecular and compact bone. PET/CT of the disease. Keywords: Image Analysis, Bone Marrow, Skeletal Structure, ACLL, PET/CT #12;3 Introduction

  1. Analysis of the Reactor Cavity Cooling System for Very High Temperature Gas-cooled Reactors Using Computational Fluid Dynamics Tools

    E-Print Network [OSTI]

    Frisani, Angelo

    2011-08-08T23:59:59.000Z

    the VHTR performance and safety analysis, one-dimensional (1-D) system type codes, like RELAP5 or MELCOR, and multi-dimensional CFD codes can be used. The choice of 1-D over multi-dimensional codes first involves identifying the main phenomena, and from...

  2. Computer Forensics In Forensis

    E-Print Network [OSTI]

    Peisert, Sean; Bishop, Matt; Marzullo, Keith

    2008-01-01T23:59:59.000Z

special agent for the FBI's computer analysis and response fight against high-tech crimes: FBI to open $2 million center world. As early as 2002 the FBI stated that fifty percent of

  3. Electric Power Monthly, August 1990. [Glossary included

    SciTech Connect (OSTI)

    Not Available

    1990-11-29T23:59:59.000Z

    The Electric Power Monthly (EPM) presents monthly summaries of electric utility statistics at the national, Census division, and State level. The purpose of this publication is to provide energy decisionmakers with accurate and timely information that may be used in forming various perspectives on electric issues that lie ahead. Data includes generation by energy source (coal, oil, gas, hydroelectric, and nuclear); generation by region; consumption of fossil fuels for power generation; sales of electric power, cost data; and unusual occurrences. A glossary is included.

  4. Computational Models for Image Guided, Robot-Assisted and Simulated Medical Interventions

    E-Print Network [OSTI]

    Paris-Sud XI, Universit de

their potential use in a number of advanced medical applications including image guided, robot and force feedback. Such procedures require the use of advanced medical image analysis methods and have brought many advances in several medical fields including computer-aided diagnosis, therapy

  5. Numerical evaluation of propeller noise, including non-linear effects

    E-Print Network [OSTI]

    White, Terence Alan

    1985-01-01T23:59:59.000Z

University Chairman of Advisory Committee: Dr. Kenneth Korkan Using the transonic flow field(s) generated by the NASPROP-E computer code for an eight blade SR3-series propeller, a method is investigated to calculate the total noise values and frequency... in three dimensions, and the influence of the damping on the calculated noise values is investigated. Since the flow field includes the wave systems near the blade surface, the quadrupole noise source term is accounted for as are the monopole...

  6. Parallel computing in enterprise modeling.

    SciTech Connect (OSTI)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01T23:59:59.000Z

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principal makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
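Since entity-based problems lack a spatial organizing principle, one common decomposition is to hash entities onto processors and step each partition independently. The sketch below is illustrative only: a sequential stand-in for a parallel run, with names that are not from the report (a real deployment would distribute the partitions via MPI, `multiprocessing`, or, as in the report, a Java plugin):

```python
# Entity-based simulation sketch: hash-partition entities across
# "processors", then step each partition independently.

def partition(entities, n_procs):
    """Assign each entity to a partition by hashing its id."""
    parts = [[] for _ in range(n_procs)]
    for e in entities:
        parts[hash(e["id"]) % n_procs].append(e)
    return parts

def step(part):
    """One simulation step for a partition; the update rule here is a
    trivial stand-in for an agent's behavioral model."""
    for e in part:
        e["wealth"] += 1
    return part

entities = [{"id": i, "wealth": 0} for i in range(10)]
parts = partition(entities, n_procs=3)
parts = [step(p) for p in parts]
print(len(parts), sum(e["wealth"] for p in parts for e in p))
```

Hash partitioning balances load statistically but ignores interaction locality; choosing a decomposition that keeps interacting entities on the same processor is exactly the hard part such a plugin must solve.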

  7. Communication in automation, including networking and wireless

    E-Print Network [OSTI]

    Antsaklis, Panos

    Communication in automation, including networking and wireless Nicholas Kottenstette and Panos J and networking in automation is given. Digital communication fundamentals are reviewed and networked control are presented. 1 Introduction 1.1 Why communication is necessary in automated systems Automated systems use

  8. Electrochemical cell including ribbed electrode substrates

    SciTech Connect (OSTI)

    Breault, R.D.; Goller, G.J.; Roethlein, R.J.; Sprecher, G.C.

    1981-07-21T23:59:59.000Z

    An electrochemical cell including an electrolyte retaining matrix layer located between and in contact with cooperating anode and cathode electrodes is disclosed herein. Each of the electrodes is comprised of a ribbed (or grooved) substrate including a gas porous body as its main component and a catalyst layer located between the substrate and one side of the electrolyte retaining matrix layer. Each substrate body includes a ribbed section for receiving reactant gas and lengthwise side portions on opposite sides of the ribbed section. Each of the side portions includes a channel extending along its entire length from one surface thereof (e.g., its outer surface) to but stopping short of an opposite surface (e.g., its inner surface) so as to provide a web directly between the channel and the opposite surface. Each of the channels is filled with a gas impervious substance and each of the webs is impregnated with a gas impervious substance so as to provide a gas impervious seal along the entire length of each side portion of each substrate and between the opposite faces thereof (e.g., across the entire thickness thereof).

  9. Prices include compostable serviceware and linen tablecloths

    E-Print Network [OSTI]

    California at Davis, University of

& BLACK BEAN ENCHILADAS Fresh corn tortillas stuffed with tender brown butter sautéed butternut squash, black beans and yellow onions, garnished with avocado and sour cream. $33 per person EDAMAME & CORN SQUASH & BLACK BEAN ENCHILADA FREE RANGE CHICKEN SANDWICH PLATED ENTREES All plated entrees include

  10. CONTRIBUTED Green Cloud Computing

    E-Print Network [OSTI]

    Tucker, Rod

    to manage energy consumption across the entire information and communications technology (ICT) sector. While considers both public and private clouds, and includes energy consumption in switching and transmission to energy consumption and cloud computing seems to be an alternative to office-based computing. By Jayant

  11. Natively probabilistic computation

    E-Print Network [OSTI]

    Mansinghka, Vikash Kumar

    2009-01-01T23:59:59.000Z

    I introduce a new set of natively probabilistic computing abstractions, including probabilistic generalizations of Boolean circuits, backtracking search and pure Lisp. I show how these tools let one compactly specify ...
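The flavor of these natively probabilistic abstractions can be suggested with a tiny probabilistic program run by rejection sampling; this is an illustrative stand-in, not the stochastic-circuit or probabilistic-Lisp machinery of the thesis:

```python
# A tiny probabilistic program: flip two coins, condition on seeing at
# least one heads, and estimate P(both heads | at least one heads) by
# rejection sampling. The exact answer is 1/3.
import random

def flip(p=0.5):
    """Bernoulli draw -- the elementary probabilistic primitive."""
    return random.random() < p

def sample_conditioned(n=100_000, seed=1):
    random.seed(seed)
    hits = total = 0
    for _ in range(n):
        a, b = flip(), flip()
        if not (a or b):
            continue  # reject runs violating the condition
        total += 1
        hits += a and b
    return hits / total

print(sample_conditioned())
```

Treating conditioning as a first-class operation, rather than as a loop the programmer writes by hand, is the kind of generalization "natively probabilistic" primitives provide.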

12. Ligand Migration and Cavities within Scapharca Dimeric HbI: Studies by Time-Resolved Crystallography, Xe Binding, and Computational Analysis

    SciTech Connect (OSTI)

Knapp, James E.; Pahl, Reinhard; Cohen, Jordi; Nichols, Jeffry C.; Schulten, Klaus; Gibson, Quentin H.; Šrajer, Vukica; Royer, Jr., William E.; (UMASS MED); (UIUC); (UC_

    2009-12-01T23:59:59.000Z

    As in many other hemoglobins, no direct route for migration of ligands between solvent and active site is evident from crystal structures of Scapharca inaequivalvis dimeric HbI. Xenon (Xe) and organic halide binding experiments, along with computational analysis presented here, reveal protein cavities as potential ligand migration routes. Time-resolved crystallographic experiments show that photodissociated carbon monoxide (CO) docks within 5 ns at the distal pocket B site and at more remote Xe4 and Xe2 cavities. CO rebinding is not affected by the presence of dichloroethane within the major Xe4 protein cavity, demonstrating that this cavity is not on the major exit pathway. The crystal lattice has a substantial influence on ligand migration, suggesting that significant conformational rearrangements may be required for ligand exit. Taken together, these results are consistent with a distal histidine gate as one important ligand entry and exit route, despite its participation in the dimeric interface.

  13. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    SciTech Connect (OSTI)

    Brown, D L

    2009-05-01T23:59:59.000Z

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. 
It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems, and (4) design, situational awareness and control of complex networks. The program elements consist of a group of Complex Networked Systems Research Institutes (CNSRI), tightly coupled to an associated individual-investigator-based Complex Networked Systems Basic Research (CNSBR) program. The CNSRI's will be principally located at the DOE National Laboratories and are responsible for identifying research priorities, developing and maintaining a networked systems modeling and simulation software infrastructure, operating summer schools, workshops and conferences and coordinating with the CNSBR individual investigators. The CNSBR individual investigator projects will focus on specific challenges for networked systems. Relevancy of CNSBR research to DOE needs will be assured through the strong coupling provided between the CNSBR grants and the CNSRI's.
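"Understanding network structure" typically starts with simple structural statistics. A minimal, self-contained sketch using plain adjacency lists (illustrative names, not any DOE toolkit):

```python
# Build an undirected graph as an adjacency list and compute its degree
# distribution -- a first structural statistic for cyber, power-grid, or
# social networks of the kind the program plan targets.
from collections import defaultdict, Counter

def degree_distribution(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Map each degree to the number of nodes having that degree.
    return Counter(len(nbrs) for nbrs in adj.values())

# A star network: one hub of degree 4, four leaves of degree 1.
edges = [("hub", "a"), ("hub", "b"), ("hub", "c"), ("hub", "d")]
print(sorted(degree_distribution(edges).items()))
# → [(1, 4), (4, 1)]
```

Heavy-tailed degree distributions are one of the structural signatures that distinguish such networks from the regular grids of physically-based simulation, which is why the research areas above begin with structure before dynamics and control.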

  14. Computing Frontier: Distributed Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  15. Subterranean barriers including at least one weld

    DOE Patents [OSTI]

    Nickelson, Reva A.; Sloan, Paul A.; Richardson, John G.; Walsh, Stephanie; Kostelnik, Kevin M.

    2007-01-09T23:59:59.000Z

    A subterranean barrier and method for forming same are disclosed, the barrier including a plurality of casing strings wherein at least one casing string of the plurality of casing strings may be affixed to at least another adjacent casing string of the plurality of casing strings through at least one weld, at least one adhesive joint, or both. A method and system for nondestructively inspecting a subterranean barrier is disclosed. For instance, a radiographic signal may be emitted from within a casing string toward an adjacent casing string and the radiographic signal may be detected from within the adjacent casing string. A method of repairing a barrier including removing at least a portion of a casing string and welding a repair element within the casing string is disclosed. A method of selectively heating at least one casing string forming at least a portion of a subterranean barrier is disclosed.

  16. Power generation method including membrane separation

    DOE Patents [OSTI]

    Lokhandwala, Kaaeid A. (Union City, CA)

    2000-01-01T23:59:59.000Z

    A method for generating electric power, such as at, or close to, natural gas fields. The method includes conditioning natural gas containing C.sub.3+ hydrocarbons and/or acid gas by means of a membrane separation step. This step creates a leaner, sweeter, drier gas, which is then used as combustion fuel to run a turbine, which is in turn used for power generation.

  17. Rotor assembly including superconducting magnetic coil

    DOE Patents [OSTI]

    Snitchler, Gregory L. (Shrewsbury, MA); Gamble, Bruce B. (Wellesley, MA); Voccio, John P. (Somerville, MA)

    2003-01-01T23:59:59.000Z

    Superconducting coils and methods of manufacture include a superconductor tape wound concentrically about and disposed along an axis of the coil to define an opening having a dimension which gradually decreases, in the direction along the axis, from a first end to a second end of the coil. Each turn of the superconductor tape has a broad surface maintained substantially parallel to the axis of the coil.

  18. Electric power monthly, September 1990 [Glossary included]

    SciTech Connect (OSTI)

    Not Available

    1990-12-17T23:59:59.000Z

    The purpose of this report is to provide energy decision makers with accurate and timely information that may be used in forming various perspectives on electric issues. The power plants considered include coal, petroleum, natural gas, hydroelectric, and nuclear power plants. Data are presented for power generation, fuel consumption, fuel receipts and cost, sales of electricity, and unusual occurrences at power plants. Data are compared at the national, Census division, and state levels. 4 figs., 52 tabs. (CK)

  19. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

    SciTech Connect (OSTI)

    Joseph, Earl C.; Conway, Steve; Dekate, Chirag

    2013-09-30T23:59:59.000Z

    This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.

  20. Economic Impacts of Potential Foot and Mouth Disease Agro-terrorism in the United States: A Computable General Equilibrium Analysis

    SciTech Connect (OSTI)

    Oladosu, Gbadebo A [ORNL] [ORNL; Rose, Adam [University of Southern California, Los Angeles] [University of Southern California, Los Angeles; Bumsoo, Lee [University of Illinois] [University of Illinois

    2013-01-01T23:59:59.000Z

    The foot and mouth disease (FMD) virus has high agro-terrorism potential because it is contagious, can be easily transmitted via inanimate objects and can be spread by wind. An outbreak of FMD in developed countries results in massive slaughtering of animals (for disease control) and disruptions in meat supply chains and trade, with potentially large economic losses. Although the United States has been FMD-free since 1929, the potential of FMD as a deliberate terrorist weapon calls for estimates of the physical and economic damage that could result from an outbreak. This paper estimates the economic impacts of three alternative scenarios of potential FMD attacks using a computable general equilibrium (CGE) model of the US economy. The three scenarios range from a small outbreak successfully contained within a state to a large multi-state attack resulting in slaughtering of 30 percent of the national livestock. Overall, the value of total output losses in our simulations range between $37 billion (0.15% of 2006 baseline economic output) and $228 billion (0.92%). Major impacts stem from the supply constraint on livestock due to massive animal slaughtering. As expected, the economic losses are heavily concentrated in agriculture and food manufacturing sectors, with losses ranging from $23 billion to $61 billion in the two industries.

  1. Computational Nuclear Forensics Analysis of Weapons-grade Plutonium Separated from Fuel Irradiated in a Thermal Reactor

    E-Print Network [OSTI]

    Coles, Taylor Marie

    2014-04-27T23:59:59.000Z

    The channels of the reactor, represented by number 8, are completely encased by the outer boundary of the reactor, or 'Calandria shell', which is represented by number 1. The main design features of the Indian 220 MWe PHWR include natural uranium dioxide fuel..., heavy-water moderation, and heavy water at a high temperature and pressure in a separate circuit for heat removal purposes. The moderator heavy water is contained in the low-pressure horizontal reactor vessel or 'Calandria'. This heavy water is near...

  2. HIGH PERFORMANCE COMPUTING TODAY Jack Dongarra

    E-Print Network [OSTI]

    Dongarra, Jack

    HIGH PERFORMANCE COMPUTING TODAY. Jack Dongarra, Computer Science Department, University ... a detailed and well-founded analysis of the state of high performance computing. This paper summarizes some of ... systems available for performing grid based computing. Keywords: high performance computing, parallel ...

  3. Multiverse rate equation including bubble collisions

    E-Print Network [OSTI]

    Michael P. Salem

    2013-02-19T23:59:59.000Z

    The volume fractions of vacua in an eternally inflating multiverse are described by a coarse-grain rate equation, which accounts for volume expansion and vacuum transitions via bubble formation. We generalize the rate equation to account for bubble collisions, including the possibility of classical transitions. Classical transitions can modify the details of the hierarchical structure among the volume fractions, with potential implications for the staggering and Boltzmann-brain issues. Whether or not our vacuum is likely to have been established by a classical transition depends on the detailed relationships among transition rates in the landscape.
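    For reference, the coarse-grain rate equation referred to in this abstract is commonly written in the eternal-inflation literature (the precise form depends on the choice of time variable) as:

```latex
\frac{df_i}{dt} = \sum_j \left( \kappa_{ij}\, f_j \;-\; \kappa_{ji}\, f_i \right)
```

    where $f_i$ is the volume fraction occupied by vacuum $i$ and $\kappa_{ij}$ is the transition rate from vacuum $j$ to vacuum $i$ via bubble nucleation. The generalization described above adds source terms to this balance for bubble collisions and classical transitions.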

  4. A Computational Framework for Uncertainty Quantification and ...

    E-Print Network [OSTI]

    2010-03-05T23:59:59.000Z

    and Computer Science Division, Argonne National Laboratory, Argonne, IL. 60439, USA. E-mail: .... atmospheric physics that includes cloud parameterization

  5. Power throttling of collections of computing elements

    DOE Patents [OSTI]

    Bellofatto, Ralph E. (Ridgefield, CT); Coteus, Paul W. (Yorktown Heights, NY); Crumley, Paul G. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Gooding, Thomas M. (Rochester, MN); Haring, Rudolf A. (Cortlandt Manor, NY); Megerian, Mark G. (Rochester, MN); Ohmacht, Martin (Yorktown Heights, NY); Reed, Don D. (Mantorville, MN); Swetz, Richard A. (Mahopac, NY); Takken, Todd (Brewster, NY)

    2011-08-16T23:59:59.000Z

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
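    A minimal sketch of the control idea: sensors report per-node power draw, and a control device computes a throttle factor so that the collection stays within a power budget. The function name and the proportional-scaling policy are illustrative assumptions, not the patented method.

```python
def throttle_plan(power_readings_w, budget_w):
    """Return a dict of node -> duty-cycle factor (0..1) chosen so the
    estimated total draw fits within budget_w (proportional scaling)."""
    total = sum(power_readings_w.values())
    if total <= budget_w:
        # Under budget: no throttling needed anywhere.
        return {node: 1.0 for node in power_readings_w}
    scale = budget_w / total
    return {node: scale for node in power_readings_w}

plan = throttle_plan({"node0": 120.0, "node1": 80.0}, budget_w=150.0)
print(plan["node0"])  # 0.75 (200 W requested, 150 W allowed)
```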

  6. Computer System,

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    2015 Computer System, Cluster, and Networking Summer Institute (undergraduate summer institute, http://institutes.lanl.gov/isti/summer-school). Purpose: The Computer System,...

  7. Computational Transportation

    E-Print Network [OSTI]

    Illinois at Chicago, University of

    ..., in-vehicle computers, and computers in the transportation infrastructure are integrated ... ride-sharing, real-time multi-modal routing and navigation, to autonomous/assisted driving

  8. Seepage Model for PA Including Drift Collapse

    SciTech Connect (OSTI)

    G. Li; C. Tsang

    2000-12-20T23:59:59.000Z

    The purpose of this Analysis/Model Report (AMR) is to document the predictions and analysis performed using the Seepage Model for Performance Assessment (PA) and the Disturbed Drift Seepage Submodel for both the Topopah Spring middle nonlithophysal and lower lithophysal lithostratigraphic units at Yucca Mountain. These results will be used by PA to develop the probability distribution of water seepage into waste-emplacement drifts at Yucca Mountain, Nevada, as part of the evaluation of the long term performance of the potential repository. This AMR is in accordance with the ''Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report'' (CRWMS M&O 2000 [153447]). This purpose is accomplished by performing numerical simulations with stochastic representations of hydrological properties, using the Seepage Model for PA, and evaluating the effects of an alternative drift geometry representing a partially collapsed drift using the Disturbed Drift Seepage Submodel. Seepage of water into waste-emplacement drifts is considered one of the principal factors having the greatest impact on the long-term safety of the repository system (CRWMS M&O 2000 [153225], Table 4-1). This AMR supports the analysis and simulation that are used by PA to develop the probability distribution of water seepage into drifts, and is therefore a model of primary (Level 1) importance (AP-3.15Q, ''Managing Technical Product Inputs''). The intended purpose of the Seepage Model for PA is to support: (1) PA; (2) Abstraction of Drift-Scale Seepage; and (3) the Unsaturated Zone (UZ) Flow and Transport Process Model Report (PMR). Seepage into drifts is evaluated by applying numerical models with stochastic representations of hydrological properties and performing flow simulations with multiple realizations of the permeability field around the drift. The Seepage Model for PA uses the distribution of permeabilities derived from air injection testing in niches and in the cross drift to stochastically simulate the 3D flow of water in the fractured host rock (in the vicinity of potential emplacement drifts) under ambient conditions. The Disturbed Drift Seepage Submodel evaluates the impact of the partial collapse of a drift on seepage. Drainage in rock below the emplacement drift is also evaluated.
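    The stochastic approach described (many realizations of a permeability field, with seepage statistics aggregated across realizations) can be caricatured in a few lines. The distribution parameters and the threshold criterion below are invented for illustration and have no connection to the calibrated Yucca Mountain models.

```python
import random

def seepage_probability(n_realizations=10_000, log_k_mean=-12.0,
                        log_k_sd=0.8, threshold=-12.5, seed=42):
    """Monte Carlo sketch: draw log10-permeability realizations from a
    normal distribution and count the fraction falling below a
    (hypothetical) seepage threshold. All parameters are illustrative."""
    rng = random.Random(seed)
    hits = sum(rng.gauss(log_k_mean, log_k_sd) < threshold
               for _ in range(n_realizations))
    return hits / n_realizations

p = seepage_probability()  # roughly the normal tail probability at -0.625 sd
```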

  9. Faculty of Science Computer Science

    E-Print Network [OSTI]

    Faculty of Science, Computer Science. Software engineering, network and system analysis ... a variety of computer science programs to prepare students for a career in the technology industry or in research and academia. A computer science degree provides an in-depth understanding of the fundamentals

  10. Determination of pH Including Hemoglobin Correction

    DOE Patents [OSTI]

    Maynard, John D. (Albuquerque, NM); Hendee, Shonn P. (Albuquerque, NM); Rohrscheib, Mark R. (Albuquerque, NM); Nunez, David (Albuquerque, NM); Alam, M. Kathleen (Cedar Crest, NM); Franke, James E. (Franklin, TN); Kemeny, Gabor J. (Madison, WI)

    2005-09-13T23:59:59.000Z

    Methods and apparatuses of determining the pH of a sample. A method can comprise determining an infrared spectrum of the sample, and determining the hemoglobin concentration of the sample. The hemoglobin concentration and the infrared spectrum can then be used to determine the pH of the sample. In some embodiments, the hemoglobin concentration can be used to select a model relating infrared spectra to pH that is applicable at the determined hemoglobin concentration. In other embodiments, a model relating hemoglobin concentration and infrared spectra to pH can be used. An apparatus according to the present invention can comprise an illumination system, adapted to supply radiation to a sample; a collection system, adapted to collect radiation expressed from the sample responsive to the incident radiation; and an analysis system, adapted to relate information about the incident radiation, the expressed radiation, and the hemoglobin concentration of the sample to pH.
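    The "select a model applicable at the determined hemoglobin concentration" step can be sketched as a lookup over calibration ranges. The ranges and linear coefficients here are made up for illustration; a real calibration would come from regression against reference measurements.

```python
def predict_ph(spectrum_features, hemoglobin_g_dl, models):
    """Pick the calibration model whose hemoglobin range covers the
    measured concentration, then apply it (illustrative linear models)."""
    for (lo, hi), (intercept, weights) in models.items():
        if lo <= hemoglobin_g_dl < hi:
            return intercept + sum(w * x for w, x in zip(weights, spectrum_features))
    raise ValueError("no model covers this hemoglobin concentration")

# Hypothetical calibrations: (range in g/dL) -> (intercept, feature weights)
models = {
    (0.0, 10.0): (7.0, [0.02, -0.01]),
    (10.0, 20.0): (7.1, [0.015, -0.012]),
}
ph = predict_ph([3.0, 1.0], hemoglobin_g_dl=12.0, models=models)
# 7.1 + 0.015*3.0 - 0.012*1.0 = 7.133
```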

  11. Automotive Underhood Thermal Management Analysis Using 3-D Coupled Thermal-Hydrodynamic Computer Models: Thermal Radiation Modeling

    SciTech Connect (OSTI)

    Pannala, S; D'Azevedo, E; Zacharia, T

    2002-02-26T23:59:59.000Z

    The goal of the radiation modeling effort was to develop and implement a radiation algorithm that is fast and accurate for the underhood environment. As part of this CRADA, a net-radiation model was chosen to simulate radiative heat transfer in the underhood of a car. The assumptions (diffuse-gray and uniform radiative properties in each element) reduce the problem tremendously, and all the view factors for radiation thermal calculations can be calculated once and for all at the beginning of the simulation. The cost of online integration of heat exchanges due to radiation is found to be less than 15% of the baseline CHAD code and is thus very manageable. The off-line view factor calculation is constructed to be very modular and has been completely integrated to read CHAD grid files, and the output from this code can be read into the latest version of CHAD. Further integration has to be performed to accomplish the same with STAR-CD. The main outcome of this effort is a highly scalable and portable simulation capability to model view factors for the underhood environment (e.g., a view factor calculation that took 14 hours on a single processor took only 14 minutes on 64 processors). The code has also been validated using a simple test case where analytical solutions are available. This simulation capability gives underhood designers in the automotive companies the ability to account for thermal radiation, which is usually critical in the underhood environment and also turns out to be one of the most computationally expensive components of underhood simulations. This report starts off with the original work plan as elucidated in the proposal in section B. This is followed by the technical work plan to accomplish the goals of the project in section C. In section D, background to the current work is provided with references to the previous efforts this project leverages. The results are discussed in section E. This report ends with conclusions and future scope of work in section F.
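    For context, under the same diffuse-gray assumptions named above, the net radiative exchange between two surfaces reduces to a standard resistance-network formula. The temperatures, areas, and emissivities below are arbitrary example values, not data from the CRADA.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiation_exchange(t1, t2, a1, a2, eps1, eps2, f12):
    """Net radiative heat transfer (W) from surface 1 to surface 2 for two
    diffuse-gray surfaces: surface resistances plus a view-factor resistance."""
    resistance = ((1 - eps1) / (eps1 * a1)     # surface resistance 1
                  + 1.0 / (a1 * f12)           # geometric (view factor) resistance
                  + (1 - eps2) / (eps2 * a2))  # surface resistance 2
    return SIGMA * (t1 ** 4 - t2 ** 4) / resistance

q = net_radiation_exchange(t1=900.0, t2=400.0, a1=0.5, a2=0.5,
                           eps1=0.8, eps2=0.8, f12=1.0)
# about 1.2e4 W for these example values
```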

  12. Teaching in computer security and privacy The Computer Laboratory's undergraduate and masters programmes

    E-Print Network [OSTI]

    Crowcroft, Jon

    Teaching in computer security and privacy. The Computer Laboratory's undergraduate and masters ... topics include: computing security, economics of cybercrime, economics of information security, formal methods, hardware security, location and positioning systems, malware analysis, medical information security, mobile ...

  13. Synchronizing compute node time bases in a parallel computer

    DOE Patents [OSTI]

    Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

    2014-12-30T23:59:59.000Z

    Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
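    The mechanism can be mimicked with a toy threading sketch: the hardware wakeup units and the broadcast pulse are replaced by a thread barrier, and each "node" sets its time base to its known root-to-node latency, so all bases agree at the moment the pulse lands. Names and structure are illustrative, not the patented implementation.

```python
import threading

def synchronize(latencies_ns):
    """All nodes enter a barrier (the global barrier of the abstract); the
    simultaneous release stands in for the broadcast pulse; each node then
    records its time base as the known root-to-node transmission latency."""
    barrier = threading.Barrier(len(latencies_ns))
    time_bases = {}
    lock = threading.Lock()

    def node(name, latency):
        barrier.wait()                  # wait for every node, then "pulse"
        with lock:
            time_bases[name] = latency  # offset by the measured latency

    threads = [threading.Thread(target=node, args=item)
               for item in latencies_ns.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time_bases

bases = synchronize({"node0": 120, "node1": 95, "node2": 140})
```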

  14. COMPUTER WORKSTATION ERGONOMIC CHECKLIST COMPUTER WORKSTATION ERGONOMIC CHECKLIST

    E-Print Network [OSTI]

    Saskatchewan, University of

    COMPUTER WORKSTATION ERGONOMIC CHECKLIST ... people, task, equipment and work environment. If required, a detailed ergonomic evaluation, including ... to be in-line with forearms when using keyboard and/or mouse.

  15. On Continuous Models of Computation: Towards Computing the Distance Between

    E-Print Network [OSTI]

    Schellekens, Michel P.

    with building formal, mathematical models both for aspects of the computational process and for features ... We discuss this issue in Section 3.1. 6th Irish Workshop on Formal Methods (IWFM'03), eWiC, British Computer Society ... traditionally associated with computer science are logic and discrete mathematics, the latter including set theory

  16. Optical panel system including stackable waveguides

    DOE Patents [OSTI]

    DeSanto, Leonard (Dunkirk, MD); Veligdan, James T. (Manorville, NY)

    2007-11-20T23:59:59.000Z

    An optical panel system including stackable waveguides is provided. The optical panel system displays a projected light image and comprises a plurality of planar optical waveguides in a stacked state. The optical panel system further comprises a support system that aligns and supports the waveguides in the stacked state. In one embodiment, the support system comprises at least one rod, wherein each waveguide contains at least one hole, and wherein each rod is positioned through a corresponding hole in each waveguide. In another embodiment, the support system comprises at least two opposing edge structures having the waveguides positioned therebetween, wherein each opposing edge structure contains a mating surface, wherein opposite edges of each waveguide contain mating surfaces which are complementary to the mating surfaces of the opposing edge structures, and wherein each mating surface of the opposing edge structures engages a corresponding complementary mating surface of the opposite edges of each waveguide.

  17. Optical panel system including stackable waveguides

    DOE Patents [OSTI]

    DeSanto, Leonard; Veligdan, James T.

    2007-03-06T23:59:59.000Z

    An optical panel system including stackable waveguides is provided. The optical panel system displays a projected light image and comprises a plurality of planar optical waveguides in a stacked state. The optical panel system further comprises a support system that aligns and supports the waveguides in the stacked state. In one embodiment, the support system comprises at least one rod, wherein each waveguide contains at least one hole, and wherein each rod is positioned through a corresponding hole in each waveguide. In another embodiment, the support system comprises at least two opposing edge structures having the waveguides positioned therebetween, wherein each opposing edge structure contains a mating surface, wherein opposite edges of each waveguide contain mating surfaces which are complementary to the mating surfaces of the opposing edge structures, and wherein each mating surface of the opposing edge structures engages a corresponding complementary mating surface of the opposite edges of each waveguide.

  18. Thermovoltaic semiconductor device including a plasma filter

    DOE Patents [OSTI]

    Baldasaro, Paul F. (Clifton Park, NY)

    1999-01-01T23:59:59.000Z

    A thermovoltaic energy conversion device and related method for converting thermal energy into an electrical potential. An interference filter is provided on a semiconductor thermovoltaic cell to pre-filter black body radiation. The semiconductor thermovoltaic cell includes a P/N junction supported on a substrate which converts incident thermal energy below the semiconductor junction band gap into electrical potential. The semiconductor substrate is doped to provide a plasma filter which reflects back energy having a wavelength which is above the band gap and which is ineffectively filtered by the interference filter, through the P/N junction to the source of radiation thereby avoiding parasitic absorption of the unusable portion of the thermal radiation energy.

  19. Computing at Scale Technion Computer

    E-Print Network [OSTI]

    Schuster, Assaf

    Computing at Scale, Technion Computer ... Interdisciplinary Center for Life Sciences & Engineering. COMPUTER SCIENCE, ELECTRICAL ENGINEERING. Partners: IBM HRL, Yahoo!, Microsoft, Google, Mellanox

  20. DiaSim: A Parameterized Simulator for Pervasive Computing Applications

    E-Print Network [OSTI]

    Consel, Charles

    computing systems target a variety of application areas, including home automation, building surveillance

  1. High-Precision Computation and Mathematical Physics

    SciTech Connect (OSTI)

    Bailey, David H.; Borwein, Jonathan M.

    2008-11-03T23:59:59.000Z

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
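    Python's standard-library `decimal` module gives a quick feel for why precision beyond IEEE 64-bit matters. This is a generic illustration, not one of the packages surveyed in the paper.

```python
from decimal import Decimal, getcontext

# Arbitrary-precision decimal arithmetic: one way to go beyond the roughly
# 16 significant digits of IEEE 64-bit binary floating point.
getcontext().prec = 50

double_sum = sum([0.1] * 10)              # binary double: accumulates rounding error
decimal_sum = sum([Decimal("0.1")] * 10)  # exact at 50 decimal digits

print(double_sum == 1.0)   # False
print(decimal_sum == 1)    # True
```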

  2. A new approach and computational algorithm for sensitivity/uncertainty analysis for SED and SAD with applications to beryllium integral experiments

    SciTech Connect (OSTI)

    Song, P.M.; Youssef, M.Z.; Abdou, M.A. (Univ. of California, Los Angeles (United States))

    1993-04-01T23:59:59.000Z

    A new approach for treating the sensitivity and uncertainty in the secondary energy distribution (SED) and the secondary angular distribution (SAD) has been developed, and the existing two-dimensional sensitivity/uncertainty analysis code, FORSS, was expanded to incorporate the new approach. The calculational algorithm was applied to the ⁹Be(n,2n) cross section to study the effect of the current uncertainties in the SED and SAD of neutrons emitted from this reaction on the prediction accuracy of the tritium production rate from ⁶Li (T₆) and ⁷Li (T₇) in an engineering-oriented fusion integral experiment of the US Department of Energy/Japan Atomic Energy Research Institute Collaborative Program on Fusion Neutronics in which beryllium was used as a neutron multiplier. In addition, the analysis was extended to include the uncertainties in the integrated smooth cross sections of beryllium and other materials that constituted the test assembly used in the experiment. This comprehensive two-dimensional cross-section sensitivity/uncertainty analysis aimed at identifying the sources of discrepancies between calculated and measured values for T₆ and T₇.

  3. A Computational Analysis of Smart Timing Decisions for Heating Based on an Air-to-Water Heat pump SMARTER EUROPE E-world energy & water 2014 Proceedings page 1

    E-Print Network [OSTI]

    Treur, Jan

    A Computational Analysis of Smart Timing Decisions for Heating Based on an Air-to-Water Heat Pump. Jan Treur, VU University Amsterdam, Agent Systems ... it may be most efficient to use this energy in these periods. For air-to-water heat pumps a similar issue occurs

  4. anu including biomedical: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Other pathways are also available through the College of Engineering & Computer Science ... Computing (or equivalent) major or...

  5. Engine lubrication circuit including two pumps

    DOE Patents [OSTI]

    Lane, William H.

    2006-10-03T23:59:59.000Z

    A lubrication pump coupled to the engine is sized such that it can supply the engine with a predetermined flow volume as soon as the engine reaches a peak torque engine speed. In engines that operate predominately at speeds above the peak torque engine speed, the lubrication pump is often producing lubrication fluid in excess of the predetermined flow volume that is bypassed back to a lubrication fluid source. This arguably results in wasted power. In order to more efficiently lubricate an engine, a lubrication circuit includes a lubrication pump and a variable delivery pump. The lubrication pump is operably coupled to the engine, and the variable delivery pump is in communication with a pump output controller that is operable to vary a lubrication fluid output from the variable delivery pump as a function of at least one of engine speed and lubrication flow volume or system pressure. Thus, the lubrication pump can be sized to produce the predetermined flow volume at a speed range at which the engine predominately operates while the variable delivery pump can supplement lubrication fluid delivery from the lubrication pump at engine speeds below the predominant engine speed range.
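    The sizing logic can be illustrated with a toy calculation: the fixed pump's output grows with engine speed, and the variable-delivery pump covers any shortfall below the predominant speed range. The linear pump curve and the numbers are invented for illustration.

```python
def supplemental_flow(engine_speed_rpm, demand_lpm, fixed_pump_lpm_per_krpm=2.0):
    """Flow (L/min) the variable-delivery pump must add: the engine-coupled
    pump's output is assumed proportional to speed, so at low speed the
    variable pump makes up the difference. Parameters are hypothetical."""
    fixed = fixed_pump_lpm_per_krpm * engine_speed_rpm / 1000.0
    return max(0.0, demand_lpm - fixed)

print(supplemental_flow(1000, demand_lpm=6.0))  # 4.0 (low speed: large shortfall)
print(supplemental_flow(4000, demand_lpm=6.0))  # 0.0 (fixed pump alone suffices)
```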

  6. Protoplanetary disks including radiative feedback from accreting planets

    E-Print Network [OSTI]

    Montesinos, Matias; Perez, Sebastian; Baruteau, Clement; Casassus, Simon

    2015-01-01T23:59:59.000Z

    While recent observational progress is converging on the detection of compact regions of thermal emission due to embedded protoplanets, further theoretical predictions are needed to understand the response of a protoplanetary disk to the planet formation radiative feedback. This is particularly important to make predictions for the observability of circumplanetary regions. In this work we use 2D hydrodynamical simulations to examine the evolution of a viscous protoplanetary disk in which a luminous Jupiter-mass planet is embedded. We use an energy equation which includes the radiative heating of the planet as an additional mechanism for planet formation feedback. Several models are computed for planet luminosities ranging from $10^{-5}$ to $10^{-3}$ Solar luminosities. We find that the planet radiative feedback enhances the disk's accretion rate at the planet's orbital radius, producing a hotter and more luminous environment around the planet, independently of the prescription used to model the disk's turbulence...

  7. MS Degree Program in Computer Engineering College of Engineering and Computer Science

    E-Print Network [OSTI]

    de Lijser, Peter

    ... of computer-based systems, along with an in-depth knowledge in engineering analysis, design, implementation ... MS Degree Program in Computer Engineering, College of Engineering and Computer Science, California State University, Fullerton. The Computer Engineering Program in the College of Engineering and Computer ...

  8. Computer Science The computer science program offers a foundation in the

    E-Print Network [OSTI]

    Miles, Will

    Computer Science. The computer science program offers a foundation in the fundamentals of computer ... the entertainment industry or government, the computer science major, which emphasizes the theory and application ... in computer science, students develop expertise in systems analysis, software development, networking

  9. Physics, Computer Science and Mathematics Division. Annual report, January 1-December 31, 1980

    SciTech Connect (OSTI)

    Birge, R.W.

    1981-12-01T23:59:59.000Z

    Research in the physics, computer science, and mathematics division is described for the year 1980. While the division's major effort remains in high energy particle physics, there is a continually growing program in computer science and applied mathematics. Experimental programs are reported in e⁺e⁻ annihilation, muon and neutrino reactions at FNAL, search for effects of a right-handed gauge boson, limits on neutrino oscillations from muon-decay neutrinos, strong interaction experiments at FNAL, strong interaction experiments at BNL, particle data center, Barrelet moment analysis of πN scattering data, astrophysics and astronomy, earth sciences, and instrument development and engineering for high energy physics. In theoretical physics research, studies included particle physics and accelerator physics. Computer science and mathematics research included analytical and numerical methods, information analysis techniques, advanced computer concepts, and environmental and epidemiological studies. (GHT)

  10. Proposal for grid computing for nuclear applications

    SciTech Connect (OSTI)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat; Muhamad, Shalina Bt. Sheik; Hassan, Hasni [Malaysian Nuclear Agency, Bangi, 43000 Kajang, Selangor (Malaysia); Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri [Physics Department, University of Malaya, 56003 Kuala Lumpur (Malaysia); and others

    2014-02-12T23:59:59.000Z

    The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application could use resources within the grid to run the application and speed up the computing process.
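    A toy version of the pattern described (splitting a Monte Carlo workload across workers and combining their partial results), here with the standard-library `multiprocessing` pool on one machine rather than a real grid:

```python
import random
from multiprocessing import Pool

def count_hits(args):
    """One worker's share of a Monte Carlo pi estimate: count random points
    in the unit square that fall inside the quarter circle."""
    n, seed = args
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

def estimate_pi(total=400_000, workers=4):
    """Farm chunks out to a pool of workers and combine the counts."""
    chunk = total // workers
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [(chunk, s) for s in range(workers)]))
    return 4.0 * hits / (chunk * workers)

if __name__ == "__main__":
    print(estimate_pi())  # close to 3.1416 for large sample counts
```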

  11. Community computation

    E-Print Network [OSTI]

    Li, Fulu, 1970-

    2009-01-01T23:59:59.000Z

    In this thesis we lay the foundations for a distributed, community-based computing environment to tap the resources of a community to better perform some tasks, either computationally hard or economically prohibitive, or ...

  12. Cloud Computing

    SciTech Connect (OSTI)

    Pete Beckman and Ian Foster

    2009-12-04T23:59:59.000Z

    Chicago Matters: Beyond Burnham (WTTW). Chicago has become a world center of "cloud computing." Argonne experts Pete Beckman and Ian Foster explain what "cloud computing" is and how you probably already use it on a daily basis.

  13. Progress report No. 56, October 1, 1979-September 30, 1980 [Courant Mathematics and Computing Lab., New York Univ.]

    SciTech Connect (OSTI)

    None

    1980-10-01T23:59:59.000Z

    Research during the period is sketched in a series of abstract-length summaries. The forte of the Laboratory lies in the development and analysis of mathematical models and efficient computing methods for the rapid solution of technological problems of interest to DOE, in particular, the detailed calculation on large computers of complicated fluid flows in which reactions and heat conduction may be taking place. The research program of the Laboratory encompasses two broad categories: analytical and numerical methods, which include applied analysis, computational mathematics, and numerical methods for partial differential equations, and advanced computer concepts, which include software engineering, distributed systems, and high-performance systems. Lists of seminars and publications are included. (RWR)

  14. Argonne's Laboratory Computing Resource Center : 2005 annual report.

    SciTech Connect (OSTI)

    Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

    2007-06-30T23:59:59.000Z

    Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. 
Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to develop comprehensive scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has begun developing a 'path forward' plan for additional computing resources.

  15. Taught degrees MSc in Computational Mathematics

    E-Print Network [OSTI]

    Sussex, University of

    Taught degrees: MSc in Computational Mathematics, 1 year full time. Computation has become ever more important, and simulation algorithms allow ever greater detail and realism in computer output. Mathematics at Sussex has a very strong numerical analysis and computational mathematics component, and our faculty introduce you

  16. Light Computing

    E-Print Network [OSTI]

    Gordon Chalmers

    2006-10-13T23:59:59.000Z

    A configuration of light pulses is generated, together with emitters and receptors, that allows computing. The computing is extraordinarily high in number of flops per second, exceeding the capability of a quantum computer for a given size and coherence region. The emitters and receptors are based on the quantum diode, which can emit and detect individual photons with high accuracy.

  17. Computational Bioinformatics

    E-Print Network [OSTI]

    Gross, Louis J.

    Computational Ecology / Bioinformatics. The biological sciences have become increasingly quantitative, with entirely new subdisciplines having developed recently which apply modern computational methods to basic ... Computational Biology, Spring 1998. Location: TBA. Section Number: 59692. Text: Models in Biology: Mathematics

  18. Organizational Analysis in Computer Science

    E-Print Network [OSTI]

    Kling, Rob

    1993-01-01T23:59:59.000Z

    Energy of Canada Limited (AECL), as an advanced medical software and hardware. AECL's engineers tried to patch the

  19. Computational analysis of kidney scintigrams

    SciTech Connect (OSTI)

    Vrincianu, D.; Puscasu, E.; Creanga, D. [University Al. I. Cuza, Faculty of Physics, 11 Blvd. Carol I, 700506, Iasi (Romania); Stefanescu, C. [University of Medicine and Pharmacy Gr. T. Popa, Iasi (Romania)

    2013-11-13T23:59:59.000Z

    The scintigraphic investigation of normal and pathological kidneys was carried out using a specialized gamma-camera device from the hospital's nuclear medicine department. The Technetium-99m isotope, a gamma radiation emitter coupled with vector molecules for kidney tissue, was introduced into the subject's body, its dynamics being recorded as the data source for kidney clearance capacity. Two representative data series were investigated, corresponding to healthy and pathological organs respectively. The semi-quantitative tests applied for the comparison of the two distinct medical situations were: the shape of the probability distribution histogram, the power spectrum, the auto-correlation function, and the Lyapunov exponent. While the power spectrum led to similar results in both cases, significant differences were revealed by means of the probability distribution, Lyapunov exponent, and correlation time, recommending these numerical tests as possible complementary tools in clinical diagnosis.
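    Two of the semi-quantitative tests named above, the autocorrelation function and the derived correlation time, can be sketched in a few lines (an illustrative reconstruction, not the authors' code; the synthetic sinusoids stand in for the scintigram time series):

```python
import math

def autocorrelation(series, lag):
    """Sample autocorrelation of a time series at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag)) / n
    return cov / var

def correlation_time(series, threshold=1.0 / math.e):
    """Smallest lag at which the autocorrelation drops below the threshold."""
    for lag in range(1, len(series)):
        if autocorrelation(series, lag) < threshold:
            return lag
    return len(series)

# A slowly varying signal stays correlated longer than a fast one.
slow = [math.sin(0.05 * t) for t in range(400)]
fast = [math.sin(0.8 * t) for t in range(400)]
print(correlation_time(slow), correlation_time(fast))
```

    A longer correlation time for one class of organs than the other is exactly the kind of difference the abstract reports between healthy and pathological kidneys.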

  20. Strong permanent magnets provide a backbone technology required many products, including computers, electric cars, and

    E-Print Network [OSTI]

    McQuade, D. Tyler

    , electric cars, and wind-powered generators. Currently, the strongest permanent magnets contain rare earth

  1. Computing High Accuracy Power Spectra with Pico

    E-Print Network [OSTI]

    William A. Fendt; Benjamin D. Wandelt

    2007-12-02T23:59:59.000Z

    This paper presents the second release of Pico (Parameters for the Impatient COsmologist). Pico is a general purpose machine learning code which we have applied to computing the CMB power spectra and the WMAP likelihood. For this release, we have made improvements to the algorithm as well as the data sets used to train Pico, leading to a significant improvement in accuracy. For the 9 parameter nonflat case presented here Pico can on average compute the TT, TE and EE spectra to better than 1% of cosmic standard deviation for nearly all $\ell$ values over a large region of parameter space. Performing a cosmological parameter analysis of current CMB and large scale structure data, we show that these power spectra give very accurate 1 and 2 dimensional parameter posteriors. We have extended Pico to allow computation of the tensor power spectrum and the matter transfer function. Pico runs about 1500 times faster than CAMB at the default accuracy and about 250,000 times faster at high accuracy. Training Pico can be done using massively parallel computing resources, including distributed computing projects such as Cosmology@Home. On the homepage for Pico, located at http://cosmos.astro.uiuc.edu/pico, we provide new sets of regression coefficients and make the training code available for public use.

  2. Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces

    SciTech Connect (OSTI)

    Lomov, I; Antoun, T; Vorobiev, O

    2009-12-16T23:59:59.000Z

    Accurate representation of discontinuities such as joints and faults is a key ingredient for high fidelity modeling of shock propagation in geologic media. The following study was done to improve treatment of discontinuities (joints) in the Eulerian hydrocode GEODYN (Lomov and Liu 2005). Lagrangian methods with conforming meshes and explicit inclusion of joints in the geologic model are well suited for such an analysis. Unfortunately, current meshing tools are unable to automatically generate adequate hexahedral meshes for large numbers of irregular polyhedra. Another concern is that joint stiffness in such explicit computations requires significantly reduced time steps, with negative implications for both the efficiency and quality of the numerical solution. An alternative approach is to use non-conforming meshes and embed joint information into regular computational elements. However, once slip displacement on the joints become comparable to the zone size, Lagrangian (even non-conforming) meshes could suffer from tangling and decreased time step problems. The use of non-conforming meshes in an Eulerian solver may alleviate these difficulties and provide a viable numerical approach for modeling the effects of faults on the dynamic response of geologic materials. We studied shock propagation in jointed/faulted media using a Lagrangian and two Eulerian approaches. To investigate the accuracy of this joint treatment the GEODYN calculations have been compared with results from the Lagrangian code GEODYN-L which uses an explicit treatment of joints via common plane contact. We explore two approaches to joint treatment in the code, one for joints with finite thickness and the other for tight joints. In all cases the sliding interfaces are tracked explicitly without homogenization or blending the joint and block response into an average response. In general, rock joints will introduce an increase in normal compliance in addition to a reduction in shear strength. 
In the present work we consider the limiting case of stiff discontinuities that only affect the shear strength of the material.

  3. Internode data communications in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-02-11T23:59:59.000Z

    Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
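    A minimal sketch of the buffering scheme this claim describes (hypothetical class and method names; the real mechanism operates on messaging-unit hardware memory at compute-node boot time, not on Python objects):

```python
class MessagingUnit:
    """Toy model of early-arrival buffering: the messaging unit holds one
    buffer per expected process, so a message that arrives before its
    target process initializes is not lost."""

    def __init__(self, expected_processes):
        # At compute-node boot time: one message buffer per expected process.
        self.buffers = {pid: [] for pid in expected_processes}

    def receive(self, pid, message):
        # A message for a not-yet-initialized process is parked in the
        # messaging unit's own memory.
        self.buffers[pid].append(message)

    def drain(self, pid):
        # On process initialization: copy buffered messages into the
        # process's buffer in main memory and release the unit's copy.
        messages, self.buffers[pid] = self.buffers[pid], []
        return messages

mu = MessagingUnit(expected_processes=[0, 1])
mu.receive(0, "early-hello")      # arrives before process 0 starts
main_memory_buffer = mu.drain(0)  # process 0 initializes and copies it over
print(main_memory_buffer)
```

    The key property is ordering: `receive` may legally run before the target process exists, and `drain` hands over everything accumulated in the interim.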

  4. Internode data communications in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-03T23:59:59.000Z

    Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

  5. Balanced Decomposition for Power System Simulation on Parallel Computers

    E-Print Network [OSTI]

    Catholic University of Chile (Universidad Católica de Chile)

    industry and the associated academic research are requiring complex developments in high performance computing tools, such as parallel computers, efficient compilers, graphic interfaces and algorithms including

  6. Parallel Computing Research at Illinois The UPCRC Agenda

    E-Print Network [OSTI]

    Parallel@Illinois (www.parallel.illinois.edu) is the collective representation of Illinois' current efforts in parallel computing research and education. These include: Universal Parallel Computing Research Center, Blue Waters

  7. User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal-energy storage coupled with district-heating or cooling systems. Volume II. Appendices

    SciTech Connect (OSTI)

    Huber, H.D.; Brown, D.R.; Reilly, R.W.

    1982-04-01T23:59:59.000Z

    A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains all the appendices, including supply and distribution system cost equations and models, descriptions of predefined residential districts, key equations for the cooling degree-hour methodology, a listing of the sample case output, and Appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.
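    The life-cycle cost idea at the core of both AQUASTOR submodels can be illustrated with a generic levelized-cost calculation (a simplified stand-in, not the model's actual cost equations; all parameter names and figures are illustrative):

```python
def levelized_cost(capital, annual_om, annual_energy, years, discount_rate):
    """Levelized (life-cycle) cost of delivered energy: present value of
    all costs divided by present value of all delivered energy."""
    pv_costs = float(capital)  # capital is spent up front, undiscounted
    pv_energy = 0.0
    for t in range(1, years + 1):
        df = 1.0 / (1.0 + discount_rate) ** t
        pv_costs += annual_om * df   # discounted operating & maintenance
        pv_energy += annual_energy * df
    return pv_costs / pv_energy

# e.g. 1000 capital, 100/yr O&M, 50 GJ/yr delivered, 10 years, 5% discount
print(levelized_cost(1000.0, 100.0, 50.0, 10, 0.05))
```

    Evaluating the same formula under different financial assumptions is exactly the kind of parameter sensitivity study the abstract says the techno-economic model was built for.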

  8. Distributed analysis in ATLAS

    E-Print Network [OSTI]

    Dewhurst, Alastair; The ATLAS collaboration

    2015-01-01T23:59:59.000Z

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data for the distributed physics community is a challenging task. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are daily running on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We r...

  9. Guest editorial: Special issue on human computing

    E-Print Network [OSTI]

    Pantic, Maja

    The seven articles in this special issue focus on human computing. Most focus on two challenging issues in human computing, namely, machine analysis of human behavior in group interactions and context-sensitive modeling.

  10. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOE Patents [OSTI]

    Faraj, Ahmad (Rochester, MN)

    2012-04-17T23:59:59.000Z

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
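    The two-level scheme in this claim can be simulated directly (an illustrative sketch for a sum-allreduce; real logical rings exchange messages around the ring rather than summing Python lists):

```python
def allreduce(values):
    """Reference allreduce: every participant ends with the sum of all inputs."""
    total = sum(values)
    return [total] * len(values)

def two_level_allreduce(node_cores):
    """Simulate the claimed scheme for a sum-allreduce. node_cores[n][c] is
    the contribution of core c on compute node n (same core count per node)."""
    n_nodes = len(node_cores)
    cores_per_node = len(node_cores[0])
    # Step 1: one logical ring per core index, taking that core from every
    # node; run a global allreduce within each ring.
    ring_results = []
    for c in range(cores_per_node):
        ring = [node_cores[n][c] for n in range(n_nodes)]
        ring_results.append(allreduce(ring))
    # Step 2: on each node, a local allreduce combines the global results
    # held by that node's cores.
    out = []
    for n in range(n_nodes):
        local = [ring_results[c][n] for c in range(cores_per_node)]
        out.append(allreduce(local))
    return out

# 2 nodes x 2 cores with contributions 1..4: every core should end with 10.
print(two_level_allreduce([[1, 2], [3, 4]]))
```

    Because the rings partition the cores, the sum of the per-ring totals equals the global total, so the cheap local step finishes the allreduce without further internode traffic.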

  11. Combinatorial evaluation of systems including decomposition of a system representation into fundamental cycles

    DOE Patents [OSTI]

    Oliveira, Joseph S. (Richland, WA); Jones-Oliveira, Janet B. (Richland, WA); Bailey, Colin G. (Wellington, NZ); Gull, Dean W. (Seattle, WA)

    2008-07-01T23:59:59.000Z

    One embodiment of the present invention includes a computer operable to represent a physical system with a graphical data structure corresponding to a matroid. The graphical data structure corresponds to a number of vertices and a number of edges that each correspond to two of the vertices. The computer is further operable to define a closed pathway arrangement with the graphical data structure and identify each different one of a number of fundamental cycles by evaluating a different respective one of the edges with a spanning tree representation. The fundamental cycles each include three or more of the vertices.
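    The spanning-tree construction of fundamental cycles that the claim relies on can be sketched generically (a plain graph-theory illustration, not the patented matroid machinery; vertex names are arbitrary):

```python
def fundamental_cycles(vertices, edges):
    """Each non-tree edge (u, v) of a spanning tree closes exactly one
    fundamental cycle: the tree path between u and v plus the edge itself."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Grow a spanning tree with a depth-first search from the first vertex.
    parent = {vertices[0]: None}
    tree = set()
    stack = [vertices[0]]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                tree.add(frozenset((u, w)))
                stack.append(w)

    def path_to_root(v):
        path = [v]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        return path

    cycles = []
    for u, v in edges:
        if frozenset((u, v)) not in tree:  # non-tree edge: one cycle each
            pu, pv = path_to_root(u), path_to_root(v)
            common = set(pu) & set(pv)
            lca = next(x for x in pu if x in common)
            cu = [x for x in pu if x not in common]  # u up to (not incl.) LCA
            cv = [x for x in pv if x not in common]  # v up to (not incl.) LCA
            cycles.append(cu + [lca] + cv[::-1])
    return cycles

square = fundamental_cycles(["A", "B", "C", "D"],
                            [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")])
print(square)
```

    For a connected graph the number of fundamental cycles is |E| - |V| + 1, one per non-tree edge, matching the claim's "different respective one of the edges" evaluation.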

  12. A user's guide to LUGSAN II. A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

    SciTech Connect (OSTI)

    Dunn, W.N. [Sandia National Labs., Albuquerque, NM (United States). Mechanical and Thermal Environments Dept.

    1998-03-01T23:59:59.000Z

    LUG and Sway brace ANalysis (LUGSAN) II is an analysis and database computer program that is designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN II combines the rigid body dynamics code, SWAY85, with a Macintosh Hypercard database to function both as an analysis and archival system. This report describes the LUGSAN II application program, which operates on the Macintosh System (Hypercard 2.2 or later) and includes function descriptions, layout examples, and sample sessions. Although this report is primarily a user's manual, a brief overview of the LUGSAN II computer code is included with suggested resources for programmers.

  13. High-Performance Computing for Advanced Smart Grid Applications

    SciTech Connect (OSTI)

    Huang, Zhenyu; Chen, Yousu

    2012-07-06T23:59:59.000Z

    The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed, and the information infrastructure is fundamentally improved by a large number of smart meters and sensors that produce orders of magnitude more data. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

  14. C-parameter distribution at N3LL' including power corrections

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Hoang, André H.; Kolodrubetz, Daniel W.; Mateu, Vicent; Stewart, Iain W.

    2015-05-01T23:59:59.000Z

    We compute the e+e- C-parameter distribution using the soft-collinear effective theory with a resummation to next-to-next-to-next-to-leading-log prime accuracy of the most singular partonic terms. This includes the known fixed-order QCD results up to O(α_s^3), a numerical determination of the two-loop nonlogarithmic term of the soft function, and all logarithmic terms in the jet and soft functions up to three loops. Our result holds for C in the peak, tail, and far tail regions. Additionally, we treat hadronization effects using a field theoretic nonperturbative soft function, with moments Ω_n. To eliminate an O(Λ_QCD) renormalon ambiguity in the soft function, we switch from the MS-bar to a short distance Rgap scheme to define the leading power correction parameter Ω_1. We show how to simultaneously account for running effects in Ω_1 due to renormalon subtractions and hadron-mass effects, enabling power correction universality between C-parameter and thrust to be tested in our setup. We discuss in detail the impact of resummation and renormalon subtractions on the convergence. In the relevant fit region for α_s(m_Z) and Ω_1, the perturbative uncertainty in our cross section is ≈ 2.5% at Q = m_Z.
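    For context, the C-parameter event shape being resummed here has the standard massless-particle definition (quoted from the event-shape literature, not from this abstract):

```latex
C \;=\; \frac{3}{2}\,
  \frac{\sum_{i,j} |\vec p_i|\,|\vec p_j|\,\sin^2\theta_{ij}}
       {\bigl(\sum_i |\vec p_i|\bigr)^{2}}
```

    where the sums run over all final-state particles and θ_ij is the angle between particles i and j; C vanishes for a perfect two-jet (back-to-back) configuration and grows for more spherical events.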

  15. TRAC-PF1/MOD1: an advanced best-estimate computer program for pressurized water reactor thermal-hydraulic analysis

    SciTech Connect (OSTI)

    Liles, D.R.; Mahaffy, J.H.

    1986-07-01T23:59:59.000Z

    The Los Alamos National Laboratory is developing the Transient Reactor Analysis Code (TRAC) to provide advanced best-estimate predictions of postulated accidents in light-water reactors. The TRAC-PF1/MOD1 program provides this capability for pressurized water reactors and for many thermal-hydraulic test facilities. The code features either a one- or a three-dimensional treatment of the pressure vessel and its associated internals, a two-fluid nonequilibrium hydrodynamics model with a noncondensable gas field and solute tracking, flow-regime-dependent constitutive equation treatment, optional reflood tracking capability for bottom-flood and falling-film quench fronts, and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The stability-enhancing two-step (SETS) numerical algorithm is used in the one-dimensional hydrodynamics and permits this portion of the fluid dynamics to violate the material Courant condition. This technique permits large time steps and, hence, reduced running time for slow transients.

  16. MAS.630 Affective Computing, Spring 2002

    E-Print Network [OSTI]

    Picard, Rosalind

    Explores computing that relates to, arises from, or deliberately influences emotion. Topics include the interaction of emotion with cognition and perception, the role of emotion in human-computer interaction, the communication ...

  17. Numerical solutions of differential equations on FPGA-enhanced computers

    E-Print Network [OSTI]

    He, Chuan

    2009-05-15T23:59:59.000Z

    Their sustained computational performances are compared with pure software programs operating on commodity CPU-based general-purpose computers. Quantitative analysis is performed from a hierarchical set of aspects such as customized/extraordinary computer arithmetic...

  18. Computational University of Leeds

    E-Print Network [OSTI]

    Berzins, M.

    One of the key application areas is reactive fluid flow, including atmospheric chemistry, combustion, and hydraulics. The Unit has an extensive and evolving library of multi-purpose PDE software. University of Leeds Computational PDEs Unit: http://www.comp.leeds.ac.uk/cpde/

  19. National Level Computing at UTK

    E-Print Network [OSTI]

    Tennessee, University of

    NSF TeraGrid XD center at NICS: Remote Data Analysis & Visualization (RDAV), a $10M/3-year award to UT. Scope includes computing hardware; computing infrastructure (space, network, power, cooling); and community organization. Cluster: 4200 processors, 8 TB RAM, 40 Gbit/sec network; 50 TB high-performance storage; remote data ...

  20. Controlling data transfers from an origin compute node to a target compute node

    DOE Patents [OSTI]

    Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

    2011-06-21T23:59:59.000Z

    Methods, apparatus, and products are disclosed for controlling data transfers from an origin compute node to a target compute node that include: receiving, by an application messaging module on the target compute node, an indication of a data transfer from an origin compute node to the target compute node; and administering, by the application messaging module on the target compute node, the data transfer using one or more messaging primitives of a system messaging module in dependence upon the indication.

  1. Broadcasting a message in a parallel computer

    DOE Patents [OSTI]

    Berg, Jeremy E. (Rochester, MN); Faraj, Ahmad A. (Rochester, MN)

    2011-08-02T23:59:59.000Z

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
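    A Hamiltonian path through a 2D mesh of the kind the claim establishes can be built with a simple row-by-row sweep (an illustrative sketch; the patent covers general multi-dimensional point-to-point networks):

```python
def snake_path(rows, cols):
    """Hamiltonian path through a rows x cols mesh: sweep each row,
    alternating direction, so consecutive nodes are always mesh neighbours."""
    path = []
    for r in range(rows):
        cols_order = range(cols) if r % 2 == 0 else reversed(range(cols))
        path.extend((r, c) for c in cols_order)
    return path

def broadcast_along_path(path, message):
    """The logical root is the first node on the path; the message is
    forwarded hop by hop until every node has received it."""
    delivered = {}
    for node in path:
        delivered[node] = message
    return delivered

path = snake_path(3, 4)
print(path)
```

    Since every hop is between mesh neighbours, the broadcast uses only the point-to-point links the network is optimized for, visiting each node exactly once.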

  2. Link failure detection in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Smith, Brian E. (Rochester, MN)

    2010-11-09T23:59:59.000Z

    Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
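    The checkerboard group assignment in this claim guarantees that every link joins nodes from different groups. A small simulation (hypothetical names; broken links are modeled as a set) shows how receivers would identify failed links:

```python
def group_of(coord):
    """Checkerboard assignment: adjacent mesh nodes always land in different
    groups, so every link joins a first-group and a second-group node."""
    return sum(coord) % 2

def probe_links(rows, cols, broken_links):
    """First-group nodes send a test message over each incident link; the
    second-group receiver reports any link on which nothing arrived."""
    failures = []
    for r in range(rows):
        for c in range(cols):
            if group_of((r, c)) != 0:
                continue  # only first-group nodes send test messages
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                link = frozenset({(r, c), (nr, nc)})
                if link in broken_links:
                    failures.append(link)  # receiver notifies the user
    return failures

bad = {frozenset({(0, 0), (0, 1)})}
print(probe_links(2, 2, bad))
```

    The two-coloring means every link is exercised exactly once per direction tested, with no node both sending and receiving in the same phase.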

  3. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 1 quarter 3 progress report.

    SciTech Connect (OSTI)

    Lottes, S.A.; Kulak, R.F.; Bojanowski, C. (Energy Systems)

    2011-08-26T23:59:59.000Z

    The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water loads on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions.
This quarterly report documents technical progress on the project tasks for the period of April through June 2011.

  4. Analysis of a research reactor under anticipated transients without scram events using the RELAP5/MOD3.2 computer program

    E-Print Network [OSTI]

    Hari, Sridhar

    1998-01-01T23:59:59.000Z

    Simulations for two series of anticipated transients without scram (ATWS) events have been carried out for a small, hypothetical research reactor based on the High Flux Australian Reactor HIFAR using the RELAP5/MOD3.2 computer program...

  5. Fernando Flores-Mangas Computer Science Department

    E-Print Network [OSTI]

    Jepson, Allan D.

    Ph.D. Area of study: machine learning, computer vision. Instituto Tecnológico Autónomo de México (ITAM). Research on ultrasonic signal analysis using neural networks. ITAM, Mexico City, Mexico. Research Assistant, CANNES

  6. Scientific Computations section monthly report September 1993

    SciTech Connect (OSTI)

    Buckner, M.R.

    1993-11-01T23:59:59.000Z

    This progress report describes computational work being performed in the areas of thermal analysis, applied statistics, applied physics, and thermal hydraulics.

  7. Sandia National Laboratories: PMTF Computer System

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sandia National Laboratories The PMTF computer system can perform theoretical modeling and analysis, experimental control and data acquisition, and post-test data...

  8. Method and system for knowledge discovery using non-linear statistical analysis and a 1st and 2nd tier computer program

    DOE Patents [OSTI]

    Hively, Lee M. (Philadelphia, TN)

    2011-07-12T23:59:59.000Z

    The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.

  9. School of Electronic, Electrical and Computer Engineering MSc Programmes

    E-Print Network [OSTI]

    Pycock, David

    School of Electronic, Electrical and Computer Engineering MSc Programmes including programmes with Industrial Studies and MRes in Electronic, Electrical and Computer Engineering Smart electronics, smart devices, smart networks... smart people. Dr Tim Collins Electronic, Electrical and Computer Engineering

  10. External-Memory Computational Geometry

    E-Print Network [OSTI]

    Goodrich, Michael T.; Tsay, Jyh-Jong; Vengroff, Darren Erik; Vitter, Jeffrey Scott

    1993-01-01T23:59:59.000Z

    the first known optimal algorithms for a wide range of two-level and hierarchical multilevel memory models, including parallel models. The algorithms are optimal both in terms of I/O cost and internal computation....

  11. Modelling energy efficiency for computation

    E-Print Network [OSTI]

    Reams, Charles

    2012-11-13T23:59:59.000Z

    In the last decade, efficient use of energy has become a topic of global significance, touching almost every area of modern life, including computing. From mobile to desktop to server, energy efficiency concerns are now ubiquitous. However...

  12. Computability and Logic Selmer Bringsjord

    E-Print Network [OSTI]

    Bringsjord, Selmer

    mathematical logic (IML). IML includes basic computability theory (Turing Machines and other simple automata). Texts: Turing's World 3.0 (TW), Barwise & Etchemendy; some papers of mine, to be announced and made available

  13. agricultural systems including: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Yorke, James. Testing the INCA model in a small agricultural catchment in southern Finland. Hydrology and Earth System Sciences, 8(4), 717-728 (2004), EGU. Computer Technologies...

  14. Fourth SIAM conference on mathematical and computational issues in the geosciences: Final program and abstracts

    SciTech Connect (OSTI)

    NONE

    1997-12-31T23:59:59.000Z

    The conference focused on computational and modeling issues in the geosciences. Of the geosciences, problems associated with phenomena occurring in the earth's subsurface were best represented. Topics in this area included petroleum recovery, ground water contamination and remediation, seismic imaging, parameter estimation, upscaling, geostatistical heterogeneity, reservoir and aquifer characterization, optimal well placement and pumping strategies, and geochemistry. Additional sessions were devoted to the atmosphere, surface water and oceans. The central mathematical themes included computational algorithms and numerical analysis, parallel computing, mathematical analysis of partial differential equations, statistical and stochastic methods, optimization, inversion, homogenization and renormalization. The problem areas discussed at this conference are of considerable national importance, given the growing significance of environmental issues, global change, remediation of waste sites, declining domestic energy sources, and an increasing reliance on producing the most out of established oil reservoirs.

  15. : Computer Aided Learning in Computer

    E-Print Network [OSTI]

    Milenković, Aleksandar

    ... sensors, security, medicine, will lead to ``smart'' homes, ``smart'' cars, ``smart'' appliances ... engineering and computer science programs. Dramatic changes in technology, markets, and computer applications ... and at home during self-study. The CAL2 allows students to write and execute their own assembly language

  16. Laboratory Studies of the Reactive Chemistry and Changing CCN Properties of Secondary Organic Aerosol, Including Model Development

    SciTech Connect (OSTI)

    Scot Martin

    2013-01-31T23:59:59.000Z

    The chemical evolution of secondary-organic-aerosol (SOA) particles and how this evolution alters their cloud-nucleating properties were studied. Simplified forms of full Koehler theory were targeted, specifically forms that contain only those aspects essential to describing the laboratory observations, because of the requirement to minimize computational burden for use in integrated climate and chemistry models. The associated data analysis and interpretation have therefore focused on model development in the framework of modified kappa-Koehler theory. Kappa is a single parameter describing effective hygroscopicity, grouping together several separate physicochemical parameters (e.g., molar volume, surface tension, and van't Hoff factor) that otherwise must be tracked and evaluated in an iterative full-Koehler equation in a large-scale model. A major finding of the project was that secondary organic materials produced by the oxidation of a range of biogenic volatile organic compounds for diverse conditions have kappa values bracketed in the range of 0.10 +/- 0.05. In these same experiments, somewhat incongruently there was significant chemical variation in the secondary organic material, especially oxidation state, as was indicated by changes in the particle mass spectra. Taken together, these findings then support the use of kappa as a simplified yet accurate general parameter to represent the CCN activation of secondary organic material in large-scale atmospheric and climate models, thereby greatly reducing the computational burden while simultaneously including the most recent mechanistic findings of laboratory studies.
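The single-parameter kappa approach can be made concrete with a short numerical sketch of kappa-Koehler theory in the standard Petters-Kreidenweis form. This is an illustration, not the project's analysis code; the physical constants, the 100 nm dry diameter, and the brute-force scan used to locate the critical supersaturation are all assumptions of the sketch.

```python
import math

def saturation_ratio(D, D_dry, kappa, T=298.15):
    """kappa-Koehler saturation ratio over a droplet of wet diameter D (m)
    grown on a dry particle of diameter D_dry (m)."""
    sigma_w, M_w, rho_w, R = 0.072, 0.018015, 997.0, 8.314  # SI units
    kelvin = math.exp(4.0 * sigma_w * M_w / (R * T * rho_w * D))   # curvature term
    raoult = (D**3 - D_dry**3) / (D**3 - D_dry**3 * (1.0 - kappa))  # solute term
    return raoult * kelvin

def critical_supersaturation(D_dry, kappa, T=298.15):
    """Critical supersaturation (%) found by scanning wet diameters
    logarithmically from just above D_dry up to 1000 * D_dry."""
    n = 20000
    lo, hi = math.log10(D_dry * 1.001), math.log10(D_dry * 1000.0)
    peak = 0.0
    for i in range(n):
        D = 10.0 ** (lo + (hi - lo) * i / (n - 1))
        peak = max(peak, saturation_ratio(D, D_dry, kappa, T))
    return (peak - 1.0) * 100.0

# A 100 nm particle at the midpoint of the reported SOA range, kappa = 0.10
sc = critical_supersaturation(100e-9, 0.10)
```

For these assumed constants the critical supersaturation comes out to a few tenths of a percent, the order of magnitude typical of CCN-active accumulation-mode particles.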

  17. CROSS-SITE COMPUTATIONS ON THE TERAGRID

    E-Print Network [OSTI]

    2005-08-16T23:59:59.000Z

    COMPUTING IN SCIENCE & ENGINEERING. CROSS-SITE ...... of the Institute's Society of Physics Students, which includes the honor society Sigma Pi Sigma.

  18. Nuclear Arms Control R&D Consortium includes Los Alamos

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A consortium led by the University of Michigan that includes LANL as...

  19. Sandia National Laboratories: Modeling & Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    On September 19, 2013, in Computational Modeling & Simulation, Distribution Grid Integration, Energy, Facilities, Grid Integration, Modeling, Modeling & Analysis, News, News &...

  20. A Roadmap to Success: Hiring, Retaining, and Including People...

    Broader source: Energy.gov (indexed) [DOE]

    A Roadmap to Success: Hiring, Retaining, and Including People with Disabilities, December 5, 2014...

  1. [Article 1 of 7: Motivates and Includes the Consumer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and include the consumer exist. Some examples include advanced two-way metering (AMI), demand response (DR), and distributed energy resources (DER). A common misconception is...

  2. Including Retro-Commissioning in Federal Energy Savings Performance...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Including Retro-Commissioning in Federal Energy Savings Performance Contracts. Document describes...

  3. Investigations into the Nature of Halogen Bonding Including Symmetry...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Investigations into the Nature of Halogen Bonding Including Symmetry Adapted Perturbation Theory Analyses.

  4. Computer gardening

    E-Print Network [OSTI]

    Faught, Robert Townes

    1980-01-01T23:59:59.000Z

    This report documents the initial development of a computer-controlled system for the production of three-dimensional forms. The project involved the design and construction of a carving device which was attached to an ...

  5. Computational biology and high performance computing

    E-Print Network [OSTI]

    Shoichet, Brian

    2011-01-01T23:59:59.000Z

    Computational Biology and High Performance Computing. Presenters: Manfred Zorn, Teresa ... 99-Portland. High performance computing has become one of the

  6. Process-based Management of Cloud Computing Infrastructure

    E-Print Network [OSTI]

    Krause, Rolf

    Process-based Management of Cloud Computing Infrastructure. Background: Cloud computing is an emerging computing capability that provides an abstraction between the computing resource and its ... with minimal management effort. Examples of modern cloud computing solutions include (but are not limited to) ...

  7. COMMIX-1AR/P: A three-dimensional transient single-phase computer program for thermal hydraulic analysis of single and multicomponent systems. Volume 3, Programmer`s guide

    SciTech Connect (OSTI)

    Garner, P.L.; Blomquist, R.N.; Gelbard, E.M.

    1992-09-01T23:59:59.000Z

    The COMMIX-1AR/P computer program is designed for analyzing the steady-state and transient aspects of single-phase fluid flow and heat transfer in three spatial dimensions. This version is an extension of the modeling in COMMIX-1A to include multiple fluids in physically separate regions of the computational domain, modeling descriptions for pumps, radiation heat transfer between surfaces of the solids which are embedded in or surround the fluid, a k-{var_epsilon} model for fluid turbulence, and improved numerical techniques. The porous-medium formulation in COMMIX allows the program to be applied to a wide range of problems involving both simple and complex geometrical arrangements. The internal aspects of the COMMIX-1AR/P program are presented, covering descriptions of subprograms, variables, and files.

  8. COMMIX-1AR/P: A three-dimensional transient single-phase computer program for thermal hydraulic analysis of single and multicomponent systems. Volume 2, User`s guide

    SciTech Connect (OSTI)

    Garner, P.L.; Blomquist, R.N.; Gelbard, E.M.

    1992-09-01T23:59:59.000Z

    The COMMIX-1AR/P computer program is designed for analyzing the steady-state and transient aspects of single-phase fluid flow and heat transfer in three spatial dimensions. This version is an extension of the modeling in COMMIX-1A to include multiple fluids in physically separate regions of the computational domain, modeling descriptions for pumps, radiation heat transfer between surfaces of the solids which are embedded in or surround the fluid, a k-{var_epsilon} model for fluid turbulence, and improved numerical techniques. The porous-medium formulation in COMMIX allows the program to be applied to a wide range of problems involving both simple and complex geometrical arrangements. The input preparation and execution procedures are presented for the COMMIX-1AR/P program and several postprocessor programs which produce graphical displays of the calculated results.

  9. A computer music instrumentarium

    E-Print Network [OSTI]

    Oliver La Rosa, Jaime Eduardo

    2011-01-01T23:59:59.000Z

    Chapter 6. COMPUTERS: To Solder or Not to ...; Music Models: A Computer Music Instrumentarium ...; Interactive Computer Systems ... 101

  10. Rowan University Department of Computer Science

    E-Print Network [OSTI]

    Kay, Jennifer S.

    Rowan University Department of Computer Science Minor Curricular Change Changing Prerequisites for Computer Science Senior Project 1. Details a. Change requested: Add the course Design and Analysis of Algorithms 0707.340 as a prerequisite for Computer Science Senior Project 0704.400 and reflect that change

  11. ON BROWNIAN COMPUTATION JOHN D. NORTON

    E-Print Network [OSTI]

    fluctuations allows. 1. Introduction. Brownian computation was introduced in papers by Bennett [1-2] ... with an assumption used in the Bennett and Landauer analysis. The proposed repair leads to a Brownian computer that will compute extremely slowly. Bennett (Ref. 2, pp. 905-906, 922-23; Ref. 1, pp. 531-32) has urged

  12. Computer Methods and Programs in Biomedicine 65 (2001) 191200 A step-by-step guide to non-linear regression analysis of

    E-Print Network [OSTI]

    Clement, Prabhakar

    2001-01-01T23:59:59.000Z

    of this present study was to introduce a simple, easily understood method for carrying out non-linear regression. Keywords: Microsoft Excel; non-linear regression; least squares; iteration; goodness of fit; curve fit.
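The spreadsheet procedure the paper walks through (an iteratively refined least-squares fit of a user-chosen model) can be sketched in plain Python. The Gauss-Newton loop below is an illustrative stand-in for Excel's Solver iteration, and the exponential-decay model y = a*exp(-b*x) is a hypothetical example, not one taken from the paper.

```python
import math

def fit_exponential(xs, ys, a=1.0, b=0.1, iters=200):
    """Gauss-Newton least-squares fit of y = a*exp(-b*x).
    Returns (a, b, sse), where sse is the minimized sum of squared errors."""
    for _ in range(iters):
        # Accumulate the 2x2 normal equations J^T J * delta = J^T r
        J11 = J12 = J22 = g1 = g2 = 0.0
        for x, y in zip(xs, ys):
            f = a * math.exp(-b * x)
            r = y - f                      # residual at current parameters
            d_a = math.exp(-b * x)         # df/da
            d_b = -x * f                   # df/db
            J11 += d_a * d_a; J12 += d_a * d_b; J22 += d_b * d_b
            g1 += d_a * r;    g2 += d_b * r
        det = J11 * J22 - J12 * J12
        if abs(det) < 1e-30:
            break
        a += ( J22 * g1 - J12 * g2) / det  # solve the 2x2 system by hand
        b += (-J12 * g1 + J11 * g2) / det
    sse = sum((y - a * math.exp(-b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse
```

On noise-free data the loop recovers the generating parameters to machine precision; sse is the goodness-of-fit quantity that each iteration drives downward, exactly the role Solver plays in the spreadsheet version.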

  13. The Third International Conference on Computability and Complexity in Analysis, CCA 2006, took place on November 15, 2006 at the University

    E-Print Network [OSTI]

    Cenzer, Douglas

    Scientific Program Committee: Andrej Bauer (Ljubljana, Slovenia), Arthur Chou (Worcester, USA), Rod Downey, Jeff Remmel (San Diego, USA), Robert Rettinger (Hagen, Germany), Klaus Weihrauch, Chair (Hagen). Ruth Dillhage, Tanja Grubba, Klaus Weihrauch. Preface / Electronic Notes in Theoretical Computer Science

  14. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 24, NO. 6, JUNE 2005 849 Modeling and Analysis of Nonuniform Substrate

    E-Print Network [OSTI]

    heating (self-heating) in global lines is an additional con... A. H. Ajami was with the Department of Electrical Engineering and Systems, University of Southern ... (e-mail: amira@magma-da.com). K. Banerjee is with the Department of Electrical and Computer

  15. Computing at JLab

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    JLab: Accelerator Controls, CAD, CDEV, CODA, Computer Center, High Performance Computing, Scientific Computing. Maintained by webmaster@jlab.org...

  16. Sandia Energy - Computational Science

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  17. Challenges in large scale distributed computing: bioinformatics.

    SciTech Connect (OSTI)

    Disz, T.; Kubal, M.; Olson, R.; Overbeek, R.; Stevens, R.; Mathematics and Computer Science; Univ. of Chicago; The Fellowship for the Interpretation of Genomes (FIG)

    2005-01-01T23:59:59.000Z

    The amount of genomic data available for study is increasing at a rate similar to that of Moore's law. This deluge of data is challenging bioinformaticians to develop newer, faster and better algorithms for the analysis and examination of this data. The growing availability of large-scale computing grids coupled with high-performance networking is challenging computer scientists to develop better, faster methods of exploiting parallelism in these biological computations and deploying them across computing grids. In this paper, we describe two computations that must be run frequently and that require large amounts of computing resources to complete in a reasonable time. The data for these computations are very large, and the sequential computational time can exceed thousands of hours. We show the importance and relevance of these computations and the nature of the data and parallelism, and we show how we are meeting the challenge of efficiently distributing and managing these computations in the SEED project.
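The scatter/gather pattern described above can be sketched with a trivial per-sequence kernel standing in for the real (much heavier) SEED computations; the GC-content function, the worker count, and the use of an in-process thread pool are illustrative assumptions only, since production runs distribute work units across processes and grid nodes.

```python
from concurrent.futures import ThreadPoolExecutor

def gc_fraction(seq):
    """Fraction of G/C bases in a DNA sequence -- a toy stand-in for a
    real per-sequence analysis kernel."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def analyze(sequences, workers=4):
    """Scatter independent per-sequence tasks across a worker pool and
    gather the results in input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(gc_fraction, sequences))
```

The property the paper relies on is that the per-item tasks are independent, so the same map can be retargeted at a process pool or a grid scheduler without changing the kernel.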

  18. Advanced methods for the computation of particle beam transport and the computation of electromagnetic fields and beam-cavity interactions

    SciTech Connect (OSTI)

    Dragt, A.J.; Gluckstern, R.L.

    1990-11-01T23:59:59.000Z

    The University of Maryland Dynamical Systems and Accelerator Theory Group carries out research in two broad areas: the computation of charged particle beam transport using Lie algebraic methods, and advanced methods for the computation of electromagnetic fields and beam-cavity interactions. Important improvements in the state of the art are believed to be possible in both of these areas. In addition, these methods are applied to problems of current interest in accelerator physics, including the theoretical performance of present and proposed high energy machines. The Lie algebraic method of computing and analyzing beam transport handles both linear and nonlinear beam elements. Tests show this method to be superior to the earlier matrix or numerical integration methods. It has wide application to many areas including accelerator physics, intense particle beams, ion microprobes, high resolution electron microscopy, and light optics. With regard to electromagnetic fields and beam-cavity interactions, work is carried out on the theory of beam breakup in single pulses. Work is also done on the analysis of the high-frequency behavior of longitudinal and transverse coupling impedances, including the examination of methods which may be used to measure these impedances. Finally, work is performed on the electromagnetic analysis of coupled cavities and on the coupling of cavities to waveguides.
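At lowest order, the Lie-algebraic transfer maps mentioned above reduce to the familiar linear transfer matrices of beam optics. A minimal sketch of that linear limit (drifts and thin-lens quadrupoles in one transverse plane) is given below; it is an illustration of the matrix method the abstract contrasts with, not the group's codes.

```python
def drift(L):
    """2x2 transfer matrix acting on (x, x') for a field-free drift of length L."""
    return [[1.0, L], [0.0, 1.0]]

def thin_lens(f):
    """2x2 transfer matrix for a thin focusing element of focal length f."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def track(elements, x, xp):
    """Propagate a ray (x, x') through a list of elements applied left to right."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for E in elements:
        M = matmul(E, M)  # later elements multiply from the left
    return M[0][0] * x + M[0][1] * xp, M[1][0] * x + M[1][1] * xp
```

A parallel ray entering a thin lens of focal length f crosses the axis after a drift of length f, which the composed matrix reproduces exactly.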

  19. Computing trends using graphic processor in high energy physics

    E-Print Network [OSTI]

    Mihai Niculescu; Sorin-Ion Zgura

    2011-06-30T23:59:59.000Z

    One of the main challenges in High Energy Physics is fast analysis of large amounts of experimental and simulated data. At LHC-CERN, one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain results quickly depends on high computational power. The main advantage of GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards provide a great deal of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) are being ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends in HEP computing using GPUs.
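The reason GPUs pay off in this setting is that events and Monte Carlo samples are independent, so the workload is data-parallel. The plain-Python estimate of pi below shows the shape of such a computation; in a CUDA/OpenCL port each sample would map to one GPU thread. This is an illustrative sketch, not code from the paper.

```python
import random

def mc_pi_samples(n, seed=0):
    """Monte Carlo estimate of pi from n independent dart throws into the
    unit square. Each sample is independent of every other -- exactly the
    data-parallel pattern that maps one-sample-per-thread on a GPU."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # inside the quarter circle
            hits += 1
    return 4.0 * hits / n
```

The statistical error shrinks as 1/sqrt(n), which is why the raw sample throughput of a GPU matters for such tools.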

  20. The advanced computational testing and simulation toolkit (ACTS)

    SciTech Connect (OSTI)

    Drummond, L.A.; Marques, O.

    2002-05-21T23:59:59.000Z

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. 
The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts, and tool maturity. This paper presents a brief introduction to the functionality available in ACTS.

  1. Computational mechanics

    SciTech Connect (OSTI)

    Goudreau, G.L.

    1993-03-01T23:59:59.000Z

    The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of ongoing Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

  2. Blind Quantum Computation

    E-Print Network [OSTI]

    Pablo Arrighi; Louis Salvail

    2006-06-06T23:59:59.000Z

    We investigate the possibility of "having someone carry out the work of executing a function for you, but without letting him learn anything about your input". Say Alice wants Bob to compute some known function f upon her input x, but wants to prevent Bob from learning anything about x. The situation arises for instance if client Alice has limited computational resources in comparison with mistrusted server Bob, or if x is an inherently mobile piece of data. Could there be a protocol whereby Bob is forced to compute f(x) "blindly", i.e. without observing x? We provide such a blind computation protocol for the class of functions which admit an efficient procedure to generate random input-output pairs, e.g. factorization. The cheat-sensitive security achieved relies only upon quantum theory being true. The security analysis carried out assumes the eavesdropper performs individual attacks. Keywords: Secure Circuit Evaluation, Secure Two-party Computation, Information Hiding, Information gain vs disturbance.
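A purely classical analogue conveys the flavor of blinding via random input-output pairs: Alice can obtain a modular inverse from an untrusted server without revealing her input, because multiplying by a random invertible factor makes the server's view uniformly distributed. This standard blinding trick is only an illustration of the idea, not the quantum protocol of the paper; the modular-inversion task and all names are assumptions of the sketch.

```python
import math
import random

def bob_invert(y, n):
    """Untrusted server: computes modular inverses, but only ever sees the
    blinded value y (requires Python 3.8+ for pow with exponent -1)."""
    return pow(y, -1, n)

def alice_blind_inverse(x, n, seed=0):
    """Alice obtains x^-1 mod n from Bob without revealing x."""
    rng = random.Random(seed)
    while True:  # pick a random blinding factor invertible mod n
        r = rng.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    y = (r * x) % n           # blinded input sent to Bob
    y_inv = bob_invert(y, n)  # Bob computes (r*x)^-1 "blindly"
    return (r * y_inv) % n    # unblind: r * (r*x)^-1 = x^-1 (mod n)
```

Bob sees only r*x mod n, which for uniformly random invertible r is itself uniformly distributed and hence independent of x.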

  3. Computer Science 1 Department of Computer Science

    E-Print Network [OSTI]

    Steinberg, Louis

    Computer Science. Department of Computer Science, School of Arts and Science, www.cs.rutgers.edu. Presented by Prof. Louis Steinberg, www.cs.rutgers.edu/~lou. It's NOT just programming and maintenance: devising computing solutions for cutting-edge problems. What is Computer Science?

  4. Computer Science

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  5. Computing Events

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  6. BP8.00119 Solar Coronal Heating and Magnetic Energy Build-Up in a Tectonics Model1 , M. GILSON, C.S. NG, A. BHATTACHARJEE, Center for Integrated Computation and Analysis of Reconnection and Turbulence and Center for Magnetic Self-

    E-Print Network [OSTI]

    Ng, Chung-Sang

    BP8.00119 Solar Coronal Heating and Magnetic Energy Build-Up in a Tectonics Model1, M. GILSON, C.S. NG, A. BHATTACHARJEE, Center for Integrated Computation and Analysis of Reconnection and Turbulence ... [Astrophys. J. 576, 533 (2002)] and shown, based on analysis and numerical simulations

  7. Biomarkers Core Lab Price List Does NOT Include

    E-Print Network [OSTI]

    Grishok, Alla

    v3102014 Biomarkers Core Lab Price List Does NOT Include Kit Cost PURCHASED by INVESTIGATOR/1/2013 Page 1 of 5

  8. Example Retro-Commissioning Scope of Work to Include Services...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Example Retro-Commissioning Scope of Work to Include Services as Part of an ESPC Investment-Grade Audit...

  9. COMPUTER ENGINEERING EECS Department

    E-Print Network [OSTI]

    COMPUTER ENGINEERING EECS Department The Electrical Engineering and Computer Science (EECS) Department at WSU offers undergraduate degrees in electrical engineering, computer engineering and computer science. The EECS Department offers Master of Science degrees in computer science, electrical engineering

  10. Computer Systems Administrator

    E-Print Network [OSTI]

    Computer Systems Administrator, Fort Collins, CO. POSITION: A Computer Systems Administrator (Non-...) ... activities. RESPONSIBILITIES: The System Administrator will provide Unix/Linux, Windows computer system ... or computer science, and three years computer systems administration experience. DURATION: The work is planned

  11. Computer Science UNDERGRADUATE

    E-Print Network [OSTI]

    Suzuki, Masatsugu

    Computer Science UNDERGRADUATE PROGRAMS The Department of Computer Science provides undergraduate instruction leading to the bachelor's degree in computer science. This program in computer science is accredited by the Computer Science Accreditation Board (CSAB), a specialized accrediting body recognized

  12. COMPUTER SCIENCE INFORMATION TECHNOLOGY

    E-Print Network [OSTI]

    Dunstan, Neil

    COMPUTER SCIENCE and INFORMATION TECHNOLOGY POSTGRADUATE STUDIES 2006 School of Mathematics of Information Systems with Honours Master of Science (Computer Science) Professional Doctorate in Science (Computer Science) PhD (Computer Science) The postgraduate programs in Computer Science and Information

  13. COMPUTER SCIENCE EECS Department

    E-Print Network [OSTI]

    COMPUTER SCIENCE EECS Department The Electrical Engineering and Computer Science (EECS) Department at WSU offers undergraduate degrees in electrical engineering, computer engineering and computer science. The EECS Department offers master of science degrees in computer science, electrical engineering

  14. Computer, Computational, and Statistical Sciences

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  15. Monitoring system including an electronic sensor platform and an interrogation transceiver

    DOE Patents [OSTI]

    Kinzel, Robert L.; Sheets, Larry R.

    2003-09-23T23:59:59.000Z

    A wireless monitoring system suitable for a wide range of remote data collection applications. The system includes at least one Electronic Sensor Platform (ESP), an Interrogator Transceiver (IT) and a general purpose host computer. The ESP functions as a remote data collector for a number of digital and analog sensors located therein. The host computer provides for data logging, testing, demonstration, installation checkout, and troubleshooting of the system. The IT relays signals between one or more ESP's and the host computer. The IT and host computer may be powered by a common power supply, and each ESP is individually powered by a battery. This monitoring system has extremely low power consumption, which allows remote operation of the ESP for long periods; provides authenticated message traffic over a wireless network; utilizes state-of-health and tamper sensors to ensure that the ESP is secure and undamaged; has a robust ESP housing suitable for use in radiation environments; and is low in cost. With one base station (host computer and interrogator transceiver), multiple ESP's may be controlled at a single monitoring site.
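The patent abstract does not specify how the authenticated message traffic is implemented. As a generic illustration of message authentication between a sensor platform and a host, an HMAC-SHA256 construction might look like the following; this is a sketch under that assumption, not the patented scheme.

```python
import hashlib
import hmac

def seal(key, payload):
    """Append an HMAC-SHA256 tag so the receiver can verify that a sensor
    report is authentic and unmodified (generic construction)."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify(key, message):
    """Return the payload if the 32-byte trailing tag checks out, else None."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None
```

Constant-time comparison (`hmac.compare_digest`) avoids leaking tag bytes through timing, which matters even on low-power field hardware.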

  16. Numerical uncertainty in computational engineering and physics

    SciTech Connect (OSTI)

    Hemez, Francois M [Los Alamos National Laboratory

    2009-01-01T23:59:59.000Z

    Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication aims to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty that include experimental variability, parametric uncertainty and modeling assumptions. The concepts of consistency, convergence and truncation error are overviewed to explain the articulation between the exact solution of continuous equations, the solution of modified equations and discrete solutions computed by a code. The current state of the practice of code and solution verification activities is discussed. An example in the discipline of hydrodynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
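The discretization-uncertainty bookkeeping described above can be illustrated with a grid-convergence study: solutions on three systematically refined grids give an observed order of accuracy and a Richardson-style estimate of the error in the finest solution. The trapezoid-rule quadrature below is a hypothetical stand-in for a discrete solver; the technique, not the integrand, is the point.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on [a, b] with n intervals (2nd-order accurate)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def convergence_report(f, a, b, n=32):
    """Run the 'solver' on three grids refined by factors of 2, then return
    (finest solution, observed order p, Richardson error estimate)."""
    coarse, medium, fine = (trapezoid(f, a, b, k) for k in (n, 2 * n, 4 * n))
    p = math.log(abs(medium - coarse) / abs(fine - medium)) / math.log(2.0)
    err_est = (fine - medium) / (2.0 ** p - 1.0)  # estimated error in `fine`
    return fine, p, err_est
```

For a second-order method the observed order p should come out near 2, and fine + err_est is a sharper value than fine itself; when p drifts far from the formal order, the grids are not yet in the asymptotic range and the error estimate should not be trusted.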

  17. Duality and Recycling Computing in Quantum Computers

    E-Print Network [OSTI]

    Gui Lu Long; Yang Liu

    2007-08-15T23:59:59.000Z

    A quantum computer possesses quantum parallelism and offers great computing power over a classical computer \\cite{er1,er2}. As is well known, a moving quantum object passing through a double-slit exhibits particle-wave duality. A quantum computer is static and lacks this duality property. The recently proposed duality computer exploits this particle-wave duality property, and it may offer additional computing power \\cite{r1}. Simply put, a duality computer is a moving quantum computer passing through a double-slit. A duality computer offers the capability to perform separate operations on the sub-waves coming out of the different slits, in the so-called duality parallelism. Here we show that an $n$-dubit duality computer can be modeled by an $(n+1)$-qubit quantum computer. In a duality mode, computing operations are not necessarily unitary. An $n$-qubit quantum computer can be used as an $n$-bit reversible classical computer and is energy efficient. Our result further enables an $(n+1)$-qubit quantum computer to run classical algorithms of an $O(2^n)$-bit classical computer. The duality mode provides a natural link between classical computing and quantum computing. Here we also propose a recycling computing mode in which a quantum computer will continue to compute until the result is obtained. These two modes provide new tools for algorithm design. A search algorithm for the unsorted database search problem is designed.
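    The modeling claim can be illustrated numerically: an ancilla qubit in superposition plays the role of the double-slit, a different unitary acts on the register in each ancilla branch, and recombining the branches yields an effectively non-unitary operation on the register. This is a minimal NumPy sketch of that idea, not the authors' formalism; the choice of $U_0 = X$, $U_1 = Z$ is an arbitrary example:

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard: the "slit"
    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])

    def duality_op(U0, U1, psi):
        """Apply the non-unitary map (U0 + U1)/2 to a 1-qubit register
        psi, using one ancilla qubit as the two slits (2 qubits total)."""
        state = np.kron(np.array([1.0, 0.0]), psi)   # ancilla |0> (x) psi
        state = np.kron(H, I) @ state                # split into two slits
        ctrl = np.zeros((4, 4), dtype=complex)       # branch-dependent ops
        ctrl[:2, :2] = U0                            # ancilla |0>: apply U0
        ctrl[2:, 2:] = U1                            # ancilla |1>: apply U1
        state = ctrl @ state
        state = np.kron(H, I) @ state                # recombine the slits
        return state[:2]                             # ancilla-|0> component

    out = duality_op(X, Z, np.array([1.0, 0.0]))     # effective (X + Z)/2
    print(out, np.linalg.norm(out))                  # norm < 1: non-unitary
    ```

    The sub-normalized output norm is the price of duality parallelism: the "lost" amplitude sits in the discarded ancilla branch.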

  18. Local entropy generation analysis

    SciTech Connect (OSTI)

    Drost, M.K.; White, M.D.

    1991-02-01T23:59:59.000Z

    Second-law analysis techniques have been widely used to evaluate the sources of irreversibility in components and systems of components, but the evaluation of local sources of irreversibility in thermal processes has received little attention. While analytical procedures for evaluating local entropy generation have been developed, applications have been limited to fluid flows with analytical solutions for the velocity and temperature fields. The analysis of local entropy generation can be extended to more complicated flows by including entropy generation calculations in a computational fluid dynamics (CFD) code. The research documented in this report consists of incorporating local entropy generation calculations in an existing CFD code and then using the code to evaluate the distribution of thermodynamic losses in two applications: an impinging jet and a magnetic heat pump. 22 refs., 13 figs., 9 tabs.
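    As a post-processing step of the kind described, the conduction part of the local volumetric entropy generation rate, $S_{gen} = (k/T^2)\,|\nabla T|^2$, can be evaluated directly from a computed temperature field. A minimal sketch on a uniform 2-D grid (the linear toy temperature field is an assumption for illustration; the viscous-dissipation term is omitted):

    ```python
    import numpy as np

    def local_entropy_generation(T, k, dx, dy):
        """Conduction entropy generation rate S_gen = (k/T**2)*|grad T|**2
        on a uniform 2-D grid; rows are y, columns are x."""
        dTdy, dTdx = np.gradient(T, dy, dx)   # spacing per axis (y, x)
        return k * (dTdx**2 + dTdy**2) / T**2

    # Toy field: linear profile T = 300 + 100*x on x in [0, 1].
    nx, ny = 5, 4
    x = np.linspace(0.0, 1.0, nx)
    T = np.tile(300.0 + 100.0 * x, (ny, 1))
    S = local_entropy_generation(T, k=0.6, dx=x[1] - x[0], dy=1.0)
    # Same gradient everywhere, but the 1/T**2 weighting makes the
    # colder end the stronger local source of irreversibility.
    print(S[0, 0] > S[0, -1])  # -> True
    ```

    Mapping `S` over the domain is what localizes the thermodynamic losses that a component-level second-law balance can only report in aggregate.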

  19. Modeling pure methane hydrate dissociation using a numerical simulator from a novel combination of X-ray computed tomography and macroscopic data

    E-Print Network [OSTI]

    Gupta, A.

    2010-01-01T23:59:59.000Z

    Combination of X-ray Computed Tomography and Macroscopic ... dissociation data. X-ray computed tomography (CT) was used ... obtained from X-ray computed tomography analysis combined ...

  20. Clay Templeton, Kenneth R. Fleischmann, and Jordan Boyd-Graber. Simulating Audiences: Automating Analysis of Values, Attitudes, and Sentiment. IEEE International Conference on Social Computing, 2011.

    E-Print Network [OSTI]

    Boyd-Graber, Jordan

    Clay Templeton, Kenneth R. Fleischmann, and Jordan Boyd-Graber. Simulating Audiences: Automating Analysis of Values, Attitudes, and Sentiment. IEEE International Conference on Social Computing, 2011.
    @inproceedings{Templeton:Fleischmann:Boyd-Graber-2011,
      Author = {Clay Templeton and Kenneth R. Fleischmann and Jordan Boyd-Graber},
      Title = {Simulating Audiences: Automating Analysis of Values, Attitudes, and Sentiment},
      Booktitle = {IEEE International Conference on Social Computing},
      Year = {2011}
    }