National Library of Energy BETA

Sample records for large eddy simulation

  1. Large Eddy Simulations: Where

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Eddy Simulations: Where observations and modeling collide, July 18, 2015. Cascade of models: General Circulation Models, Regional Models, Large-Eddy Simulations, Direct Numerical Simulations. Topics include GCM vs. LES, history, theory, and using LES together with observations (testbed LES). General Circulation Models: domain size, the entire Earth; horizontal boundary conditions, none; horizontal grid spacing, 50 km; total number of points, about 400 × 400 × 100

  2. Sandia Energy - Large Eddy Simulation (LES) of Engines

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Eddy Simulation (LES) of Engines. Large Eddy...

  3. Large Eddy Simulation (LES) of Engines

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The combination of high-performance computing (HPC) and the large eddy simulation (LES) technique has significant potential to provide new insights into the dynamics of many types of turbulent combustion processes. The objective of LES development at the CRF is to fully integrate the combined merits of HPC and LES in a manner that provides some of the

  4. Large Eddy Simulation (LES) Applied to Advanced Engine Combustion...

    Broader source: Energy.gov (indexed) [DOE]

    Large Eddy Simulation (LES) Applied to Advanced Engine Combustion Research Large Eddy Simulation (LES) Applied to Low-Temperature and Diesel Engine Combustion Research Vehicle ...

  5. Vehicle Technologies Office Merit Review 2015: Large Eddy Simulation...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large Eddy Simulation (LES) Applied to Advanced Engine Combustion Research Vehicle Technologies Office Merit Review 2015: Large Eddy Simulation (LES) Applied to Advanced Engine ...

  6. Vehicle Technologies Office Merit Review 2015: Large Eddy Simulation (LES)

    Energy Savers [EERE]

    Large Eddy Simulation (LES) Applied to Advanced Engine Combustion Research. Presentation given by Sandia National Laboratories at the 2015 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about large eddy simulation applied to advanced engine

  7. Large-Eddy Simulation for Green Energy and Propulsion Systems...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large-Eddy Simulation for Green Energy and Propulsion Systems PI Name: Umesh Paliath PI Email: paliath@ge.com Institution: General Electric Allocation Program: INCITE Allocation...

  8. Large Eddy Simulations of Combustor Liner Flows | Argonne Leadership...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    zone and turbine, current simulations will use wall-modeled large-eddy simulations (LES) to analyze flow in single and multi-cup combustors. An in-depth study of the detailed...

  9. Large Eddy Simulation (LES) Applied to Low-Temperature and Diesel...

    Broader source: Energy.gov (indexed) [DOE]

    More Documents & Publications Large Eddy Simulation (LES) Applied to LTC/Diesel/Hydrogen Engine Combustion Research Large Eddy Simulation (LES) Applied to Advanced Engine ...

  10. Large Eddy Simulation (LES) Applied to LTC/Diesel/Hydrogen Engine...

    Broader source: Energy.gov (indexed) [DOE]

    More Documents & Publications Large Eddy Simulation (LES) Applied to Low-Temperature and Diesel Engine Combustion Research Large Eddy Simulation (LES) Applied to LTC/Diesel...

  11. Large Eddy Simulation (LES) Applied to LTC/Diesel/Hydrogen Engine...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ace007oefelein2010o.pdf More Documents & Publications Large Eddy Simulation (LES) Applied to Low-Temperature and Diesel Engine Combustion Research Large Eddy Simulation (LES)...

  12. Large-Eddy Simulation of Wind-Plant Aerodynamics: Preprint

    SciTech Connect (OSTI)

    Churchfield, M. J.; Lee, S.; Moriarty, P. J.; Martinez, L. A.; Leonardi, S.; Vijayakumar, G.; Brasseur, J. G.

    2012-01-01

    In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done wind plant large-eddy simulations with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We have used the OpenFOAM CFD toolbox to create our solver.
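
    The rotating actuator line representation mentioned above applies blade-element forces to the LES grid through a smoothing kernel. The sketch below shows the Gaussian projection commonly used by actuator line methods in generic form; the function name, array layout, and spreading width are illustrative assumptions, not this paper's actual OpenFOAM implementation.

      # Minimal sketch of Gaussian body-force projection for an actuator line model
      # (generic form; eps, the inputs, and the names are illustrative assumptions).
      import numpy as np

      def project_actuator_forces(cell_centers, point_positions, point_forces, eps):
          """Spread discrete blade-element forces onto grid cells as a force density."""
          body_force = np.zeros_like(cell_centers)
          norm = 1.0 / (eps**3 * np.pi**1.5)              # 3-D Gaussian normalization
          for xp, fp in zip(point_positions, point_forces):
              r2 = np.sum((cell_centers - xp)**2, axis=1)
              kernel = norm * np.exp(-r2 / eps**2)        # regularization kernel [1/m^3]
              body_force += fp[None, :] * kernel[:, None] # force density [N/m^3] per cell
          return body_force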

  13. Large-eddy simulation of turbulent circular jet flows

    SciTech Connect (OSTI)

    Jones, S. C.; Sotiropoulos, F.; Sale, M. J.

    2002-07-01

    This report presents a numerical method for carrying out large-eddy simulations (LES) of turbulent free shear flows and an application of the method to simulate the flow generated by a nozzle discharging into a stagnant reservoir. The objective of the study was to elucidate the complex features of the instantaneous flow field to help interpret the results of recent biological experiments in which live fish were exposed to the jet shear zone. The fish-jet experiments were conducted at the Pacific Northwest National Laboratory (PNNL) under the auspices of the U.S. Department of Energy's Advanced Hydropower Turbine Systems program. The experiments were designed to establish critical thresholds of shear and turbulence-induced loads to guide the development of innovative, fish-friendly hydropower turbine designs.

  14. Large Eddy Simulation (LES) Applied to LTC/Diesel/Hydrogen Engine

    Energy Savers [EERE]

    Combustion Research. Large Eddy Simulation (LES) Applied to LTC/Diesel/Hydrogen Engine Combustion Research. 2009 DOE Hydrogen Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting, May 18-22, 2009, Washington, D.C. ace_07_oefelein.pdf More Documents & Publications Large Eddy Simulation (LES) Applied to LTC/Diesel/Hydrogen Engine

  15. Mesoscale and Large-Eddy Simulations for Wind Energy

    SciTech Connect (OSTI)

    Marjanovic, N.

    2011-02-22

    Operational wind power forecasting, turbine micrositing, and turbine design require high-resolution simulations of atmospheric flow over complex terrain. The use of both Reynolds-Averaged Navier-Stokes (RANS) and large-eddy simulations (LES) is explored for wind energy applications using the Weather Research and Forecasting (WRF) model. To adequately resolve terrain and turbulence in the atmospheric boundary layer, grid nesting is used to refine the grid from mesoscale to finer scales. This paper examines the performance of the grid nesting configuration, turbulence closures, and resolution (up to as fine as 100 m horizontal spacing) for simulations of synoptically and locally driven wind ramping events at a West Coast North American wind farm. Interestingly, little improvement is found when using higher resolution simulations or better resolved turbulence closures in comparison to observation data available for this particular site. This is true for week-long simulations as well, where finer resolution runs show only small changes in the distribution of wind speeds or turbulence intensities. It appears that the relatively simple topography of this site is adequately resolved by all model grids (even as coarse as 2.7 km) so that all resolutions are able to model the physics at similar accuracy. The accuracy of the results is shown in this paper to be more dependent on the parameterization of land-surface characteristics such as soil moisture than on grid resolution.

  16. Intercomparison of Large-eddy Simulations of Arctic Mixed-phase...

    Office of Scientific and Technical Information (OSTI)

    Intercomparison of Large-eddy Simulations of Arctic Mixed-phase Clouds: Importance of Ice Size Distribution Assumptions Citation Details In-Document Search Title: Intercomparison ...

  17. Large eddy simulation of unsteady lean stratified premixed combustion

    SciTech Connect (OSTI)

    Duwig, C.; Fureby, C.

    2007-10-15

    Premixed turbulent flame-based technologies are rapidly growing in importance, with applications to modern clean combustion devices for both power generation and aeropropulsion. However, the gain in decreasing harmful emissions might be canceled by rising combustion instabilities. Unwanted unsteady flame phenomena that might even destroy the whole device have been widely reported and are subject to intensive studies. In the present paper, we use unsteady numerical tools for simulating an unsteady and well-documented flame. Computations were performed for nonreacting, perfectly premixed and stratified premixed cases using two different numerical codes and different large-eddy-simulation-based flamelet models. Nonreacting simulations are shown to agree well with experimental data, with the LES results capturing the mean features (symmetry breaking) as well as the fluctuation level of the turbulent flow. For reacting cases, the uncertainty induced by the time-averaging technique limited the comparisons. Given an estimate of the uncertainty, the numerical results were found to reproduce well the experimental data in terms both of mean flow field and of fluctuation levels. In addition, it was found that despite relying on different assumptions/simplifications, both numerical tools lead to similar predictions, giving confidence in the results. Moreover, we studied the flame dynamics and particularly the response to a periodic pulsation. We found that above a certain excitation level, the flame dynamics change and become rather insensitive to the excitation/instability amplitude. Conclusions regarding the self-growth of thermoacoustic waves were drawn.

  18. Nesting large-eddy simulations within mesoscale simulations for wind energy applications

    SciTech Connect (OSTI)

    Lundquist, J K; Mirocha, J D; Chow, F K; Kosovic, B; Lundquist, K A

    2008-09-08

    With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES), which resolve individual atmospheric eddies on length scales smaller than turbine blades and account for complex terrain, are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to 'local' sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect flow, suggesting that a mesoscale model provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved the WRF model's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions, and to allow adequate spin-up of turbulence in the LES domain.

  19. Large Eddy Simulation of PBL Stratocumulus: Comparison of Multi...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... assumption is unnecessary using SHDOM, since scattering is calculated by simple (and numerically inexpensive) multiplication in spherical harmonic space. Two cases are simulated. ...

  20. Vehicle Technologies Office Merit Review 2014: Large Eddy Simulation (LES) Applied to Advanced Engine Combustion Research

    Broader source: Energy.gov [DOE]

    Presentation given by Sandia National Laboratories at 2014 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about large eddy...

  1. Mean-state acceleration of cloud-resolving models and large eddy simulations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jones, C. R.; Bretherton, C. S.; Pritchard, M. S.

    2015-10-29

    In this study, large eddy simulations and cloud-resolving models (CRMs) are routinely used to simulate boundary layer and deep convective cloud processes, aid in the development of moist physical parameterization for global models, study cloud-climate feedbacks and cloud-aerosol interaction, and as the heart of superparameterized climate models. These models are computationally demanding, placing practical constraints on their use in these applications, especially for long, climate-relevant simulations. In many situations, the horizontal-mean atmospheric structure evolves slowly compared to the turnover time of the most energetic turbulent eddies. We develop a simple scheme to reduce this time scale separation to accelerate the evolution of the mean state. Using this approach we are able to accelerate the model evolution by a factor of 2–16 or more in idealized stratocumulus, shallow and deep cumulus convection without substantial loss of accuracy in simulating mean cloud statistics and their sensitivity to climate change perturbations. As a culminating test, we apply this technique to accelerate the embedded CRMs in the Superparameterized Community Atmosphere Model by a factor of 2, thereby showing that the method is robust and stable to realistic perturbations across spatial and temporal scales typical in a GCM.

  2. Mean-state acceleration of cloud-resolving models and large eddy simulations

    SciTech Connect (OSTI)

    Jones, C. R.; Bretherton, C. S.; Pritchard, M. S.

    2015-10-29

    In this study, large eddy simulations and cloud-resolving models (CRMs) are routinely used to simulate boundary layer and deep convective cloud processes, aid in the development of moist physical parameterization for global models, study cloud-climate feedbacks and cloud-aerosol interaction, and as the heart of superparameterized climate models. These models are computationally demanding, placing practical constraints on their use in these applications, especially for long, climate-relevant simulations. In many situations, the horizontal-mean atmospheric structure evolves slowly compared to the turnover time of the most energetic turbulent eddies. We develop a simple scheme to reduce this time scale separation to accelerate the evolution of the mean state. Using this approach we are able to accelerate the model evolution by a factor of 2–16 or more in idealized stratocumulus, shallow and deep cumulus convection without substantial loss of accuracy in simulating mean cloud statistics and their sensitivity to climate change perturbations. As a culminating test, we apply this technique to accelerate the embedded CRMs in the Superparameterized Community Atmosphere Model by a factor of 2, thereby showing that the method is robust and stable to realistic perturbations across spatial and temporal scales typical in a GCM.
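
    The acceleration scheme described in the two records above exploits the time-scale separation between the slowly evolving horizontal-mean state and the faster turbulent eddies. The following schematic sketch illustrates the idea; the variable names, the factor of 8, and the update form are illustrative assumptions rather than the authors' exact formulation.

      # Schematic sketch of mean-state acceleration: amplify the horizontal-mean
      # tendency of a prognostic field while leaving the eddy part unchanged.
      import numpy as np

      def accelerated_step(field, tendency, dt, accel=8.0):
          """field, tendency: 3-D arrays ordered (z, y, x); dt: time step in seconds."""
          mean_tend = tendency.mean(axis=(1, 2), keepdims=True)  # horizontal mean
          eddy_tend = tendency - mean_tend                        # eddy perturbation
          # Advance the slowly evolving mean state 'accel' times faster than the eddies.
          return field + dt * (eddy_tend + accel * mean_tend)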

  3. Large-eddy simulations of turbulent flow for grid-to-rod fretting in nuclear reactors

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Bakosi, J.; Christon, M. A.; Lowrie, R. B.; Pritchett-Sheats, L. A.; Nourgaliev, R. R.

    2013-07-12

    The grid-to-rod fretting (GTRF) problem in pressurized water reactors is a flow-induced vibration problem that results in wear and failure of the fuel rods in nuclear assemblies. In order to understand the fluid dynamics of GTRF and to build an archival database of turbulence statistics for various configurations, implicit large-eddy simulations of time-dependent single-phase turbulent flow have been performed in 3 × 3 and 5 × 5 rod bundles with a single grid spacer. To assess the computational mesh and resolution requirements, a method for quantitative assessment of unstructured meshes with no-slip walls is described. The calculations have been carried out using Hydra-TH, a thermal-hydraulics code developed at Los Alamos for the Consortium for Advanced Simulation of Light Water Reactors, a United States Department of Energy Innovation Hub. Hydra-TH uses a second-order implicit incremental projection method to solve the single-phase incompressible Navier-Stokes equations. The simulations explicitly resolve the large scale motions of the turbulent flow field using first principles and rely on a monotonicity-preserving numerical technique to represent the unresolved scales. Each series of simulations for the 3 × 3 and 5 × 5 rod-bundle geometries is an analysis of the flow field statistics combined with a mesh-refinement study and validation with available experimental data. Our primary focus is the time history and statistics of the forces loading the fuel rods. These hydrodynamic forces are believed to be the key player resulting in rod vibration and GTRF wear, one of the leading causes for leaking nuclear fuel which costs power utilities millions of dollars in preventive measures. As a result, we demonstrate that implicit large-eddy simulation of rod-bundle flows is a viable way to calculate the excitation forces for the GTRF problem.
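
    For reference, the solver is described above as a second-order implicit incremental projection method. A generic incremental projection step for the incompressible Navier-Stokes equations looks like the following; this is the textbook form, shown schematically, not necessarily Hydra-TH's exact discretization.

      % Generic incremental pressure-projection step (schematic)
      \frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t}
        = -\left(\mathbf{u}\cdot\nabla\mathbf{u}\right)^{n+1/2}
          - \frac{1}{\rho}\nabla p^{n} + \nu\nabla^{2}\mathbf{u}^{*},
      \qquad
      \nabla^{2}\phi = \frac{\rho}{\Delta t}\,\nabla\cdot\mathbf{u}^{*},
      \qquad
      \mathbf{u}^{n+1} = \mathbf{u}^{*} - \frac{\Delta t}{\rho}\nabla\phi,
      \quad p^{n+1} = p^{n} + \phi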

  4. Large-eddy simulations of surface roughness parameter sensitivity to canopy-structure characteristics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Maurer, K. D.; Bohrer, G.; Kenny, W. T.; Ivanov, V. Y.

    2015-04-30

    Surface roughness parameters, namely the roughness length and displacement height, are an integral input used to model surface fluxes. However, most models assume these parameters to be a fixed property of plant functional type and disregard the governing structural heterogeneity and dynamics. In this study, we use large-eddy simulations to explore, in silico, the effects of canopy-structure characteristics on surface roughness parameters. We performed a virtual experiment to test the sensitivity of resolved surface roughness to four axes of canopy structure: (1) leaf area index, (2) the vertical profile of leaf density, (3) canopy height, and (4) canopy gap fraction. We found roughness parameters to be highly variable, but uncovered positive relationships between displacement height and maximum canopy height, aerodynamic canopy height and maximum canopy height and leaf area index, and eddy-penetration depth and gap fraction. We also found negative relationships between aerodynamic canopy height and gap fraction, as well as between eddy-penetration depth and maximum canopy height and leaf area index. We generalized our model results into a virtual "biometric" parameterization that relates roughness length and displacement height to canopy height, leaf area index, and gap fraction. Using a decade of wind and canopy-structure observations at a site in Michigan, we tested the effectiveness of our model-driven biometric parameterization approach in predicting the friction velocity over heterogeneous and disturbed canopies. We compared the accuracy of these predictions with the friction-velocity predictions obtained from the common simple approximation related to canopy height, the values calculated with large-eddy simulations of the explicit canopy structure as measured by airborne and ground-based lidar, two other parameterization approaches that utilize varying canopy-structure inputs, and the annual and decadal means of the surface roughness parameters at the site from meteorological observations. We found that the classical representation of constant roughness parameters (in space and time) as a fraction of canopy height performed relatively well. Nonetheless, of the approaches we tested, most of the empirical approaches that incorporate seasonal and interannual variation of roughness length and displacement height as a function of the dynamics of canopy structure produced more precise and less biased estimates for friction velocity than models with temporally invariable parameters.
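
    The roughness length and displacement height estimated in this study enter surface-flux models through the logarithmic wind profile. For reference, the standard neutral form (before any stability correction, which the paper's approach may also include) relates the mean wind speed U at height z to the friction velocity:

      % Neutral log law (kappa is the von Karman constant, about 0.4;
      % d is the displacement height, z_0 the roughness length)
      U(z) = \frac{u_*}{\kappa}\,\ln\!\frac{z-d}{z_0}
      \qquad\Longrightarrow\qquad
      u_* = \frac{\kappa\,U(z)}{\ln\!\left[(z-d)/z_0\right]}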

  5. Applications of large-eddy simulation: Synthesis of neutral boundary layer models

    SciTech Connect (OSTI)

    Ohmstede, W.D.

    1987-12-01

    The object of this report is to describe progress made towards the application of large-eddy simulation (LES), in particular, to the study of the neutral boundary layer (NBL). The broad purpose of the study is to provide support to the LES project currently underway at LLNL. The specific purpose of this study is to lay the groundwork for the simulation of the SBL through the establishment and implementation of model criteria for the simulation of the NBL. The idealistic NBL is never observed in the atmosphere and therefore has little practical significance. However, it is of considerable theoretical interest for several reasons. The report discusses the concept of Rossby-number similarity theory as it applies to the NBL. A particular implementation of the concept is described. Then, the results from prior simulations of the NBL are summarized. Model design criteria for two versions of the Brost LES (BLES) model are discussed. The general guidelines for the development of Version 1 of the Brost model (BV1) were to implement the model with a minimum of modifications which would alter the design criteria as established by Brost. Two major modifications of BLES incorporated into BV1 pertain to the initialization/parameterization of the model and the generalization of the boundary conditions at the air/earth interface. 18 refs., 4 figs.

  6. Large eddy simulation of forced ignition of an annular bluff-body burner

    SciTech Connect (OSTI)

    Subramanian, V.; Domingo, P.; Vervisch, L.

    2010-03-15

    The optimization of the ignition process is a crucial issue in the design of many combustion systems. Large eddy simulation (LES) of a conical shaped bluff-body turbulent nonpremixed burner has been performed to study the impact of spark location on ignition success. This burner was experimentally investigated by Ahmed et al. [Combust. Flame 151 (2007) 366-385]. The present work focuses on the case without swirl, for which detailed measurements are available. First, cold-flow measurements of velocities and mixture fractions are compared with their LES counterparts, to assess the prediction capabilities of simulations in terms of flow and turbulent mixing. Time histories of velocities and mixture fractions are recorded at selected spots, to probe the resolved probability density function (pdf) of flow variables, in an attempt to reproduce, from the knowledge of LES-resolved instantaneous flow conditions, the experimentally observed reasons for success or failure of spark ignition. A flammability map is also constructed from the resolved mixture fraction pdf and compared with its experimental counterpart. LES of forced ignition is then performed using flamelet fully detailed tabulated chemistry combined with presumed pdfs. Various scenarios of flame kernel development are analyzed and correlated with typical flow conditions observed in this burner. The correlations between velocities and mixture fraction values at the sparking time and the success or failure of ignition are then further discussed and analyzed.

  7. Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Xiao, Heng; Endo, Satoshi; Wong, May; Skamarock, William C.; Klemp, Joseph B.; Fast, Jerome D.; Gustafson, Jr., William I.; Vogelmann, Andrew; Wang, Hailong; Liu, Yangang; et al

    2015-10-29

    Yamaguchi and Feingold (2012) note that the cloud fields in their large-eddy simulations (LESs) of marine stratocumulus using the Weather Research and Forecasting (WRF) model exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm=θ(1+1.61qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. In conclusion, this modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
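
    The key quantity in the fix described above is the moist potential temperature, which couples water vapor into the pressure calculation. Schematically, following the standard WRF-ARW formulation (gamma = c_p/c_v and alpha_d is the inverse dry-air density; shown here for reference, not as the paper's full derivation):

      \theta_m = \theta\left(1 + \tfrac{R_v}{R_d}\, q_v\right) \approx \theta\,(1 + 1.61\, q_v),
      \qquad
      p = p_0\left(\frac{R_d\,\theta_m}{p_0\,\alpha_d}\right)^{\gamma}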

  8. Investigation of Rossby-number similarity in the neutral boundary layer using large-eddy simulation

    SciTech Connect (OSTI)

    Ohmstede, W.D.; Cederwall, R.T.; Meyers, R.E.

    1988-01-01

    One special case of particular interest, especially to theoreticians, is the steady-state, horizontally homogeneous, autobarotropic planetary boundary layer (PBL), hereafter referred to as the neutral boundary layer (NBL). The NBL is in fact a 'rare' atmospheric phenomenon, generally associated with high-wind situations. Nevertheless, there is a disproportionate interest in this problem because Rossby-number similarity theory provides a sound approach for addressing this issue. Rossby-number similarity theory has rather wide acceptance, but because of the rarity of the 'true' NBL state, there remains an inadequate experimental database for quantifying constants associated with the Rossby-number similarity concept. Although it remains a controversial issue, it has been proposed that large-eddy simulation (LES) is an alternative to physical experimentation for obtaining basic atmospheric 'data'. The objective of the study reported here is to investigate Rossby-number similarity in the NBL using LES. Previous studies have not addressed Rossby-number similarity explicitly, although they made use of it in the interpretation of their results. The intent is to calculate several sets of NBL solutions that are ambiguous relative to their respective Rossby numbers and compare the results for similarity, or the lack of it. 14 refs., 1 fig.
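
    For context, Rossby-number similarity for the neutral, barotropic boundary layer is usually expressed through the geostrophic drag law, whose constants A and B are the poorly constrained quantities the abstract refers to. One common form is shown below; sign conventions for the cross-isobar angle alpha vary by hemisphere and author, so this is schematic only (G is the geostrophic wind speed, f the Coriolis parameter, z_0 the roughness length):

      \frac{\kappa G}{u_*}\cos\alpha = \ln\!\frac{u_*}{f z_0} - A,
      \qquad
      \frac{\kappa G}{u_*}\sin\alpha = B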

  9. Practical application of large eddy simulation to film cooling flow analysis on gas turbine airfoils

    SciTech Connect (OSTI)

    Takata, T.; Takeishi, K.; Kawata, Y.; Tsuge, A.

    1999-07-01

    Large eddy simulation (LES) using body-fitted coordinates is applied to solve film cooling flow on turbine blades. The turbulent model was tuned using the experimental flow field and adiabatic film cooling effectiveness measurements for a single row of holes on a flat plate surface. The results show the interaction between the main stream boundary layer and injected film cooling air generates kidney and horseshoe shaped vortices. Comparison of the temperature distribution between experimental results and present analysis has been conducted. The non-dimensional temperature distribution at x/d = 1 is dome-shaped and quantitatively agrees with experimental results. LES was also applied to solve film cooling on a turbine airfoil. If LES were applied to the whole flow-field domain, the large CPU time would make the solution impractical. LES, using body-fitted coordinates, is applied to solve the non-isotropic film cooling flow near the turbine blade. The cascade flow domain, with a pitch equal to one film cooling hole spacing, is solved using a κ-ε model. By using such a hybrid numerical method, CPU time is reduced and numerical accuracy is ensured. The analytical results show the interaction between the flow blowing through film cooling holes and mainstream on the suction and pressure surfaces of the turbine airfoil. They also show the fundamental structure of the film cooling air flow is governed by arch internal secondary flow and horseshoe vortices which have a similar structure to film cooling air flow blowing through a cooling hole on a flat plate. In the flow field, the effects of turbulent structure on curvature (relaminarization) and flow pattern, involving the interaction between main flow and the cooling jet, are clearly shown. Film cooling effectiveness on the blade surface is predicted from the results of the thermal field calculation and is compared with the test result.
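
    The adiabatic film cooling effectiveness compared against experiment above is conventionally defined from the mainstream temperature, the adiabatic wall temperature, and the coolant temperature at injection; this is the standard definition, and the paper's exact nondimensionalization may differ slightly:

      % Adiabatic film-cooling effectiveness (standard definition)
      \eta = \frac{T_\infty - T_{aw}}{T_\infty - T_c}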

  10. Large eddy simulations of surface roughness parameter sensitivity to canopy-structure characteristics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Maurer, K. D.; Bohrer, G.; Ivanov, V. Y.

    2014-11-27

    Surface roughness parameters are at the core of every model representation of the coupling and interactions between land-surface and atmosphere, and are used in every model of surface fluxes. However, most models assume these parameters to be a fixed property of plant functional type and do not vary them in response to spatial or temporal changes to canopy structure. In part, this is due to the difficulty of reducing the complexity of canopy structure and its spatiotemporal dynamic and heterogeneity to less than a handful of parameters describing its effects on atmosphere–surface interactions. In this study we use large-eddy simulations to explore, in silico, the effects of canopy structure characteristics on surface roughness parameters. We performed a virtual experiment to test the sensitivity of resolved surface roughness to four axes of canopy structure: (1) leaf area index, (2) the vertical profile of leaf density, (3) canopy height, and (4) canopy gap fraction. We found roughness parameters to be highly variable, but were able to find positive relationships between displacement height and maximum canopy height, aerodynamic canopy height and maximum canopy height and leaf area index, and eddy-penetration depth and gap fraction. We also found negative relationships between aerodynamic canopy height and gap fraction, and between eddy-penetration depth and maximum canopy height and leaf area index. Using a decade of wind and canopy structure observations at a site in Michigan, we tested the effectiveness of our model-resolved parameters in predicting the frictional velocity over heterogeneous and disturbed canopies. We compared it with three other semi-empirical models and with a decade of meteorological observations. We found that parameterizations with fixed representations of roughness performed relatively well. Nonetheless, some empirical approaches that incorporate seasonal and inter-annual changes to the canopy structure performed even better than models with temporally fixed parameters.

  11. Large Eddy Simulation of Wind Turbine Wakes. Detailed Comparisons of Two Codes Focusing on Effects of Numerics and Subgrid Modeling

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-18

    In this work we report on results from a detailed comparative numerical study from two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  12. Large-Eddy Simulation Study of Wake Propagation and Power Production in an Array of Tidal-Current Turbines: Preprint

    SciTech Connect (OSTI)

    Churchfield, M. J.; Li, Y.; Moriarty, P. J.

    2012-07-01

    This paper presents our initial work in performing large-eddy simulations of tidal turbine array flows. First, a horizontally-periodic precursor simulation is performed to create turbulent flow data. Then that data is used as inflow into a tidal turbine array two rows deep and infinitely wide. The turbines are modeled using rotating actuator lines, and the finite-volume method is used to solve the governing equations. In studying the wakes created by the turbines, we observed that the vertical shear of the inflow combined with wake rotation causes lateral wake asymmetry. Also, various turbine configurations are simulated, and the total power production relative to isolated turbines is examined. Staggering consecutive rows of turbines in the simulated configurations allows the greatest efficiency using the least downstream row spacing. Counter-rotating consecutive downstream turbines in a non-staggered array shows a small benefit. This work has identified areas for improvement, such as the use of a larger precursor domain to better capture elongated turbulent structures, the inclusion of salinity and temperature equations to account for density stratification and its effect on turbulence, improved wall shear stress modelling, and the examination of more array configurations.

  13. Using Mesoscale Weather Model Output as Boundary Conditions for Atmospheric Large-Eddy Simulations and Wind-Plant Aerodynamic Simulations (Presentation)

    SciTech Connect (OSTI)

    Churchfield, M. J.; Michalakes, J.; Vanderwende, B.; Lee, S.; Sprague, M. A.; Lundquist, J. K.; Moriarty, P. J.

    2013-10-01

    Wind plant aerodynamics are directly affected by the microscale weather, which is directly influenced by the mesoscale weather. Microscale weather refers to processes that occur within the atmospheric boundary layer with the largest scales being a few hundred meters to a few kilometers depending on the atmospheric stability of the boundary layer. Mesoscale weather refers to large weather patterns, such as weather fronts, with the largest scales being hundreds of kilometers wide. Sometimes microscale simulations that capture mesoscale-driven variations (changes in wind speed and direction over time or across the spatial extent of a wind plant) are important in wind plant analysis. In this paper, we present our preliminary work in coupling a mesoscale weather model with a microscale atmospheric large-eddy simulation model. The coupling is one-way beginning with the weather model and ending with a computational fluid dynamics solver using the weather model in coarse large-eddy simulation mode as an intermediary. We simulate one hour of daytime moderately convective microscale development driven by the mesoscale data, which are applied as initial and boundary conditions to the microscale domain, at a site in Iowa. We analyze the time and distance necessary for the smallest resolvable microscales to develop.

  14. Large Eddy Simulation of a Wind Turbine Airfoil at High Freestream-Flow Angle

    SciTech Connect (OSTI)

    2015-04-13

    A simulation of the airflow over a section of a wind turbine blade, run on the supercomputer Mira at the Argonne Leadership Computing Facility. Simulations like these help identify ways to make turbine blades more efficient.

  15. Visualization and analysis of eddies in a global ocean simulation

    SciTech Connect (OSTI)

    Williams, Sean J; Hecht, Matthew W; Petersen, Mark; Strelitz, Richard; Maltrud, Mathew E; Ahrens, James P; Hlawitschka, Mario; Hamann, Bernd

    2010-10-15

    Eddies at a scale of approximately one hundred kilometers have been shown to be surprisingly important to understanding large-scale transport of heat and nutrients in the ocean. Due to difficulties in observing the ocean directly, the behavior of eddies below the surface is not very well understood. To fill this gap, we employ a high-resolution simulation of the ocean developed at Los Alamos National Laboratory. Using large-scale parallel visualization and analysis tools, we produce three-dimensional images of ocean eddies, and also generate a census of eddy distribution and shape averaged over multiple simulation time steps, resulting in a world map of eddy characteristics. As expected from observational studies, our census reveals a higher concentration of eddies at the mid-latitudes than the equator. Our analysis further shows that mid-latitude eddies are thicker, within a range of 1000-2000m, while equatorial eddies are less than 100m thick.

  16. RACORO continental boundary layer cloud investigations. 2. Large-eddy

    Office of Scientific and Technical Information (OSTI)

    simulations of cumulus clouds and evaluation with in-situ and ground-based observations (Journal Article). This content will become publicly available on June 19, 2016.

  17. Intercomparison of Large-eddy Simulations of Arctic Mixed-phase Clouds: Importance of Ice Size Distribution Assumptions

    SciTech Connect (OSTI)

    Ovchinnikov, Mikhail; Ackerman, Andrew; Avramov, Alex; Cheng, Anning; Fan, Jiwen; Fridlind, Ann; Ghan, Steven J.; Harrington, Jerry Y.; Hoose, Corinna; Korolev, Alexei; McFarquhar, Greg; Morrison, H.; Paukert, Marco; Savre, Julien; Shipway, Ben; Shupe, Matthew D.; Solomon, Amy; Sulia, Kara

    2014-03-14

    Large-eddy simulations of mixed-phase Arctic clouds by 11 different models are analyzed with the goal of improving understanding and model representation of processes controlling the evolution of these clouds. In a case based on observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC), it is found that ice number concentration, Ni, exerts significant influence on the cloud structure. Increasing Ni leads to a substantial reduction in liquid water path (LWP) and potential cloud dissipation, in agreement with earlier studies. By comparing simulations with the same microphysics coupled to different dynamical cores as well as the same dynamics coupled to different microphysics schemes, it is found that the ice water path (IWP) is mainly controlled by ice microphysics, while the inter-model differences in LWP are largely driven by physics and numerics of the dynamical cores. In contrast to previous intercomparisons, all models here use the same ice particle properties (i.e., mass-size, mass-fall speed, and mass-capacitance relationships) and a common radiation parameterization. The constrained setup exposes the importance of ice particle size distributions (PSD) in influencing cloud evolution. A clear separation in LWP and IWP predicted by models with bin and bulk microphysical treatments is documented and attributed primarily to the assumed shape of ice PSD used in bulk schemes. Compared to the bin schemes that explicitly predict the PSD, schemes assuming exponential ice PSD underestimate ice growth by vapor deposition and overestimate mass-weighted fall speed leading to an underprediction of IWP by a factor of two in the considered case.
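
    The exponential ice particle size distribution singled out above takes the form N(D) = N0 exp(-lambda D). Given a prescribed ice number concentration and ice water content, its two parameters follow from the distribution moments, as in this illustrative sketch; the mass-size coefficients and any input values are placeholders, not the ISDAC case values.

      # Sketch: fit an exponential ice PSD N(D) = N0*exp(-lam*D) to a prescribed
      # number concentration and ice water content via a mass-size law m = a*D**b.
      from math import gamma

      def exponential_psd_params(n_ice, iwc, a=0.0121, b=1.9):
          """n_ice: number concentration [m^-3]; iwc: ice water content [kg m^-3]."""
          # iwc = a*N0*Gamma(b+1)/lam**(b+1)  and  n_ice = N0/lam
          lam = (a * n_ice * gamma(b + 1.0) / iwc) ** (1.0 / b)  # slope [m^-1]
          n0 = n_ice * lam                                        # intercept [m^-4]
          return n0, lam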

  18. Development of Quality Assessment Techniques for Large Eddy Simulation of Propulsion and Power Systems in Complex Geometries

    SciTech Connect (OSTI)

    Lacaze, Guilhem; Oefelein, Joseph

    2015-03-01

    Large-eddy-simulation (LES) is quickly becoming a method of choice for studying complex thermo-physics in a wide range of propulsion and power systems. It provides a means to study coupled turbulent combustion and flow processes in parameter spaces that are unattainable using direct-numerical-simulation (DNS), with a degree of fidelity that can be far more accurate than conventional engineering methods such as the Reynolds-averaged Navier-Stokes (RANS) approximation. However, development of predictive LES is complicated by the complex interdependence of different types of errors coming from numerical methods, algorithms, models and boundary conditions. On the other hand, control of accuracy has become a critical aspect in the development of predictive LES for design. The objective of this project is to create a framework of metrics aimed at quantifying the quality and accuracy of state-of-the-art LES in a manner that addresses the myriad of competing interdependencies. In a typical simulation cycle, only 20% of the computational time is actually usable. The rest is spent in case preparation, assessment, and validation, because of the lack of guidelines. This work increases confidence in the accuracy of a given solution while minimizing the time obtaining the solution. The approach facilitates control of the tradeoffs between cost, accuracy, and uncertainties as a function of fidelity and methods employed. The analysis is coupled with advanced Uncertainty Quantification techniques employed to estimate confidence in model predictions and calibrate model parameters. This work has provided positive consequences on the accuracy of the results delivered by LES and will soon have a broad impact on research supported both by the DOE and elsewhere.
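
    One simple, widely cited single-point indicator of LES quality is the fraction of turbulent kinetic energy carried by the resolved scales (Pope's criterion suggests resolving roughly 80%). The sketch below computes it from pointwise statistics; it is offered as a generic example of the kind of metric the project addresses, not one of the project's specific metrics.

      # Resolved-TKE fraction at a point: k_res / (k_res + k_sgs).
      import numpy as np

      def resolved_tke_fraction(u_res, v_res, w_res, k_sgs):
          """u_res, v_res, w_res: time series of resolved velocity components;
          k_sgs: mean subgrid-scale kinetic energy at the same point [m^2/s^2]."""
          k_res = 0.5 * (np.var(u_res) + np.var(v_res) + np.var(w_res))
          return k_res / (k_res + k_sgs)  # values above ~0.8 are usually deemed adequate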

  19. Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows

    SciTech Connect (OSTI)

    Templeton, Jeremy Alan; Blaylock, Myra L.; Domino, Stefan P.; Hewson, John C.; Kumar, Pritvi Raj; Ling, Julia; Najm, Habib N.; Ruiz, Anthony; Safta, Cosmin; Sargsyan, Khachik; Stewart, Alessia; Wagner, Gregory

    2015-09-01

    The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost versus accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.

  20. Comparison of the Dynamic Wake Meandering Model, Large-Eddy Simulation, and Field Data at the Egmond aan Zee Offshore Wind Plant: Preprint

    SciTech Connect (OSTI)

    Churchfield, M. J.; Moriarty, P. J.; Hao, Y.; Lackner, M. A.; Barthelmie, R.; Lundquist, J.; Oxley, G. S.

    2014-12-01

    The focus of this work is the comparison of the dynamic wake meandering model and large-eddy simulation with field data from the Egmond aan Zee offshore wind plant composed of 36 3-MW turbines. The field data includes meteorological mast measurements, SCADA information from all turbines, and strain-gauge data from two turbines. The dynamic wake meandering model and large-eddy simulation are means of computing unsteady wind plant aerodynamics, including the important unsteady meandering of wakes as they convect downstream and interact with other turbines and wakes. Both of these models are coupled to a turbine model such that power and mechanical loads of each turbine in the wind plant are computed. We are interested in how accurately different types of waking (e.g., direct versus partial waking), can be modeled, and how background turbulence level affects these loads. We show that both the dynamic wake meandering model and large-eddy simulation appear to underpredict power and overpredict fatigue loads because of wake effects, but it is unclear that they are really in error. This discrepancy may be caused by wind-direction uncertainty in the field data, which tends to make wake effects appear less pronounced.

  1. Implementation and assessment of turbine wake models in the Weather Research and Forecasting model for both mesoscale and large-eddy simulation

    SciTech Connect (OSTI)

    Singer, M; Mirocha, J; Lundquist, J; Cleve, J

    2010-03-03

    Flow dynamics in large wind projects are influenced by the turbines located within. The turbine wakes, regions characterized by lower wind speeds and higher levels of turbulence than the surrounding free stream flow, can extend several rotor diameters downstream, and may meander and widen with increasing distance from the turbine. Turbine wakes can also reduce the power generated by downstream turbines and accelerate fatigue and damage to turbine components. An improved understanding of wake formation and transport within wind parks is essential for maximizing power output and increasing turbine lifespan. Moreover, the influence of wakes from large wind projects on neighboring wind farms, agricultural activities, and local climate are all areas of concern that can likewise be addressed by wake modeling. This work describes the formulation and application of an actuator disk model for studying flow dynamics of both individual turbines and arrays of turbines within wind projects. The actuator disk model is implemented in the Weather Research and Forecasting (WRF) model, which is an open-source atmospheric simulation code applicable to a wide range of scales, from mesoscale to large-eddy simulation. Preliminary results demonstrate the applicability of the actuator disk model within WRF to a moderately high-resolution large-eddy simulation study of a small array of turbines.
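
    An actuator disk model of the kind described above replaces the rotor with a momentum sink. In its simplest one-dimensional form the total thrust follows from a thrust coefficient and the free-stream speed and is then distributed over the grid cells occupied by the disk; the sketch below is a generic illustration with placeholder values, not the WRF implementation itself.

      # Generic actuator-disk thrust estimate and uniform distribution over disk cells.
      import numpy as np

      def rotor_thrust(u_inf, rotor_diameter, rho=1.225, c_t=0.8):
          """Total rotor thrust [N] from 1-D momentum theory with thrust coefficient c_t."""
          area = np.pi * (rotor_diameter / 2.0) ** 2
          return 0.5 * rho * c_t * area * u_inf ** 2

      def disk_body_force(total_thrust, cell_volumes_in_disk):
          """Uniform body-force density [N/m^3] applied to each cell inside the disk."""
          return total_thrust / np.sum(cell_volumes_in_disk) * np.ones_like(cell_volumes_in_disk)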

  2. Coupling a Mesoscale Numerical Weather Prediction Model with Large-Eddy Simulation for Realistic Wind Plant Aerodynamics Simulations (Poster)

    SciTech Connect (OSTI)

    Draxl, C.; Churchfield, M.; Mirocha, J.; Lee, S.; Lundquist, J.; Michalakes, J.; Moriarty, P.; Purkayastha, A.; Sprague, M.; Vanderwende, B.

    2014-06-01

    Wind plant aerodynamics are influenced by a combination of microscale and mesoscale phenomena. Incorporating mesoscale atmospheric forcing (e.g., diurnal cycles and frontal passages) into wind plant simulations can lead to a more accurate representation of microscale flows, aerodynamics, and wind turbine/plant performance. Our goal is to couple a numerical weather prediction model that can represent mesoscale flow [specifically the Weather Research and Forecasting model] with a microscale LES model (OpenFOAM) that can predict microscale turbulence and wake losses.

  3. Validation/Uncertainty Quantification for Large Eddy Simulations of the heat flux in the Tangentially Fired Oxy-Coal Alstom Boiler Simulation Facility

    SciTech Connect (OSTI)

    Smith, P.J.; Eddings, E.G.; Ring, T.; Thornock, J.; Draper, T.; Isaac, B.; Rezeai, D.; Toth, P.; Wu, Y.; Kelly, K.

    2014-08-01

    The objective of this task is to produce predictive capability with quantified uncertainty bounds for the heat flux in commercial-scale, tangentially fired, oxy-coal boilers. Validation data came from the Alstom Boiler Simulation Facility (BSF) for tangentially fired, oxy-coal operation. This task brings together experimental data collected under Alstom's DOE project for measuring oxy-firing performance parameters in the BSF with this University of Utah project for large eddy simulation (LES) and validation/uncertainty quantification (V/UQ). The Utah work includes V/UQ with measurements in the single-burner facility where advanced strategies for O2 injection can be more easily controlled and data more easily obtained. Highlights of the work include:
    • Simulations of Alstom's 15 megawatt (MW) BSF, exploring the uncertainty in thermal boundary conditions. A V/UQ analysis showed consistency between experimental results and simulation results, identifying uncertainty bounds on the quantities of interest for this system (Subtask 9.1).
    • A simulation study of the University of Utah's oxy-fuel combustor (OFC) focused on heat flux (Subtask 9.2). A V/UQ analysis was used to show consistency between experimental and simulation results.
    • Measurement of heat flux and temperature with new optical diagnostic techniques and comparison with conventional measurements (Subtask 9.3). Various optical diagnostics systems were created to provide experimental data to the simulation team. The final configuration utilized a mid-wave infrared (MWIR) camera to measure heat flux and temperature, which was synchronized with a high-speed, visible camera to utilize two-color pyrometry to measure temperature and soot concentration.
    • Collection of heat flux and temperature measurements in the University of Utah's OFC for use in subtasks 9.2 and 9.3 (Subtask 9.4). Several replicates were carried out to better assess the experimental error. Experiments were specifically designed for the generation of high-fidelity data from a turbulent oxy-coal flame for the validation of oxy-coal simulation models. Experiments were also conducted on the OFC to determine heat flux profiles using advanced strategies for O2 injection. This is important when considering advanced O2 injection in retrofit configurations.
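
    The two-color pyrometry diagnostic mentioned in the optical-measurements subtask infers temperature from the ratio of spectral intensities at two wavelengths. Under Wien's approximation and a gray-body assumption (equal emissivities at the two wavelengths), one schematic form of the working relation is:

      % Two-color (ratio) pyrometry under Wien's approximation, gray-body assumption
      \frac{I_{\lambda_1}}{I_{\lambda_2}}
        = \left(\frac{\lambda_2}{\lambda_1}\right)^{5}
          \exp\!\left[-\frac{C_2}{T}\left(\frac{1}{\lambda_1}-\frac{1}{\lambda_2}\right)\right]
      \quad\Longrightarrow\quad
      T = \frac{C_2\left(1/\lambda_1 - 1/\lambda_2\right)}
               {5\ln(\lambda_2/\lambda_1) - \ln(I_{\lambda_1}/I_{\lambda_2})}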

  4. RACORO continental boundary layer cloud investigations. 2. Large-eddy simulations of cumulus clouds and evaluation with in-situ and ground-based observations

    SciTech Connect (OSTI)

    Endo, Satoshi; Fridlind, Ann M.; Lin, Wuyin; Vogelmann, Andrew M.; Toto, Tami; Ackerman, Andrew S.; McFarquhar, Greg M.; Jackson, Robert C.; Jonsson, Haflidi H.; Liu, Yangang

    2015-06-19

    A 60-hour case study of continental boundary layer cumulus clouds is examined using two large-eddy simulation (LES) models. The case is based on observations obtained during the RACORO Campaign (Routine Atmospheric Radiation Measurement [ARM] Aerial Facility [AAF] Clouds with Low Optical Water Depths [CLOWD] Optical Radiative Observations) at the ARM Climate Research Facility's Southern Great Plains site. The LES models are driven by continuous large-scale and surface forcings, and are constrained by multi-modal and temporally varying aerosol number size distribution profiles derived from aircraft observations. We compare simulated cloud macrophysical and microphysical properties with ground-based remote sensing and aircraft observations. The LES simulations capture the observed transitions of the evolving cumulus-topped boundary layers during the three daytime periods, and generally reproduce variations of droplet number concentration with liquid water content (LWC), corresponding to the gradient between the cloud centers and cloud edges at given heights. The observed LWC values fall within the range of simulated values; the observed droplet number concentrations are commonly higher than simulated, but differences remain on par with potential estimation errors in the aircraft measurements. Sensitivity studies examine the influences of bin microphysics versus bulk microphysics, aerosol advection, supersaturation treatment, and aerosol hygroscopicity. Simulated macrophysical cloud properties are found to be insensitive in this non-precipitating case, but microphysical properties are especially sensitive to bulk microphysics supersaturation treatment and aerosol hygroscopicity.

  5. RACORO continental boundary layer cloud investigations. 2. Large-eddy simulations of cumulus clouds and evaluation with in-situ and ground-based observations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Endo, Satoshi; Fridlind, Ann M.; Lin, Wuyin; Vogelmann, Andrew M.; Toto, Tami; Ackerman, Andrew S.; McFarquhar, Greg M.; Jackson, Robert C.; Jonsson, Haflidi H.; Liu, Yangang

    2015-06-19

    A 60-hour case study of continental boundary layer cumulus clouds is examined using two large-eddy simulation (LES) models. The case is based on observations obtained during the RACORO Campaign (Routine Atmospheric Radiation Measurement [ARM] Aerial Facility [AAF] Clouds with Low Optical Water Depths [CLOWD] Optical Radiative Observations) at the ARM Climate Research Facility's Southern Great Plains site. The LES models are driven by continuous large-scale and surface forcings, and are constrained by multi-modal and temporally varying aerosol number size distribution profiles derived from aircraft observations. We compare simulated cloud macrophysical and microphysical properties with ground-based remote sensing and aircraft observations. The LES simulations capture the observed transitions of the evolving cumulus-topped boundary layers during the three daytime periods, and generally reproduce variations of droplet number concentration with liquid water content (LWC), corresponding to the gradient between the cloud centers and cloud edges at given heights. The observed LWC values fall within the range of simulated values; the observed droplet number concentrations are commonly higher than simulated, but differences remain on par with potential estimation errors in the aircraft measurements. Sensitivity studies examine the influences of bin microphysics versus bulk microphysics, aerosol advection, supersaturation treatment, and aerosol hygroscopicity. Simulated macrophysical cloud properties are found to be insensitive in this non-precipitating case, but microphysical properties are especially sensitive to bulk microphysics supersaturation treatment and aerosol hygroscopicity.

  6. Investigating wind turbine impacts on near-wake flow using profiling Lidar data and large-eddy simulations with an actuator disk model

    SciTech Connect (OSTI)

    Mirocha, Jeffrey D.; Rajewski, Daniel A.; Marjanovic, Nikola; Lundquist, Julie K.; Kosovic, Branko; Draxl, Caroline; Churchfield, Matthew J.

    2015-08-27

    In this study, wind turbine impacts on the atmospheric flow are investigated using data from the Crop Wind Energy Experiment (CWEX-11) and large-eddy simulations (LESs) utilizing a generalized actuator disk (GAD) wind turbine model. CWEX-11 employed velocity-azimuth display (VAD) data from two Doppler lidar systems to sample vertical profiles of flow parameters across the rotor depth both upstream and in the wake of an operating 1.5 MW wind turbine. Lidar and surface observations obtained during four days of July 2011 are analyzed to characterize the turbine impacts on wind speed and flow variability, and to examine the sensitivity of these changes to atmospheric stability. Significant velocity deficits (VD) are observed at the downstream location during both convective and stable portions of four diurnal cycles, with large, sustained deficits occurring during stable conditions. Variances of the streamwise velocity component, σu, likewise show large increases downstream during both stable and unstable conditions, with stable conditions supporting sustained small increases of σu , while convective conditions featured both larger magnitudes and increased variability, due to the large coherent structures in the background flow. Two representative case studies, one stable and one convective, are simulated using LES with a GAD model at 6 m resolution to evaluate the compatibility of the simulation framework with validation using vertically profiling lidar data in the near wake region. Virtual lidars were employed to sample the simulated flow field in a manner consistent with the VAD technique. Simulations reasonably reproduced aggregated wake VD characteristics, albeit with smaller magnitudes than observed, while σu values in the wake are more significantly underestimated. The results illuminate the limitations of using a GAD in combination with coarse model resolution in the simulation of near wake physics, and validation thereof using VAD data.

  7. Investigating wind turbine impacts on near-wake flow using profiling Lidar data and large-eddy simulations with an actuator disk model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mirocha, Jeffrey D.; Rajewski, Daniel A.; Marjanovic, Nikola; Lundquist, Julie K.; Kosovic, Branko; Draxl, Caroline; Churchfield, Matthew J.

    2015-08-27

    In this study, wind turbine impacts on the atmospheric flow are investigated using data from the Crop Wind Energy Experiment (CWEX-11) and large-eddy simulations (LESs) utilizing a generalized actuator disk (GAD) wind turbine model. CWEX-11 employed velocity-azimuth display (VAD) data from two Doppler lidar systems to sample vertical profiles of flow parameters across the rotor depth, both upstream and in the wake of an operating 1.5 MW wind turbine. Lidar and surface observations obtained during four days of July 2011 are analyzed to characterize the turbine impacts on wind speed and flow variability, and to examine the sensitivity of these changes to atmospheric stability. Significant velocity deficits (VD) are observed at the downstream location during both convective and stable portions of four diurnal cycles, with large, sustained deficits occurring during stable conditions. Variances of the streamwise velocity component, σu, likewise show large increases downstream during both stable and unstable conditions, with stable conditions supporting sustained small increases of σu, while convective conditions feature both larger magnitudes and increased variability due to the large coherent structures in the background flow. Two representative case studies, one stable and one convective, are simulated using LES with a GAD model at 6 m resolution to evaluate the compatibility of the simulation framework with validation using vertically profiling lidar data in the near-wake region. Virtual lidars were employed to sample the simulated flow field in a manner consistent with the VAD technique. The simulations reasonably reproduce aggregated wake VD characteristics, albeit with smaller magnitudes than observed, while σu values in the wake are more significantly underestimated. The results illuminate the limitations of using a GAD in combination with coarse model resolution to simulate near-wake physics, and of validating such simulations with VAD data.

  8. Large Eddy Simulation Modeling of Flashback and Flame Stabilization in Hydrogen-Rich Gas Turbines Using a Hierarchical Validation Approach

    SciTech Connect (OSTI)

    Clemens, Noel

    2015-09-30

    This project was a combined computational and experimental effort to improve predictive capability for boundary layer flashback of premixed swirl flames relevant to gas-turbine power plants operating with high-hydrogen-content fuels. During the course of this project, significant progress in modeling was made on four major fronts: 1) use of direct numerical simulation of turbulent flames to understand the coupling between the flame and the turbulent boundary layer; 2) improved modeling capability for flame propagation in stratified pre-mixtures; 3) improved portability of computer codes using the OpenFOAM platform to facilitate transfer to industry and other researchers; and 4) application of LES to flashback in swirl combustors, and a detailed assessment of its capabilities and limitations for predictive purposes. A major component of the project was an experimental program that focused on developing a rich experimental database of boundary layer flashback in swirl flames. Both methane and high-hydrogen fuels, including effects of elevated pressure (1 to 5 atm), were explored. For this project, a new model swirl combustor was developed. Kilohertz-rate stereoscopic PIV and chemiluminescence imaging were used to investigate the flame propagation dynamics. In addition to the planar measurements, a technique capable of detecting the instantaneous, time-resolved 3D flame front topography was developed and applied successfully to investigate the flow-flame interaction. The UT measurements and legacy data were used in a hierarchical validation approach in which flows with increasingly complex physics were considered. First, component models were validated with DNS and literature data in simplified configurations; this was followed by validation against the UT 1-atm flashback cases, and then the UT high-pressure flashback cases. The new models and portable code represent a major improvement over what was available before this project was initiated.

  9. Eddy current NDE performance demonstrations using simulation tools

    SciTech Connect (OSTI)

    Maurice, L.; Costan, V.; Guillot, E.; Thomas, P.

    2013-01-25

    To carry out performance demonstrations of the eddy-current NDE processes applied on French nuclear power plants, EDF studies the possibility of using simulation tools as an alternative to measurements on steam generator tube mock-ups. This paper focuses on the strategy led by EDF to assess and use the code_Carmel3D and Civa codes for the case of eddy-current NDE of wear problems that may appear in the U-shape region of steam generator tubes due to the rubbing of anti-vibration bars.

  10. Large Eddy Simulation (LES) of Engines

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    DOE Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Program, and focuses on the application of LES to low-temperature and diesel engine combustion research. ...

  11. Large Eddy Simulation (LES) of Engines

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  12. Adaptive Detached Eddy Simulation of a High Lift Wing with Active Flow Control

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Vorticity contours colored by speed from a detached eddy simulation of flow around a high-lift multi-element wing at maximum lift. Slat, flap, and complex supporting structures (right subfigures) that create complex vorticity wakes are resolved in the adaptive, unstructured-grid simulation (third subfigure is a zoom on ...).

  13. Adaptive Detached Eddy Simulation of a High Lift Wing with Active...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Vorticity contours colored by speed from a detached eddy simulation of flow around a high-lift multi-element wing at maximum lift ...

  14. The role of large eddy fluctuations in the Madison Dynamo Experiment

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Eddy Fluctuations in the Magnetic Dynamics of the Madison Dynamo Experiment, by Elliot Kaplan. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Physics) at the University of Wisconsin - Madison, 2012; defended on 6 September 2012. Final oral committee: Cary Forest (Professor of Physics), Stanislaw Boldyrev (Associate Professor of Physics), John Sarff (Professor of Physics), Paul Terry ...

  15. Large Eddy Simulation (LES) Applied to Advanced Engine Combustion Research

    Broader source: Energy.gov [DOE]

    2013 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting

  16. Large Eddy Simulation (LES) Applied to Advanced Engine Combustion Research

    Broader source: Energy.gov (indexed) [DOE]


  17. ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation and Analyses of Automotive Engines Title ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation and...

  18. Sensitivity technologies for large scale simulation.

    SciTech Connect (OSTI)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large-scale optimization, uncertainty quantification, reduced-order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow, and we demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first-order approximation of the Euler equations and used as a preconditioner. In comparison to other methods, the AD preconditioner showed better convergence behavior. Our ultimate target is to perform shape optimization and h-p adaptivity using adjoint formulations in the Premo compressible fluid flow simulator. A mathematical formulation for mixed-level simulation algorithms has been developed in which different physics interact at potentially different spatial resolutions in a single domain. To minimize the implementation effort, explicit solution methods can be considered; however, implicit methods are preferred if computational efficiency is of high priority. We present the use of a partial elimination nonlinear solver technique to solve these mixed-level problems and show how these formulations are closely coupled to intrusive optimization approaches and sensitivity analyses. Production codes are typically not designed for sensitivity analysis or large-scale optimization. The implementation of our optimization libraries into multiple production simulation codes, each with its own linear algebra interface, becomes an intractable problem. In an attempt to streamline this task, we have developed a standard interface between the numerical algorithm (such as optimization) and the underlying linear algebra. These interfaces (TSFCore and TSFCoreNonlin) have been adopted by the Trilinos framework, and the goal is to promote their use, especially in new developments. Finally, an adjoint-based a posteriori error estimator has been developed for a discontinuous Galerkin discretization of Poisson's equation. The goal is to investigate other ways to leverage the adjoint calculations, and we show how the convergence of the forward problem can be improved by adapting the grid using adjoint-based error estimates. Error estimation is usually conducted with continuous adjoints, but if discrete adjoints are available it may be possible to reuse the discrete version for error estimation. We investigate the advantages and disadvantages of continuous and discrete adjoints for this purpose.
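
    As a concrete miniature of the direct-versus-adjoint distinction summarized above, consider a linear state equation A u = b(p) with objective J = c^T u: the direct approach needs one extra linear solve per parameter, while the adjoint approach needs a single solve with A^T regardless of the number of parameters. The sketch below is a generic illustration of that bookkeeping, not code from the project.

```python
import numpy as np

# State equation A u = b(p) with b(p) = b0 + M p, and objective J(p) = c^T u(p).
rng = np.random.default_rng(3)
n, n_params = 5, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned system matrix
M = rng.standard_normal((n, n_params))            # how the parameters enter the load
c = rng.standard_normal(n)                        # objective functional J = c^T u

# Direct (forward) sensitivities: one linear solve per parameter.
dJ_direct = np.array([c @ np.linalg.solve(A, M[:, j]) for j in range(n_params)])

# Adjoint sensitivities: a single solve with A^T, then an inexpensive product.
lam = np.linalg.solve(A.T, c)
dJ_adjoint = M.T @ lam

print(np.allclose(dJ_direct, dJ_adjoint))   # True: both give dJ/dp
```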

  19. SimFS: A Large Scale Parallel File System Simulator

    Energy Science and Technology Software Center (OSTI)

    2011-08-30

    The software provides both a framework and tools to simulate a large-scale parallel file system such as Lustre.

  20. Large Eddy Simulation (LES) Applied to LTC/Diesel/Hydrogen Engine Combustion Research

    Broader source: Energy.gov [DOE]

    2010 DOE Vehicle Technologies and Hydrogen Programs Annual Merit Review and Peer Evaluation Meeting, June 7-11, 2010 -- Washington D.C.

  1. Large Eddy Simulation (LES) Applied to Low-Temperature and Diesel Engine Combustion Research

    Broader source: Energy.gov [DOE]

    2011 DOE Hydrogen and Fuel Cells Program, and Vehicle Technologies Program Annual Merit Review and Peer Evaluation

  2. Large Eddy Simulation of Stable Boundary Layer Turbulent Processes in Complex Terrain

    SciTech Connect (OSTI)

    Eric D. Skyllingstad

    2005-01-26

    Research was performed using a turbulence boundary layer model to study the behavior of cold, dense flows in regions of complex terrain. Results show that flows develop a balance between turbulent entrainment of warm ambient air and dense, cold air created by surface cooling. Flow depth and strength are functions of downslope distance, slope angle and angle changes, and the ambient air temperature.

  3. A Model for Turbulent Combustion Simulation of Large Scale Hydrogen...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Model for Turbulent Combustion Simulation of Large Scale Hydrogen Explosions Event Sponsor: Argonne Leadership Computing Facility Seminar Start Date: Oct 6 2015 - 10:00am...

  4. NREL Develops Simulations for Wind Plant Power and Turbine Loads (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2012-04-01

    NREL researchers are the first to use a high-performance computing tool for a large-eddy simulation of an entire wind plant.

  5. Large-Scale First-Principles Molecular Dynamics Simulations on...

    Office of Scientific and Technical Information (OSTI)

    Citation details for: Large-Scale First-Principles Molecular Dynamics Simulations on the BlueGene/L Platform using the Qbox Code.

  6. High Fidelity Simulations of Large-Scale Wireless Networks

    SciTech Connect (OSTI)

    Onunkwo, Uzoma; Benz, Zachary

    2015-11-01

    The worldwide proliferation of wirelessly connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies do not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES approaches, which fail to scale (e.g., the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating the communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia's simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia's current highly regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static, and (b) the nodes have fixed locations.
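
    Since the de facto standard named above is discrete event simulation, the sketch below shows the event-queue loop that such simulators (serial or parallel) are built around; the class and the toy three-hop "transmission" are illustrative and do not correspond to OPNET, ns-3, or the proposed Sandia tool.

```python
import heapq

class EventQueue:
    """Minimal discrete-event engine: pop the earliest event, run its handler,
    and let handlers schedule future events."""

    def __init__(self):
        self.now = 0.0
        self._events = []   # heap of (time, sequence number, handler, args)
        self._seq = 0

    def schedule(self, delay, handler, *args):
        heapq.heappush(self._events, (self.now + delay, self._seq, handler, args))
        self._seq += 1

    def run(self, until):
        while self._events and self._events[0][0] <= until:
            self.now, _, handler, args = heapq.heappop(self._events)
            handler(self, *args)

def transmit(sim, src, dst, hop):
    """Toy event handler: log a hop and forward the packet along a 3-hop route."""
    print(f"t={sim.now:4.1f} s  node {src} -> node {dst} (hop {hop})")
    if hop < 3:
        sim.schedule(0.5, transmit, dst, dst + 1, hop + 1)

sim = EventQueue()
sim.schedule(0.0, transmit, 0, 1, 1)
sim.run(until=10.0)
```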

  7. Cosmological Simulations for Large-Scale Sky Surveys | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    PI Name: Salman Habib; PI Email: habib@anl.gov; Institution: Argonne National Laboratory; Allocation Program: INCITE; Allocation Hours at ALCF: 80 Million; Year: 2016; Research Domain: Physics. The focus of cosmology today is on its two mysterious pillars, dark matter and dark energy. Large-scale sky surveys are the current drivers of precision cosmology and have been instrumental in making fundamental discoveries in these ...

  8. Sandia Energy - Computational Fluid Dynamics Simulations Provide...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    from a VWiS large-eddy simulation. One of the primary roles of Sandia's Scaled Wind Farm Technology (SWiFT) facility will be to conduct detailed experiments on turbine wakes and...

  9. electromagnetics, eddy current, computer codes

    Energy Science and Technology Software Center (OSTI)

    2002-03-12

    TORO Version 4 is designed for finite element analysis of steady, transient and time-harmonic, multi-dimensional, quasi-static problems in electromagnetics. The code allows simulation of electrostatic fields, steady current flows, magnetostatics and eddy current problems in plane or axisymmetric, two-dimensional geometries. TORO is easily coupled to heat conduction and solid mechanics codes to allow multi-physics simulations to be performed.

  10. Molecular Dynamics Simulations from SNL's Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Plimpton, Steve; Thompson, Aidan; Crozier, Paul

    LAMMPS (http://lammps.sandia.gov/index.html) stands for Large-scale Atomic/Molecular Massively Parallel Simulator and is a code that can be used to model atoms or, as the LAMMPS website says, as a parallel particle simulator at the atomic, meso, or continuum scale. This Sandia-based website provides a long list of animations from large simulations. These were created using different visualization packages to read LAMMPS output, and each one provides the name of the PI and a brief description of the work done or visualization package used. See also the static images produced from simulations at http://lammps.sandia.gov/pictures.html The foundation paper for LAMMPS is: S. Plimpton, Fast Parallel Algorithms for Short-Range Molecular Dynamics, J Comp Phys, 117, 1-19 (1995), but the website also lists other papers describing contributions to LAMMPS over the years.

  11. Tidal Residual Eddies and their Effect on Water Exchange in Puget Sound

    SciTech Connect (OSTI)

    Yang, Zhaoqing; Wang, Taiping

    2013-08-30

    Tidal residual eddies are one of the important hydrodynamic features in tidally dominant estuaries and coastal bays, and they can have significant effects on water exchange in a tidal system. This paper presents a modeling study of tides and tidal residual eddies in Puget Sound, a tidally dominant fjord-like estuary on the Pacific Northwest coast, using a three-dimensional finite-volume coastal ocean model. Mechanisms of vorticity generation and asymmetric distribution patterns around an island/headland were analyzed using the dynamic vorticity transfer approach and numerical experiments. Model results for Puget Sound show that a number of large twin tidal residual eddies exist in Admiralty Inlet because of the presence of major headlands in the inlet. Simulated residual vorticities near the major headlands indicate that the clockwise tidal residual eddy (negative vorticity) is generally stronger than the anticlockwise eddy (positive vorticity) because of the effect of the Coriolis force. The effect of tidal residual eddies on water exchange in Puget Sound and its sub-basins was evaluated by simulations of dye transport. It was found that the strong transverse variability of residual currents in Admiralty Inlet results in a dominant seaward transport along the eastern shore and a dominant landward transport along the western shore of the inlet. A similar transport pattern in Hood Canal is caused by the presence of tidal residual eddies near the entrance of the canal. Model results show that tidal residual currents in Whidbey Basin are small in comparison to other sub-basins. A large clockwise residual circulation is formed around Vashon Island near the entrance of South Sound, which can potentially constrain the water exchange between the Central Basin and South Sound.

  12. New Aerodynamics Simulations Provide Better Understanding of Wind Plant Underperformance and Loading (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2011-02-01

    Researchers at the National Renewable Energy Laboratory (NREL) develop a high-fidelity large-eddy simulation model designed to predict the performance of large wind plants with a higher degree of accuracy than current models.

  13. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect (OSTI)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
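
    To make the Newton-versus-Broyden comparison above concrete, here is a minimal dense-matrix implementation of Broyden's rank-one secant update for a small system; it is only an illustration of the update itself, assuming a forward-difference initial Jacobian, and is not the limited-memory, globalized solver described in the report.

```python
import numpy as np

def broyden(f, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 with Broyden's 'good' rank-one secant update of an
    approximate Jacobian, starting from a forward-difference estimate."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    fx = f(x)
    B = np.empty((n, n))
    h = 1e-7
    for j in range(n):                     # crude initial Jacobian approximation
        e = np.zeros(n)
        e[j] = h
        B[:, j] = (f(x + e) - fx) / h
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        s = np.linalg.solve(B, -fx)        # quasi-Newton step with approximate Jacobian
        x_new = x + s
        fx_new = f(x_new)
        y = fx_new - fx
        B += np.outer(y - B @ s, s) / (s @ s)   # rank-one secant update
        x, fx = x_new, fx_new
    return x

# Example: intersection of the circle x^2 + y^2 = 4 with the parabola y = x^2.
f = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[1] - x[0] ** 2])
print(broyden(f, [1.0, 1.0]))   # converges to about (1.2496, 1.5616)
```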

  14. Arc plasma simulation of the KAERI large ion source

    SciTech Connect (OSTI)

    In, S. R.; Jeong, S. H.; Kim, T. S.

    2008-02-15

    The KAERI large ion source, developed for the KSTAR NBI system, recently produced ion beams at the 100 keV, 50 A level in the first-half campaign of 2007. These results appear to represent the best performance of the present ion source at the maximum available input power of 145 kW. A slight improvement in the ion source is certainly necessary to attain the final goal of an 8 MW ion beam. Firstly, the experimental results were analyzed to differentiate cause and effect for the insufficient beam currents. Secondly, a zero-dimensional simulation of the ion source plasma was carried out to identify which factors control the arc plasma and to find out what improvements can be expected.

  15. Final Report on the “New Directions in the Variational Multiscale Formulation of Large Eddy Simulation of Turbulence”

    SciTech Connect (OSTI)

    Oberai, Assad A

    2013-07-16

    In the report we present a summary of the new models and algorithms developed by the PI and the students supported by this grant. These developments are described in detail in ten peer-reviewed journal articles that acknowledge support from this grant.

  16. The Fidelity of Ocean Models With Explicit Eddies (Chapter 17)

    SciTech Connect (OSTI)

    McClean, J; Jayne, S; Maltrud, M; Ivanova, D

    2007-08-01

    Current practices within the oceanographic community have been reviewed with regard to the use of metrics to assess the realism of the upper-ocean circulation, ventilation processes diagnosed by time-evolving mixed layer depth and mode water formation, and eddy heat fluxes in large-scale fine resolution ocean model simulations. We have striven to understand the fidelity of these simulations in the context of their potential use in future fine-resolution coupled climate system studies. A variety of methodologies are used to assess the veracity of the numerical simulations. Sea surface height variability and the location of western boundary current paths from altimetry have been used routinely as basic indicators of fine-resolution model performance. Drifters and floats have also been used to provide pseudo-Eulerian measures of the mean and variability of surface and sub-surface flows, while statistical comparisons of observed and simulated means have been carried out using James tests. Probability density functions have been used to assess the Gaussian nature of the observed and simulated flows. Length and time scales have been calculated in both Eulerian and Lagrangian frameworks from altimetry and drifters, respectively. Concise measures of multiple model performance have been obtained from Taylor diagrams. The time-evolution of the mixed layer depth at monitoring stations has been compared with simulated time series. Finally, eddy heat fluxes are compared to climatological inferences.
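
    Several of the metrics mentioned above, Taylor diagrams in particular, reduce to three statistics of a simulated field against an observed one. The function below computes them on synthetic data purely as an illustration; it is not tied to any of the model output discussed in the chapter.

```python
import numpy as np

def taylor_stats(model, obs):
    """Statistics summarized on a Taylor diagram: pattern correlation,
    ratio of standard deviations, and centered RMS difference."""
    m = model - model.mean()
    o = obs - obs.mean()
    corr = (m * o).mean() / (m.std() * o.std())
    std_ratio = model.std() / obs.std()
    crmsd = np.sqrt(((m - o) ** 2).mean())
    return corr, std_ratio, crmsd

rng = np.random.default_rng(1)
obs = np.sin(np.linspace(0.0, 6.0 * np.pi, 500))
model = 0.9 * obs + 0.2 * rng.standard_normal(500)   # a biased-amplitude, noisy "model"
print("corr=%.3f  std ratio=%.3f  centered RMSD=%.3f" % taylor_stats(model, obs))
```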

  17. Large-Scale Atomistic Simulations of Material Failure

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Abraham, Farid [IBM Almaden Research]; Duchaineau, Mark [LLNL]; Wirth, Brian [LLNL]; Heidelberg; Seager, Mark [LLNL]; De La Rubia, Diaz [LLNL]

    These simulations from 2000 examine the supersonic propagation of cracks and the formation of complex junction structures in metals. Eight simulations concerning brittle fracture, ductile failure, and shockless compression are available.

  18. Nuclear EMP simulation for large-scale urban environments. FDTD for electrically large problems.

    SciTech Connect (OSTI)

    Smith, William S.; Bull, Jeffrey S.; Wilcox, Trevor; Bos, Randall J.; Shao, Xuan-Min; Goorley, John T.; Costigan, Keeley R.

    2012-08-13

    In the case of a terrorist nuclear attack in a metropolitan area, EMP measurement could provide: (1) a prompt confirmation of the nature of the explosion (chemical or nuclear) for emergency response; and (2) characterization parameters of the device (reaction history, yield) for technical forensics. However, the urban environment could affect the fidelity of the prompt EMP measurement (as well as all other types of prompt measurement): (1) the nuclear EMP wavefront would no longer be coherent, due to incoherent production, attenuation, and propagation of gammas and electrons; and (2) EMP propagation outward from the source region would undergo complicated transmission, reflection, and diffraction processes. Our EMP simulation for an electrically large urban environment uses a coupled MCNP/FDTD (finite-difference time-domain Maxwell solver) approach. FDTD tends to be limited to problems that are not 'too' large compared to the wavelengths of interest because of numerical dispersion and anisotropy, so we use a higher-order, low-dispersion, isotropic FDTD algorithm for EMP propagation.
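
    Because the limitation cited above is FDTD's numerical dispersion on electrically large problems, a bare-bones 1D Yee update is sketched below to show where the grid spacing and time step (and hence the Courant number that controls dispersion) enter; it is the textbook second-order scheme, not the higher-order low-dispersion algorithm used in the study.

```python
import numpy as np

# 1D vacuum FDTD (Yee) update in normalized units where c = 1.
nx, nt = 400, 600
dx = 1.0
dt = 0.5 * dx           # Courant number S = c*dt/dx = 0.5 (stable for S <= 1)

ez = np.zeros(nx)       # electric field at integer grid points
hy = np.zeros(nx - 1)   # magnetic field at half grid points

for n in range(nt):
    hy += dt / dx * (ez[1:] - ez[:-1])              # update H from the curl of E
    ez[1:-1] += dt / dx * (hy[1:] - hy[:-1])        # update E from the curl of H
    ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(ez).max())
```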

  19. Interface Exchange as an Indicator for Eddy Heat Transport

    SciTech Connect (OSTI)

    Petersen, Mark R.; Williams, Sean J.; Hecht, Matthew W.; Maltrud, Mathew E.; Hamann, Bernd; Patchett, John M.; Ahrens, James P.

    2012-06-12

    The ocean contains many large-scale, long-lived vortices, called mesoscale eddies, that are believed to have a role in the transport and redistribution of salt, heat, and nutrients throughout the ocean. Determining this role, however, has proven to be a challenge, since the mechanics of eddies are only partly understood; a standard definition for these ocean eddies does not exist and, therefore, scientifically meaningful, robust methods for eddy extraction, characterization, tracking and visualization remain a challenge. In order to shed light on the nature and potential roles of eddies, we have combined our previous research on eddy identification and tracking, and have used those approaches as the basis for analysis-driven computational experiments on the nature of eddies. Based on the resulting visualizations of eddy behavior, we have devised a new metric to characterize the transfer of water into and out of eddies across their boundary, and have developed visualization methods for this new metric to provide clues about the role eddies play in the global ocean and, potentially, climate change.

  20. AUTOMATED PARAMETRIC EXECUTION AND DOCUMENTATION FOR LARGE-SCALE SIMULATIONS

    SciTech Connect (OSTI)

    R. L. KELSEY; ET AL

    2001-03-01

    A language has been created to facilitate the automatic execution of simulations for purposes of enabling parametric study, test, and evaluation. Its function is similar in nature to a job-control language, but it provides more capability in that it extends the notion of literate programming to job control. Interwoven markup tags self-document and define the job-control process. The language works in tandem with another language used to describe physical systems. Both languages are implemented in the Extensible Markup Language (XML). A user describes a physical system for simulation and then creates a set of instructions for automatic execution of the simulation. Support routines merge the instructions with the physical-system description, execute the simulation the specified number of times, gather the output data, and document the process and output for the user. The language enables the guided exploration of a parameter space and can be used for simulations that must determine optimal solutions to particular problems. It is generalized enough that it can be used with any simulation input files that are described using XML. XML is shown to be useful as a description language, an interchange language, and a self-documenting language.
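
    As a hedged sketch of what driving a parametric sweep from XML can look like, the snippet below reads a small, invented description and expands the parameter space; the element names ("sweep", "parameter") and the command template are illustrative assumptions, not the actual language described in the report.

```python
import itertools
import xml.etree.ElementTree as ET

# Hypothetical sweep description; the tag and attribute names are illustrative only.
SWEEP = """
<sweep command="./simulator --angle {angle} --speed {speed}">
  <parameter name="angle" values="0,15,30"/>
  <parameter name="speed" values="1.0,2.0"/>
</sweep>
"""

root = ET.fromstring(SWEEP)
names = [p.get("name") for p in root.findall("parameter")]
values = [p.get("values").split(",") for p in root.findall("parameter")]

for combo in itertools.product(*values):      # walk the full parameter space
    cmd = root.get("command").format(**dict(zip(names, combo)))
    print("would run:", cmd)
    # e.g. replace the print with subprocess.run(cmd.split(), check=True)
    # to execute each case and collect its output for documentation.
```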

  1. Offshore Wind Farm Model Development – Upcoming Release of the University of Minnesota’s Virtual Wind Simulator

    Broader source: Energy.gov [DOE]

    Large-eddy simulation of wind farms with parameterization of wind turbines is emerging as a powerful tool for improving the performance and lowering the maintenance cost of existing wind farms and...

  2. Laboratory simulation of binary and triple well EGS in large granite blocks using AE events for drilling guidance

    Office of Scientific and Technical Information (OSTI)

    Journal Article | SciTech Connect. This content will become publicly available on May 2, 2017. Authors: Frash, Luke P.; Gutierrez, Marte; Hampton, Jesse; Hood, John. Publication Date: ...

  3. Simulations of fast crab cavity failures in the high luminosity Large Hadron Collider

    Office of Scientific and Technical Information (OSTI)

    Journal Article | SciTech Connect. Authors: Yee-Rendon, Bruce; Lopez-Fernandez, Ricardo; Barranco, Javier; Calaga, Rama; Marsili, Aurelien; Tomás, Rogelio; Zimmermann, Frank; Bouly, Frédéric. Publication Date: 2014-05-06. OSTI Identifier: ...

  4. Large-Scale First-Principles Molecular Dynamics Simulations on the BlueGene/L Platform using the Qbox Code

    Office of Scientific and Technical Information (OSTI)

    Conference | SciTech Connect. We demonstrate that the Qbox code supports unprecedented large-scale First-Principles Molecular Dynamics (FPMD) applications on the BlueGene/L ...

  5. High Fidelity Simulations of Large-Scale Wireless Networks (Plus-Up)

    SciTech Connect (OSTI)

    Onunkwo, Uzoma

    2015-11-01

    Sandia has built a strong reputation in scalable network simulation and emulation for cyber security studies to protect our nation's critical information infrastructures. Georgia Tech has a preeminent reputation in academia for excellence in scalable discrete event simulations, with a strong emphasis on simulating cyber networks. Many of the experts in this field, such as Dr. Richard Fujimoto, Dr. George Riley, and Dr. Chris Carothers, have strong affiliations with Georgia Tech. The collaborative relationship that we intend to immediately pursue is in high fidelity simulations of practical large-scale wireless networks using the ns-3 simulator via Dr. George Riley. This project will have mutual benefits in bolstering both institutions' expertise and reputation in the field of scalable simulation for cyber-security studies. This project promises to address high fidelity simulations of large-scale wireless networks. This proposed collaboration is directly in line with Georgia Tech's goals for developing and expanding the Communications Systems Center, the Georgia Tech Broadband Institute, and the Georgia Tech Information Security Center, along with its yearly Emerging Cyber Threats Report. At Sandia, this work benefits the defense systems and assessment area, with promise for large-scale assessment of the cyber security needs and vulnerabilities of our nation's critical cyber infrastructures exposed to wireless communications.

  6. LES ARM Symbiotic Simulation and Observation (LASSO) Implementation Strategy

    Office of Scientific and Technical Information (OSTI)

    Program Document | SciTech Connect. This document illustrates the design of the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) workflow to provide a routine, high-resolution modeling capability to augment the U.S. Department of Energy (DOE) Atmospheric ...

  7. New Eddy Correlation Systems Installed

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    New and improved eddy correlation (ECOR) systems are being installed at the SGP CART site. ECOR instrument mentor Mikhail Pekour assembled the systems from commercially available components and added custom computer programming to operate them. The new systems will add measurements of carbon dioxide flux to the usual ECOR measurements, which include fluxes of water vapor, heat, and momentum. Pekour selected modular components that ... (Technical Contact: James ...)

  8. NV Energy Large-Scale Photovoltaic Integration Study: Intra-Hour Dispatch and AGC Simulation

    SciTech Connect (OSTI)

    Lu, Shuai; Etingov, Pavel V.; Meng, Da; Guo, Xinxin; Jin, Chunlian; Samaan, Nader A.

    2013-01-02

    The uncertainty and variability with photovoltaic (PV) generation make it very challenging to balance power system generation and load, especially under high penetration cases. Higher reserve requirements and more cycling of conventional generators are generally anticipated for large-scale PV integration. However, whether the existing generation fleet is flexible enough to handle the variations and how well the system can maintain its control performance are difficult to predict. The goal of this project is to develop a software program that can perform intra-hour dispatch and automatic generation control (AGC) simulation, by which the balancing operations of a system can be simulated to answer the questions posed above. The simulator, named Electric System Intra-Hour Operation Simulator (ESIOS), uses the NV Energy southern system as a study case, and models the system’s generator configurations, AGC functions, and operator actions to balance system generation and load. Actual dispatch of AGC generators and control performance under various PV penetration levels can be predicted by running ESIOS. With data about the load, generation, and generator characteristics, ESIOS can perform similar simulations and assess variable generation integration impacts for other systems as well. This report describes the design of the simulator and presents the study results showing the PV impacts on NV Energy real-time operations.
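
    The balancing loop such a simulator reproduces is driven by the area control error (ACE); the snippet below evaluates the standard ACE expression and shows the sign convention, purely as a generic illustration rather than ESIOS's actual control logic or NV Energy data.

```python
def area_control_error(tie_flow_mw, tie_sched_mw, freq_hz, bias_mw_per_0p1hz,
                       nominal_hz=60.0):
    """Standard ACE = (actual - scheduled interchange) - 10*B*(f - f0),
    with the frequency bias B given in MW per 0.1 Hz (negative by convention)."""
    return (tie_flow_mw - tie_sched_mw) - 10.0 * bias_mw_per_0p1hz * (freq_hz - nominal_hz)

# Example: over-generation pushes exports 30 MW above schedule at 60.01 Hz.
ace = area_control_error(tie_flow_mw=530.0, tie_sched_mw=500.0,
                         freq_hz=60.01, bias_mw_per_0p1hz=-50.0)
print(f"ACE = {ace:+.1f} MW")   # +35.0 MW -> AGC lowers the regulating units
```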

  9. NONDESTRUCTIVE EDDY CURRENT TESTING

    DOE Patents [OSTI]

    Renken, C.J. Jr.

    1961-05-23

    An eddy current testing device is described for measuring metal continuity independent of probe-to-sample spacing. An inductance-wound test probe is made a leg of a variable impedance bridge, and the bridge is balanced with the probe away from the sample. An a-c signal is applied across the input terminals of the bridge circuit. As the probe is brought into proximity with the metal sample, the resulting impedance change in the probe gives an output signal from the bridge whose phase angle is proportional to the sample continuity and whose amplitude is proportional to the probe-to-sample spacing. The output signal from the bridge is applied to a compensating network where, responsive to amplitude changes from the bridge output signal, a constant-phase voltage output is maintained when the sample is continuous, regardless of probe-to-sample spacing. A phase meter calibrated to read changes in resistivity of the metal sample measures the phase shift between the output of the compensating network and the original a-c signal applied to the bridge.
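
    The lift-off compensation described above can be pictured with complex arithmetic: the bridge output is a phasor whose angle tracks the sample condition and whose magnitude falls off with probe-to-sample spacing, so reading only the phase discriminates continuity from lift-off. The sketch below is a toy phasor model with made-up numbers, not the patented compensating network.

```python
import cmath

def bridge_output(continuity_phase_deg, liftoff_mm):
    """Toy phasor model of an unbalanced eddy-current bridge:
    phase encodes sample continuity, amplitude falls off with lift-off."""
    amplitude = 1.0 / (1.0 + liftoff_mm)            # made-up lift-off dependence
    return cmath.rect(amplitude, cmath.pi * continuity_phase_deg / 180.0)

for liftoff in (0.1, 0.5, 1.0):
    v = bridge_output(continuity_phase_deg=25.0, liftoff_mm=liftoff)
    # The recovered phase is unchanged even though the amplitude varies.
    print(f"lift-off {liftoff:.1f} mm: |V| = {abs(v):.3f}, "
          f"phase = {cmath.phase(v) * 180.0 / cmath.pi:5.1f} deg")
```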

  10. Copy of Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.

    SciTech Connect (OSTI)

    Adalsteinsson, Helgi; Armstrong, Robert C.; Chiang, Ken; Gentile, Ann C.; Lloyd, Levi; Minnich, Ronald G.; Vanderveen, Keith; Van Randwyk, Jamie A; Rudish, Don W.

    2008-10-01

    We report on the work done in the late-start LDRD "Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet." We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and a further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.

  11. Design and Optimization of Large Accelerator Systems through High-Fidelity Electromagnetic Simulations

    SciTech Connect (OSTI)

    Ng, Cho; Akcelik, Volkan; Candel, Arno; Chen, Sheng; Ge, Lixin; Kabel, Andreas; Lee, Lie-Quan; Li, Zenghai; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Xiao, Liling; Ko, Kwok; Austin, T.; Cary, J.R.; Ovtchinnikov, S.; Smith, D.N.; Werner, G.R.; Bellantoni, L. (SLAC; TechX Corp.; Fermilab)

    2008-08-01

    SciDAC1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' (AST) project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC CETs/Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at petascale. These tools aim at targeting the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider (ILC) and the Large Hadron Collider (LHC) in High Energy Physics (HEP), the JLab 12-GeV Upgrade in Nuclear Physics (NP), as well as the Spallation Neutron Source (SNS) and the Linac Coherent Light Source (LCLS) in Basic Energy Sciences (BES).

  12. Large-scale simulation of methane dissociation along the West Spitzbergen Margin

    SciTech Connect (OSTI)

    Reagan, M.T.; Moridis, G.J.

    2009-07-15

    Vast quantities of methane are trapped in oceanic hydrate deposits, and there is concern that a rise in the ocean temperature will induce dissociation of these hydrate accumulations, potentially releasing large amounts of methane into the atmosphere. The recent discovery of active methane gas venting along the landward limit of the gas hydrate stability zone (GHSZ) on the shallow continental slope west of Spitsbergen could be an indication of this process, if the source of the methane can be confidently attributed to dissociating hydrates. In the first large-scale simulation study of its kind, we simulate shallow hydrate dissociation in conditions representative of the West Spitsbergen margin to test the hypothesis that the observed gas release originated from hydrates. The simulation results are consistent with this hypothesis, and are in remarkable agreement with the recently published observations. They show that shallow, low-saturation hydrate deposits, when subjected to temperature increases at the seafloor, can release significant quantities of methane, and that the releases will be localized near the landward limit of the top of the GHSZ. These results indicate the possibility that hydrate dissociation and methane release may be both a consequence and a cause of climate change.

  13. Large-Scale First-Principles Molecular Dynamics Simulations with Electrostatic Embedding: Application to Acetylcholinesterase Catalysis

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Fattebert, Jean-Luc; Lau, Edmond Y.; Bennion, Brian J.; Huang, Patrick; Lightstone, Felice C.

    2015-10-22

    Enzymes are complicated solvated systems that typically require many atoms to simulate their function with any degree of accuracy. We have recently developed numerical techniques for large-scale First-Principles molecular dynamics simulations and applied them to study the enzymatic reaction catalyzed by acetylcholinesterase. We carried out density functional theory calculations for a quantum mechanical (QM) sub-system consisting of 612 atoms with an O(N)-complexity finite-difference approach. The QM sub-system is embedded inside an external potential field representing the electrostatic effect due to the environment. We obtained finite-temperature sampling by First-Principles molecular dynamics for the acylation reaction of acetylcholine catalyzed by acetylcholinesterase. Our calculations show two energy barriers along the reaction coordinate for the enzyme-catalyzed acylation of acetylcholine. In conclusion, the second barrier (8.5 kcal/mole) is rate-limiting for the acylation reaction and in good agreement with experiment.

  14. ARM - LES ARM Symbiotic Simulation and Observation Workflow

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    LES ARM Symbiotic Simulation and Observation Workflow (LASSO). Related pages: LASSO Home, LASSO Backgrounder, Pilot Phase Begins for Routine Large-Eddy Simulations, Pilot Project Timeline, Presentations, Science, LASSO Implementation Strategy, ARM Decadal Vision, Archive of LASSO Information. Contacts: William Gustafson, Lead Principal Investigator; Andrew Vogelmann, Co-Principal Investigator; Hanna Goss, Media Contact.

  15. ORISE Faculty Research Experiences: Dr. Eddie Red

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Professor Makes Important Connections through Research Experience. In ORNL's Chemical Sciences Division, Dr. Eddie Red of Morehouse College studies nanotube characteristics, technology that could impact future solar cells as well as battery and fuel cells. Eddie C. Red, Ph.D., has longed to set up a functional research laboratory for the training and development of under-represented minorities in physics and engineering at Morehouse College, where he is a professor.

  16. Eddie Bernice Johnson | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Congresswoman Eddie Bernice Johnson (D-TX), representing the 30th Congressional District of Texas, is serving her 11th term. The 30th District is entirely within Dallas County and includes the cities of DeSoto, Lancaster, Wilmer, Hutchins, Cedar Hill, and Duncanville, with portions of the cities of Glenn Heights, Ferris, Ovilla, and South Grand Prairie. The Dallas ...

  17. Expert system for analyzing eddy current measurements

    DOE Patents [OSTI]

    Levy, Arthur J.; Oppenlander, Jane E.; Brudnoy, David M.; Englund, James M.; Loomis, Kent C.

    1994-01-01

    A method and apparatus (called DODGER) analyzes eddy current data for heat exchanger tubes or any other metallic object. DODGER uses an expert system to analyze eddy current data by reasoning with uncertainty and pattern recognition. The expert system permits DODGER to analyze eddy current data intelligently, and obviate operator uncertainty by analyzing the data in a uniform and consistent manner.

  18. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    SciTech Connect (OSTI)

    Figueroa, Aldo [Facultad de Ciencias, Universidad Autónoma del Estado de Morelos, Cuernavaca, Morelos 62209 (Mexico)]; Meunier, Patrice; Villermaux, Emmanuel [Aix-Marseille Univ., CNRS, Centrale Marseille, IRPHE, Marseille F-13384 (France)]; Cuevas, Sergio; Ramos, Eduardo [Instituto de Energías Renovables, Universidad Nacional Autónoma de México, A.P. 34, Temixco, Morelos 62580 (Mexico)]

    2014-01-15

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal probability density functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of the scalar to be predicted in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.
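
    The log-normal stretching statistics invoked above follow from the multiplicative nature of stretching in a random flow: the logarithm of a product of many independent stretch increments is approximately Gaussian by the central limit theorem. The short Monte Carlo below illustrates this with arbitrary increment statistics; it is not the DSM itself nor a model of the experimental flow.

```python
import numpy as np

rng = np.random.default_rng(0)
n_strips, n_steps = 20_000, 200

# Each strip element is stretched by an independent random factor at every step;
# the cumulative stretching rho is the product of those increments.
increments = rng.uniform(0.8, 1.4, size=(n_strips, n_steps))
rho = increments.prod(axis=1)

log_rho = np.log(rho)
skew = ((log_rho - log_rho.mean()) ** 3).mean() / log_rho.std() ** 3
print(f"mean(log rho) = {log_rho.mean():.2f}, "
      f"std = {log_rho.std():.2f}, skewness = {skew:.3f}")
# The near-zero skewness shows log(rho) is close to Gaussian,
# i.e. the stretching factors are approximately log-normally distributed.
```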

  19. A combination of streamtube and geostatical simulation methodologies for the study of large oil reservoirs

    SciTech Connect (OSTI)

    Chakravarty, A.; Emanuel, A.S.; Bernath, J.A.

    1997-08-01

    The application of streamtube models for reservoir simulation has an extensive history in the oil industry. Although these models are strictly applicable only to fields under voidage balance, they have proved to be useful in a large number of fields provided that there is no solution gas evolution and production. These models combine the benefit of very fast computational time with the practical ability to model a large reservoir over the course of its history. These models do not, however, directly incorporate the detailed geological information that recent experience has taught is important. This paper presents a technique for mapping the saturation information contained in a history matched streamtube model onto a detailed geostatistically derived finite difference grid. With this technique, the saturation information in a streamtube model, data that is actually statistical in nature, can be identified with actual physical locations in a field and a picture of the remaining oil saturation can be determined. Alternatively, the streamtube model can be used to simulate the early development history of a field and the saturation data then used to initialize detailed late time finite difference models. The proposed method is presented through an example application to the Ninian reservoir. This reservoir, located in the North Sea (UK), is a heterogeneous sandstone characterized by a line drive waterflood, with about 160 wells, and a 16 year history. The reservoir was satisfactorily history matched and mapped for remaining oil saturation. A comparison to 3-D seismic survey and recently drilled wells have provided preliminary verification.

  20. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    SciTech Connect (OSTI)

    Shimojo, Fuyuki; Hattori, Shinnosuke [Collaboratory for Advanced Computing and Simulations, Department of Physics and Astronomy, Department of Computer Science, and Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, California 90089-0242 (United States); Department of Physics, Kumamoto University, Kumamoto 860-8555 (Japan)]; Kalia, Rajiv K.; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Rajak, Pankaj; Vashishta, Priya [Collaboratory for Advanced Computing and Simulations, University of Southern California (United States)]; Kunaseth, Manaschai [Collaboratory for Advanced Computing and Simulations, University of Southern California (United States); National Nanotechnology Center, Pathumthani 12120 (Thailand)]; Ohmura, Satoshi [Collaboratory for Advanced Computing and Simulations, University of Southern California (United States); Department of Physics, Kumamoto University (Japan); Department of Physics, Kyoto University, Kyoto 606-8502 (Japan)]; Shimamura, Kohei [Collaboratory for Advanced Computing and Simulations, University of Southern California (United States); Department of Physics, Kumamoto University (Japan); Department of Applied Quantum Physics and Nuclear Engineering, Kyushu University, Fukuoka 819-0395 (Japan)]

    2014-05-14

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786,432 cores for a 50.3 million-atom SiC system. As a test of production runs, an LDC-DFT-based QMD simulation involving 16,661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques are employed for efficiently calculating the long-range exact exchange correction and excited-state forces. The NAQMD trajectories are analyzed to extract the rates of various excitonic processes, which are then used in KMC simulation to study the dynamics of the global exciton flow network. This has allowed the study of large-scale photoexcitation dynamics in a 6,400-atom amorphous molecular solid, reaching the experimental time scales.
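
    The abstract above describes the divide-conquer-recombine control flow only at a high level. The toy Python sketch below illustrates just that generic pattern on a 1-D array; the domain splitting, the placeholder local "solver", and the overlap-averaging recombination step are illustrative assumptions, not the LDC-DFT implementation described in the record.

        import numpy as np

        def divide(domain, n_subdomains, overlap):
            """Split a 1-D domain (array of grid points) into overlapping local domains."""
            chunks = []
            size = len(domain) // n_subdomains
            for i in range(n_subdomains):
                lo = max(0, i * size - overlap)
                hi = min(len(domain), (i + 1) * size + overlap)
                chunks.append((lo, hi))
            return chunks

        def conquer(domain, chunk, local_solver):
            """Solve the problem on one overlapping local domain (parallel in practice)."""
            lo, hi = chunk
            return local_solver(domain[lo:hi])

        def recombine(domain, chunks, local_solutions):
            """Synthesize a global solution by averaging local solutions in their overlaps."""
            global_solution = np.zeros_like(domain, dtype=float)
            weight = np.zeros_like(domain, dtype=float)
            for (lo, hi), sol in zip(chunks, local_solutions):
                global_solution[lo:hi] += sol
                weight[lo:hi] += 1.0
            return global_solution / weight

        # Toy usage: "solve" a smooth function locally on each chunk, then stitch globally.
        domain = np.linspace(0.0, 1.0, 1000)
        chunks = divide(domain, n_subdomains=8, overlap=16)
        local_solutions = [conquer(domain, c, lambda x: np.sin(2 * np.pi * x)) for c in chunks]
        solution = recombine(domain, chunks, local_solutions)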

  1. A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations

    SciTech Connect (OSTI)

    Nomura, K; Seymour, R; Wang, W; Kalia, R; Nakano, A; Vashishta, P; Shimojo, F; Yang, L H

    2009-02-17

    A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on hybrid implementation combining message passing and critical section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e., petaflops·day of computing) is estimated as NT = 2.14 (e.g., N = 2.14 million atoms for T = 1 microsecond).
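
    As a quick arithmetic illustration of the NT figure quoted above (read, with some hedging, as roughly 2.14 atom-seconds of MD per sustained petaflops-day), the snippet below converts an atom count and compute budget into a reachable simulated time; the function name and the billion-atom example are illustrative only.

        # Illustrative arithmetic only: the abstract quotes NT = 2.14 (atoms x seconds) per
        # sustained petaflops-day of MD, e.g. N = 2.14e6 atoms for T = 1e-6 s.
        NT_PER_PFLOPS_DAY = 2.14  # atom * seconds, as quoted above

        def reachable_time(n_atoms, pflops_days=1.0):
            """Simulated physical time (seconds) reachable for n_atoms in the given budget."""
            return NT_PER_PFLOPS_DAY * pflops_days / n_atoms

        print(reachable_time(2.14e6))   # ~1e-6 s (1 microsecond), matching the abstract's example
        print(reachable_time(2.14e9))   # ~1e-9 s for a billion-atom system, same budget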

  2. Eddy current thickness measurement apparatus

    DOE Patents [OSTI]

    Rosen, Gary J.; Sinclair, Frank; Soskov, Alexander; Buff, James S.

    2015-06-16

    A sheet of a material is disposed in a melt of the material. The sheet is formed using a cooling plate in one instance. An exciting coil and sensing coil are positioned downstream of the cooling plate. The exciting coil and sensing coil use eddy currents to determine a thickness of the solid sheet on top of the melt.

  3. High-rate Plastic Deformation of Nanocrystalline Tantalum to Large Strains: Molecular Dynamics Simulation

    SciTech Connect (OSTI)

    Rudd, R E

    2009-02-05

    Recent advances in the ability to generate extremes of pressure and temperature in dynamic experiments and to probe the response of materials have motivated the need for special materials optimized for those conditions, as well as a need for a much deeper understanding of the behavior of materials subjected to high pressure and/or temperature. Of particular importance is the understanding of rate effects at the extremely high rates encountered in those experiments, especially with the next generation of laser drives such as at the National Ignition Facility. Here we use large-scale molecular dynamics (MD) simulations of the high-rate deformation of nanocrystalline tantalum to investigate the processes associated with plastic deformation for strains up to 100%. We use initial atomic configurations that were produced through simulations of solidification in the work of Streitz et al [Phys. Rev. Lett. 96, (2006) 225701]. These 3D polycrystalline systems have typical grain sizes of 10-20 nm. We also study a rapidly quenched liquid (amorphous solid) tantalum. We apply a constant volume (isochoric), constant temperature (isothermal) shear deformation over a range of strain rates, and compute the resulting stress-strain curves to large strains for both uniaxial and biaxial compression. We study the rate dependence and identify plastic deformation mechanisms. The identification of the mechanisms is facilitated through a novel technique that computes the local grain orientation, returning it as a quaternion for each atom. This analysis technique is robust and fast, and has been used to compute the orientations on the fly during our parallel MD simulations on supercomputers. We find that both dislocation and twinning processes are important, and that they interact in the weak strain hardening in these extremely fine-grained microstructures.

  4. Combustion Energy Frontier Research Center Post-Doctoral Position in Advanced Combustion Simulations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    EFRC seeks outstanding applicants for the position of post-doctoral research associate to perform research at Cornell University and Sandia National Laboratories on advanced simulations of turbulent combustion. The project involves two simulation methodologies: direct numerical simulation (DNS) and large-eddy simulation (LES) using the filtered density function (FDF) approach. DNS involves minimal modeling, but is restricted (by computational capabilities) to simple geometries and a moderate

  5. A method of orbital analysis for large-scale first-principles simulations

    SciTech Connect (OSTI)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).

  6. Large-scale Nanostructure Simulations from X-ray Scattering Data On Graphics Processor Clusters

    SciTech Connect (OSTI)

    Sarje, Abhinav; Pien, Jack; Li, Xiaoye; Chan, Elaine; Chourou, Slim; Hexemer, Alexander; Scholz, Arthur; Kramer, Edward

    2012-01-15

    X-ray scattering is a valuable tool for measuring the structural properties of materials used in the design and fabrication of energy-relevant nanodevices (e.g., photovoltaic, energy storage, battery, fuel, and carbon capture and sequestration devices) that are key to the reduction of carbon emissions. Although today's ultra-fast X-ray scattering detectors can provide tremendous information on the structural properties of materials, a primary challenge remains in the analyses of the resulting data. We are developing novel high-performance computing algorithms, codes, and software tools for the analyses of X-ray scattering data. In this paper we describe two such HPC algorithm advances. First, we have implemented a flexible and highly efficient Grazing Incidence Small Angle Scattering (GISAXS) simulation code based on the Distorted Wave Born Approximation (DWBA) theory with C++/CUDA/MPI on a cluster of GPUs. Our code can compute the scattered light intensity from any given sample in all directions of space, thus allowing full construction of the GISAXS pattern. Preliminary tests on a single GPU show speedups of over 125x compared to the sequential code, and almost linear speedup when executing across a GPU cluster with 42 nodes, resulting in an additional 40x speedup compared to using one GPU node. Second, for the structural fitting problems in inverse modeling, we have implemented a Reverse Monte Carlo simulation algorithm with C++/CUDA using one GPU. Since there are large numbers of parameters to fit in the X-ray scattering simulation model, the earlier single-CPU code required weeks of runtime. Deploying the AccelerEyes Jacket/Matlab wrapper to use the GPU gave around 100x speedup over the pure CPU code. Our further C++/CUDA optimization delivered an additional 9x speedup.

  7. Thermal Performance Evaluation of Attic Radiant Barrier Systems Using the Large Scale Climate Simulator (LSCS)

    SciTech Connect (OSTI)

    Shrestha, Som S; Miller, William A; Desjarlais, Andre Omer

    2013-01-01

    Application of radiant barriers and low-emittance surface coatings in residential building attics can significantly reduce conditioning loads from heat flow through attic floors. The roofing industry has been developing and using various radiant barrier systems and low-emittance surface coatings to increase energy efficiency in buildings; however, minimal data are available that quantify the effectiveness of these technologies. This study evaluates the performance of various attic radiant barrier systems under simulated summer daytime conditions and nighttime or low-solar-gain daytime winter conditions using the Large Scale Climate Simulator (LSCS). The four attic configurations that were evaluated are 1) no radiant barrier (control), 2) perforated low-e foil laminated to the oriented strand board (OSB) deck, 3) low-e foil stapled on rafters, and 4) liquid-applied low-emittance coating on roof deck and rafters. All test attics used nominal R-13 h·ft²·°F/Btu (RSI 2.29 m²·K/W) fiberglass batt insulation on the attic floor. Results indicate that the three systems with radiant barriers had heat flows through the attic floor during the summer daytime condition that were 33%, 50%, and 19% lower than the control, respectively.

  8. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    SciTech Connect (OSTI)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp; Vazhkudai, Sudharshan S.; Cao, Qing

    2014-11-01

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable, closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
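
    The framework itself is not described in code here, but the following toy Monte Carlo sketch shows the general style of simulation-driven reliability estimation the abstract refers to: sampling exponential failure and rebuild times for a mirrored pair of drives and counting data-loss events within a mission time. The mirrored-pair layout and all rates are invented for illustration; they are not the authors' model.

        import random

        MTTF_HOURS = 1.0e5        # assumed per-drive mean time to failure
        REBUILD_HOURS = 24.0      # assumed mean rebuild time after a single failure
        MISSION_HOURS = 5 * 8760  # five-year mission

        def data_loss(rng):
            t = 0.0
            while True:
                t += rng.expovariate(2.0 / MTTF_HOURS)       # first failure of either healthy drive
                if t > MISSION_HOURS:
                    return False
                rebuild = rng.expovariate(1.0 / REBUILD_HOURS)
                second_failure = rng.expovariate(1.0 / MTTF_HOURS)
                if second_failure < rebuild:
                    return True                               # surviving drive failed during rebuild
                t += rebuild                                  # redundancy restored; keep going

        rng = random.Random(42)
        trials = 100_000
        losses = sum(data_loss(rng) for _ in range(trials))
        print(f"Estimated probability of data loss over the mission: {losses / trials:.2e}")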

  9. Magnetic diagnostics for equilibrium reconstructions with eddy...

    Office of Scientific and Technical Information (OSTI)

    Magnetic diagnostics for equilibrium reconstructions with eddy currents on the lithium tokamak experiment.

  10. In-situ sampling of a large-scale particle simulation for interactive...

    Office of Scientific and Technical Information (OSTI)

    Sponsoring Org: DOE. Country of Publication: United States. Language: English. Subject: 97 MATHEMATICAL METHODS AND COMPUTING; APPROXIMATIONS; SAMPLING; SIMULATION; STORAGE

  11. Rotating concave eddy current probe

    DOE Patents [OSTI]

    Roach, Dennis P. (Albuquerque, NM); Walkington, Phil (Albuquerque, NM); Rackow, Kirk A. (Albuquerque, NM); Hohman, Ed (Albuquerque, NM)

    2008-04-01

    A rotating concave eddy current probe for detecting fatigue cracks hidden from view underneath the head of a raised head fastener, such as a buttonhead-type rivet, used to join together structural skins, such as aluminum aircraft skins. The probe has a recessed concave dimple in its bottom surface that closely conforms to the shape of the raised head. The concave dimple holds the probe in good alignment on top of the rivet while the probe is rotated around the rivet's centerline. One or more magnetic coils are rigidly embedded within the probe's cylindrical body, which is made of a non-conducting material. This design overcomes the inspection impediment associated with widely varying conductivity in fastened joints.

  12. Eddy current technique for predicting burst pressure

    DOE Patents [OSTI]

    Petri, Mark C.; Kupperman, David S.; Morman, James A.; Reifman, Jaques; Wei, Thomas Y. C.

    2003-01-01

    A signal processing technique which correlates eddy current inspection data from a tube having a critical tubing defect with a range of predicted burst pressures for the tube is provided. The method can directly correlate the raw eddy current inspection data representing the critical tubing defect with the range of burst pressures using a regression technique, preferably an artificial neural network. Alternatively, the technique deconvolves the raw eddy current inspection data into a set of undistorted signals, each of which represents a separate defect of the tube. The undistorted defect signal which represents the critical tubing defect is related to a range of burst pressures utilizing a regression technique.

  13. Modeling ramp compression experiments using large-scale molecular dynamics simulation.

    SciTech Connect (OSTI)

    Mattsson, Thomas Kjell Rene; Desjarlais, Michael Paul; Grest, Gary Stephen; Templeton, Jeremy Alan; Thompson, Aidan Patrick; Jones, Reese E.; Zimmerman, Jonathan A.; Baskes, Michael I.; Winey, J. Michael; Gupta, Yogendra Mohan; Lane, J. Matthew D.; Ditmire, Todd; Quevedo, Hernan J.

    2011-10-01

    Molecular dynamics simulation (MD) is an invaluable tool for studying problems sensitive to atomic-scale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length- and time-scales, and extraction of continuum quantities. We have completed a 3 year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall into three categories: (i) molecular dynamics methods, (ii) interatomic potentials, and (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock melting of beryllium, a scaling technique for modeling slow ramp compression experiments using fast ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.

  14. In-situ sampling of a large-scale particle simulation for interactive...

    Office of Scientific and Technical Information (OSTI)

    The limiting technology in this situation is analogous to the problem in many population surveys: there aren't enough human resources to query a large population. To cope with the ...

  15. Using Cloud-Resolving Model Simulations of Deep Convection to Inform Cloud Parameterizations in Large-Scale Models

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Using Cloud-Resolving Model Simulations of Deep Convection to Inform Cloud Parameterizations in Large-Scale Models. S. A. Klein, National Oceanic and Atmospheric Administration, Geophysical Fluid Dynamics Laboratory, Princeton, New Jersey; R. Pincus, National Oceanic and Atmospheric Administration, Cooperative Institute for Research in Environmental Science, Climate Diagnostics Center, Boulder, Colorado; K.-M. Xu, National Aeronautics and Space Administration, Langley Research Center, Hampton, Virginia

  16. Simulation of Post-Frontal Boundary Layers Observed During the ARM 2000 Cloud IOP

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simulation of Post-Frontal Boundary Layers Observed During the ARM 2000 Cloud IOP. D. B. Mechem and Y. L. Kogan, Cooperative Institute for Mesoscale Meteorological Studies, University of Oklahoma, Norman, Oklahoma; M. Poellot, University of North Dakota, Grand Forks, North Dakota. Introduction: Large-eddy simulation (LES) models have been widely employed in the study of radiatively forced cloud-topped boundary layers (CTBL). These boundary layers are typically well mixed and characterized by a sharp jump

  17. ActivitySim: large-scale agent based activity generation for infrastructure simulation

    SciTech Connect (OSTI)

    Gali, Emmanuel; Eidenbenz, Stephan; Mniszewski, Sue; Cuellar, Leticia; Teuscher, Christof

    2008-01-01

    The United States' Department of Homeland Security aims to model, simulate, and analyze critical infrastructure and their interdependencies across multiple sectors such as electric power, telecommunications, water distribution, transportation, etc. We introduce ActivitySim, an activity simulator for a population of millions of individual agents, each characterized by a set of demographic attributes based on US census data. ActivitySim generates daily schedules for each agent that consist of a sequence of activities, such as sleeping, shopping, and working, each scheduled at a geographic location, such as a business or private residence, that is appropriate for the activity type and for the personal situation of the agent. ActivitySim has been developed as part of a larger effort to understand the interdependencies among national infrastructure networks and their demand profiles that emerge from the different activities of individuals in baseline scenarios as well as emergency scenarios, such as hurricane evacuations. We present the scalable software engineering principles underlying ActivitySim, the socio-technical modeling paradigms that drive the activity generation, and proof-of-principle results for a scenario of 2.6 million agents in the Twin Cities, MN area.
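
    As a loose illustration of agent-level schedule generation (not ActivitySim's actual models), the sketch below draws a 24-hour activity sequence for a few synthetic agents; the activity set, durations, locations, and employment probability are all invented for the example.

        import random

        ACTIVITIES = {"shop": ["store"], "leisure": ["park", "home"]}

        def daily_schedule(rng, employed):
            """Return a list of (activity, location, duration_hours) covering a 24-hour day."""
            schedule = [("sleep", "home", 8)]
            hours_left = 16
            if employed:
                schedule.append(("work", "office", 8))
                hours_left -= 8
            while hours_left > 0:
                activity = rng.choice(list(ACTIVITIES))
                duration = min(hours_left, rng.randint(1, 4))
                schedule.append((activity, rng.choice(ACTIVITIES[activity]), duration))
                hours_left -= duration
            return schedule

        rng = random.Random(0)
        for agent_id in range(3):                             # a three-agent toy "population"
            print(agent_id, daily_schedule(rng, employed=rng.random() < 0.6))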

  18. Self-consistent inclusion of classical large-angle Coulomb collisions in plasma Monte Carlo simulations

    SciTech Connect (OSTI)

    Turrell, A.E.; Sherlock, M.; Rose, S.J.

    2015-10-15

    Large-angle Coulomb collisions allow for the exchange of a significant proportion of the energy of a particle in a single collision, but are not included in models of plasmas based on fluids, the Vlasov–Fokker–Planck equation, or currently available plasma Monte Carlo techniques. Their unique effects include the creation of fast ‘knock-on’ ions, which may be more likely to undergo certain reactions, and distortions to ion distribution functions relative to what is predicted by small-angle collision only theories. We present a computational method which uses Monte Carlo techniques to include the effects of large-angle Coulomb collisions in plasmas and which self-consistently evolves distribution functions according to the creation of knock-on ions of any generation. The method is used to demonstrate ion distribution function distortions in an inertial confinement fusion (ICF) relevant scenario of the slowing of fusion products.

  19. Large-scale Environmental Variables and Transition to Deep Convection in Cloud Resolving Model Simulations: A Vector Representation

    SciTech Connect (OSTI)

    Hagos, Samson M.; Leung, Lai-Yung R.

    2012-11-01

    Cloud resolving model simulations and vector analysis are used to develop a quantitative method of assessing regional variations in the relationships between various large-scale environmental variables and the transition to deep convection. Results of the CRM simulations from three tropical regions are used to cluster environmental conditions under which transition to deep convection does and does not take place. Projections of the large-scale environmental variables on the difference between these two clusters are used to quantify the roles of these variables in the transition to deep convection. While the transition to deep convection is most sensitive to moisture and vertical velocity perturbations, the details of the profiles of the anomalies vary from region to region. In comparison, the transition to deep convection is found to be much less sensitive to temperature anomalies over all three regions. The vector formulation presented in this study represents a simple general framework for quantifying various aspects of how the transition to deep convection is sensitive to environmental conditions.
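
    A minimal numerical sketch of the vector idea described above: project a candidate profile of large-scale anomalies onto the normalized difference between the composite "transition" and "no-transition" environments to score its contribution. The profiles below are random placeholders, not CRM output.

        import numpy as np

        levels = 20                                          # assumed number of vertical levels
        rng = np.random.default_rng(1)

        transition_mean = rng.normal(0.5, 0.2, levels)       # composite profile when deep convection develops
        no_transition_mean = rng.normal(0.0, 0.2, levels)    # composite profile when it does not

        difference = transition_mean - no_transition_mean
        unit_difference = difference / np.linalg.norm(difference)

        candidate_profile = rng.normal(0.3, 0.3, levels)     # an anomaly profile to be scored
        score = float(np.dot(candidate_profile, unit_difference))
        print(f"Projection onto the transition direction: {score:.3f}")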

  20. Scattering of electromagnetic waves by vortex density structures associated with interchange instability: Analytical and large scale plasma simulation results

    SciTech Connect (OSTI)

    Sotnikov, V.; Kim, T.; Lundberg, J.; Paraschiv, I.; Mehlhorn, T. A.

    2014-05-15

    The presence of plasma turbulence can strongly influence propagation properties of electromagnetic signals used for surveillance and communication. In particular, we are interested in the generation of low frequency plasma density irregularities in the form of coherent vortex structures. Interchange or flute type density irregularities in magnetized plasma are associated with Rayleigh-Taylor type instability. These types of density irregularities play an important role in refraction and scattering of high frequency electromagnetic signals propagating in the Earth's ionosphere, in high energy density physics, and in many other applications. We will discuss scattering of high frequency electromagnetic waves by low frequency density irregularities due to the presence of vortex density structures associated with interchange instability. We will also present particle-in-cell simulation results of electromagnetic scattering by vortex type density structures using the large scale plasma code LSP and compare them with analytical results.

  1. HIGH-TEMPERATURE ELECTROLYSIS FOR LARGE-SCALE HYDROGEN AND SYNGAS PRODUCTION FROM NUCLEAR ENERGY - SYSTEM SIMULATION AND ECONOMICS

    SciTech Connect (OSTI)

    J. E. O'Brien; M. G. McKellar; E. A. Harvego; C. M. Stoots

    2009-05-01

    A research and development program is under way at the Idaho National Laboratory (INL) to assess the technological and scale-up issues associated with the implementation of solid-oxide electrolysis cell technology for efficient high-temperature hydrogen production from steam. This work is supported by the US Department of Energy, Office of Nuclear Energy, under the Nuclear Hydrogen Initiative. This paper will provide an overview of large-scale system modeling results and economic analyses that have been completed to date. System analysis results have been obtained using the commercial code UniSim, augmented with a custom high-temperature electrolyzer module. Economic analysis results were based on the DOE H2A analysis methodology. The process flow diagrams for the system simulations include an advanced nuclear reactor as a source of high-temperature process heat, a power cycle and a coupled steam electrolysis loop. Several reactor types and power cycles have been considered, over a range of reactor outlet temperatures. Pure steam electrolysis for hydrogen production as well as coelectrolysis for syngas production from steam/carbon dioxide mixtures have both been considered. In addition, the feasibility of coupling the high-temperature electrolysis process to biomass and coal-based synthetic fuels production has been considered. These simulations demonstrate that the addition of supplementary nuclear hydrogen to synthetic fuels production from any carbon source minimizes emissions of carbon dioxide during the production process.

  2. Contoured Surface Eddy Current Inspection System

    DOE Patents [OSTI]

    Batzinger, Thomas James; Fulton, James Paul; Rose, Curtis Wayne; Perocchi, Lee Cranford

    2003-04-08

    Eddy current inspection of a contoured surface of a workpiece is performed by forming a backing piece of flexible, resiliently yieldable material with a contoured exterior surface conforming in shape to the workpiece contoured surface. The backing piece is preferably cast in place so as to conform to the workpiece contoured surface. A flexible eddy current array probe is attached to the contoured exterior surface of the backing piece such that the probe faces the contoured surface of the workpiece to be inspected when the backing piece is disposed adjacent to the workpiece. The backing piece is then expanded volumetrically by inserting at least one shim into a slot in the backing piece to provide sufficient contact pressure between the probe and the workpiece contoured surface to enable the inspection of the workpiece contoured surface to be performed.

  3. Eddy current signal comparison for tube identification

    SciTech Connect (OSTI)

    Glass, S. W.; Vojvodic, R. E-mail: Ratko.Vojvodic@areva.com

    2015-03-31

    Inspection of nuclear power plant steam generator tubes is required to justify continued safe plant operation. The steam generators consist of thousands of tubes with nominal diameters of 15 to 22 mm, approximately 1 mm wall thickness, and 20 to 30 m in length. The tubes are inspected by passing an eddy current probe through each tube from end to end. It is critical to know exactly which tube identification (row and column) is associated with each tube's data. This is controlled by a precision manipulator that provides the tube ID to the eddy current system. Historically, there have been some instances where the manipulator incorrectly reported the tube ID. This can have serious consequences, including failure to inspect a tube or, if a pluggable indication is detected, plugging the wrong tube, thereby risking a primary-to-secondary leak.

  4. Eddy current inspection tool. [Patent application

    DOE Patents [OSTI]

    Petrini, R.R.; Van Lue, D.F.

    1980-10-29

    A miniaturized inspection tool, for testing and inspection of metal objects in locations with difficult accessibility, which comprises eddy current sensing equipment with a probe coil, an associated coaxial coil cable, coil energizing means, and circuit means responsive to impedance changes in the coil as effected by induced eddy currents in a test object to produce a data output signal proportional to such changes. The coil and cable are slideably received in the utility channel of the flexible insertion tube of a fiberoptic scope. The scope is provided with light transmitting and receiving fiberoptics for viewing through the flexible tube, and articulation means for articulating the distal end of the tube and permitting close control of coil placement relative to a test object. The eddy current sensing equipment includes a tone generator for generating audible signals responsive to the data output signal. In one selected mode of operation, the tone generator responsive to the output signal above a selected level generates a constant single frequency tone for signalling detection of a discontinuity and, in a second selected mode, generates a tone whose frequency is proportional to the difference between the output signal and a predetermined selected threshold level.

  5. Nek5000 Ready to Use after Simulations of Important Pipe Flow Benchmark |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Nek5000 Ready to Use after Simulations of Important Pipe Flow Benchmark January 29, 2013 - 1:42pm Velocity magnitude in MATiS-H spacer grid with swirl-type vanes. As part of the on-going Nek5000 validation efforts, a series of large eddy simulations (LES) have been performed for thermal stratification in a pipe. Results were in

  6. Large Scale DD Simulation Results for Crystal Plasticity Parameters in Fe-Cr And Fe-Ni Systems

    SciTech Connect (OSTI)

    Zbib, Hussein M.; Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2012-04-30

    The development of a viable nuclear energy source depends on ensuring structural materials integrity. Structural materials in nuclear reactors will operate in harsh radiation conditions coupled with high levels of hydrogen and helium production, as well as the formation of a high density of point defects and defect clusters, and thus will experience severe degradation of mechanical properties. Therefore, the main objective of this work is to develop a capability that predicts aging behavior and in-service lifetime of nuclear reactor components and thus provide an instrumental tool for tailoring materials design and development for application in future nuclear reactor technologies. Toward this goal, the long-term effort is to develop a physically based multiscale modeling hierarchy, validated and verified, to address outstanding questions regarding the effects of irradiation on materials microstructure and mechanical properties during extended service in fission and fusion environments. The focus of the current investigation is on modern steels for use in nuclear reactors, including high-strength ferritic-martensitic steels (Fe-Cr-Ni alloys). The effort is to develop a predictive capability for the influence of irradiation on mechanical behavior. Irradiation hardening is related to structural information crossing different length scales, such as composition, dislocation, and crystal orientation distribution. To predict effective hardening, the influence factors along different length scales should be considered. Therefore, a hierarchical upscaling methodology is implemented in this work in which relevant information is passed between models at three scales, namely, from molecular dynamics to dislocation dynamics to dislocation-based crystal plasticity. Molecular dynamics (MD) was used to predict dislocation mobility in body-centered cubic (bcc) Fe and its Ni and Cr alloys. The results are then passed on to dislocation dynamics to predict the critical resolved shear stress (CRSS) from the evolution of local dislocations and defects. In this report the focus is on results obtained from large-scale dislocation dynamics simulations. The effects of defect density and material structure were investigated, and evolution laws were obtained. These results will form the basis for the development of evolution and hardening laws for a dislocation-based crystal plasticity framework. The hierarchical upscaling method being developed in this project can provide a guidance tool to evaluate the performance of structural materials for next-generation nuclear reactors. Combined with other tools developed in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the models developed will have greater impact in improving the reliability of current reactors and the affordability of new reactors.

  7. Eddy Correlation Flux Measurement System Handbook (Technical Report) |

    Office of Scientific and Technical Information (OSTI)

    SciTech Connect. Eddy Correlation Flux Measurement System Handbook. The eddy correlation (ECOR) flux measurement system provides in situ, half-hour measurements of the surface turbulent fluxes of momentum, sensible heat, latent heat, and carbon dioxide (CO2) (and methane at one Southern Great Plains extended facility (SGP EF) and the North Slope of Alaska Central Facility (NSA CF)). The fluxes are

  8. Eddy Correlation Flux Measurement System (ECOR) Handbook (Technical Report)

    Office of Scientific and Technical Information (OSTI)

    | SciTech Connect. Eddy Correlation Flux Measurement System (ECOR) Handbook. The eddy correlation (ECOR) flux measurement system provides in situ, half-hour measurements of the surface turbulent fluxes of momentum, sensible heat, latent heat, and carbon dioxide (CO2) (and methane at one Southern Great Plains extended facility (SGP EF) and the North Slope of Alaska Central Facility (NSA CF)). The

  9. Eddy County, New Mexico: Energy Resources | Open Energy Information

    Open Energy Info (EERE)

    Eddy County, New Mexico: Energy Resources. Coordinates: 32.4170622, -104.4723301

  10. Eddy current measurement of tube element spacing

    DOE Patents [OSTI]

    Latham, Wayne Meredith; Hancock, Jimmy Wade; Grut, Jayne Marie

    1998-01-01

    A method of electromagnetically measuring the distance between adjacent tube elements in a heat exchanger. A cylindrical, high magnetic permeability ferrite slug is placed in the tube adjacent the spacing to be measured. A bobbin or annular coil type probe operated in the absolute mode is inserted into a second tube adjacent the spacing to be measured. From prior calibrations on the response of the eddy current coil, the signals from the coil, when sensing the presence of the ferrite slug, are used to determine the spacing between the tubes.

  11. Eddy-current-damped microelectromechanical switch

    DOE Patents [OSTI]

    Christenson, Todd R.; Polosky, Marc A.

    2009-12-15

    A microelectromechanical (MEM) device is disclosed that includes a shuttle suspended for movement above a substrate. A plurality of permanent magnets in the shuttle of the MEM device interact with a metal plate which forms the substrate or a metal portion thereof to provide an eddy-current damping of the shuttle, thereby making the shuttle responsive to changes in acceleration or velocity of the MEM device. Alternately, the permanent magnets can be located in the substrate, and the metal portion can form the shuttle. An electrical switch closure in the MEM device can occur in response to a predetermined acceleration-time event. The MEM device, which can be fabricated either by micromachining or LIGA, can be used for sensing an acceleration or deceleration event (e.g. in automotive applications such as airbag deployment or seat belt retraction).

  12. Eddy-current-damped microelectromechanical switch

    DOE Patents [OSTI]

    Christenson, Todd R.; Polosky, Marc A.

    2007-10-30

    A microelectromechanical (MEM) device is disclosed that includes a shuttle suspended for movement above a substrate. A plurality of permanent magnets in the shuttle of the MEM device interact with a metal plate which forms the substrate or a metal portion thereof to provide an eddy-current damping of the shuttle, thereby making the shuttle responsive to changes in acceleration or velocity of the MEM device. Alternately, the permanent magnets can be located in the substrate, and the metal portion can form the shuttle. An electrical switch closure in the MEM device can occur in response to a predetermined acceleration-time event. The MEM device, which can be fabricated either by micromachining or LIGA, can be used for sensing an acceleration or deceleration event (e.g. in automotive applications such as airbag deployment or seat belt retraction).

  13. Dynamics of positive probes in underdense, strongly magnetized, E×B drifting plasma: Particle-in-cell simulations

    SciTech Connect (OSTI)

    Heinrich, Jonathon R.; Cooke, David L.

    2013-09-15

    Electron trapping, electron heating, space-charge wings, wake eddies, and current collection by a positive probe in E×B drifting plasma were studied in three-dimensional electromagnetic particle-in-cell simulations. In these simulations, electrons and ions were magnetized with respect to the probe and the plasma was underdense (ωpe < ωce). A large drift velocity (Mach 4.5 with respect to the ion acoustic speed) between the plasma and probe was created with background electric and magnetic fields. Four distinct regions developed in the presence of the positive probe: a quasi-trapped electron region, an electron-depletion wing, an ion-rich wing, and a wake region. We report on the observations of strong electron heating mechanisms, space-charge wings, ion cyclotron charge-density eddies in the wake, electron acceleration due to a magnetic presheath, and the current-voltage relationship.

  14. Stochastic Engine Final Report: Applying Markov Chain Monte Carlo Methods with Importance Sampling to Large-Scale Data-Driven Simulation

    SciTech Connect (OSTI)

    Glaser, R E; Johannesson, G; Sengupta, S; Kosovic, B; Carle, S; Franz, G A; Aines, R D; Nitao, J J; Hanley, W G; Ramirez, A L; Newmark, R L; Johnson, V M; Dyer, K M; Henderson, K A; Sugiyama, G A; Hickling, T L; Pasyanos, M E; Jones, D A; Grimm, R J; Levine, R A

    2004-03-11

    Accurate prediction of complex phenomena can be greatly enhanced through the use of data and observations to update simulations. The ability to create these data-driven simulations is limited by error and uncertainty in both the data and the simulation. The stochastic engine project addressed this problem through the development and application of a family of Markov Chain Monte Carlo methods utilizing importance sampling driven by forward simulators to minimize the time spent searching very large state spaces. The stochastic engine rapidly chooses among a very large number of hypothesized states and selects those that are consistent (within error) with all the information at hand. Predicted measurements from the simulator are used to estimate the likelihood of actual measurements, which in turn reduces the uncertainty in the original sample space via a conditional probability method called Bayesian inferencing. This highly efficient, staged Metropolis-type search algorithm allows us to address extremely complex problems and opens the door to solving many data-driven, nonlinear, multidimensional problems. A key challenge has been developing representation methods that integrate the local details of real data with the global physics of the simulations, enabling supercomputers to efficiently solve the problem. Development focused on large-scale problems, and on examining the mathematical robustness of the approach in diverse applications. Multiple data types were combined with large-scale simulations to evaluate systems with approximately 10^20,000 possible states (detecting underground leaks at the Hanford waste tanks). The probable uses of chemical process facilities were assessed using an evidence-tree representation and in-process updating. Other applications included contaminant flow paths at the Savannah River Site, locating structural flaws in buildings, improving models for seismic travel time systems used to monitor nuclear proliferation, characterizing the source of indistinct atmospheric plumes, and improving flash radiography. In the course of developing these applications, we also developed new methods to cluster and analyze the results of the state-space searches, as well as a number of algorithms to improve the search speed and efficiency. Our generalized solution contributes both a means to make more informed predictions of the behavior of very complex systems, and to improve those predictions as events unfold, using new data in real time.
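
    The staged Metropolis-type search is described only at a high level above. The sketch below is a minimal, generic Metropolis sampler driven by a placeholder forward simulator and a Gaussian data likelihood, included only to make the accept/reject logic concrete; the toy model, noise level, and step size are assumptions, not components of the stochastic engine.

        import math
        import random

        def forward_simulator(state):
            return state ** 2                      # stand-in for an expensive forward simulation

        def log_likelihood(state, observation, noise_sigma):
            residual = observation - forward_simulator(state)
            return -0.5 * (residual / noise_sigma) ** 2

        def metropolis(observation, n_steps=10_000, noise_sigma=0.1, step=0.2, seed=0):
            rng = random.Random(seed)
            state = 1.0
            current = log_likelihood(state, observation, noise_sigma)
            samples = []
            for _ in range(n_steps):
                proposal = state + rng.gauss(0.0, step)
                proposed = log_likelihood(proposal, observation, noise_sigma)
                # Accept states whose predicted measurements are consistent (within noise) with the data.
                if rng.random() < math.exp(min(0.0, proposed - current)):
                    state, current = proposal, proposed
                samples.append(state)
            return samples

        samples = metropolis(observation=4.0)
        print(sum(samples[-5000:]) / 5000)         # settles near a state whose prediction matches 4.0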

  15. Eddy Current for Sizing Cracks in Canisters for Dry Storage of Used Nuclear Fuel

    SciTech Connect (OSTI)

    Meyer, Ryan M.; Jones, Anthony M.; Pardini, Allan F.

    2014-01-01

    The storage of used nuclear fuel (UNF) in dry canister storage systems (DCSSs) at Independent Spent Fuel Storage Installations (ISFSI) sites is a temporary measure to accommodate UNF inventory until it can be reprocessed or transferred to a repository for permanent disposal. Policy uncertainty surrounding the long-term management of UNF indicates that DCSSs will need to store UNF for much longer periods than originally envisioned. Meanwhile, the structural and leak-tight integrity of DCSSs must not be compromised. The eddy current technique is presented as a potential tool for inspecting the outer surfaces of DCSS canisters for degradation, particularly atmospheric stress corrosion cracking (SCC). Results are presented that demonstrate that eddy current can detect flaws that cannot be detected reliably using standard visual techniques. In addition, simulations are performed to explore the best parameters of a pancake coil probe for sizing of SCC flaws in DCSS canisters and to identify features in frequency sweep curves that may potentially be useful for facilitating accurate depth sizing of atmospheric SCC flaws from eddy current measurements.

  16. ARM - Evaluation Product - Quality Controlled Eddy Correlation Flux

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    (QCECOR) Products. Quality Controlled Eddy Correlation Flux (QCECOR). Eddy correlation flux measurement systems (ECOR) are used by ARM to provide surface turbulence flux measurements. With the help of the

  17. Lattice Boltzmann simulation of solute transport in heterogeneous porous media with conduits to estimate macroscopic continuous time random walk model parameters

    SciTech Connect (OSTI)

    Anwar, S.; Cortis, A.; Sukop, M.

    2008-10-20

    Lattice Boltzmann models simulate solute transport in porous media traversed by conduits. Resulting solute breakthrough curves are fitted with Continuous Time Random Walk models. Porous media are simulated by damping flow inertia and, when the damping is large enough, a Darcy's Law solution instead of the Navier-Stokes solution normally provided by the lattice Boltzmann model is obtained. Anisotropic dispersion is incorporated using a direction-dependent relaxation time. Our particular interest is to simulate transport processes outside the applicability of the standard Advection-Dispersion Equation (ADE) including eddy mixing in conduits. The ADE fails to adequately fit any of these breakthrough curves.

  18. Eddy sensors for small diameter stainless steel tubes.

    SciTech Connect (OSTI)

    Skinner, Jack L.; Morales, Alfredo Martin; Grant, J. Brian; Korellis, Henry James; LaFord, Marianne Elizabeth; Van Blarigan, Benjamin; Andersen, Lisa E.

    2011-08-01

    The goal of this project was to develop non-destructive, minimally disruptive eddy sensors to inspect small diameter stainless steel metal tubes. Modifications to Sandia's Emphasis/EIGER code allowed for the modeling of eddy current bobbin sensors near or around 1/8-inch outer diameter stainless steel tubing. Modeling results indicated that an eddy sensor based on a single axial coil could effectively detect changes in the inner diameter of stainless steel tubing. Based on the modeling results, sensor coils capable of detecting small changes in the inner diameter of a stainless steel tube were designed, built, and tested. The observed sensor response agreed with the results of the modeling and with eddy sensor theory. A separate limited distribution SAND report is being issued demonstrating the application of this sensor.

  19. A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations

    SciTech Connect (OSTI)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N^3) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
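
    To make the idea of computing selected inverse entries from local blocks concrete, here is a toy dense-matrix sketch (an illustrative assumption, not the paper's implementation): approximate one entry of the inverse of a banded, diagonally dominant "overlap" matrix by inverting only a principal submatrix around the row of interest.

        import numpy as np

        def selected_inverse_entry(S, i, j, halo):
            """Approximate (S^-1)[i, j] using the principal submatrix of indices within `halo` of i."""
            size = S.shape[0]
            idx = np.arange(max(0, i - halo), min(size, i + halo + 1))
            if j not in idx:
                return 0.0                              # outside the localization region: neglected
            local_inverse = np.linalg.inv(S[np.ix_(idx, idx)])
            li, lj = np.where(idx == i)[0][0], np.where(idx == j)[0][0]
            return float(local_inverse[li, lj])

        # Banded, diagonally dominant test matrix standing in for an overlap matrix.
        n = 200
        S = np.eye(n) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1)) + 0.02 * (np.eye(n, k=2) + np.eye(n, k=-2))
        exact = np.linalg.inv(S)
        approx = selected_inverse_entry(S, 100, 101, halo=10)
        print(abs(approx - exact[100, 101]))            # small error that shrinks as the halo grows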

  20. On the Use of Integrated Daylighting and Energy Simulations To Drive the Design of a Large Net-Zero Energy Office Building: Preprint

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    On the Use of Integrated Daylighting and Energy Simulations To Drive the Design of a Large Net-Zero Energy Office Building: Preprint. Rob Guglielmetti, Shanti Pless, and Paul Torcellini. Presented at SimBuild 2010, New York, New York, August 15-19, 2010.

  1. On the scalability of the Albany/FELIX first-order Stokes approximation ice sheet solver for large-scale simulations of the Greenland and Antarctic ice sheets

    Office of Scientific and Technical Information (OSTI)

    On the scalability of the Albany/FELIX first-order Stokes approximation ice sheet solver for large-scale simulations of the Greenland and Antarctic ice sheets. Irina Kalashnikova (1), Raymond S. Tuminaro (2), Mauro Perego (2), Andrew G. Salinger (2), and Stephen F. Price (3). (1) Quantitative Modeling & Analysis Dept., Sandia National Laboratories, Livermore, CA, USA, Phone: (925)294-2474, E-mail: ikalash@sandia.gov. (2) Computational Mathematics

  2. Large scale simulations of the mechanical properties of layered transition metal ternary compounds for fossil energy power system applications

    SciTech Connect (OSTI)

    Ching, Wai-Yim

    2014-12-31

    Advanced materials with applications in extreme conditions such as high temperature, high pressure, and corrosive environments play a critical role in the development of new technologies to significantly improve the performance of different types of power plants. Materials that are currently employed in fossil energy conversion systems are typically the Ni-based alloys and stainless steels that have already reached their ultimate performance limits. Incremental improvements are unlikely to meet the more stringent requirements aimed at increased efficiency and reduced risk while addressing environmental concerns and keeping costs low. Computational studies can lead the way in the search for novel materials or for significant improvements in existing materials that can meet such requirements. Detailed computational studies with sufficient predictive power can provide an atomistic-level understanding of the key characteristics that lead to desirable properties. This project focuses on the comprehensive study of a new class of materials called MAX phases, or Mn+1AXn (M = a transition metal, A = Al or other group III, IV, and V elements, X = C or N). The MAX phases are layered transition metal carbides or nitrides with a rare combination of metallic and ceramic properties. Due to their unique structural arrangements and special types of bonding, these thermodynamically stable alloys possess some of the most outstanding properties. We used a genomic approach in screening a large number of potential MAX phases and established a database of 665 viable MAX compounds covering their structural, mechanical, and electronic properties, and investigated the correlations between them. This database is then used as a tool for materials informatics for further exploration of this class of intermetallic compounds.

  3. Big Eddy-Knight | Open Energy Information

    Open Energy Info (EERE)

    the transmission line route that would pose an immediate threat to human health or the environment, including large dump sites, drums of unknown substances, suspicious odors,...

  4. ARM - Publications: Science Team Meeting Documents

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    University (b) Twelfth Atmospheric Radiation Measurement (ARM) Science Team Meeting LES (large eddy simulation) models can explicitly resolve large turbulent eddies, which...

  5. Probability of detection models for eddy current NDE methods

    SciTech Connect (OSTI)

    Rajesh, S.N.

    1993-04-30

    The development of probability of detection (POD) models for a variety of nondestructive evaluation (NDE) methods is motivated by a desire to quantify the variability introduced during the process of testing. Sources of variability involved in eddy current methods of NDE include those caused by variations in liftoff, material properties, probe canting angle, scan format, surface roughness, and measurement noise. This thesis presents a comprehensive POD model for eddy current NDE. Eddy current methods of nondestructive testing are used widely in industry to inspect a variety of nonferromagnetic and ferromagnetic materials. The development of a comprehensive POD model is therefore of significant importance. The model incorporates several sources of variability characterized by a multivariate Gaussian distribution and employs finite element analysis to predict the signal distribution. The method of mixtures is then used for estimating optimal threshold values. The research demonstrates the use of a finite element model within a probabilistic framework to predict the spread in the measured signal for eddy current nondestructive methods. Using the signal distributions for various flaw sizes, POD curves for varying defect parameters have been computed. In contrast to experimental POD models, the cost of generating such curves is very low, and complex defect shapes can be handled very easily. The results are also operator independent.
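
    A minimal sketch of how a POD curve follows from a signal distribution and a detection threshold (assuming a Gaussian flaw response and an invented linear signal-versus-flaw-size relation; the thesis instead obtains the signal distribution from finite element analysis and a multivariate variability model).

        import numpy as np
        from math import erf, sqrt

        def pod(signal_mean, signal_sigma, threshold):
            """P(signal > threshold) for a Gaussian-distributed flaw response."""
            z = (threshold - signal_mean) / (signal_sigma * sqrt(2.0))
            return 0.5 * (1.0 - erf(z))

        threshold = 1.0                                   # chosen to limit false calls from noise
        flaw_depths = np.linspace(0.1, 2.0, 8)            # hypothetical flaw sizes (mm)
        for depth in flaw_depths:
            mean = 1.5 * depth                            # assumed linear signal-vs-size response
            sigma = 0.4                                   # spread from liftoff, noise, probe angle, ...
            print(f"depth {depth:.2f} mm -> POD {pod(mean, sigma, threshold):.3f}")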

  6. Eddy-current system for the vibration-testing of blades

    DOE Patents [OSTI]

    Jacobs, Martin E.

    1977-01-01

    This invention is an improved system for the vibration-testing of cantilevered non-ferrous articles by inducing eddy currents therein. The principal advantage of the system is that relatively little heat is generated in the article being vibrated. Thus, a more accurate measurement of the fatigue characteristics of the article is obtained. Furthermore, the generation of relatively little heat in the blade permits tests to be conducted in low-pressure atmospheres simulating certain actual process environments. Heat generation in the vibrated article is minimized by utilizing eddy currents which are generated by an electromagnet whose magnetic field varies but does not change polarity. The typical winding for the electromagnet is excited with pulsating d.c. That is, the winding is alternately charged by connecting it across a d.c. power supply and then discharged by connecting it across a circuit for receiving current generated in the winding by self-induction. Preferably, the discharge circuit is designed so that the waveform of the discharging current approximates that of the charging current.

  7. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    SciTech Connect (OSTI)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3x when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3x; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU-GPU collaborative simulations that solve realistic CFD problems with both complex configurations and high-order schemes.
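
    The CPU/GPU load-balancing scheme is only summarized above; as a generic illustration of the idea (not HOSTA's actual algorithm), the snippet below splits grid blocks between the two devices in proportion to measured throughput so that both sides finish a time step at roughly the same time.

        # Throughput-proportional block split between CPU and GPU (illustrative sketch only).
        def split_blocks(n_blocks, gpu_throughput, cpu_throughput):
            gpu_share = gpu_throughput / (gpu_throughput + cpu_throughput)
            n_gpu = round(n_blocks * gpu_share)
            return n_gpu, n_blocks - n_gpu

        # Example: the GPU measured ~1.3x faster than the two CPUs, as in the abstract's comparison.
        print(split_blocks(n_blocks=64, gpu_throughput=1.3, cpu_throughput=1.0))  # -> (36, 28)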

  8. Eddy-driven sediment transport in the Argentine Basin: Is the height of the Zapiola Rise hydrodynamically controlled?

    SciTech Connect (OSTI)

    Weijer, Wilbert; Maltrud, Mathew E.; Homoky, William B.; Polzin, Kurt L.; Maas, Leo R. M.

    2015-03-27

    In this study, we address the question of whether eddy-driven transports in the Argentine Basin can be held responsible for enhanced sediment accumulation over the Zapiola Rise, hence accounting for the existence and growth of this sediment drift. To address this question, we perform a 6 year simulation with a strongly eddying ocean model. We release two passive tracers, with settling velocities that are consistent with silt and clay size particles. Our experiments show contrasting behavior between the silt fraction and the lighter clay. Due to its larger settling velocity, the silt fraction reaches a quasi-steady state within a few years, with abyssal sedimentation rates that match net input. In contrast, clay settles only slowly, and its distribution is heavily stratified, being transported mainly along isopycnals. Yet, both size classes display a significant and persistent concentration minimum over the Zapiola Rise. We show that the Zapiola Anticyclone, a strong eddy-driven vortex that circulates around the Zapiola Rise, is a barrier to sediment transport, and hence prevents significant accumulation of sediments on the Rise. We conclude that sediment transport by the turbulent circulation in the Argentine Basin alone cannot account for the preferred sediment accumulation over the Rise. We speculate that resuspension is a critical process in the formation and maintenance of the Zapiola Rise.

  9. Eddy-driven sediment transport in the Argentine Basin: Is the height of the Zapiola Rise hydrodynamically controlled?

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Weijer, Wilbert; Maltrud, Mathew E.; Homoky, William B.; Polzin, Kurt L.; Maas, Leo R. M.

    2015-03-27

    In this study, we address the question of whether eddy-driven transports in the Argentine Basin can be held responsible for enhanced sediment accumulation over the Zapiola Rise, hence accounting for the existence and growth of this sediment drift. To address this question, we perform a 6 year simulation with a strongly eddying ocean model. We release two passive tracers, with settling velocities that are consistent with silt and clay size particles. Our experiments show contrasting behavior between the silt fraction and the lighter clay. Due to its larger settling velocity, the silt fraction reaches a quasi-steady state within a few years, with abyssal sedimentation rates that match net input. In contrast, clay settles only slowly, and its distribution is heavily stratified, being transported mainly along isopycnals. Yet, both size classes display a significant and persistent concentration minimum over the Zapiola Rise. We show that the Zapiola Anticyclone, a strong eddy-driven vortex that circulates around the Zapiola Rise, is a barrier to sediment transport, and hence prevents significant accumulation of sediments on the Rise. We conclude that sediment transport by the turbulent circulation in the Argentine Basin alone cannot account for the preferred sediment accumulation over the Rise. We speculate that resuspension is a critical process in the formation and maintenance of the Zapiola Rise.

  10. CgWind: A high-order accurate simulation tool for wind turbines and wind farms

    SciTech Connect (OSTI)

    Chand, K K; Henshaw, W D; Lundquist, K A; Singer, M A

    2010-02-22

    CgWind is a high-fidelity large eddy simulation (LES) tool designed to meet the modeling needs of wind turbine and wind park engineers. This tool combines several advanced computational technologies in order to model accurately the complex and dynamic nature of wind energy applications. The composite grid approach provides high-quality structured grids for the efficient implementation of high-order accurate discretizations of the incompressible Navier-Stokes equations. Composite grids also provide a natural mechanism for modeling bodies in relative motion and complex geometry. Advanced algorithms such as matrix-free multigrid, compact discretizations and approximate factorization will allow CgWind to perform highly resolved calculations efficiently on a wide class of computing resources. Also in development are nonlinear LES subgrid-scale models required to simulate the many interacting scales present in large wind turbine applications. This paper outlines our approach, the current status of CgWind and future development plans.

  11. On the simulation of shock-driven material mixing in high-Re flows (u)

    SciTech Connect (OSTI)

    Grinstein, Fernando F [Los Alamos National Laboratory

    2009-01-01

    Implicit large eddy simulation proposes to rely effectively on the subgrid modeling and filtering provided implicitly by physics-capturing numerics. Extensive work has demonstrated that predictive simulations of turbulent velocity fields are possible using a class of high resolution, non-oscillatory finite-volume (NFV) numerical algorithms. Truncation terms associated with NFV methods implicitly provide subgrid models capable of emulating the physical dynamics of the unresolved turbulent velocity fluctuations by themselves. The extension of the approach to the substantially more difficult problem of under-resolved material mixing by an under-resolved velocity field has not yet been investigated numerically, nor are there any theories as to when the methodology may be expected to be successful. Progress in addressing these issues in studies of shock-driven scalar mixing driven by Richtmyer-Meshkov instabilities will be reported in the context of ongoing simulations of shock-tube laboratory experiments.

  12. simulations | National Nuclear Security Administration

    National Nuclear Security Administration (NNSA)

    simulations

  13. Eddy Correlation Flux Measurement System (ECOR) Instrument Handbook

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    SC-ARM/TR-052 Eddy Correlation Flux Measurement System Instrument Handbook DR Cook January 2016 DOE/SC-ARM/TR-052 DISCLAIMER This report was prepared as an account of work sponsored by the U.S. Government. Neither the United States nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that

  14. Automated detection and location of indications in eddy current signals

    DOE Patents [OSTI]

    Brudnoy, David M.; Oppenlander, Jane E.; Levy, Arthur J.

    2000-01-01

    A computer implemented information extraction process that locates and identifies eddy current signal features in digital point-ordered signals, signals representing data from inspection of test materials, by enhancing the signal features relative to signal noise, detecting features of the signals, verifying the location of the signal features that can be known in advance, and outputting information about the identity and location of all detected signal features.
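
    A minimal sketch of a pipeline with the same general shape (enhance, detect, report). The smoothing filter, the threshold rule, and all parameters are illustrative assumptions, not the patent's specific algorithm.

        import numpy as np

        def detect_indications(signal, window=11, n_sigma=4.0):
            """Return sample indices where a point-ordered eddy-current signal
            departs from its smoothed baseline (hypothetical detection rule)."""
            sig = np.asarray(signal, dtype=float)
            baseline = np.convolve(sig, np.ones(window) / window, mode="same")  # enhance: suppress noise
            residual = sig - baseline
            threshold = n_sigma * residual.std()                                 # detect: amplitude test
            return np.flatnonzero(np.abs(residual) > threshold)                  # locate: report indices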

  15. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; et al

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, as well as the tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  16. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    SciTech Connect (OSTI)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, as well as the tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.
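
    As an illustration of the lognormal fitting mentioned above, a minimal single-mode sketch is given below. The synthetic data, the single-mode assumption, and the parameter values are ours; the campaign fits may use multiple modes and different conventions.

        import numpy as np
        from scipy.optimize import curve_fit

        def lognormal_mode(D, N, Dg, sigma_g):
            """dN/dlnD of one lognormal mode: N = number concentration,
            Dg = median diameter, sigma_g = geometric standard deviation."""
            return (N / (np.sqrt(2.0 * np.pi) * np.log(sigma_g))) * np.exp(
                -((np.log(D) - np.log(Dg)) ** 2) / (2.0 * np.log(sigma_g) ** 2))

        D = np.logspace(-2, 0, 40)                              # diameters, um (assumed)
        obs = lognormal_mode(D, N=800.0, Dg=0.12, sigma_g=1.6)  # stand-in "observations"
        popt, _ = curve_fit(lognormal_mode, D, obs, p0=[500.0, 0.1, 1.5])
        print(popt)  # recovered (N, Dg, sigma_g)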

  17. Performance of powder-filled evacuated panel insulation in a manufactured home roof cavity: Tests in the Large Scale Climate Simulator

    SciTech Connect (OSTI)

    Petrie, T.W.; Kosny, J.; Childs, P.W.

    1996-03-01

    A full-scale section of half the top of a single-wide manufactured home has been studied in the Large Scale Climate Simulator (LSCS) at the Oak Ridge National Laboratory. A small roof cavity with little room for insulation at the eaves is often the case with single-wide units and limits practical ways to improve thermal performance. The purpose of the current tests was to obtain steady-state performance data for the roof cavity of the manufactured home test section when the roof cavity was insulated with fiberglass batts, blown-in rock wool insulation, or combinations of these insulations and powder-filled evacuated panel (PEP) insulation. Four insulation configurations were tested: (A) a configuration with two layers of nominal R_US-7 h·ft²·°F/Btu (R_SI-1.2 m²·K/W) fiberglass batts; (B) a layer of PEPs and one layer of the fiberglass batts; (C) four layers of the fiberglass batts; and (D) an average 4.1 in. (10.4 cm) thick layer of blown-in rock wool at an average density of 2.4 lb/ft³ (38 kg/m³). Effects of additional sheathing were determined for Configurations B and C. With Configuration D over the ceiling, two layers of expanded polystyrene (EPS) boards, each about the same thickness as the PEPs, were installed over the trusses instead of the roof. Aluminum foils facing the attic and over the top layer of EPS were added. The top layer of EPS was then replaced by PEPs.

  18. Method and apparatus for correcting eddy current signal voltage for temperature effects

    DOE Patents [OSTI]

    Kustra, Thomas A.; Caffarel, Alfred J.

    1990-01-01

    An apparatus and method for measuring physical characteristics of an electrically conductive material by eddy-current techniques, while compensating for measurement errors caused by changes in temperature, includes a switching arrangement connected between the primary and reference coils of an eddy-current probe. The switching arrangement allows the probe to be selectively connected between an eddy current output oscilloscope and a digital ohm-meter for measuring the resistances of the primary and reference coils substantially at the time of the eddy current measurement. In this way, changes in resistance due to temperature effects can be fully taken into account in determining the true error in the eddy current measurement. The true error can consequently be converted into an equivalent eddy current measurement correction.
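
    A minimal sketch of one way such a resistance-based correction could be applied in post-processing. The linear correction form and the coefficient are assumptions for illustration only; they are not the patent's specific circuit or procedure.

        def temperature_corrected_voltage(v_meas, r_coil, r_coil_ref, k_volts_per_ohm):
            """Subtract the signal shift attributed to coil-resistance (temperature)
            drift since calibration (hypothetical linear model)."""
            delta_r = r_coil - r_coil_ref          # resistance change since calibration
            return v_meas - k_volts_per_ohm * delta_r

        # Example with made-up numbers: 0.02 V of apparent signal per ohm of drift.
        print(temperature_corrected_voltage(1.250, r_coil=10.6, r_coil_ref=10.0,
                                            k_volts_per_ohm=0.02))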

  19. Eddy current probe with foil sensor mounted on flexible probe tip and method of use

    DOE Patents [OSTI]

    Viertl, John R. M.; Lee, Martin K.

    2001-01-01

    An eddy current probe has a metal housing with a tip that is covered by a flexible conductive foil strip. The foil strip is mounted on a deformable nose at the probe tip so that the strip and its coils conform to the surface to which they are applied. A pair of copper coils is embedded in the foil strip. The first coil of the pair generates an electromagnetic field that induces eddy currents on the surface, and the second coil carries a current influenced by those eddy currents. The currents in the second coil are analyzed to obtain information on the surface eddy currents.

  20. Microsoft Word - FEIS-0421-SA-01-Big Eddy-KnightDesignAdjustmentFY12...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Final Environmental Impact Statement (EIS) (DOE/EIS-0421) and issued a Record of Decision (ROD) documenting its decision to build and operate the Big Eddy-Knight transmission line. ...

  1. Is it Cost-Effective to Replace Old Eddy-Current Drives? - Motor Tip Sheet #12

    SciTech Connect (OSTI)

    2008-07-01

    New pulse-width-modulated (PWM) adjustable speed drives (ASDs) may be cost-effective replacements for aging or maintenance-intensive eddy-current drives.

  2. Simulating Collisions for Hydrokinetic Turbines

    SciTech Connect (OSTI)

    Richmond, Marshall C.; Romero Gomez, Pedro DJ; Rakowski, Cynthia L.

    2013-10-01

    Evaluations of blade-strike on an axial-flow Marine Hydrokinetic turbine were conducted using a conventional methodology as well as an alternative modeling approach proposed in the present document. The proposed methodology integrates the following components into a Computational Fluid Dynamics (CFD) model: (i) advanced eddy-resolving flow simulations, (ii) ambient turbulence based on field data, (iii) moving turbine blades in highly transient flows, and (iv) Lagrangian particles to mimic the potential fish pathways. The sensitivity of blade-strike probability to the following conditions was also evaluated: (i) the turbulent environment, (ii) fish size, and (iii) mean stream flow velocity. The proposed methodology provided the fraction of collisions and offered the capability of analyzing the causal relationships between the flow environment and the resulting strikes on rotating blades. Overall, the conventional methodology largely overestimates the probability of strike, and lacks the ability to produce potential fish and aquatic biota trajectories as they interact with the rotating turbine. By using a set of experimental correlations of exposure-response of live fish colliding with moving blades, the occurrence, frequency, and intensity of the particle collisions were next used to calculate the survival rate of fish crossing the MHK turbine. This step indicated survival rates always greater than 98%. Although the proposed CFD framework is computationally more expensive, it provides the advantage of evaluating multiple mechanisms of stress and injury of hydrokinetic turbine devices on fish.
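
    A minimal sketch of the post-processing step described above: particle tracks flagged as striking a blade are weighted by an exposure-response survival function. The survival function below is a placeholder; the report uses experimental correlations that are not reproduced here.

        def survival_rate(strike_flags, blade_speed, fish_length,
                          survival_model=lambda v, l: max(0.0, 1.0 - 0.002 * v * l)):
            """Fraction of particle tracks surviving passage (hypothetical model)."""
            survivors = sum(1.0 if not hit else survival_model(blade_speed, fish_length)
                            for hit in strike_flags)
            return survivors / len(strike_flags)

        # Example: 10,000 tracks, 5% of which strike a blade with 6 m/s tip speed.
        tracks = [i < 500 for i in range(10_000)]
        print(survival_rate(tracks, blade_speed=6.0, fish_length=0.15))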

  3. Methods of and apparatus for levitating an eddy current probe

    DOE Patents [OSTI]

    Stone, William J.

    1988-05-03

    An eddy current probe is supported against the force of gravity with an air bearing while being urged horizontally toward the specimen being examined by a spring and displaced horizontally against the force of the spring pneumatically. The pneumatic displacement is accomplished by flowing air between a plenum chamber fixed with respect to the probe and the surface of the specimen. In this way, the surface of the specimen can be examined without making mechanical contact therewith while precisely controlling the distance at which the probe stands off from the surface of the specimen.

  4. The Role of Eddy-Transport in the Thermohaline Circulation

    SciTech Connect (OSTI)

    Dr. Paola Cessi

    2011-11-17

    Several research themes were developed during the course of this project: (1) low-frequency oceanic variability; (2) the role of eddies in the Antarctic Circumpolar Current (ACC) region; and (3) deep stratification and the overturning circulation. The key findings were as follows: (1) The stratification below the main thermocline (at about 500 m) is determined in the circumpolar region and then communicated to the enclosed portions of the oceans through the overturning circulation. (2) An Atlantic pole-to-pole overturning circulation can be maintained with very small interior mixing as long as surface buoyancy values are shared between the northern North Atlantic and the ACC region.

  5. Eddy current gauge for monitoring displacement using printed circuit coil

    DOE Patents [OSTI]

    Visioli, Jr., Armando J.

    1977-01-01

    A proximity detection system for non-contact displacement and proximity measurement of static or dynamic metallic or conductive surfaces is provided wherein the measurement is obtained by monitoring the change in impedance of a flat, generally spiral-wound, printed circuit coil which is excited by a constant current, constant frequency source. The change in impedance, which is detected as a corresponding change in voltage across the coil, is related to the eddy current losses in the distant conductive material target. The arrangement provides for considerable linear displacement range with increased accuracies, stability, and sensitivity over the entire range.
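
    A minimal sketch of how such a gauge's coil-voltage reading might be turned into a displacement through a calibration table. The calibration points below are invented; a real gauge would be calibrated against known stand-off distances.

        import numpy as np

        cal_gap_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])       # known gaps (assumed)
        cal_volts  = np.array([2.10, 2.45, 2.72, 2.93, 3.08])   # coil voltage at each gap

        def gap_from_voltage(v):
            """Interpolate the calibration table (voltage assumed monotonic in gap)."""
            return float(np.interp(v, cal_volts, cal_gap_mm))

        print(gap_from_voltage(2.60))   # ~1.3 mm for this made-up calibration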

  6. Atmospheric Moisture Budget and Spatial Resolution Dependence of Precipitation Extremes in Aquaplanet Simulations

    SciTech Connect (OSTI)

    Yang, Qing; Leung, Lai-Yung R.; Rauscher, Sara; Ringler, Todd; Taylor, Mark

    2014-05-01

    This study investigates the resolution dependence of precipitation extremes in an aqua-planet framework. A strong resolution dependence of precipitation extremes is seen over both the tropics and extra-tropics, and the magnitude of this dependence also varies with the dynamical core. Moisture budget analyses based on aqua-planet simulations with the Community Atmosphere Model (CAM) using the Model for Prediction Across Scales (MPAS) and High Order Method Modeling Environment (HOMME) dynamical cores, but the same physics parameterizations, suggest that during precipitation extremes the moisture supply for surface precipitation is derived mainly from advective moisture convergence. The resolution dependence of precipitation extremes originates mainly from advective moisture transport in the vertical direction. At most vertical levels over the tropics and in the lower atmosphere over the subtropics, the vertical eddy transport of the mean moisture field dominates the contribution to precipitation extremes and its resolution dependence. Over the subtropics, the source of moisture, its associated energy, and the resolution dependence during extremes are dominated by the eddy transport of eddy moisture in the mid- and upper troposphere. With both the MPAS and HOMME dynamical cores, the resolution dependence of the vertical advective moisture convergence is explained mainly by dynamical changes (related to vertical velocity, or omega), although the vertical gradients of moisture act like averaging kernels that determine the sensitivity of the overall resolution dependence to the changes in omega at different vertical levels. The natural reduction of variability with coarser resolution, represented by an areal data averaging (aggregation) effect, largely explains the resolution dependence in omega. The thermodynamic changes, which likely result from nonlinear feedback in response to the large dynamical changes, are small compared to the overall changes in dynamics (omega). However, after excluding the data aggregation effect in omega, the thermodynamic changes become relatively significant in offsetting the effect of dynamics, reducing the differences between the simulated and aggregated results. Compared to MPAS, the stronger vertical motion simulated with HOMME also results in a larger resolution dependence. Compared to the simulation at fine resolution, the vertical motion during extremes is insufficiently resolved/parameterized at the coarser resolution even after accounting for the natural reduction in variability with coarser resolution, and this is more distinct in the simulation with HOMME. To reduce uncertainties in simulated precipitation extremes, future development in cloud parameterizations must address their sensitivity to spatial resolution as well as to dynamical cores.
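
    For orientation, one common way of writing the decomposition behind phrases such as "eddy transport of the mean moisture field" and "eddy transport of eddy moisture" is sketched below in pressure coordinates; the overbar/prime notation and the choice of averaging operator are our assumptions, not necessarily the paper's exact budget formulation.

        \omega = \bar{\omega} + \omega', \qquad q = \bar{q} + q',

        -\,\omega \frac{\partial q}{\partial p}
          = -\,\bar{\omega}\frac{\partial \bar{q}}{\partial p}
            \;-\; \bar{\omega}\frac{\partial q'}{\partial p}
            \;-\; \omega'\frac{\partial \bar{q}}{\partial p}
            \;-\; \omega'\frac{\partial q'}{\partial p},

    where the third term on the right corresponds to eddy transport of the mean moisture field and the fourth to eddy transport of eddy moisture.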

  7. Three-fluid, three-dimensional magnetohydrodynamic solar wind model with eddy viscosity and turbulent resistivity

    SciTech Connect (OSTI)

    Usmanov, Arcadi V.; Matthaeus, William H.; Goldstein, Melvyn L.

    2014-06-10

    We have developed a three-fluid, three-dimensional magnetohydrodynamic solar wind model that incorporates turbulence transport, eddy viscosity, turbulent resistivity, and turbulent heating. The solar wind plasma is described as a system of co-moving solar wind protons, electrons, and interstellar pickup protons, with separate energy equations for each species. Numerical steady-state solutions of the Reynolds-averaged solar wind equations coupled with turbulence transport equations for turbulence energy, cross helicity, and correlation length are obtained by the time-relaxation method, in a frame of reference corotating with the Sun, in the region from 0.3 to 100 AU (but still inside the termination shock). The model equations include the effects of electron heat conduction, Coulomb collisions, photoionization of interstellar hydrogen atoms and their charge exchange with the solar wind protons, turbulence energy generation by pickup protons, and turbulent heating of solar wind protons and electrons. The turbulence transport model is based on the Reynolds decomposition and turbulence phenomenologies that describe the conversion of fluctuation energy into heat due to a turbulent cascade. In addition to using separate energy equations for the solar wind protons and electrons, a significant improvement over our previous work is that the turbulence model now uses an eddy viscosity approximation for the Reynolds stress tensor and the mean turbulent electric field. The approximation allows the turbulence model to account for driving of turbulence by large-scale velocity gradients. Using either a dipole approximation for the solar magnetic field or synoptic solar magnetograms from the Wilcox Solar Observatory to assign boundary conditions at the coronal base, we apply the model to study the global structure of the solar wind and its three-dimensional properties, including embedded turbulence, heating, and acceleration throughout the heliosphere. The model results are compared with plasma and magnetic field observations from the WIND, Ulysses, and Voyager 2 spacecraft.

  8. Computer programs for eddy-current defect studies

    SciTech Connect (OSTI)

    Pate, J. R.; Dodd, C. V.

    1990-06-01

    Several computer programs to aid in the design of eddy-current tests and probes have been written. The programs, written in Fortran, deal in various ways with the response to defects exhibited by four types of probes: the pancake probe, the reflection probe, the circumferential boreside probe, and the circumferential encircling probe. Programs are included which calculate the impedance or voltage change in a coil due to a defect, which calculate and plot the defect sensitivity factor of a coil, and which invert calculated or experimental readings to obtain the size of a defect. The theory upon which the programs are based is the Burrows point defect theory, and thus the calculations of the programs will be more accurate for small defects. 6 refs., 21 figs.

  9. Direct numerical simulation of turbulent reacting flows

    SciTech Connect (OSTI)

    Chen, J.H.

    1993-12-01

    The development of turbulent combustion models that reflect some of the most important characteristics of turbulent reacting flows requires knowledge about the behavior of key quantities in well defined combustion regimes. In turbulent flames, the coupling between the turbulence and the chemistry is so strong in certain regimes that it is very difficult to isolate the role played by one individual phenomenon. Direct numerical simulation (DNS) is an extremely useful tool to study in detail the turbulence-chemistry interactions in certain well defined regimes. Globally, non-premixed flames are controlled by two limiting cases: the fast chemistry limit, in which reactions are much faster than the turbulent fluctuations, and the opposite limit of slow chemistry. In between these two limits, finite-rate chemical effects are important and the turbulence interacts strongly with the chemical processes. This regime is important because industrial burners operate in regimes in which, locally, the flame undergoes extinction or is at least in some nonequilibrium condition. Furthermore, these nonequilibrium conditions strongly influence the production of pollutants. To quantify the finite-rate chemistry effect, direct numerical simulations are performed to study the interaction between an initially laminar non-premixed flame and a three-dimensional field of homogeneous isotropic decaying turbulence. Emphasis is placed on the dynamics of extinction and on transient effects on the fine scale mixing process. Differential molecular diffusion among species is also examined with this approach, both for nonreacting and reacting situations. To address the problem of large-scale mixing and to examine the effects of mean shear, efforts are underway to perform large eddy simulations of round three-dimensional jets.

  10. Method and infrastructure for cycle-reproducible simulation on large scale digital circuits on a coordinated set of field-programmable gate arrays (FPGAs)

    DOE Patents [OSTI]

    Asaad, Sameh W; Bellofatto, Ralph E; Brezzo, Bernard; Haymes, Charles L; Kapur, Mohit; Parker, Benjamin D; Roewer, Thomas; Tierno, Jose A

    2014-01-28

    A plurality of target field programmable gate arrays are interconnected in accordance with a connection topology and map portions of a target system. A control module is coupled to the plurality of target field programmable gate arrays. A balanced clock distribution network is configured to distribute a reference clock signal, and a balanced reset distribution network is coupled to the control module and configured to distribute a reset signal to the plurality of target field programmable gate arrays. The control module and the balanced reset distribution network are cooperatively configured to initiate and control a simulation of the target system with the plurality of target field programmable gate arrays. A plurality of local clock control state machines reside in the target field programmable gate arrays. The local clock state machines are configured to generate a set of synchronized free-running and stoppable clocks to maintain cycle-accurate and cycle-reproducible execution of the simulation of the target system. A method is also provided.

  11. Is it Cost-Effective to Replace Old Eddy-Current Drives?

    Broader source: Energy.gov [DOE]

    New pulse-width-modulated (PWM) adjustable speed drives (ASDs) may be cost-effective replacements for aging or maintenance-intensive eddy-current drives. This tip sheet provides suggested actions and example energy savings calculations.

  12. LES ARM Symbiotic Simulation and Observation (LASSO) Implementation Strategy

    SciTech Connect (OSTI)

    Gustafson Jr., WI; Vogelmann, AM

    2015-09-01

    This document illustrates the design of the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) workflow to provide a routine, high-resolution modeling capability to augment the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility’s high-density observations. LASSO will create a powerful new capability for furthering ARM’s mission to advance understanding of cloud, radiation, aerosol, and land-surface processes. The combined observational and modeling elements will enable a new level of scientific inquiry by connecting processes and context to observations and providing needed statistics for details that cannot be measured. The result will be improved process understanding that facilitates concomitant improvements in climate model parameterizations. The initial LASSO implementation will be for ARM’s Southern Great Plains site in Oklahoma and will focus on shallow convection, which is poorly simulated by climate models due in part to clouds’ typically small spatial scale compared to model grid spacing, and because the convection involves complicated interactions of microphysical and boundary layer processes.

  13. Method for removal of random noise in eddy-current testing system

    DOE Patents [OSTI]

    Levy, Arthur J.

    1995-01-01

    Eddy-current response voltages, generated during inspection of metallic structures for anomalies, are often replete with noise. Therefore, analysis of the inspection data and results is difficult or near impossible, resulting in inconsistent or unreliable evaluation of the structure. This invention processes the eddy-current response voltage, removing the effect of random noise, to allow proper identification of anomalies within and associated with the structure.
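
    A minimal sketch of a random-noise removal step in the same spirit: a median filter suppresses isolated noise excursions in a point-ordered response while preserving genuine anomaly signatures. The filter choice and window size are illustrative assumptions, not the patent's method.

        import numpy as np
        from scipy.signal import medfilt

        def denoise_response(voltages, window=7):
            """Median-filter a 1-D array of point-ordered response voltages
            (window must be odd; the value here is an assumed tuning parameter)."""
            return medfilt(np.asarray(voltages, dtype=float), kernel_size=window)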

  14. Fast Acting Eddy Current Driven Valve for Massive Gas Injection on ITER

    SciTech Connect (OSTI)

    Lyttle, Mark S; Baylor, Larry R; Carmichael, Justin R; Combs, Stephen Kirk; Ericson, Milton Nance; Ezell, N Dianne Bull; Meitner, S. J.; Rasmussen, David A; Warmack, Robert J Bruce; Maruyama, So; Kiss, Gabor

    2015-01-01

    Tokamak plasma disruptions present a significant challenge to ITER as they can result in intense heat flux, large forces from halo and eddy currents, and potential first-wall damage from the generation of multi-MeV runaway electrons. Massive gas injection (MGI) of high-Z material using fast acting valves is being explored on existing tokamaks and is planned for ITER as a method to distribute the thermal load of the plasma evenly to prevent melting, to control the rate of the current decay to minimize mechanical loads, and to suppress the generation of runaway electrons. A fast acting valve and accompanying power supply have been designed, and first test articles produced, to meet the requirements for a disruption mitigation system on ITER. The test valve incorporates a flyer plate actuator similar to designs deployed on TEXTOR, ASDEX Upgrade, and JET [1-3], of a size useful for ITER, with special considerations to mitigate the high mechanical forces developed during actuation due to the high background magnetic fields. The valve includes a tip design and all-metal valve stem sealing for compatibility with tritium and high neutron and gamma fluxes.

  15. v₄ from ideal and viscous hydrodynamic simulations of nuclear collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC)

    SciTech Connect (OSTI)

    Luzum, Matthew; Gombeaud, Clement; Ollitrault, Jean-Yves

    2010-05-15

    We compute v₄/(v₂)² in ideal and viscous hydrodynamics. We investigate its sensitivity to details of the hydrodynamic model and compare the results to experimental data from the BNL Relativistic Heavy Ion Collider (RHIC). Whereas v₂ has a significant sensitivity only to initial eccentricity and viscosity while being insensitive to freeze-out temperature, we find that v₄/(v₂)² is quite insensitive to initial eccentricity. On the other hand, it can still be sensitive to shear viscosity in addition to freeze-out temperature, although viscous effects do not universally increase v₄/(v₂)² as originally predicted. Consistent with data, we find no dependence on particle species. We also make a prediction for v₄/(v₂)² in heavy ion collisions at the CERN Large Hadron Collider (LHC).

  16. Variable current speed controller for eddy current motors

    DOE Patents [OSTI]

    Gerth, H.L.; Bailey, J.M.; Casstevens, J.M.; Dixon, J.H.; Griffith, B.O.; Igou, R.E.

    1982-03-12

    A speed control system for eddy current motors is provided in which the current to the motor from a constant frequency power source is varied by comparing the actual motor speed signal with a setpoint speed signal to control the motor speed according to the selected setpoint speed. A three-phase variable voltage autotransformer is provided for controlling the voltage from a three-phase power supply. A corresponding plurality of current control resistors is provided in series with each phase of the autotransformer output connected to the inputs of a three-phase motor. Each resistor is connected in parallel with a set of normally closed contacts of a plurality of relays which are operated by control logic. A logic circuit compares the selected speed with the actual motor speed obtained from a digital tachometer monitoring the motor spindle speed and operates the relays to add or subtract resistance equally in each phase of the motor input, varying the motor current to control the motor at the selected speed.
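
    A minimal sketch of the control law's general shape: compare the tachometer reading with the setpoint and open or close relay contacts to insert or bypass series resistance, which lowers or raises the motor current. The deadband, step logic, and relay count are illustrative assumptions rather than the patent's circuit.

        def update_relays(setpoint_rpm, measured_rpm, closed_relays,
                          n_relays=4, deadband_rpm=20):
            """Return the new number of closed (resistance-bypassing) relay sets
            for one control cycle (hypothetical logic)."""
            error = setpoint_rpm - measured_rpm
            if error > deadband_rpm and closed_relays < n_relays:
                return closed_relays + 1   # too slow: bypass a resistor, raise current
            if error < -deadband_rpm and closed_relays > 0:
                return closed_relays - 1   # too fast: insert a resistor, lower current
            return closed_relays           # within deadband: hold

        # Example: spindle running 120 rpm below the 1,800 rpm setpoint.
        print(update_relays(1800, 1680, closed_relays=1))   # -> 2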

  17. Cold Crucible Induction Melter (CCIM) Demonstration Using a Representative Savannah River Site Sludge Simulant On the Large-Size Pilot Platform at the CEA-Marcoule

    SciTech Connect (OSTI)

    Girold, C.; Delaunay, M.; Dussossoy, J.L.; Lacombe, J. [CEA Marcoule, CEA/DEN/DTCD/SCDV, 30 (France); Marra, S.; Peeler, D.; Herman, C.; Smith, M.; Edwards, R.; Barnes, A.; Stone, M. [Savannah River National Laboratory (SRNL), Washington Savannah River Company, Savannah River Site, Aiken, SC (United States); Iverson, D. [Liquid Waste Operations, Washington Savannah River Company (WSRC), Aiken, SC (United States); Do Quang, R. [AREVA NC, Tour AREVA, 92 - Paris La Defense (France); Tchemitcheff, E. [AREVA Federal Services LLC, Richland Office, Richland, WA (United States); Veyer, C. [Consultant, 59 - Saint Waast la Vallee (France)

    2008-07-01

    The cold-crucible induction melter technology (CCIM) is considered worldwide for industrial implementation to overcome the current limits of high level waste vitrification technologies and to answer future challenges such as new or difficult sludge compositions, the need for improved waste loading, the need for high temperatures, and corrosive effluents. More particularly, this technology is being considered for implementation at the US DOE Savannah River Site to increase the rate of waste processing while reducing the number of HLW canisters to be produced, through increased waste loading and improved waste throughput. A collaborative program involving AREVA, CEA (French Atomic Energy Commission), SRNL (Savannah River National Laboratory) and WSRC (Washington Savannah River Company) was thus initiated in 2007 to demonstrate vitrification with waste loadings on the order of 50% (versus the current DWPF waste loading of about 35%) with a PUREX-type waste composition (high Fe₂O₃ composition), and to perform two pilot-scale runs on the large size platform equipped with a 650 mm diameter CCIM at CEA Marcoule. The objectives of the demonstrations were 1) to show the feasibility of processing a representative SRS sludge surrogate using continuous slurry feeding, 2) to produce a glass that would meet the acceptance specifications with an increased waste loading when compared to what is presently achieved at the DWPF, and 3) to achieve improved waste throughputs. This presentation describes the platform and the very encouraging results obtained from the demonstration performed at temperatures, specific throughputs and waste loadings that overcome current DWPF limits. Results from the initial exploratory run and second demonstration run include 1) production of a glass product that achieved the targeted glass composition and was more durable than the standard Environmental Assessment (EA) glass, 2) successful slurry feeding of the CCIM, and 3) promising waste processing rates (at 1250 °C and 1300 °C melt pool temperature) that could result in processing of the Savannah River HLW faster than could currently be achieved with the existing Joule-heated melter in DWPF. In conclusion, this joint effort conducted by CEA, AREVA, SRNL and WSRC led to very encouraging results, demonstrating waste throughputs 44% that of the DWPF ceramic melter throughput in a 650 mm CCIM melter for the same waste type, with a Sludge Batch 3 PUREX-type waste feed flux of 150 L/h/m² demonstrated at 1250 °C. The very high waste loading (above 52%) reduces the amount of glass to be produced by about 27% to treat the same amount of waste when compared to previous DWPF operation for this specific type of feed, since 27% less glass is needed to immobilize the same amount of waste. An unusual behavior with regard to nepheline formation was also demonstrated for this type of feed, which would require further evaluation for future applications. The product from the baseline demonstration run, with a waste loading of at least 52%, displayed a very good quality. Stabilized operation close to the maximum throughput was demonstrated. Cesium volatility was apparently between 7 and 12% (based on glass analysis); however, this value is only preliminary. This demonstration also allowed the CEA to better understand the SRS slurry feed behavior and to propose adaptations to the platform for any future demonstrations using this type of feed. Finally, use of a large diameter CCIM (~1 meter) may allow faster processing of the SRS HLW than can be achieved with the current DWPF melter. (authors)

  18. Modulations of the plasma uniformity by low frequency sources in a large-area dual frequency inductively coupled plasma based on fluid simulations

    SciTech Connect (OSTI)

    Sun, Xiao-Yan; Zhang, Yu-Ru; Li, Xue-Chun; Wang, You-Nian

    2015-05-15

    As the wafer size increases, dual frequency (DF) inductively coupled plasma (ICP) sources have been proposed as an effective method to achieve large-area uniform plasma processing. A two-dimensional (2D) self-consistent fluid model, combined with an electromagnetic module, has been employed to investigate the influence of the low frequency (LF) source on the plasma radial uniformity in an argon DF discharge. When the DF antenna current is fixed at 10 A, the bulk plasma density decreases significantly with the LF due to the less efficient heating, and the best radial uniformity is obtained at 3.39 MHz. As the LF decreases to 2.26 MHz, the plasma density is characterized by an edge-high profile, and meanwhile the maximum of the electron temperature appears below the outer two-turn coil. Moreover, the axial ion flux at 3.39 MHz is rather uniform in the center region except at the radial edge of the substrate, where a higher ion flux is observed. When the inner five-turn coil frequency is fixed at 2.26 MHz, the plasma density profiles shift from edge-high over uniform to center-high as the LF coil current increases from 6 A to 18 A, and the best plasma uniformity is obtained at 14 A. In addition, the maximum of the electron temperature becomes lower, with a second peak appearing at the radial position of r = 9 cm at 18 A.

  19. Adaptive Detached Eddy Simulation of a High Lift Wing with Active...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    such as multi-element wings. Specifically, researchers will model an array of synthetic jets that have been vectored to augment the streamwise momentum near the flap suction peak....

  20. Diagnosing isopycnal diffusivity in an eddying, idealized midlatitude ocean basin via Lagrangian, in Situ, Global, High-Performance Particle Tracking (LIGHT)

    SciTech Connect (OSTI)

    Wolfram, Phillip J.; Ringler, Todd D.; Maltrud, Mathew E.; Jacobsen, Douglas W.; Petersen, Mark R.

    2015-08-01

    Isopycnal diffusivity due to stirring by mesoscale eddies in an idealized, wind-forced, eddying, midlatitude ocean basin is computed using Lagrangian, in Situ, Global, High-Performance Particle Tracking (LIGHT). Simulation is performed via LIGHT within the Model for Prediction across Scales Ocean (MPAS-O). Simulations are performed at 4-, 8-, 16-, and 32-km resolution, where the first Rossby radius of deformation (RRD) is approximately 30 km. Scalar and tensor diffusivities are estimated at each resolution based on 30 ensemble members using particle cluster statistics. Each ensemble member is composed of 303,665 particles distributed across five potential density surfaces. Diffusivity dependence upon model resolution, velocity spatial scale, and buoyancy surface is quantified and compared with mixing length theory. The spatial structure of diffusivity ranges over approximately two orders of magnitude, with values of O(10⁵) m² s⁻¹ in the region of western boundary current separation and O(10³) m² s⁻¹ in the eastern region of the basin. Dominant mixing occurs at scales twice the size of the first RRD. Model resolution at scales finer than the RRD is necessary to obtain sufficient model fidelity at scales between one and four RRD to accurately represent mixing. Mixing length scaling with eddy kinetic energy and the Lagrangian time scale yields mixing efficiencies that typically range between 0.4 and 0.8. In conclusion, a reduced mixing length in the eastern region of the domain relative to the west suggests there are different mixing regimes outside the baroclinic jet region.
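
    A minimal sketch of the basic cluster-statistics idea behind such diffusivity estimates: half the growth rate of the particle-position variance about the cluster centre. The real tensor estimates on buoyancy surfaces in LIGHT are more involved; this is only the scalar, single-component version, with invented numbers.

        import numpy as np

        def cluster_diffusivity(x_t0, x_t1, dt_seconds):
            """kappa ~ 0.5 * d(variance)/dt from particle positions (m) at two times."""
            return 0.5 * (np.var(x_t1) - np.var(x_t0)) / dt_seconds   # m^2 s^-1

        # Example: a cluster whose spread grows from 10 km to 30 km (std) over 10 days.
        rng = np.random.default_rng(0)
        x0 = rng.normal(0.0, 10e3, size=1000)
        x1 = rng.normal(0.0, 30e3, size=1000)
        print(cluster_diffusivity(x0, x1, dt_seconds=10 * 86400.0))   # roughly 4.6e2 m^2/s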

  1. Diagnosing isopycnal diffusivity in an eddying, idealized midlatitude ocean basin via Lagrangian, in Situ, Global, High-Performance Particle Tracking (LIGHT)

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Wolfram, Phillip J.; Ringler, Todd D.; Maltrud, Mathew E.; Jacobsen, Douglas W.; Petersen, Mark R.

    2015-08-01

    Isopycnal diffusivity due to stirring by mesoscale eddies in an idealized, wind-forced, eddying, midlatitude ocean basin is computed using Lagrangian, in Situ, Global, High-Performance Particle Tracking (LIGHT). Simulation is performed via LIGHT within the Model for Prediction across Scales Ocean (MPAS-O). Simulations are performed at 4-, 8-, 16-, and 32-km resolution, where the first Rossby radius of deformation (RRD) is approximately 30 km. Scalar and tensor diffusivities are estimated at each resolution based on 30 ensemble members using particle cluster statistics. Each ensemble member is composed of 303,665 particles distributed across five potential density surfaces. Diffusivity dependence upon model resolution, velocity spatial scale, and buoyancy surface is quantified and compared with mixing length theory. The spatial structure of diffusivity ranges over approximately two orders of magnitude, with values of O(10⁵) m² s⁻¹ in the region of western boundary current separation and O(10³) m² s⁻¹ in the eastern region of the basin. Dominant mixing occurs at scales twice the size of the first RRD. Model resolution at scales finer than the RRD is necessary to obtain sufficient model fidelity at scales between one and four RRD to accurately represent mixing. Mixing length scaling with eddy kinetic energy and the Lagrangian time scale yields mixing efficiencies that typically range between 0.4 and 0.8. In conclusion, a reduced mixing length in the eastern region of the domain relative to the west suggests there are different mixing regimes outside the baroclinic jet region.

  2. Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations

    Office of Scientific and Technical Information (OSTI)

    Yamaguchi and Feingold (2012) note that the cloud fields in their Weather Research and Forecasting (WRF) large-eddy simulations (LESs) of marine stratocumulus exhibit a strong sensitivity to ...

  3. Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations

    Office of Scientific and Technical Information (OSTI)

    Yamaguchi and Feingold (2012) note that the cloud fields in their large-eddy simulations (LESs) of marine stratocumulus using the Weather Research and Forecasting (WRF) model exhibit a strong sensitivity ...

  4. REMOTE FIELD EDDY CURRENT INSPECTION OF UNPIGGABLE PIPELINES

    SciTech Connect (OSTI)

    Albert Teitsma

    2004-03-01

    The Remote Field Eddy Current (RFEC) technique is ideal for inspecting unpiggable pipelines because all its components can be made much smaller than the diameter of the pipe to be inspected. We reviewed the technique and used demonstrations from prior work by others in presentations on the technique and how we plan to develop it. Coils were wound; a jig for pulling the coils through the pipe was manufactured; defects were machined in one six-inch diameter, ten-foot long pipe; and the equipment was assembled. After completing a first crude pullout test to show that RFEC inspection would work, we repeated the experiment with a proper jig and got excellent results. The test showed the expected behavior, with the direct field dominating the signal to about two pipe diameters from the drive coil, and the remote field dominating for greater separations between the drive coil and the sensing coils. The response of RFEC to a typical defect was measured, as was the sensitivity to defect size. Before manufacturing defects in the pipe, we measured the effect of defect separation and concluded that defects separated by 18 inches or 1/3rd of the pipe diameter did not interfere with each other. We manufactured a set of 13 defects and measured the RFEC signals. We found a background variation that was eventually attributed to permeability variations in the seamless pipe. We scanned all thirteen defects and got satisfactory results. The two smallest defects did not show a signal, but these were much too small to be reported in a pipeline inspection. We acquired a ten-foot seam-welded pipe that has much less background variation. We are measuring the sensitivity of RFEC signals to mechanical variations between the exciter and sensing coils.

  5. Magnetic diagnostics for equilibrium reconstructions with eddy currents on the lithium tokamak experiment

    SciTech Connect (OSTI)

    Schmitt, J. C. Lazerson, S.; Majeski, R.; Bialek, J.

    2014-11-15

    The Lithium Tokamak eXperiment is a spherical tokamak with a close-fitting low-recycling wall composed of thin lithium layers evaporated onto a stainless steel-lined copper shell. Long-lived non-axisymmetric eddy currents are induced in the shell and vacuum vessel by transient plasma and coil currents, and these eddy currents influence both the plasma and the magnetic diagnostic signals that are used as constraints for equilibrium reconstruction. A newly installed set of re-entrant magnetic diagnostics and internal saddle flux loops, compatible with high temperatures and lithium environments, is discussed. Details of the axisymmetric (2D) and non-axisymmetric (3D) treatments of the eddy currents and the equilibrium reconstruction are presented.

  6. Evaluation and field validation of Eddy-Current array probes for steam generator tube inspection

    SciTech Connect (OSTI)

    Dodd, C.V.; Pate, J.R.

    1996-07-01

    The objective of the Improved Eddy-Current ISI for Steam Generator Tubing program is to upgrade and validate eddy-current inspections, including probes, instrumentation, and data processing techniques for inservice inspection of new, used, and repaired steam generator tubes; to improve defect detection, classification, and characterization as affected by diameter and thickness variations, denting, probe wobble, tube sheet, tube supports, and copper and sludge deposits, even when defect types and other variables occur in combination; and to transfer this advanced technology to NRC's mobile NDE laboratory and staff. This report describes the design of specialized high-speed 16-coil eddy-current array probes. Both pancake and reflection coils are considered. Test results from inspections using the probes in working steam generators are given. Computer programs developed for probe calculations are also supplied.

  7. Frontal Eddy Dynamics (FRED) experiment off North Carolina: Volume 1. Executive summary

    SciTech Connect (OSTI)

    Ebbesmeyer, C.C.

    1989-03-01

    In preparation for oil and gas lease sales on the outer continental shelf offshore of North Carolina, the Minerals Management Service was requested to investigate the potential transport and impacts of oil spilled offshore. The Gulf Stream and associated eddies are an important aspect of the transport. Although the speed and location of the Gulf Stream are reasonably well known, knowledge of the meanders of the Gulf Stream is limited. How the circulatory structure and movement of associated frontal eddies and filaments affect the North Carolina coastal waters is not clear. This study investigates the interactions of these circulatory elements and follows the evolution of frontal eddies as they migrate along the North Carolina coast.

  8. Magnetic diagnostics for equilibrium reconstructions with eddy currents on the lithium tokamak experiment

    SciTech Connect (OSTI)

    Schmitt, J. C. [Princeton Plasma Physics Laboratory (PPPL), Princeton, NJ (United States); Bialek, J. [Department of Applied Physics and Applied Mathematics, Columbia University, New York, NY (United States); Lazerson, S. [Princeton Plasma Physics Laboratory (PPPL), Princeton, NJ (United States); Majeski, R. [Princeton Plasma Physics Laboratory (PPPL), Princeton, NJ (United States)

    2014-11-01

    The Lithium Tokamak eXperiment is a spherical tokamak with a close-fitting low-recycling wall composed of thin lithium layers evaporated onto a stainless steel-lined copper shell. Long-lived non-axisymmetric eddy currents are induced in the shell and vacuum vessel by transient plasma and coil currents, and these eddy currents influence both the plasma and the magnetic diagnostic signals that are used as constraints for equilibrium reconstruction. A newly installed set of re-entrant magnetic diagnostics and internal saddle flux loops, compatible with high temperatures and lithium environments, is discussed. Details of the axisymmetric (2D) and non-axisymmetric (3D) treatments of the eddy currents and the equilibrium reconstruction are presented.

  9. Westinghouse Offers $6,400 in College Scholarships to Eddy County Students

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Offers $6,400 in College Scholarships to Eddy County Students CARLSBAD, N.M., February 18, 2000 - The Westinghouse Government Services Group today announced that $6,400 in college scholarships will be awarded to Eddy County students for the 2000-2001 school year. The deadline to apply is April 3. Two $2,500 scholarships are being offered - one will be honored at New Mexico State University (NMSU) in Las Cruces, and one at the College of the Southwest (CSW) in Carlsbad. High school seniors

  10. Automated measurement system employing eddy currents to adjust probe position and determine metal hardness

    DOE Patents [OSTI]

    Prince, J.M.; Dodson, M.G.; Lechelt, W.M.

    1989-07-18

    A system for measuring the hardness of cartridge cases employs an eddy current probe for inducing and sensing eddy currents in each cartridge case. A first component of the sensed signal is utilized in a closed loop system for accurately positioning the probe relative to the cartridge case both in the lift off direction and in the tangential direction, and a second component of the sensed signal is employed as a measure of the hardness. The positioning and measurement are carried out under closed loop microprocessor control facilitating hardness testing on a production line basis. 14 figs.

  11. Automated measurement system employing eddy currents to adjust probe position and determine metal hardness

    DOE Patents [OSTI]

    Prince, James M. (Kennewick, WA); Dodson, Michael G. (Richland, WA); Lechelt, Wayne M. (Benton City, WA)

    1989-01-01

    A system for measuring the hardness of cartridge cases employs an eddy current probe for inducing and sensing eddy currents in each cartridge case. A first component of the sensed signal is utilized in a closed loop system for accurately positioning the probe relative to the cartridge case both in the lift off direction and in the tangential direction, and a second component of the sensed signal is employed as a measure of the hardness. The positioning and measurement are carried out under closed loop microprocessor control facilitating hardness testing on a production line basis.

  12. Transient Eddy Current Response Due to a Subsurface Crack in a Conductive Plate

    SciTech Connect (OSTI)

    Fangwei Fu

    2006-08-09

    Eddy current nondestructive evaluation (NDE) is usually carried out by exciting a time harmonic field using an inductive probe. However, a viable alternative is to use transient eddy current NDE, in which a current pulse in a driver coil produces a transient field in a conductor that decays at a rate dependent on the conductivity and the permeability of the material and the coil configuration. By using transient eddy currents, it is possible to estimate the properties of the conductive medium and to locate and size potential flaws from the measured probe response. The fundamental study described in this dissertation seeks to establish a theoretical understanding of transient eddy current NDE. Compared with the Fourier transform method, the derived analytical formulations are more convenient when the transient eddy current response within a narrow time range is evaluated. The theoretical analysis provides a valuable tool to study the effect of layer thickness, location of defect, and crack opening, as well as the optimization of probe design. Analytical expressions have been developed to evaluate the transient response due to eddy currents in a conductive plate based on two asymptotic series. One series converges rapidly for a short time regime and the other for a long time regime, and both of them agree with the results calculated by fast Fourier transform over all the times considered. The idea of asymptotic expansion is further applied to determine the induced electromotive force (EMF) in a pick-up coil due to eddy currents in a cylindrical rod. Starting from a frequency domain representation, a quasi-static time domain dyadic Green's function for an electric source in a conductive plate has been derived. The resulting expression has three parts: a free space term, multiple image terms, and partial reflection terms. The dyadic Green's function serves as the kernel of an electric field integral equation which defines the interaction of an ideal crack with the transient eddy currents in a conductive plate. The crack response is found using the reciprocity theorem. Good agreement is observed between the predictions of the magnetic field due to the crack and experimental measurements.

  13. L2:THM.P4.01 Rob Lowrie LANL

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... large-eddy simulation (ILES), detached-eddy simulation (DES), and Spalart-Allmaras (SA). ... Table 7: Averaged time-step based on a fixed-CFL number used for the SA calculations. ...

  14. Electrical Circuit Simulation Code

    Energy Science and Technology Software Center (OSTI)

    2001-08-09

    Massively-Parallel Electrical Circuit Simulation Code. CHILESPICE is a massively-parallel distributed-memory electrical circuit simulation tool that contains many enhanced radiation, time-based, and thermal features and models. Large-scale electronic circuit simulation. Shared memory, parallel processing, enhanced convergence. Sandia-specific device models.

  15. Radiative impacts on the growth of a population of drops within simulated

    Office of Scientific and Technical Information (OSTI)

    summertime Arctic stratus (Journal Article) | SciTech Connect Radiative impacts on the growth of a population of drops within simulated summertime Arctic stratus Citation Details In-Document Search Title: Radiative impacts on the growth of a population of drops within simulated summertime Arctic stratus The impact of solar heating and infrared cooling on the growth of a population of drops is studied with two numerical modeling frameworks. An eddy-resolving model (ERM) simulation of Arctic

  16. A novel fabrication technique for thin metallic vacuum chambers with low eddy current losses

    SciTech Connect (OSTI)

    Kouptsidis, J.; Banthau, R.; Hartwig, H.

    1985-10-01

    Eddy current problems in synchrotrons have been avoided until now by using costly and thick ceramic vacuum chambers which reduce the free magnet aperture. These disadvantages are eliminated by a novel fabrication technique developed for the chambers of the new 9 GeV electron synchrotron DESY II operating with a 12.5 Hz repetition rate. The elliptical chambers (80 x 40 mm) are made from 0.3 mm thick stainless steel tubes reinforced by thin ribs. The ribs are brazed onto the tubes with a high-temperature Ni-base brazing alloy. The linear eddy current losses are 60 W/m and increase the chamber temperature to only 50 °C. The available beam aperture is now 93% of the magnet gap. A still higher repetition rate of up to 100 Hz can be achieved by reducing the wall thickness to 0.1 mm and using tubes made from a Ti-alloy having higher resistivity than stainless steel.

  17. Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers

    DOE Patents [OSTI]

    Danby, Gordon T.; Jackson, John W.

    1991-01-01

    A method for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing is provided wherein correction windings are attached to selected positions on the housing and the windings are energized by transformer action from secondary coils, which coils are inductively coupled to the poles of electro-magnets that are powered to confine the charged particle beam within a desired orbit as the charged particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies as a function of variations in the power supplied by the particle-accelerating rf field to a beam of particles accelerated through the vacuum chamber, so the current in the energized correction coils is effective to cancel eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations in the particle beam.

  18. Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers

    DOE Patents [OSTI]

    Danby, G.T.; Jackson, J.W.

    1990-03-19

    A method for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing is provided wherein correction windings are attached to selected positions on the housing and the windings are energized by transformer action from secondary coils, which coils are inductively coupled to the poles of electro-magnets that are powered to confine the charged particle beam within a desired orbit as the charged particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies as a function of variations in the power supplied by the particle-accelerating rf field to a beam of particles accelerated through the vacuum chamber, so the current in the energized correction coils is effective to cancel eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations (dB/dt) in the particle beam.

  19. Frontal Eddy Dynamics (FRED) experiment off North Carolina: Volume 2. Technical report

    SciTech Connect (OSTI)

    Ebbesmeyer, C.C.

    1988-03-01

    In preparation for oil and gas lease sales on the outer continental shelf offshore of North Carolina, the Minerals Management Service was requested to investigate the potential transport and impacts of oil spilled offshore. Of particular concern is estimating the movement of spilled oil, especially the probability of shoreward transport and/or beaching of the floatable fraction. Although the speed and location of the Gulf Stream are well known, knowledge of the meanders of the Gulf Stream is limited. How the circulatory structure and movement of associated frontal eddies and filaments affect the North Carolina coastal waters is not clear. This present study investigates the interactions of these circulatory elements and follows the evolution of frontal eddies as they migrate along the North Carolina coast.

  20. Reactance simulation for the defects in steam generator tube with outside ferrite sludge

    SciTech Connect (OSTI)

    Ryu, Kwon-sang; Kima, Yong-il; Son, Derac; Park, Duck-gun; Jung, Jae-kap

    2009-04-01

    Magnetic sludge is partly produced around the tube sheet outside a steam generator due to stress and heat. The sludge, which contains magnetite, is one of the important factors affecting eddy current signals. It compromises the safety of the steam generator tubes and is difficult to detect by conventional eddy current methods. A new type of probe is needed to detect the signals from the magnetic sludge. We designed a new U-type yoke that has two coils, a magnetizing coil and a detecting coil, and we simulated the signal induced by the ferromagnetic sludge in an Inconel 600 tube.

  1. RACORO continental boundary layer cloud investigations. 3. Separation of parameterization biases in single-column model CAM5 simulations of shallow cumulus

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Lin, Wuyin; Liu, Yangang; Vogelmann, Andrew M.; Fridlind, Ann; Endo, Satoshi; Song, Hua; Feng, Sha; Toto, Tami; Li, Zhijin; Zhang, Minghua

    2015-06-19

    Climatically important low-level clouds are commonly misrepresented in climate models. The FAst-physics System TEstbed and Research (FASTER) project has constructed case studies from the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains site during the RACORO aircraft campaign to facilitate research on model representation of boundary-layer clouds. This paper focuses on using the single-column Community Atmosphere Model version 5 (SCAM5) simulations of a multi-day continental shallow cumulus case to identify specific parameterization causes of low-cloud biases. Consistent model biases among the simulations driven by a set of alternative forcings suggest that uncertainty in the forcing plays only a relatively minor role. In-depth analysis reveals that the model's shallow cumulus convection scheme tends to significantly under-produce clouds during the times when shallow cumuli exist in the observations, while the deep convective and stratiform cloud schemes significantly over-produce low-level clouds throughout the day. The links between model biases and the underlying assumptions of the shallow cumulus scheme are further diagnosed with the aid of large-eddy simulations and aircraft measurements, and by suppressing the triggering of the deep convection scheme. It is found that the weak boundary layer turbulence simulated is directly responsible for the weak cumulus activity and the simulated boundary layer stratiform clouds. Increased vertical and temporal resolutions are shown to lead to stronger boundary layer turbulence and reduction of low-cloud biases.

  2. RACORO continental boundary layer cloud investigations. 3. Separation of parameterization biases in single-column model CAM5 simulations of shallow cumulus

    SciTech Connect (OSTI)

    Lin, Wuyin; Liu, Yangang; Vogelmann, Andrew M.; Fridlind, Ann; Endo, Satoshi; Song, Hua; Feng, Sha; Toto, Tami; Li, Zhijin; Zhang, Minghua

    2015-06-19

    Climatically important low-level clouds are commonly misrepresented in climate models. The FAst-physics System TEstbed and Research (FASTER) project has constructed case studies from the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains site during the RACORO aircraft campaign to facilitate research on model representation of boundary-layer clouds. This paper focuses on using the single-column Community Atmosphere Model version 5 (SCAM5) simulations of a multi-day continental shallow cumulus case to identify specific parameterization causes of low-cloud biases. Consistent model biases among the simulations driven by a set of alternative forcings suggest that uncertainty in the forcing plays only a relatively minor role. In-depth analysis reveals that the model's shallow cumulus convection scheme tends to significantly under-produce clouds during the times when shallow cumuli exist in the observations, while the deep convective and stratiform cloud schemes significantly over-produce low-level clouds throughout the day. The links between model biases and the underlying assumptions of the shallow cumulus scheme are further diagnosed with the aid of large-eddy simulations and aircraft measurements, and by suppressing the triggering of the deep convection scheme. It is found that the weak boundary layer turbulence simulated is directly responsible for the weak cumulus activity and the simulated boundary layer stratiform clouds. Increased vertical and temporal resolutions are shown to lead to stronger boundary layer turbulence and reduction of low-cloud biases.

  3. Improved multi-directional eddy current inspection test apparatus for detecting flaws in metal articles

    DOE Patents [OSTI]

    Nance, Roy A.; Hartley, William H.; Caffarel, Alfred J.

    1984-01-01

    Apparatus is described for detecting flaws in a tubular workpiece in a single scan. The coils of a dual coil bobbin eddy current inspection probe are wound at a 45° angle to the transverse axis of the probe, one coil having an angular position about the axis of about 90° relative to the angular position of the other coil, and the angle of intersection of the planes containing the coils being about 60°.

  4. Effects of bending stresses and tube curvature on remote field eddy current signals

    SciTech Connect (OSTI)

    Sutherland, J.; Atherton, D.L.

    1997-01-01

    The effects of bending stresses and tube curvature on remote field eddy current signals were investigated. This technique is a recognized method for the nondestructive evaluation of ferromagnetic tubing, as used in heat exchangers and boiler systems. Different stress states were examined (elastic stress, plastic deformation, and residual stress) and found to give distinctive behavior. Elastic and residual stresses can appear as wall loss, depending on the operating frequency and baseline used for inspection and interpretation.

  5. Eddy current nondestructive testing device for measuring variable characteristics of a sample utilizing Walsh functions

    DOE Patents [OSTI]

    Libby, Hugo L.; Hildebrand, Bernard P.

    1978-01-01

    An eddy current testing device for measuring variable characteristics of a sample generates a signal which varies with variations in such characteristics. A signal expander samples at least a portion of this generated signal and expands the sampled signal on a selected basis of square waves or Walsh functions to produce a plurality of signal components representative of the sampled signal. A network combines these components to provide a display of at least one of the characteristics of the sample.
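
    As a rough illustration of the expansion idea described above (a sketch only, not the patented circuitry): a sampled signal can be projected onto a Walsh-type square-wave basis, built here from a Hadamard matrix in natural ordering, and approximately reconstructed from its leading coefficients. Names and parameter choices are illustrative.

```python
# Illustrative sketch: expand a sampled eddy current signal on a Walsh/Hadamard
# square-wave basis and rebuild it from the largest coefficients.
import numpy as np
from scipy.linalg import hadamard

def walsh_coefficients(signal):
    """Project a length-2**k signal onto the rows of a Hadamard matrix."""
    n = len(signal)
    H = hadamard(n)               # rows are +1/-1 square waves
    return H @ signal / n         # normalized expansion coefficients

def reconstruct(coeffs, keep):
    """Rebuild the signal from the 'keep' largest-magnitude components."""
    n = len(coeffs)
    H = hadamard(n)
    idx = np.argsort(np.abs(coeffs))[::-1][:keep]
    trimmed = np.zeros_like(coeffs)
    trimmed[idx] = coeffs[idx]
    return H @ trimmed            # Sylvester Hadamard matrices are symmetric

# Example: a synthetic 64-point probe signal dominated by a square-wave component
t = np.linspace(0.0, 1.0, 64, endpoint=False)
signal = np.sign(np.sin(2 * np.pi * 3 * t)) + 0.1 * np.random.default_rng(0).standard_normal(64)
approx = reconstruct(walsh_coefficients(signal), keep=8)
```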

  6. Search for: All records | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    ... Idaho National Laboratory Specific Manufacturing Plant Idaho National Laboratory, Idaho ... the simulations of atmospheric process models including Large Eddy Simulations (LES), ...

  7. ARM - Publications: Science Team Meeting Documents

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    this study, a marine stratocumulus cloud was simulated by using a large eddy simulation (LES) model and a detailed microphysical bin model. Including infrared cooling as well as...

  8. Doug Longman | Argonne National Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    of Nozzle Orifice Geometry Fuel Spray Modeling High-Fidelity Large Eddy Simulations (LES) Multi-Dimensional Modeling Simulation Approaches for Drop-in Biofuels Virtual Engine...

  9. ARM - Feature Stories and Releases Article

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    needed statistics for details that cannot yet be measured. The Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation, or LASSO, workflow will play a pivotal role...

  10. Turbulent eddies in a compressible jet in crossflow measured using pulse-burst particle image velocimetry

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Beresh, Steven J.; Wagner, Justin L.; Henfling, John F.; Spillers, Russell Wayne; Pruett, Brian Owen Matthew

    2016-01-01

    Pulse-burst Particle Image Velocimetry (PIV) has been employed to acquire time-resolved data at 25 kHz of a supersonic jet exhausting into a subsonic compressible crossflow. Data were acquired along the windward boundary of the jet shear layer and used to identify turbulent eddies as they convect downstream in the far-field of the interaction. Eddies were found to have a tendency to occur in closely spaced counter-rotating pairs and are routinely observed in the PIV movies, but the variable orientation of these pairs makes them difficult to detect statistically. Correlated counter-rotating vortices are more strongly observed to pass by at a larger spacing, both leading and trailing the reference eddy. This indicates the paired nature of the turbulent eddies and the tendency for these pairs to recur at repeatable spacing. Velocity spectra reveal a peak at a frequency consistent with this larger spacing between shear-layer vortices rotating with identical sign. The spatial scale of these vortices appears similar to previous observations of compressible jets in crossflow. Furthermore, super-sampled velocity spectra to 150 kHz reveal a power-law dependency of –5/3 in the inertial subrange as well as a –1 dependency at lower frequencies attributed to the scales of the dominant shear-layer eddies.

  11. Open-loop correction for an eddy current dominated beam-switching magnet

    SciTech Connect (OSTI)

    Koseki, K. Nakayama, H.; Tawada, M.

    2014-04-15

    A beam-switching magnet and the pulsed power supply it requires have been developed for the Japan Proton Accelerator Research Complex. To switch bunched proton beams, the dipole magnetic field must reach its maximum value within 40 ms. In addition, the field flatness should be less than 5 × 10^-4 to guide each bunched beam to the designed orbit. From a magnetic field measurement by using a long search coil, it was found that an eddy current in the thick endplates and laminated core disturbs the rise of the magnetic field. The eddy current also deteriorates the field flatness over the required flat-top period. The measured field flatness was 5 × 10^-3. By using a double-exponential equation to approximate the measured magnetic field, a compensation pattern for the eddy current was calculated. The integrated magnetic field was measured while using the newly developed open-loop compensation system. A field flatness of less than 5 × 10^-4, which is an acceptable value, was achieved.
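
    For orientation only, a minimal sketch of the kind of double-exponential approximation mentioned above; the functional form, parameter names, and stand-in data are assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch: fit a double-exponential model to a measured field rise
# so that a compensation pattern for the eddy current lag can be derived.
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, b0, a1, tau1, a2, tau2):
    # assumed form: settled field plus two decaying transients
    return b0 + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.0, 0.2, 400)                                        # s, stand-in sample times
b_meas = 1.0 - 0.08 * np.exp(-t / 0.01) - 0.02 * np.exp(-t / 0.05)    # stand-in measurement
params, _ = curve_fit(double_exp, t, b_meas, p0=(1.0, -0.1, 0.01, -0.01, 0.05))
# 'params' would then define the correction waveform driven by the power supply.
```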

  12. FIONDA (Filtering Images of Niobium Disks Application): Filter application for Eddy Current Scanner data analysis

    SciTech Connect (OSTI)

    Boffo, C.; Bauer, P.; /Fermilab

    2005-05-01

    As part of the material QC process, each Niobium disk from which a superconducting RF cavity is built must undergo an eddy current scan [1]. This process makes it possible to discover embedded defects in the material that are not visible to the naked eye because they are too small or lie beneath the surface. Moreover, during the production process of SC cavities the outer layer of Nb is removed via chemical or electro-chemical etching, thus it is important to evaluate the quality of the subsurface layer (on the order of 100 nm) where superconduction takes place. The reference eddy current scanning machine is operated at DESY; at Fermilab we are using the SNS eddy current scanner on loan, courtesy of SNS. In the past year, several upgrades were implemented aimed at raising the SNS machine performance to that of the DESY reference machine [2]. As part of this effort, an algorithm that enables the filtering of the results of the scans and thus improves the resolution of the process was developed. The description of the algorithm and of the software used to filter the scan results is presented in this note.
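
    The note itself defines the algorithm; purely as a generic illustration of this kind of post-processing (not the FIONDA filter), a 2-D scan map can be median-filtered so that isolated, pixel-scale indications stand out in the residual.

```python
# Generic illustration only (not the FIONDA algorithm): suppress smooth
# background variation in an eddy current scan map with a median filter and
# inspect the residual for point-like indications.
import numpy as np
from scipy.ndimage import median_filter

scan = np.random.default_rng(0).normal(scale=0.1, size=(256, 256))   # stand-in scan map
scan[120, 200] += 2.0                                                 # synthetic defect indication
background = median_filter(scan, size=5)                              # rejects single-pixel outliers
residual = scan - background                                          # indication stands out here
```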

  13. Turbulent eddies in a compressible jet in crossflow measured using pulse-burst particle image velocimetry

    SciTech Connect (OSTI)

    Beresh, Steven J.; Wagner, Justin L.; Henfling, John F.; Spillers, Russell Wayne; Pruett, Brian Owen Matthew

    2016-01-01

    Pulse-burst Particle Image Velocimetry (PIV) has been employed to acquire time-resolved data at 25 kHz of a supersonic jet exhausting into a subsonic compressible crossflow. Data were acquired along the windward boundary of the jet shear layer and used to identify turbulent eddies as they convect downstream in the far-field of the interaction. Eddies were found to have a tendency to occur in closely spaced counter-rotating pairs and are routinely observed in the PIV movies, but the variable orientation of these pairs makes them difficult to detect statistically. Correlated counter-rotating vortices are more strongly observed to pass by at a larger spacing, both leading and trailing the reference eddy. This indicates the paired nature of the turbulent eddies and the tendency for these pairs to recur at repeatable spacing. Velocity spectra reveal a peak at a frequency consistent with this larger spacing between shear-layer vortices rotating with identical sign. The spatial scale of these vortices appears similar to previous observations of compressible jets in crossflow. Furthermore, super-sampled velocity spectra to 150 kHz reveal a power-law dependency of –5/3 in the inertial subrange as well as a –1 dependency at lower frequencies attributed to the scales of the dominant shear-layer eddies.

  14. Simulating neural systems with Xyce.

    SciTech Connect (OSTI)

    Schiek, Richard Louis; Thornquist, Heidi K.; Mei, Ting; Warrender, Christina E.; Aimone, James Bradley; Teeter, Corinne; Duda, Alex M.

    2012-12-01

    Sandia's parallel circuit simulator, Xyce, can address large-scale neuron simulations in a new way, extending the range within which one can perform high-fidelity, multi-compartment neuron simulations. This report documents the implementation of neuron devices in Xyce, their use in simulation, and the analysis of neuron systems.

  15. Effects of forcing time scale on the simulated turbulent flows and turbulent collision statistics of inertial particles

    SciTech Connect (OSTI)

    Rosa, B.; Parishani, H.; Ayala, O.; Wang, L.-P.

    2015-01-15

    In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [An examination of forcing in direct numerical simulations of turbulence, Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 and less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with the regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with the region of high energy dissipation rate.
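
    To make the role of the forcing time scale concrete: in schemes of the Eswaran-Pope type, each forced low-wavenumber mode is driven by an Ornstein-Uhlenbeck process whose correlation time is the forcing time scale. A scalar sketch of such a process (illustrative parameter values, not those used in the paper):

```python
# Minimal sketch of an Ornstein-Uhlenbeck forcing term; larger t_f gives a
# smoother, more persistent forcing, smaller t_f a nearly white forcing.
import numpy as np

def ou_series(t_f, sigma2, dt, nsteps, rng):
    """Discrete update: b_{n+1} = b_n * (1 - dt/t_f) + sqrt(2*sigma2*dt/t_f) * xi_n."""
    b = np.zeros(nsteps)
    for n in range(nsteps - 1):
        b[n + 1] = b[n] * (1.0 - dt / t_f) + np.sqrt(2.0 * sigma2 * dt / t_f) * rng.standard_normal()
    return b

forcing = ou_series(t_f=0.5, sigma2=1.0, dt=1e-3, nsteps=10000, rng=np.random.default_rng(0))
```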

  16. Mountable eddy current sensor for in-situ remote detection of surface and sub-surface fatigue cracks

    DOE Patents [OSTI]

    Yepez, III, Esteban; Roach, Dennis P.; Rackow, Kirk A.; DeLong, Waylon A.

    2011-09-06

    A wireless, integrated, mountable, portable, battery-operated, non-contact eddy current sensor that provides similar accuracy to 1970's laboratory scale equipment (e.g., a Hewlett-Packard GP4194A Impedance Analyzer) at a fraction of the size and cost.

  17. Modeling & Simulation | NISAC

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NISAC experts analyze critical infrastructure, along with its interdependencies, vulnerabilities, and complexities, using modeling and simulation capabilities. Their analyses are used to aid decisionmakers with policy assessment, mitigation planning, education, and training, and to provide near-real-time assistance to crisis-response organizations. Infrastructure systems are large, complex,

  18. Improving Parameterization of Entrainment Rate for Shallow Convection with

    Office of Scientific and Technical Information (OSTI)

    Aircraft Measurements and Large-Eddy Simulation (Journal Article) | DOE PAGES DOE PAGES Search Results Accepted Manuscript: Improving Parameterization of Entrainment Rate for Shallow Convection with Aircraft Measurements and Large-Eddy Simulation This content will become publicly available on February 1, 2017 Title: Improving Parameterization of Entrainment Rate for Shallow Convection with Aircraft Measurements and Large-Eddy Simulation This work examines the relationships of entrainment

  19. On eddy accumulation with limited conditional sampling to measure air-surface exchange

    SciTech Connect (OSTI)

    Wesely, M.L.; Hart, R.L.

    1994-01-01

    An analysis of turbulence data collected at a height of 12.3 m above grasslands was carried out to illustrate some of the limitations and possible improvements in methods to compute vertical fluxes of trace substances by the eddy accumulation technique with conditional sampling. The empirical coefficient used in the technique has a slight dependence on atmospheric stability, which can be minimized by using a threshold vertical velocity equal to approximately 0.75σ_w, below which chemical sampling is suspended. This protocol results in a smaller chemical sample but increases the differences in concentrations by approximately 70%. For effective conditional sampling when mass is being accumulated in a trap or reservoir, the time of sampling during updrafts versus downdrafts should be measured and used to adjust estimates of the mean concentrations.
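
    A minimal sketch of the accumulation-with-deadband idea described above; the value of the empirical coefficient and the variable names are illustrative assumptions, not the values determined in the study.

```python
# Sketch of conditional (relaxed) eddy accumulation with a deadband: sample the
# scalar separately in strong updrafts and downdrafts, then form the flux as
# F = b * sigma_w * (C_up - C_down).
import numpy as np

def conditional_ea_flux(w, c, b=0.4, deadband_factor=0.75):
    sigma_w = np.std(w)
    w0 = deadband_factor * sigma_w        # sampling suspended for |w| < w0
    c_up = c[w > w0].mean()
    c_down = c[w < -w0].mean()
    return b * sigma_w * (c_up - c_down)

# w: vertical velocity time series (m/s); c: trace gas concentration time series
```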

  20. Performance demonstration tests for eddy current inspection of steam generator tubing

    SciTech Connect (OSTI)

    Kurtz, R.J.; Heasler, P.G.; Anderson, C.M.

    1996-05-01

    This report describes the methodology and results for development of performance demonstration tests for eddy current (ET) inspection of steam generator tubes. Statistical test design principles were used to develop the performance demonstration tests. Thresholds on ET system inspection performance were selected to ensure that field inspection systems would have a high probability of detecting and correctly sizing tube degradation. The technical basis for the ET system performance thresholds is presented in detail. Statistical test design calculations for probability of detection and flaw sizing tests are described. A recommended performance demonstration test based on the design calculations is presented. A computer program for grading the probability of detection portion of the performance demonstration test is given.
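
    As a hedged illustration of the underlying statistical design question (not the report's actual test): the chance that an inspection system with a given true probability of detection passes a demonstration requiring some minimum number of detections follows directly from the binomial distribution.

```python
# Illustrative calculation: probability that a system with true probability of
# detection 'pod' finds at least 'x_pass' of 'n' flawed specimens in a test.
from scipy.stats import binom

def pass_probability(pod, n, x_pass):
    return binom.sf(x_pass - 1, n, pod)   # P(X >= x_pass), X ~ Binomial(n, pod)

print(pass_probability(pod=0.9, n=30, x_pass=25))   # hypothetical test design
```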

  1. Bluff Body Flow Simulation Using a Vortex Element Method

    SciTech Connect (OSTI)

    Anthony Leonard; Phillippe Chatelain; Michael Rebel

    2004-09-30

    Heavy ground vehicles, especially those involved in long-haul freight transportation, consume a significant part of our nation's energy supply. It is therefore of utmost importance to improve their efficiency, both to reduce emissions and to decrease reliance on imported oil. At highway speeds, more than half of the power consumed by a typical semi truck goes into overcoming aerodynamic drag, a fraction which increases with speed and crosswind. Thanks to better tools and increased awareness, recent years have seen substantial aerodynamic improvements by the truck industry, such as tractor/trailer height matching, radiator area reduction, and swept fairings. However, there remains substantial room for improvement as understanding of turbulent fluid dynamics grows. The group's research effort focused on vortex particle methods, a novel approach for computational fluid dynamics (CFD). Where common CFD methods solve or model the Navier-Stokes equations on a grid which stretches from the truck surface outward, vortex particle methods solve the vorticity equation on a Lagrangian basis of smooth particles and do not require a grid. They worked to advance the state of the art in vortex particle methods, improving their ability to handle the complicated, high Reynolds number flow around heavy vehicles. Specific challenges that they have addressed include finding strategies to accurately capture vorticity generation and resultant forces at the truck wall, handling the aerodynamics of spinning bodies such as tires, application of the method to the GTS model, computation time reduction through improved integration methods, a closest-point transform for particle methods in complex geometries, and work on large eddy simulation (LES) turbulence modeling.
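
    For reference, the incompressible vorticity transport equation that such vortex particle methods discretize on a Lagrangian basis of particles (a textbook form, not quoted from the report) is

```latex
\frac{D\boldsymbol{\omega}}{Dt}
  = \frac{\partial\boldsymbol{\omega}}{\partial t} + (\mathbf{u}\cdot\nabla)\boldsymbol{\omega}
  = (\boldsymbol{\omega}\cdot\nabla)\mathbf{u} + \nu\,\nabla^{2}\boldsymbol{\omega},
\qquad \boldsymbol{\omega} = \nabla\times\mathbf{u},
```

    with the velocity recovered from the particle vorticity field through a Biot-Savart integral rather than a mesh-based solve.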

  2. Offshore Wind Farm Model Development - Upcoming Release of the...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Offshore Wind Farm Model Development - Upcoming Release of the University of Minnesota's ... September 16, 2015 - 1:14pm Addthis Large-eddy simulation of wind farms with ...

  3. 2014 UTSR Workshop | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and without Impurities Eric Petersen, Texas A&M University Large-Eddy Simulation of Gas-Turbine Combustors and Characterization of Facility Effects Matthias Ihme, Stanford ...

  4. ARM - Publications: Science Team Meeting Documents

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    cloud scenes created by a large eddy simulation model. Progressively greater degrees of tilt and stretching were imposed on these scenes, so that an ensemble of scenes were...

  5. Sticky Thermals: Evidence for a Dominant Balance Between

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    by studying thousands of cloud thermals in a high- resolution large-eddy simulation (LES) of deep convection. Schematically, the acceleration of a cloud thermal can be written...

  6. Modeling Precipitating Cumulus Congestus Observed by the ARM...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    (GCMs). Typically, results from cloud-resolving models (CRMs) or large-eddy simulation (LES) models serve as benchmarks for developing and tuning single-column models (SCMs),...

  7. LES Modeling for IC Engines

    Broader source: Energy.gov [DOE]

    Large eddy simulation offers better accuracy and sensitivity to study cyclic variability, mode transition and mixing effects in engine design and operation

  8. Research Highlight

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    per cm-3) droplet concentrations. Six realizations are shown for each droplet concentration. Large-eddy simulation time series output of daytime thin stratiform LWP for low...

  9. Modifications to WRFs dynamical core to improve the treatment...

    Office of Scientific and Technical Information (OSTI)

    Modifications to WRFs dynamical core to improve the treatment of moisture for large-eddy simulations Citation Details In-Document Search Title: Modifications to WRFs dynamical core ...

  10. Modifications to WRF's dynamical core to improve the treatment...

    Office of Scientific and Technical Information (OSTI)

    dynamical core to improve the treatment of moisture for large--eddy simulations Citation Details In-Document Search Title: Modifications to WRF's dynamical core to improve the ...

  11. Modifications to WRFs dynamical core to improve the treatment...

    Office of Scientific and Technical Information (OSTI)

    Modifications to WRFs dynamical core to improve the treatment of moisture for large-eddy simulations Title: Modifications to WRFs dynamical core to improve the treatment of ...

  12. Improving Parameterization of Entrainment Rate for Shallow Convection...

    Office of Scientific and Technical Information (OSTI)

    Aircraft Measurements and Large-Eddy Simulation This content will become publicly ... Sponsoring Org: USDOE Office of Science (SC), Biological and Environmental Research (BER) ...

  13. Search for: All records | DOE PAGES

    Office of Scientific and Technical Information (OSTI)

    fields in their large-eddy simulations (LESs) of marine stratocumulus using the Weather Research and Forecasting (WRF) model exhibit a strong sensitivity to time stepping choices. ...

  14. Research Highlight

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    convection using a new technique for large-eddy simulations (LES) called "Eulerian direct measurement". These results were confirmed by Dawe and Austin (2011) using a related...

  15. 1

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    numerical models. QJRMS., 125, 391-423. Stevens, B., C. -H. Moeng, and P. P. Sullivan, 1999: Large-eddy simulations of radiatively driven convection: Sensitivities to the...

  16. EDDY RESOLVING NUTRIENT ECODYNAMICS IN THE GLOBAL PARALLEL OCEAN PROGRAM AND CONNECTIONS WITH TRACE GASES IN THE SULFUR, HALOGEN AND NMHC CYCLES

    SciTech Connect (OSTI)

    S. CHU; S. ELLIOTT

    2000-08-01

    Ecodynamics and the sea-air transfer of climate relevant trace gases are intimately coupled in the oceanic mixed layer. Ventilation of species such as dimethyl sulfide and methyl bromide constitutes a key linkage within the earth system. We are creating a research tool for the study of marine trace gas distributions by implementing coupled ecology-gas chemistry in the high resolution Parallel Ocean Program (POP). The fundamental circulation model is eddy resolving, with cell sizes averaging 0.15 degree (lat/long). Here we describe ecochemistry integration. Density dependent mortality and iron geochemistry have enhanced agreement with chlorophyll measurements. Indications are that dimethyl sulfide production rates must be adjusted for latitude dependence to match recent compilations. This may reflect the need for phytoplankton to conserve nitrogen by favoring sulfurous osmolytes. Global simulations are also available for carbonyl sulfide, the methyl halides and for nonmethane hydrocarbons. We discuss future applications including interaction with atmospheric chemistry models, high resolution biogeochemical snapshots and the study of open ocean fertilization.

  17. Simulations of Turbulent Flows with Strong Shocks and Density Variations: Final Report

    SciTech Connect (OSTI)

    Sanjiva Lele

    2012-10-01

    The target of this SciDAC Science Application was to develop a new capability based on high-order and high-resolution schemes to simulate shock-turbulence interactions and multi-material mixing in planar and spherical geometries, and to study Rayleigh-Taylor and Richtmyer-Meshkov turbulent mixing. These fundamental problems have direct application in high-speed engineering flows, such as inertial confinement fusion (ICF) capsule implosions and scramjet combustion, and also in the natural occurrence of supernovae explosions. Another component of this project was the development of subgrid-scale (SGS) models for large-eddy simulations of flows involving shock-turbulence interaction and multi-material mixing, which were to be validated with the DNS databases generated during the program. The numerical codes developed are designed for massively-parallel computer architectures, ensuring good scaling performance. Their algorithms were validated by means of a sequence of benchmark problems. The original multi-stage plan for this five-year project included the following milestones: 1) refinement of numerical algorithms for application to the shock-turbulence interaction problem and multi-material mixing (years 1-2); 2) direct numerical simulations (DNS) of canonical shock-turbulence interaction (years 2-3), targeted at improving our understanding of the physics behind the combined two phenomena and also at guiding the development of SGS models; 3) large-eddy simulations (LES) of shock-turbulence interaction (years 3-5), improving SGS models based on the DNS obtained in the previous phase; 4) DNS of planar/spherical RM multi-material mixing (years 3-5), also with the two-fold objective of gaining insight into the relevant physics of this instability and aiding in devising new modeling strategies for multi-material mixing; 5) LES of planar/spherical RM mixing (years 4-5), integrating the improved SGS and multi-material models developed in stages 3 and 5. This final report is outlined as follows. Section 2 shows an assessment of numerical algorithms that are best suited for the numerical simulation of compressible flows involving turbulence and shock phenomena. Sections 3 and 4 deal with the canonical shock-turbulence interaction problem, from the DNS and LES perspectives, respectively. Section 5 considers the shock-turbulence interaction in spherical geometry, in particular, the interaction of a converging shock with isotropic turbulence as well as the problem of the blast wave. Section 6 describes the study of shock-accelerated mixing through planar and spherical Richtmyer-Meshkov mixing as well as the shock-curtain interaction problem. In Section 7 we acknowledge the different interactions between Stanford and other institutions participating in this SciDAC project, as well as several external collaborations made possible through it. Section 8 presents a list of publications and presentations that have been generated during the course of this SciDAC project. Finally, Section 9 concludes this report with the list of personnel at Stanford University funded by this SciDAC project.

  18. Adaptive LES Methodology for Turbulent Flow Simulations

    SciTech Connect (OSTI)

    Oleg V. Vasilyev

    2008-06-12

    Although turbulent flows are common in the world around us, a solution to the fundamental equations that govern turbulence still eludes the scientific community. Turbulence has often been called one of the last unsolved problems in classical physics, yet it is clear that the need to accurately predict the effect of turbulent flows impacts virtually every field of science and engineering. As an example, a critical step in making modern computational tools useful in designing aircraft is to be able to accurately predict the lift, drag, and other aerodynamic characteristics in numerical simulations in a reasonable amount of time. Simulations that take months to years to complete are much less useful to the design cycle. Much work has been done toward this goal (Lee-Rausch et al. 2003, Jameson 2003) and as cost-effective, accurate tools for simulating turbulent flows evolve, we will all benefit from new scientific and engineering breakthroughs. The problem of simulating high Reynolds number (Re) turbulent flows of engineering and scientific interest would have been solved with the advent of Direct Numerical Simulation (DNS) techniques if unlimited computing power, memory, and time could be applied to each particular problem. Yet, given the current and near future computational resources that exist and a reasonable limit on the amount of time an engineer or scientist can wait for a result, the DNS technique will not be useful for more than 'unit' problems for the foreseeable future (Moin & Kim 1997, Jimenez & Moin 1991). The high computational cost for the DNS of three dimensional turbulent flows results from the fact that they have eddies of significant energy in a range of scales from the characteristic length scale of the flow all the way down to the Kolmogorov length scale. The actual cost of doing a three dimensional DNS scales as Re^(9/4) due to the large disparity in scales that need to be fully resolved. State-of-the-art DNS calculations of isotropic turbulence have recently been completed at the Japanese Earth Simulator (Yokokawa et al. 2002, Kaneda et al. 2003) using a resolution of 4096^3 (approximately 10^11) grid points with a Taylor-scale Reynolds number of 1217 (Re ≈ 10^6). Impressive as these calculations are, performed on one of the world's fastest super computers, more brute computational power would be needed to simulate the flow over the fuselage of a commercial aircraft at cruising speed. Such a calculation would require on the order of 10^16 grid points and would have a Reynolds number in the range of 10^8. Such a calculation would take several thousand years to simulate one minute of flight time on today's fastest super computers (Moin & Kim 1997). Even using state-of-the-art zonal approaches, which allow DNS calculations that resolve the necessary range of scales within predefined 'zones' in the flow domain, this calculation would take far too long for the result to be of engineering interest when it is finally obtained. Since computing power, memory, and time are all scarce resources, the problem of simulating turbulent flows has become one of how to abstract or simplify the complexity of the physics represented in the full Navier-Stokes (NS) equations in such a way that the 'important' physics of the problem is captured at a lower cost.
This model can then be used along with a numerical simulation of the 'important' modes of the problem that cannot be well represented by the model. The decision of what part of the physics to model and what kind of model to use has to be based on what physical properties are considered 'important' for the problem. It should be noted that 'nothing is free', so any use of a low order model will by definition lose some information about the original flow.
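
    A back-of-the-envelope sketch of the Re^(9/4) cost scaling quoted above (the prefactor is an arbitrary illustrative choice, not a calibrated constant):

```python
# Estimated 3-D DNS grid-point count as a function of Reynolds number,
# using the N ~ Re^(9/4) scaling with an illustrative prefactor of one.
def dns_grid_points(re, prefactor=1.0):
    return prefactor * re ** 2.25

for re in (1e4, 1e6, 1e8):
    print(f"Re = {re:.0e}: ~{dns_grid_points(re):.1e} grid points")
```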

  19. Fundamentals of plasma simulation

    SciTech Connect (OSTI)

    Forslund, D.W.

    1985-01-01

    With the increasing size and speed of modern computers, the incredibly complex nonlinear properties of plasmas in the laboratory and in space are being successfully explored in increasing depth. Of particular importance have been numerical simulation techniques involving finite size particles on a discrete mesh. After discussing the importance of this means of understanding a variety of nonlinear plasma phenomena, we describe the basic elements of particle-in-cell simulation and their limitations and advantages. The differencing techniques, stability and accuracy issues, data management and optimization issues are discussed by means of a simple example of a particle-in-cell code. Recent advances in simulation methods allowing large space and time scales to be treated with minimal sacrifice in physics are reviewed. Various examples of nonlinear processes successfully studied by plasma simulation will be given.
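
    To make the basic particle-in-cell cycle concrete, a minimal one-dimensional electrostatic sketch is shown below: deposit charge on a mesh, solve for the field, interpolate the field back to the particles, and push them. All parameters and normalizations are illustrative assumptions, not drawn from the report.

```python
# Minimal 1-D electrostatic particle-in-cell sketch (two counter-streaming beams,
# periodic domain, nearest-grid-point weighting, spectral field solve, leapfrog push).
import numpy as np

ng, length, n_part, dt, steps = 64, 2.0 * np.pi, 10000, 0.1, 200
dx = length / ng
rng = np.random.default_rng(1)

x = rng.uniform(0.0, length, n_part)                    # particle positions
v = np.where(rng.random(n_part) < 0.5, 1.0, -1.0)       # two opposing beams
v += 0.01 * rng.standard_normal(n_part)                 # small thermal spread
q_per_particle = length / n_part                        # gives a mean density of one

k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)              # wavenumbers for the field solve
k[0] = 1.0                                              # placeholder; mean mode is zeroed below

for _ in range(steps):
    cells = (x / dx).astype(int) % ng                   # 1) charge deposition (NGP weighting)
    rho = np.bincount(cells, minlength=ng) * q_per_particle / dx - 1.0  # neutralizing background
    rho_k = np.fft.fft(rho)                             # 2) Gauss's law: i k E_k = rho_k
    e_k = rho_k / (1j * k)
    e_k[0] = 0.0
    efield = np.fft.ifft(e_k).real
    v += dt * efield[cells]                             # 3) gather and push (charge/mass = 1)
    x = (x + dt * v) % length
```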

  20. Eddy current inspection tool which is selectively operable in a discontinuity detection mode and a discontinuity magnitude mode

    DOE Patents [OSTI]

    Petrini, Richard R.; Van Lue, Dorin F.

    1983-01-01

    A miniaturized inspection tool, for testing and inspection of metal objects in locations with difficult accessibility, comprises eddy current sensing equipment (12) with a probe coil (11), and associated coaxial coil cable (13), coil energizing means (21), and circuit means (21, 12) responsive to impedance changes in the coil as effected by induced eddy currents in a test object to produce a data output signal proportional to such changes. The coil and cable are slideably received in the utility channel of the flexible insertion tube 17 of fiberoptic scope 10. The scope 10 is provided with light transmitting and receiving fiberoptics for viewing through the flexible tube, and articulation means (19, 20) for articulating the distal end of the tube and permitting close control of coil placement relative to a test object. The eddy current sensing equipment includes a tone generator 30 for generating audible signals responsive to the data output signal. In one selected mode of operation, the tone generator, responsive to the output signal above a selected level, generates a constant single-frequency tone for signalling detection of a discontinuity and, in a second selected mode, generates a tone whose frequency is proportional to the difference between the output signal and a predetermined selected threshold level.

  1. Eddy current inspection tool which is selectively operable in a discontinuity detection mode and a discontinuity magnitude mode

    DOE Patents [OSTI]

    Petrini, R.R.; Van Lue, D.F.

    1983-10-25

    A miniaturized inspection tool, for testing and inspection of metal objects in locations with difficult accessibility, comprises eddy current sensing equipment with a probe coil, and associated coaxial coil cable, coil energizing means, and circuit means responsive to impedance changes in the coil as effected by induced eddy currents in a test object to produce a data output signal proportional to such changes. The coil and cable are slideably received in the utility channel of the flexible insertion tube of a fiberoptic scope. The scope is provided with light transmitting and receiving fiberoptics for viewing through the flexible tube, and articulation means for articulating the distal end of the tube and permitting close control of coil placement relative to a test object. The eddy current sensing equipment includes a tone generator for generating audible signals responsive to the data output signal. In one selected mode of operation, the tone generator, responsive to the output signal above a selected level, generates a constant single-frequency tone for signaling detection of a discontinuity and, in a second selected mode, generates a tone whose frequency is proportional to the difference between the output signal and a predetermined selected threshold level. 5 figs.

  2. EDDY CURRENT EFFECT OF THE BNL-AGS VACUUM CHAMBER ON THE OPTICS OF THE BNL-AGS SYNCHROTRON.

    SciTech Connect (OSTI)

    TSOUPAS,N.; AHRENS,L.; BROWN,K.A.; GLENN,J.W.; GARDNER,K.

    1999-03-29

    During the acceleration cycle of the AGS synchrotron, eddy currents are generated within the walls of the vacuum chambers of the AGS main magnets. The vacuum chambers have an elliptical cross section, are made of Inconel with a wall thickness of 2 mm, and are placed within the gap of the combined-function main magnets of the AGS synchrotron. The generation of the eddy currents in the walls of the vacuum chambers creates various magnetic multipoles, which affect the optics of the AGS machine. In this report these magnetic multipoles are calculated for various time intervals starting early in the acceleration cycle, where the magnetic field of the main magnet is ≈0.1 T, and ending before the beam extraction process, where the magnetic field of the main magnet is almost constant at ≈1.1 T. The calculations show that the magnetic multipoles generated by the eddy currents affect the optics of the AGS synchrotron during the acceleration cycle and in particular at low magnetic fields of the main magnet. Their effect is too weak to affect the optics of the AGS machine during beam extraction at the nominal energies.

  3. Large-Scale Information Systems

    SciTech Connect (OSTI)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made into extending this research to wireless communication networks in support of telemetry applications.

  4. Deriving Daytime Variables From the AmeriFlux Standard Eddy Covariance Data Set

    SciTech Connect (OSTI)

    van Ingen, Catharine; Agarwal, Deborah A.; Humphrey, Marty; Li, Jie

    2008-12-06

    A gap-filled, quality assessed eddy covariance dataset has recently become available for the AmeriFlux network. This dataset uses standard processing and produces commonly used science variables. This shared dataset enables robust comparisons across different analyses. Of course, there are many remaining questions. One of those is how to define 'during the day', which is an important concept for many analyses. Some studies have used local time, for example 9 am to 5 pm; others have used thresholds on photosynthetically active radiation (PAR). A related question is how to derive quantities such as the Bowen ratio. Most studies compute the ratio of the averages of the latent heat (LE) and sensible heat (H). In this study, we use different methods of defining 'during the day' for GPP, LE, and H. We evaluate the differences between methods in two ways. First, we look at a number of statistics of GPP. Second, we look at differences in the derived Bowen ratio. Our goal is not science per se, but rather informatics in support of the science.
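
    A hedged sketch of the two 'during the day' definitions discussed above and of a derived Bowen ratio (taken here as mean sensible over mean latent heat flux); the column names are assumptions for illustration, not the standard AmeriFlux variable names.

```python
# Illustrative only: compute a daytime Bowen ratio from a half-hourly record
# using either a clock-based or a PAR-threshold definition of "during the day".
import pandas as pd

def daytime_bowen_ratio(df, method="par", par_threshold=10.0):
    """df: DataFrame with a DatetimeIndex and columns 'PAR', 'H', 'LE' (assumed names)."""
    if method == "par":
        day = df[df["PAR"] > par_threshold]                       # radiation-based definition
    else:
        day = df[(df.index.hour >= 9) & (df.index.hour < 17)]     # local-time definition
    return day["H"].mean() / day["LE"].mean()                     # ratio of the averages
```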

  5. Consequences of Urban Stability Conditions for Computational Fluid Dynamics Simulations of Urban Dispersion

    SciTech Connect (OSTI)

    Lundquist, J K; Chan, S T

    2005-11-30

    The validity of omitting stability considerations when simulating transport and dispersion in the urban environment is explored using observations from the Joint URBAN 2003 field experiment and computational fluid dynamics simulations of that experiment. Four releases of sulfur hexafluoride, during two daytime and two nighttime intensive observing periods, are simulated using the building-resolving computational fluid dynamics model, FEM3MP, to solve the Reynolds Averaged Navier-Stokes equations with two options of turbulence parameterizations. One option omits stability effects but has a superior turbulence parameterization using a non-linear eddy viscosity (NEV) approach, while the other considers buoyancy effects with a simple linear eddy viscosity (LEV) approach for turbulence parameterization. Model performance metrics are calculated by comparison with observed winds and tracer data in the downtown area, and with observed winds and turbulence kinetic energy (TKE) profiles at a location immediately downwind of the central business district (CBD) in the area we label as the urban shadow. Model predictions of winds, concentrations, profiles of wind speed, wind direction, and friction velocity are generally consistent with and compare reasonably well with the field observations. Simulations using the NEV turbulence parameterization generally exhibit better agreement with observations. To further explore this assumption of a neutrally-stable atmosphere within the urban area, TKE budget profiles slightly downwind of the urban wake region in the 'urban shadow' are examined. Dissipation and shear production are the largest terms that may be calculated directly. The advection of TKE is calculated as a residual; as would be expected downwind of an urban area, the advection of TKE produced within the urban area is a very large term. Buoyancy effects may be neglected in favor of advection, shear production, and dissipation. For three of the IOPs, buoyancy production may be neglected entirely, and for one IOP, buoyancy production contributes approximately 25% of the total TKE at this location. For both nighttime releases, the contribution of buoyancy to the total TKE budget is always negligible though positive. Results from the simulations provide estimates of the average TKE values in the upwind, downtown, downtown shadow, and urban wake zones of the computational domain. These values suggest that building-induced turbulence can cause the average turbulence intensity in the urban area to increase by as much as seven times average 'upwind' values, explaining the minimal role of buoyant forcing in the downtown region. The downtown shadow exhibits an exponential decay in average TKE, while the distant downwind wake region approaches the average upwind values. For long-duration releases in downtown and downtown shadow areas, the assumption of neutral stability is valid because building-induced turbulence dominates the budget. However, further downwind in the urban wake region, which we find to be approximately 1500 m beyond the perimeter of downtown Oklahoma City, the levels of building-induced turbulence greatly subside, and therefore the assumption of neutral stability is less valid.
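
    For orientation, a common textbook form of the TKE budget from which the advection term can be backed out as a residual of the directly estimated terms (generic notation, not the study's exact formulation):

```latex
\frac{\partial \bar{e}}{\partial t}
 + \underbrace{\bar{u}_j\,\frac{\partial \bar{e}}{\partial x_j}}_{\text{advection}}
 = \underbrace{-\,\overline{u_i'u_j'}\,\frac{\partial \bar{u}_i}{\partial x_j}}_{\text{shear production}}
 + \underbrace{\frac{g}{\bar{\theta}_v}\,\overline{w'\theta_v'}}_{\text{buoyancy production}}
 - \underbrace{\frac{\partial}{\partial x_j}\Big(\overline{u_j'e'} + \frac{\overline{u_j'p'}}{\rho}\Big)}_{\text{transport}}
 - \varepsilon
```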

  6. Simulating atmosphere flow for wind energy applications with WRF-LES

    SciTech Connect (OSTI)

    Lundquist, J K; Mirocha, J D; Chow, F K; Kosovic, B; Lundquist, K A

    2008-01-14

    Forecasts of available wind energy resources at high spatial resolution enable users to site wind turbines in optimal locations, to forecast available resources for integration into power grids, to schedule maintenance on wind energy facilities, and to define design criteria for next-generation turbines. This array of research needs implies that an appropriate forecasting tool must be able to account for mesoscale processes like frontal passages, surface-atmosphere interactions inducing local-scale circulations, and the microscale effects of atmospheric stability such as breaking Kelvin-Helmholtz billows. This range of scales and processes demands a mesoscale model with large-eddy simulation (LES) capabilities which can also account for varying atmospheric stability. Numerical weather prediction models, such as the Weather Research and Forecasting model (WRF), excel at predicting synoptic and mesoscale phenomena. With grid spacings of less than 1 km (as is often required for wind energy applications), however, the limits of WRF's subfilter scale (SFS) turbulence parameterizations are exposed, and fundamental problems arise, associated with modeling the scales of motion between those which LES can represent and those for which large-scale PBL parameterizations apply. To address these issues, we have implemented significant modifications to the ARW core of the Weather Research and Forecasting model, including the Nonlinear Backscatter model with Anisotropy (NBA) SFS model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We are also modifying WRF's terrain-following coordinate system by implementing an immersed boundary method (IBM) approach to account for the effects of complex terrain. Companion papers presenting idealized simulations with NBA-RSFS-WRF (Mirocha et al.) and IBM-WRF (K. A. Lundquist et al.) are also presented. Observations of flow through the Altamont Pass (Northern California) wind farm are available for validation of the WRF modeling tool for wind energy applications. In this presentation, we use these data to evaluate simulations using the NBA-RSFS-WRF tool in multiple configurations. We vary nesting capabilities, levels of RSFS reconstruction, and SFS turbulence models (the new NBA turbulence model versus existing WRF SFS turbulence models) to illustrate the capabilities of the modeling tool and to prioritize recommendations for operational uses. Nested simulations which capture both significant mesoscale processes as well as local-scale stable boundary layer effects are required to effectively predict available wind resources at turbine height.

  7. Large displacement spherical joint

    DOE Patents [OSTI]

    Bieg, Lothar F.; Benavides, Gilbert L.

    2002-01-01

    A new class of spherical joints has a very large accessible full cone angle, a property which is beneficial for a wide range of applications. Despite the large cone angles, these joints move freely without singularities.

  8. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running Large Scale Jobs Running Large Scale Jobs Users face various challenges with running and scaling large scale jobs on peta-scale production systems. For example, certain applications may not have enough memory per core, the default environment variables may need to be adjusted, or I/O dominates run time. This page lists some available programming and run time tuning options and tips users can try on their large scale applications on Hopper for better performance. Try different compilers

  9. Wind Simulation

    Energy Science and Technology Software Center (OSTI)

    2008-12-31

    The Software consists of a spreadsheet written in Microsoft Excel that provides an hourly simulation of a wind energy system, which includes a calculation of wind turbine output as a power-curve fit of wind speed.
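
    As a rough illustration of what such an hourly power-curve simulation does, the sketch below interpolates hourly wind speeds onto a turbine power curve and sums the energy. The curve points and the wind series are hypothetical stand-ins, not values taken from the Software.

        import numpy as np

        # Hypothetical turbine power curve: wind speed (m/s) -> output (kW).
        curve_speed = np.array([0, 3, 5, 8, 11, 13, 25, 26], dtype=float)
        curve_power = np.array([0, 0, 150, 900, 1900, 2000, 2000, 0], dtype=float)

        # Hypothetical hourly wind-speed series for one week (168 hours).
        rng = np.random.default_rng(0)
        wind = rng.weibull(2.0, 168) * 8.0

        power_kw = np.interp(wind, curve_speed, curve_power)   # power-curve fit of wind speed
        energy_kwh = power_kw.sum()                            # 1-hour steps, so kW -> kWh

        print(f"simulated weekly energy: {energy_kwh:.0f} kWh")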

  10. Eddy current sensor for in-situ monitoring of swelling of Li-ion prismatic cells

    SciTech Connect (OSTI)

    Plotnikov, Yuri; Karp, Jason; Knobloch, Aaron; Kapusta, Chris; Lin, David

    2015-03-31

    In-situ monitoring of an on-board rechargeable battery in hybrid cars can be used to ensure a long operating life of the battery and safe operation of the vehicle. Intercalations of ions in the electrode material during charge and discharge of a lithium-ion battery cause periodic stress and strain of the electrode materials that can ultimately lead to fatigue, resulting in capacity loss and potential battery failure. Currently this process is not monitored directly on the cells. This work is focused on developing technologies that would quantify battery swelling and provide in-situ monitoring for onboard vehicle applications. Several rounds of tests have been performed to spatially characterize cell expansion of a 5 Ah cell with a nickel/manganese/cobalt-oxide cathode (Sanyo, Japan) used by Ford in their Fusion HEV battery pack. A collaborative team of researchers from GE and the University of Michigan has characterized the free expansion of these cells to be in the range of 100-125 microns (1% of total cell thickness) at the center point of the cell. GE proposed to use a thin eddy current (EC) coil to monitor these expansions on the cells while inside the package. The photolithography manufacturing process previously developed for EC arrays for detecting cracks in aircraft engine components was used to build test coils for gap monitoring. These sensors are thin enough to be placed safely between neighboring cells and capable of monitoring small variations in the gap between the cells. Preliminary investigations showed that these coils can be less than 100 microns thick and have sufficient sensitivity in a range from 0 to 2 mm. Laboratory tests revealed good correlation between EC and optical gap measurements in the desired range. Further technology development could lead to establishing a sensor network as a low-cost solution for the in-situ monitoring of cell swelling during battery operation.
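
    A hedged sketch of how such a coil could be used in practice: record the coil response at a few known gaps to build a calibration curve, then convert new readings to gap estimates by interpolation. The calibration numbers below are invented for illustration and are not data from this work.

        import numpy as np

        # Hypothetical calibration: known gaps (mm) vs. measured coil response (arbitrary units).
        cal_gap = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0])
        cal_resp = np.array([1.00, 0.82, 0.68, 0.49, 0.38, 0.31])  # response falls as the gap opens

        def gap_from_response(resp):
            """Invert the calibration curve; the reading must lie within the calibrated range."""
            # np.interp needs increasing x values, so interpolate over the reversed arrays.
            return np.interp(resp, cal_resp[::-1], cal_gap[::-1])

        print(gap_from_response(0.55))  # estimated gap in mm for a new reading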

  11. Laboratory simulation of binary and triple well EGS in large...

    Office of Scientific and Technical Information (OSTI)

    Additional Journal Information: Journal Volume: 55; Journal Issue: C; Related Information: CHORUS Timestamp: 2016-05-08 05:34:48; Journal ID: ISSN 0375-6505 Publisher: Elsevier ...

  12. SOLTES: simulator of large thermal energy systems (Conference...

    Office of Scientific and Technical Information (OSTI)

    solar energy heating and cooling, geothermal energy, and solar hot water, are discussed. ... ASSOCIATED PLANTS; 14 SOLAR ENERGY; 15 GEOTHERMAL ENERGY; GEOTHERMAL POWER PLANTS; ...

  13. Large-Scale First-Principles Molecular Dynamics Simulations on...

    Office of Scientific and Technical Information (OSTI)

    Measurements of performance by means of hardware counters show that 37% of the peak FPU performance can be attained. Authors: Gygi, F ; Draeger, E W ; de Supinski, B R ; Yates, R K ...

  14. Cosmological Simulations for Large-Scale Sky Surveys | Argonne...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    on all HPC systems. In particular, on the IBM BGQ system, HACC has reached very high levels of performance-almost 14 petaflops (the highest ever recorded by a science code)...

  15. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    try on their large scale applications on Hopper for better performance. Try different compilers and compiler options The available compilers on Hopper are PGI, Cray, Intel, GNU,...

  16. Advanced Oil Recovery Technologies for Improved Recovery from Slope Basin Clastic Reservoirs, Nash Draw Brushy Canyon Pool, Eddy County, NM

    SciTech Connect (OSTI)

    Mark B. Murphy

    2005-09-30

    The Nash Draw Brushy Canyon Pool in Eddy County New Mexico was a cost-shared field demonstration project in the U.S. Department of Energy Class III Program. A major goal of the Class III Program was to stimulate the use of advanced technologies to increase ultimate recovery from slope-basin clastic reservoirs. Advanced characterization techniques were used at the Nash Draw Pool (NDP) project to develop reservoir management strategies for optimizing oil recovery from this Delaware reservoir. The objective of the project was to demonstrate that a development program, which was based on advanced reservoir management methods, could significantly improve oil recovery at the NDP. Initial goals were (1) to demonstrate that an advanced development drilling and pressure maintenance program can significantly improve oil recovery compared to existing technology applications and (2) to transfer these advanced methodologies to other oil and gas producers. Analysis, interpretation, and integration of recently acquired geological, geophysical, and engineering data revealed that the initial reservoir characterization was too simplistic to capture the critical features of this complex formation. Contrary to the initial characterization, a new reservoir description evolved that provided sufficient detail regarding the complexity of the Brushy Canyon interval at Nash Draw. This new reservoir description was used as a risk reduction tool to identify 'sweet spots' for a development drilling program as well as to evaluate pressure maintenance strategies. The reservoir characterization, geological modeling, 3-D seismic interpretation, and simulation studies have provided a detailed model of the Brushy Canyon zones. This model was used to predict the success of different reservoir management scenarios and to aid in determining the most favorable combination of targeted drilling, pressure maintenance, well stimulation, and well spacing to improve recovery from this reservoir. An Advanced Log Analysis technique developed from the NDP project has proven useful in defining additional productive zones and refining completion techniques. This program proved to be especially helpful in locating and evaluating potential recompletion intervals, which has resulted in low development costs with only small incremental increases in lifting costs. To develop additional reserves at lower costs, zones behind pipe in existing wells were evaluated using techniques developed for the Brushy Canyon interval. These techniques were used to complete uphole zones in thirteen of the NDP wells. A total of 14 recompletions were done: four during 1999, four during 2000, two during 2001, and four during 2002-2003. These workovers added reserves of 332,304 barrels of oil (BO) and 640,363 MCFG (thousand cubic feet of gas) at an overall weighted average development cost of $1.87 per BOE (barrel of oil equivalent). A pressure maintenance pilot project in a developed area of the field was not conducted because the pilot area was pressure depleted, and the reservoir in that area was found to be compartmentalized and discontinuous. Economic analyses and simulation studies indicated that immiscible injection of lean hydrocarbon gas for pressure maintenance was not warranted at the NDP and would need to be considered for implementation in similar fields very soon after production has started. 
Simulation studies suggested that the injection of miscible carbon dioxide (CO{sub 2}) could recover significant quantities of oil at the NDP, but a source of low-cost CO{sub 2} was not available in the area. Results from the project indicated that further development will be under playa lakes and potash areas that were beyond the regions covered by well control and are not accessible with vertical wells. These areas, covered by 3-D seismic surveys that were obtained as part of the project, were accessed with combinations of deviated/horizontal wells. Three directional/horizontal wells have been drilled and completed to develop reserves under surface-restricted areas and potash mines. The third well has not been on production long enough for an accurate assessment but initial results from it are encouraging. Cumulative production from the first two wells through August 31, 2005 was 235,039 BO, 816,592 MCFG and 310,333 barrels of water (BW). Total estimated reserves from all three of the horizontal wells are 878,135 BO and 3.87 BCFG. The ratio of net revenue to cost for the first two wells is approximately 2.9 to 1 for an oil price of $30 per barrel that existed when the wells were drilled. Based on recent pricing trends, a detailed reserve study for the project was performed that assumed an oil price of $40 per barrel and a gas price of $7 per MCFG. These results show that this project has acceptable economics and similar projects can be economically developed as long as oil and gas prices remain over $30 per BOE.

  17. 2011_INCITE_Fact_Sheets.pdf | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    1_INCITE_Fact_Sheets.pdf 2011_INCITE_Fact_Sheets.pdf PDF icon 2011_INCITE_Fact_Sheets.pdf More Documents & Publications Advance Patent Waiver W(A)2006-028 Large Eddy Simulation (LES) Applied to LTC/Diesel/Hydrogen Engine Combustion Research Vehicle Technologies Office Merit Review 2015: Large Eddy Simulation (LES) Applied to Advanced Engine Combustion Research

  18. Eddy current position indicating apparatus for measuring displacements of core components of a liquid metal nuclear reactor

    DOE Patents [OSTI]

    Day, Clifford K.; Stringer, James L.

    1977-01-01

    Apparatus for measuring displacements of core components of a liquid metal fast breeder reactor by means of an eddy current probe. The active portion of the probe is located within a dry thimble which is supported on a stationary portion of the reactor core support structure. Split rings of metal, having a resistivity significantly different from that of sodium, are fixedly mounted on the core component to be monitored. The split rings are slidably positioned around and concentric with the probe, and symmetrically situated along the axis of the probe, so that motion of the ring along the axis of the probe produces a proportional change in the probe's electrical output.

  19. Large Customers (DR Sellers)

    SciTech Connect (OSTI)

    Kiliccot, Sila

    2011-10-25

    State of the large customers for demand response integration of solar and wind into electric grid; openADR; CAISO; DR as a pseudo generation; commercial and industrial DR strategies; California regulations

  20. PEBBLES Mechanics Simulation Speedup

    SciTech Connect (OSTI)

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2010-05-01

    Pebble bed reactors contain large numbers of spherical fuel elements arranged randomly. Determining the motion and location of these fuel elements is required for calculating certain parameters of pebble bed reactor operation. These simulations involve hundreds of thousands of pebbles and require determining the entire core motion as pebbles are recirculated. Single-processor algorithms for this are insufficient since they would take decades to centuries of wall-clock time. This paper describes the process of parallelizing and speeding up the PEBBLES pebble mechanics simulation code. Both shared memory programming with the Open Multi-Processing API and distributed memory programming with the Message Passing Interface API are used simultaneously in this process. A new shared-memory, lock-less, linear-time collision detection algorithm is described. This method allows faster detection of pebbles in contact than generic methods. These improvements combine to make full recirculations on AVR-sized reactors possible in months of wall-clock time.
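
    The linear-time idea behind such a collision search can be sketched with a uniform cell list: bin each pebble into a grid cell no smaller than a pebble diameter, then test for contact only against pebbles in the same or neighboring cells. The Python below is a generic, serial illustration of that idea, not the lock-less shared-memory algorithm described in the paper.

        import numpy as np
        from collections import defaultdict
        from itertools import product

        def find_contacts(centers, radius):
            """Return index pairs of overlapping equal-radius spheres using a cell list."""
            cell = 2.0 * radius                       # cell edge >= pebble diameter
            grid = defaultdict(list)
            for i, c in enumerate(centers):
                grid[tuple((c // cell).astype(int))].append(i)

            contacts = []
            for key, members in grid.items():
                # Gather candidates from this cell and its 26 neighbors.
                cand = []
                for off in product((-1, 0, 1), repeat=3):
                    cand.extend(grid.get(tuple(k + o for k, o in zip(key, off)), []))
                for i in members:
                    for j in cand:
                        if j > i and np.linalg.norm(centers[i] - centers[j]) < 2.0 * radius:
                            contacts.append((i, j))
            return contacts

        pebbles = np.random.rand(500, 3) * 0.5        # hypothetical packed region, radius 0.03 m
        print(len(find_contacts(pebbles, 0.03)), "contacts")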

  1. NREL: Measurements and Characterization - Simulated Module Current Versus

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Voltage (I-V) Simulated Module Current Versus Voltage (I-V) The National Renewable Energy Laboratory Device Performance group uses two I-V measurement systems to assess the performance parameters for photovoltaic (PV) modules under simulated conditions: a Spire 240A pulsed solar simulator and a large-area continuous solar simulator. The following table is a condensed list of characteristics for simulated module I-V measurements instrumentation. Major Instrumentation for Outdoor Simulated

  2. Large Group Visits

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Group Visits Large Group Visits All tours of the Museum are self-guided, but please schedule in advance so we can best accommodate your group. Contact the Bradbury Science Museum at (505) 667-4444 or by email to let us know if you plan to bring a group of 10 or more. Parking for buses and RVs is available on Iris Street behind the Museum off of 15th St. See attached map (pdf).

  3. Self-consistency tests of large-scale dynamics parameterizations...

    Office of Scientific and Technical Information (OSTI)

    large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG ... Journal Name: Journal of Advances in Modeling Earth Systems ...

  4. Hybrid Simulator

    Energy Science and Technology Software Center (OSTI)

    2005-10-15

    HybSim (short for Hybrid Simulator) is a flexible, easy-to-use screening tool that allows the user to quantify the technical and economic benefits of installing a village hybrid generating system, and it simulates systems with any combination of diesel generator sets, photovoltaic arrays, wind turbines, and battery energy storage systems. Most village systems (or small population sites such as villages, remote military bases, small communities, and independent or isolated buildings or centers) depend on diesel generation systems for their source of energy. HybSim allows the user to determine other "sources" of energy that can greatly reduce the dollar-to-kilowatt-hour ratio. Supported by the DOE Energy Storage Program, HybSim was initially developed to help analyze the benefits of energy storage systems in Alaskan villages. Soon after its development, other sources of energy were added, providing the user with a greater range of analysis opportunities and providing the village with potentially added savings. In addition to village systems, HybSim has generated interest from military institutions for use in energy provisions and from USAID for international village analysis.
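
    A minimal sketch of the kind of time-step dispatch such a screening tool performs, assuming made-up load, solar, and battery parameters: renewables serve the load first, the battery buffers the mismatch, and the diesel generator covers whatever remains. This is only an illustration, not HybSim's actual logic.

        import numpy as np

        hours = 24
        load = 40 + 15 * np.sin(np.linspace(0, 2 * np.pi, hours))   # kW, hypothetical village load
        pv = np.clip(60 * np.sin(np.linspace(-np.pi / 2, 3 * np.pi / 2, hours)), 0, None)  # kW

        soc, cap, diesel_kwh = 50.0, 100.0, 0.0   # battery state of charge and capacity (kWh)
        for l, p in zip(load, pv):
            net = l - p                            # positive: deficit, negative: surplus
            if net <= 0:
                soc = min(cap, soc - net)          # charge the battery with the surplus
            else:
                from_batt = min(net, soc)          # discharge the battery first
                soc -= from_batt
                diesel_kwh += net - from_batt      # diesel covers the remainder

        print(f"diesel energy over the day: {diesel_kwh:.1f} kWh")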

  5. Extra-Large Memory Nodes

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Extra-Large Memory Nodes Extra-Large Memory Nodes Extra-Large Memory Nodes Overview Carver has two "extra-large" memory nodes; each node has four 8-core Intel X7550 ("Nehalem EX")...

  6. Diamagnetic composite material structure for reducing undesired electromagnetic interference and eddy currents in dielectric wall accelerators and other devices

    DOE Patents [OSTI]

    Caporaso, George J.; Poole, Brian R.; Hawkins, Steven A.

    2015-06-30

    The devices, systems and techniques disclosed here can be used to reduce undesired effects of magnetic-field-induced eddy currents based on a diamagnetic composite material structure including diamagnetic composite sheets that are separated from one another to provide a high impedance composite material structure. In some implementations, each diamagnetic composite sheet includes patterned conductor layers separated by a dielectric material, and each patterned conductor layer includes voids and conductor areas. The voids in the patterned conductor layers of each diamagnetic composite sheet are arranged to be displaced in position from one patterned conductor layer to an adjacent patterned conductor layer, while conductor areas of the patterned conductor layers collectively form a contiguous conductor structure in each diamagnetic composite sheet to prevent penetration by a magnetic field.

  7. Changes in Moisture Flux over the Tibetan Plateau during 1979-2011: Insights from a High Resolution Simulation

    SciTech Connect (OSTI)

    Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin; Cuo, Lan

    2015-05-15

    Net precipitation (precipitation minus evapotranspiration, P-E) changes between 1979 and 2011 from a high-resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibet Plateau (TP) and compared to the global land data assimilation system (GLDAS) product. The high-resolution simulation better resolves precipitation changes than its coarse-resolution forcing, which is the dominant contributor to the improved P-E changes in the regional simulation compared to the global reanalysis. Hence, the former may provide better insight into the drivers of P-E changes. The mechanism behind the P-E changes is explored by decomposing the column-integrated moisture flux convergence into thermodynamic, dynamic, and transient-eddy components. High-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis. High-resolution climate simulation also facilitates new and substantial findings regarding the role of thermodynamics and transient eddies in P-E changes, reflected in observed changes in major river basins fed by runoff from the TP. The analysis revealed the contrasting convergence/divergence changes between the northwestern and southeastern TP, and feedback through latent heat release, as an important mechanism leading to the mean P-E changes in the TP.
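
    The decomposition referred to here follows the usual bookkeeping for changes in the vertically integrated moisture flux between two periods; a schematic form (standard notation, not necessarily the paper's exact formulation) is

        \delta(P - E) \approx -\nabla \cdot \frac{1}{g}\int_0^{p_s} \delta(\bar{q}\,\bar{\mathbf{v}})\,dp \;-\; \nabla \cdot \frac{1}{g}\int_0^{p_s} \delta(\overline{q'\mathbf{v}'})\,dp, \qquad \delta(\bar{q}\,\bar{\mathbf{v}}) \approx \bar{\mathbf{v}}\,\delta\bar{q} + \bar{q}\,\delta\bar{\mathbf{v}},

    where the first term of the second relation is the thermodynamic contribution (circulation held fixed, humidity changed), the second is the dynamic contribution (humidity held fixed, circulation changed), and the primed covariance carries the transient-eddy contribution.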

  8. 1

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    (3D) broken field of marine clouds is simulated by using large-eddy simulation (LES) model. The obtained 3D cloud field is considered as a real 3D cloud field. Second, we...

  9. Analysis of turbulent transport and mixing in transitional Rayleigh/Taylor unstable flow using direct numerical simulation data

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Schilling, Oleg; Mueschke, Nicholas J.

    2010-10-18

    Data from a 1152 x 760 x 1280 direct numerical simulation (DNS) of a transitional Rayleigh-Taylor mixing layer modeled after a small Atwood number water channel experiment is used to comprehensively investigate the structure of mean and turbulent transport and mixing. The simulation had physical parameters and initial conditions approximating those in the experiment. The budgets of the mean vertical momentum, heavy-fluid mass fraction, turbulent kinetic energy, turbulent kinetic energy dissipation rate, heavy-fluid mass fraction variance, and heavy-fluid mass fraction variance dissipation rate equations are constructed using Reynolds averaging applied to the DNS data. The relative importance of mean and turbulent production, turbulent dissipation and destruction, and turbulent transport is investigated as a function of Reynolds number and across the mixing layer to provide insight into the flow dynamics not presently available from experiments. The analysis of the budgets supports the assumption for small Atwood number, Rayleigh/Taylor driven flows that the principal transport mechanisms are buoyancy production, turbulent production, turbulent dissipation, and turbulent diffusion (shear and mean field production are negligible). As the Reynolds number increases, the turbulent production in the turbulent kinetic energy dissipation rate equation becomes the dominant production term, while the buoyancy production plateaus. Distinctions between momentum and scalar transport are also noted, where the turbulent kinetic energy and its dissipation rate both grow in time and are peaked near the center plane of the mixing layer, while the heavy-fluid mass fraction variance and its dissipation rate initially grow and then begin to decrease as mixing progresses and reduces density fluctuations. All terms in the transport equations generally grow or decay, with no qualitative change in their profile, except for the pressure flux contribution to the total turbulent kinetic energy flux, which changes sign early in time (a countergradient effect). The production-to-dissipation ratios corresponding to the turbulent kinetic energy and heavy-fluid mass fraction variance are large and vary strongly at small evolution times, decrease with time, and nearly asymptote as the flow enters a self-similar regime. The late-time turbulent kinetic energy production-to-dissipation ratio is larger than observed in shear-driven turbulent flows. The order-of-magnitude estimates of the terms in the transport equations are shown to be consistent with the DNS at late time, and also confirm both the dominant terms and their evolutionary behavior. These results are useful for identifying the dynamically important terms requiring closure, and assessing the accuracy of the predictions of Reynolds-averaged Navier-Stokes and large-eddy simulation models of turbulent transport and mixing in transitional Rayleigh-Taylor instability-generated flow.
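
    For orientation, the Reynolds-averaged turbulent kinetic energy budget whose terms are compared above has the generic textbook form (notation and the grouping of transport terms may differ from the paper)

        \frac{\partial k}{\partial t} + \bar{u}_j \frac{\partial k}{\partial x_j} = -\overline{u_i' u_j'}\,\frac{\partial \bar{u}_i}{\partial x_j} \;-\; \frac{g}{\rho_0}\,\overline{\rho' u_3'} \;-\; \varepsilon \;+\; T_k,

    where the right-hand terms are shear production, buoyancy production, dissipation, and the combined turbulent, pressure, and viscous transport T_k; as noted above, in this buoyancy-driven flow the shear (mean-field) production is negligible.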

  10. 'Sidecars' Pave the Way for Concurrent Analytics of Large-Scale

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simulations 'Sidecars' Pave the Way for Concurrent Analytics of Large-Scale Simulations 'Sidecars' Pave the Way for Concurrent Analytics of Large-Scale Simulations Halo Finder Enhancement Puts Supercomputer Users in the Driver's Seat November 2, 2015 Contact: Kathy Kincade, +1 510 495 2124, kkincade@lbl.gov In this Reeber halo finder simulation, the blueish haze is a volume rendering of the density field that Nyx calculates every time step. The light blue and

  11. Large Particle Titanate Sorbents

    SciTech Connect (OSTI)

    Taylor-Pashow, K.

    2015-10-08

    This research project was aimed at developing a synthesis technique for producing large particle size monosodium titanate (MST) to benefit high level waste (HLW) processing at the Savannah River Site (SRS). Two applications were targeted, first increasing the size of the powdered MST used in batch contact processing to improve the filtration performance of the material, and second preparing a form of MST suitable for deployment in a column configuration. Increasing the particle size should lead to improvements in filtration flux, and decreased frequency of filter cleaning leading to improved throughput. Deployment of MST in a column configuration would allow for movement from a batch process to a more continuous process. Modifications to the typical MST synthesis led to an increase in the average particle size. Filtration testing on dead-end filters showed improved filtration rates with the larger particle material; however, no improvement in filtration rate was realized on a crossflow filter. In order to produce materials suitable for column deployment several approaches were examined. First, attempts were made to coat zirconium oxide microspheres (196 µm) with a layer of MST. This proved largely unsuccessful. An alternate approach was then taken synthesizing a porous monolith of MST which could be used as a column. Several parameters were tested, and conditions were found that were able to produce a continuous structure versus an agglomeration of particles. This monolith material showed Sr uptake comparable to that of previously evaluated samples of engineered MST in batch contact testing.

  12. Simulations of fast crab cavity failures in the high luminosity...

    Office of Scientific and Technical Information (OSTI)

    Title: Simulations of fast crab cavity failures in the high luminosity Large Hadron Collider Authors: Yee-Rendon, Bruce ; Lopez-Fernandez, Ricardo ; Barranco, Javier ; Calaga, Rama ...

  13. Large Magnetization at Carbon Surfaces

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Magnetization at Carbon Surfaces Large Magnetization at Carbon Surfaces Print Wednesday, 31 August 2011 00:00 From organic matter to pencil lead, carbon is a versatile...

  14. Changes in Moisture Flux Over the Tibetan Plateau During 1979-2011: Insights from a High Resolution Simulation

    SciTech Connect (OSTI)

    Gao, Yanhong; Leung, Lai-Yung R.; Zhang, Yongxin; Cuo, Lan

    2015-05-01

    Net precipitation (precipitation minus evapotranspiration, P-E) changes from a high resolution regional climate simulation and its reanalysis forcing are analyzed over the Tibet Plateau (TP) and compared to the global land data assimilation system (GLDAS) product. The mechanism behind the P-E changes is explored by decomposing the column integrated moisture flux convergence into thermodynamic, dynamic, and transient eddy components. High-resolution climate simulation improves the spatial pattern of P-E changes over the best available global reanalysis. Improvement in simulating precipitation changes at high elevations contributes dominantly to the improved P-E changes. High-resolution climate simulation also facilitates new and substantial findings regarding the role of thermodynamics and transient eddies in P-E changes reflected in observed changes in major river basins fed by runoff from the TP. The analysis revealed the contrasting convergence/divergence changes between the northwestern and southeastern TP and feedback through latent heat release as an important mechanism leading to the mean P-E changes in the TP.

  15. Large forging manufacturing process

    DOE Patents [OSTI]

    Thamboo, Samuel V.; Yang, Ling

    2002-01-01

    A process for forging large components of Alloy 718 material so that the components do not exhibit abnormal grain growth includes the steps of: a) providing a billet with an average grain size between ASTM 0 and ASTM 3; b) heating the billet to a temperature of between 1750° F. and 1800° F.; c) upsetting the billet to obtain a component part with a minimum strain of 0.125 in at least selected areas of the part; d) reheating the component part to a temperature between 1750° F. and 1800° F.; e) upsetting the component part to a final configuration such that said selected areas receive no strains between 0.01 and 0.125; f) solution treating the component part at a temperature of between 1725° F. and 1750° F.; and g) aging the component part over predetermined times at different temperatures. A modified process achieves abnormal grain growth in selected areas of a component where desirable.

  16. Large scale tracking algorithms.

    SciTech Connect (OSTI)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  17. EM Active Sites (large) | Department of Energy

    Energy Savers [EERE]

    Active Sites (large) EM Active Sites (large) Center

  18. Dynamic Simulation Nuclear Power Plants

    Energy Science and Technology Software Center (OSTI)

    1992-03-03

    DSNP (Dynamic Simulator for Nuclear Power-Plants) is a system of programs and data files by which a nuclear power plant, or part thereof, can be simulated. The acronym DSNP is used interchangeably for the DSNP language, the DSNP libraries, the DSNP precompiler, and the DSNP document generator. The DSNP language is a special-purpose, block-oriented, digital-simulation language developed to facilitate the preparation of dynamic simulations of a large variety of nuclear power plants. It is a user-oriented language that permits the user to prepare simulation programs directly from power plant block diagrams and flow charts by recognizing the symbolic DSNP statements for the appropriate physical components and listing these statements in a logical sequence according to the flow of physical properties in the simulated power plant. Physical components of nuclear power plants are represented by functional blocks, or modules. Many of the more complex components are represented by several modules. The nuclear reactor, for example, has a kinetic module, a power distribution module, a feedback module, a thermodynamic module, a hydraulic module, and a radioactive heat decay module. These modules are stored in DSNP libraries in the form of a DSNP subroutine or function, a block of statements, a macro, or a combination of the above. Basic functional blocks such as integrators, pipes, function generators, connectors, and many auxiliary functions representing properties of materials used in nuclear power plants are also available. The DSNP precompiler analyzes the DSNP simulation program, performs the appropriate translations, inserts the requested modules from the library, links these modules together, searches necessary data files, and produces a simulation program in FORTRAN.

  19. Reframing Accelerator Simulations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simulations Mori-1.png Key Challenges: Use advanced simulation tools to study the feasibility of plasma-based linear colliders and to optimize conceptual designs. Much of the...

  20. Building a Particle Simulator

    SciTech Connect (OSTI)

    Weaver, Brian Phillip; Williams, Brian J.

    2015-10-06

    The purpose of this manuscript is to illustrate how to use the simulator we have developed to generate counts from simulated spectra.

  1. Look At (Search) Large Files

    Energy Science and Technology Software Center (OSTI)

    1992-07-13

    Scanning large files for information can be time consuming and expensive when using edit utilities on large mainframe computers. The reason is that editors must usually load the file into a buffer.

  2. Modeling needs for very large systems.

    SciTech Connect (OSTI)

    Stein, Joshua S.

    2010-10-01

    Most system performance models assume a point measurement for irradiance and that, except for the impact of shading from nearby obstacles, incident irradiance is uniform across the array. Module temperature is also assumed to be uniform across the array. For small arrays and hourly-averaged simulations, this may be a reasonable assumption. Stein is conducting research to characterize variability in large systems and to develop models that can better accommodate large system factors. In large, multi-MW arrays, passing clouds may block sunlight from a portion of the array but never affect another portion. Figure 22 shows that two irradiance measurements at opposite ends of a multi-MW PV plant appear to have similar irradiance (left), but in fact the irradiance is not always the same (right). Module temperature may also vary across the array, with modules on the edges being cooler because they have greater wind exposure. Large arrays will also have long wire runs and will be subject to associated losses. Soiling patterns may also vary, with modules closer to the source of soiling, such as an agricultural field, receiving more dust load. One of the primary concerns associated with this effort is how to work with integrators to gain access to better and more comprehensive data for model development and validation.
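
    A toy illustration of why the point-sensor assumption matters for large arrays: if a passing cloud shades only part of the plant, the plant-average irradiance (and, to first order, the DC power) differs from what either single sensor reports. The numbers below are invented for illustration.

        import numpy as np

        n_segments = 20                        # hypothetical multi-MW plant split into segments
        irr = np.full(n_segments, 950.0)       # W/m^2 under clear sky
        irr[:6] = 350.0                        # a passing cloud shades 6 of the 20 segments

        sensor_a, sensor_b = irr[0], irr[-1]   # point sensors at opposite ends of the plant
        plant_avg = irr.mean()

        # To first order, DC power scales with irradiance (temperature and wiring losses ignored).
        print(f"sensor A: {sensor_a:.0f} W/m^2, sensor B: {sensor_b:.0f} W/m^2, "
              f"plant average: {plant_avg:.0f} W/m^2")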

  3. Extra-Large Memory Nodes

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Extra-Large Memory Nodes Extra-Large Memory Nodes Extra-Large Memory Nodes Overview Carver has two "extra-large" memory nodes; each node has four 8-core Intel X7550 ("Nehalem EX") 2.0 GHz processors (32 cores total) and 1 TB of memory. These nodes are available through the queue "reg_xlmem". They can be used for interactive and batch jobs that require a large amount of memory (16 GB per core or more). reg_xlmem queue Please refer to the "Queues and Policies" page

  4. Sandia National Laboratories: Electromagnetic Environments Simulator (EMES)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Electromagnetic Environments Simulator (EMES) The Electromagnetic Environments Simulator (EMES) is a large transverse electromagnetic (TEM) cell that propagates a uniform, planar electromagnetic wave through the cell volume where test items are placed. EMES can be used for continuous wave (CW) Electromagnetic Radiation (EMR) and transient Electromagnetic Pulse (EMP) testing. The electric field is vertically polarized between the center conductor and the floor. If it is desired to illuminate test

  5. modeling-and-simulation-with-ls-dyna

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Modeling and Simulation with LS-DYNA®: INSIGHTS INTO MODELING WITH A GOAL OF PROVIDING CREDIBLE PREDICTIVE SIMULATIONS Feb. 11-12, 2010 Argonne TRACC Dr. Ronald F. Kulak Announcement Most applications of LS-DYNA are for complex, and often combined, physics where nonlinearities due to large deformations and material response, including failure, are the norm. Often the goal of such

  6. Large

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    detector array upgrade for a ruby-laser Thomson scattering system T. M. Biewer, a) ... A low-cost upgrade has been implemented on the Madison Symmetric Torus MST ruby-laser ...

  7. Xyce parallel electronic simulator release notes.

    SciTech Connect (OSTI)

    Keiter, Eric Richard; Hoekstra, Robert John; Mei, Ting; Russo, Thomas V.; Schiek, Richard Louis; Thornquist, Heidi K.; Rankin, Eric Lamont; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

    2010-05-01

    The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. Specific requirements include, among others, the ability to solve extremely large circuit problems by supporting large-scale parallel computing platforms, improved numerical performance, and object-oriented code design and implementation. The Xyce release notes describe: hardware and software requirements; new features and enhancements; any defects fixed since the last release; and current known defects and defect workarounds. For up-to-date information not available at the time these notes were produced, please visit the Xyce web page at http://www.cs.sandia.gov/xyce.

  8. Simulating flame lift-off characteristics of diesel and biodiesel fuels using detailed chemical-kinetic mechanisms and LES turbulence model.

    SciTech Connect (OSTI)

    Som, S; Longman, D. E.; Luo, Z; Plomer, M; Lu, T; Senecal, P.K.; Pomraning, E

    2012-01-01

    Combustion in direct-injection diesel engines occurs in a lifted, turbulent diffusion flame mode. Numerous studies indicate that the combustion and emissions in such engines are strongly influenced by the lifted flame characteristics, which are in turn determined by fuel and air mixing in the upstream region of the lifted flame, and consequently by the liquid breakup and spray development processes. From a numerical standpoint, these spray combustion processes depend heavily on the choice of underlying spray, combustion, and turbulence models. The present numerical study investigates the influence of different chemical kinetic mechanisms for diesel and biodiesel fuels, as well as Reynolds-averaged Navier-Stokes (RANS) and large eddy simulation (LES) turbulence models, on predicting flame lift-off lengths (LOLs) and ignition delays. Specifically, two chemical kinetic mechanisms for n-heptane (NHPT) and three for biodiesel surrogates are investigated. In addition, the RNG k-ε (RANS) model is compared to the Smagorinsky-based LES turbulence model. Using adaptive grid resolution, minimum grid sizes of 250 µm and 125 µm were obtained for the RANS and LES cases, respectively. Validations of these models were performed against experimental data from Sandia National Laboratories in a constant volume combustion chamber. Ignition delay and flame lift-off validations were performed at different ambient temperature conditions. The LES model predicts lower ignition delays and qualitatively better flame structures compared to the RNG k-ε model. The use of realistic chemistry and a ternary surrogate mixture, which consists of methyl decanoate, methyl 9-decenoate, and NHPT, results in better predicted LOLs and ignition delays. For diesel fuel, though, only marginal improvements are observed by using larger mechanisms. However, these improved predictions come at a significant increase in computational cost.
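
    For context, the Smagorinsky-based LES closure referenced above computes a subgrid eddy viscosity from the resolved strain rate and the filter width; in its standard form (the constant and any near-wall treatment used in the study may differ)

        \nu_t = (C_s \Delta)^2\,|\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad \bar{S}_{ij} = \frac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right).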

  9. A NEW GENERATION CHEMICAL FLOODING SIMULATOR

    SciTech Connect (OSTI)

    Gary A. Pope; Kamy Sepehrnoori; Mojdeh Delshad

    2005-01-01

    The premise of this research is that a general-purpose reservoir simulator for several improved oil recovery processes can and should be developed so that high-resolution simulations of a variety of very large and difficult problems can be achieved using state-of-the-art algorithms and computers. Such a simulator is not currently available to the industry. The goal of this proposed research is to develop a new-generation chemical flooding simulator that is capable of efficiently and accurately simulating oil reservoirs with at least a million gridblocks in less than one day on massively parallel computers. Task 1 is the formulation and development of solution scheme, Task 2 is the implementation of the chemical module, and Task 3 is validation and application. In this final report, we will detail our progress on Tasks 1 through 3 of the project.

  10. Simulation of water flow in terrestrial systems

    Energy Science and Technology Software Center (OSTI)

    2008-12-18

    ParFlow is a parallel, variably saturated groundwater flow code that is especially suitable for large-scale problems. ParFlow simulates three-dimensional saturated and variably saturated subsurface flow in strongly heterogeneous porous media. Its development and application have been ongoing for more than 10 years. ParFlow has recently been extended to coupled surface-subsurface flow to enable the simulation of hillslope runoff and channel routing in a truly integrated fashion.
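
    For reference, the variably saturated flow that ParFlow solves is governed by Richards' equation, shown here in a standard form only for orientation:

        \frac{\partial \theta(\psi)}{\partial t} = \nabla \cdot \left[ K(\psi)\,\nabla(\psi + z) \right] + q_s,

    where \theta is the volumetric water content, \psi the pressure head, K(\psi) the unsaturated hydraulic conductivity, z the elevation head, and q_s a source or sink term.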

  11. Final report on the project entitled "The Effects of Disturbance & Climate on Carbon Storage & the Exchanges of CO2 Water Vapor & Energy Exchange of Evergreen Coniferous Forests in the Pacific Northwest: Integration of Eddy Flux, Plant and Soil Measurements at a Cluster of Supersites"

    SciTech Connect (OSTI)

    Beverly E. Law; Christoph K. Thomas

    2011-09-20

    This is the final technical report containing a summary of all findings with regard to the following objectives of the project: (1) To quantify and understand the effects of wildfire on carbon storage and the exchanges of energy, CO2, and water vapor in a chronosequence of ponderosa pine (disturbance gradient); (2) To investigate the effects of seasonal and interannual variation in climate on carbon storage and the exchanges of energy, CO2, and water vapor in mature conifer forests in two climate zones: mesic 40-yr old Douglas-fir and semi-arid 60-yr old ponderosa pine (climate gradient); (3) To reduce uncertainty in estimates of CO2 feedbacks to the atmosphere by providing an improved model formulation for existing biosphere-atmosphere models; and (4) To provide high quality data for AmeriFlux and the NACP on micrometeorology, meteorology, and biology of these systems. Objective (1): A study integrating satellite remote sensing, AmeriFlux data, and field surveys in a simulation modeling framework estimated that the pyrogenic carbon emissions, tree mortality, and net carbon exchange associated with four large wildfires that burned ~50,000 hectares in 2002-2003 were equivalent to 2.4% of Oregon statewide anthropogenic carbon emissions over the same two-year period. Most emissions were from the combustion of the forest floor and understory vegetation, and only about 1% of live tree mass was combusted on average. Objective (2): A study of multi-year flux records across a chronosequence of ponderosa pine forests showed that net carbon uptake is over three times greater at a mature pine forest than at a young pine forest. The larger leaf area and wetter and cooler soils of the mature forest mainly caused this effect. A study analyzing seven years of carbon and water dynamics showed that interannual and seasonal variability of net carbon exchange was primarily related to variability in growing season length, which was a linear function of plant-available soil moisture in spring and early summer. A multi-year drought (2001-2003) led to a significant reduction of net ecosystem exchange due to carry-over effects in soil moisture and carbohydrate reserves in plant tissue. In the same forest, the interannual variability in the rate at which carbon is lost from the soil and forest floor is considerable and related to the variability in tree growth as much as it is to variability in soil climatic conditions. Objective (3): Flux data from the mature ponderosa pine site support a physical basis for filtering nighttime data with friction velocity above the canopy. An analysis of wind fields and heat transport in the subcanopy at the mesic 40-year-old Douglas-fir site showed that the nonlinear structure and behavior of spatial temperature gradients and the flow field require enhanced sensor networks to estimate advective fluxes in the subcanopy and close the surface energy balance in forests. Reliable estimates for flux uncertainties are needed to improve model validation and data assimilation in process-based carbon models, inverse modeling studies and model-data synthesis, where the uncertainties may be as important as the fluxes themselves. An analysis of the time scale dependence of the random and flux sampling error showed that the additional flux obtained by increasing the perturbation timescale beyond about 10 minutes is dominated by random sampling error, and therefore little confidence can be placed in its value.
Artificial correlation between gross ecosystem productivity (GEP) and ecosystem respiration (Re) is a consequence of flux partitioning of eddy covariance flux data when GEP is computed as the difference between NEE and computed daytime Re (e.g. using nighttime Re extrapolated into daytime using soil or air temperatures). Tower data must be adequately spatially averaged before comparison to gridded model output, as the time variability of the two is inherently different. The eddy-covariance data collected at the mature ponderosa pine site and the mesic Douglas fir site were used to develop and evaluate a new method to extra
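
    The partitioning artifact mentioned above follows directly from the standard flux-partitioning bookkeeping (a schematic statement, not the authors' new method). With the convention that positive NEE denotes a release of CO2 to the atmosphere,

        \mathrm{NEE} = R_e - \mathrm{GEP} \quad\Rightarrow\quad \mathrm{GEP} = R_e(T) - \mathrm{NEE},

    so any error in the temperature-based daytime estimate of R_e maps one-for-one into the inferred GEP, which is what induces the artificial correlation between the two quantities.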

  12. Quantum simulations of physics problems

    SciTech Connect (OSTI)

    Somma, R. D.; Ortiz, G.; Knill, E. H.; Gubernatis, J. E.

    2003-01-01

    If a large Quantum Computer (QC) existed today, what type of physical problems could we efficiently simulate on it that we could not efficiently simulate on a classical Turing machine? In this paper we argue that a QC could solve some relevant physical 'questions' more efficiently. The existence of one-to-one mappings between different algebras of observables or between different Hilbert spaces allow us to represent and imitate any physical system by any other one (e.g., a bosonic system by a spin-1/2 system). We explain how these mappings can be performed, and we show quantum networks useful for the efficient evaluation of some physical properties, such as correlation functions and energy spectra.

  13. CNS 2008 Template

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    andoua@westinghouse.com Abstract The purpose of the present study is to evaluate the feasibility of use of CFD Large Eddy Simulation (LES) modeling techniques in CD-adapco CFD code...

  14. ARM - Publications: Science Team Meeting Documents

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Eddy Simulations of Fair-Weather Cumulus Case at SGP Site Zhu, P. and Albrecht, B.A., University of Miami Twelfth Atmospheric Radiation Measurement (ARM) Science Team Meeting...

  15. L3:THM.CFD.P6.02 Hydra-TH Milestone Report Yidong Xia and Hong...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    from tra- ditional Reynolds-averaged Navier-Stokes (RANS) to large-eddy simulation (LES). The following models implemented in Hydra-TH are used in the test: * RANS models -...

  16. ARM - Publications: Science Team Meeting Documents

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    as that of Large Eddy Simulation models, they provide a means for explicitly evaluating LES (LEO for LES). Further the radar observations can be used to examine the subgrid scale...

  17. Evaluation of a 1000 MW Commercial Ultra Super-Critical Coal...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    of instantaneous O2 mass fraction in a hypothetical commercial scale 1000 MW, Ultra Super-Critical (USC) coal boiler Large eddy simulation prediction of instantaneous O2 mass...

  18. Regional Transportation Simulation Tool for Emergency Planning

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    rtstep-diag TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Regional Transportation Simulation Tool for Emergency Evacuation Planning (Click to play movie) Large-scale evacuations from major cities during no-notice events - such as chemical or radiological attacks, hazardous material spills, or earthquakes - have an obvious impact on large regions rather than on just the directly affected area. The scope of impact includes the

  19. Weld arc simulator

    DOE Patents [OSTI]

    Burr, Melvin J.

    1990-01-30

    An arc voltage simulator for an arc welder permits the welder response to a variation in arc voltage to be standardized. The simulator uses a linear potentiometer connected to the electrode to provide a simulated arc voltage at the electrode that changes as a function of electrode position.

  20. Air Shower Simulations

    SciTech Connect (OSTI)

    Alania, Marco; Gomez, Adolfo V. Chamorro; Araya, Ignacio J.; Huerta, Humberto Martinez; Flores, Alejandra Parra; Knapp, Johannes

    2009-04-30

    Air shower simulations are a vital part of the design of air shower experiments and the analysis of their data. We describe the basic features of air showers and explain why numerical simulations are the appropriate approach to model shower development. The CORSIKA program, the standard simulation program in this field, is introduced and its features, performance and limitations are discussed. The basic principles of hadronic interaction models and some general simulation techniques are explained. Also a brief introduction to the installation and use of CORSIKA is given.

  1. Microsoft Word - L3 THM ITM P3 01 (Rev 2) - Solution Verification report (2-1-12)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Solution Verification Applied to TransAT Large Eddy Simulations for Smooth Wall Channel Flow With Periodic Boundary Conditions, Revision 1 D. Chatzikyriakou, J. Buongiorno, Massachusetts Institute of Technology C. Narayanan, ASCOMP GmbH February 28, 2012 CASL-U-2011-0184-001 Solution Verification Applied to TransAT Large Eddy Simulations for Smooth Wall Channel Flow With Periodic Boundary Conditions REVISION 1 D. Chatzikyriakou 1 , J. Buongiorno 1* , C. Narayanan 2 1 Massachusetts Institute of

  2. The NOvA simulation chain

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Aurisano, A.; Backhouse, C.; Hatcher, R.; Mayer, N.; Musser, J.; Patterson, R.; Schroeter, R.; Sousa, A.

    2015-12-23

    The NOvA experiment is a two-detector, long-baseline neutrino experiment operating in the recently upgraded NuMI muon neutrino beam. Simulating neutrino interactions and backgrounds requires many steps, including: the simulation of the neutrino beam flux using FLUKA and the FLUGG interface, cosmic ray generation using CRY, neutrino interaction modeling using GENIE, and a simulation of the energy deposited in the detector using GEANT4. To shorten generation time, the modeling of detector-specific aspects, such as photon transport, detector and electronics noise, and readout electronics, employs custom, parameterized simulation applications. We will describe the NOvA simulation chain, and present details on the techniques used in modeling photon transport near the ends of cells, and in developing a novel data-driven noise simulation. Due to the high intensity of the NuMI beam, the Near Detector samples a high rate of muons originating in the surrounding rock. In addition, due to its location on the surface at Ash River, MN, the Far Detector collects a large rate (~140 kHz) of cosmic muons. Furthermore, we will discuss the methods used in NOvA for overlaying rock muons and cosmic ray muons with simulated neutrino interactions and show how realistically the final simulation reproduces the preliminary NOvA data.

  3. Large Magnetization at Carbon Surfaces

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Magnetization at Carbon Surfaces Print From organic matter to pencil lead, carbon is a versatile element. Now, another use has been found: magnets. One would not expect pure...

  4. Reactor refueling machine simulator

    SciTech Connect (OSTI)

    Rohosky, T.L.; Swidwa, K.J.

    1987-10-13

    This patent describes in combination: a nuclear reactor; a refueling machine having a bridge, trolley and hoist each driven by a separate motor having feedback means for generating a feedback signal indicative of movement thereof. The motors are operable to position the refueling machine over the nuclear reactor for refueling the same. The refueling machine also has a removable control console including means for selectively generating separate motor signals for operating the bridge, trolley and hoist motors and for processing the feedback signals to generate an indication of the positions thereof, separate output leads connecting each of the motor signals to the respective refueling machine motor, and separate input leads for connecting each of the feedback means to the console; and a portable simulator unit comprising: a single simulator motor; a single simulator feedback signal generator connected to the simulator motor for generating a simulator feedback signal in response to operation of the simulator motor; means for selectively connecting the output leads of the console to the simulator unit in place of the refueling machine motors, and for connecting the console input leads to the simulator unit in place of the refueling machine motor feedback means; and means for driving the single simulator motor in response to any of the bridge, trolley or hoist motor signals generated by the console and means for applying the simulator feedback signal to the console input lead associated with the motor signal being generated by the control console.

  5. Fast Analysis and Simulation Team | NISAC

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NISAC Fast Analysis and Simulation Team

  6. General Reactive Atomistic Simulation Program

    Energy Science and Technology Software Center (OSTI)

    2004-09-22

    GRASP (General Reactive Atomistic Simulation Program) is primarily intended as a molecular dynamics package for complex force fields. The code is designed to provide good performance for large systems, either in parallel or serial execution mode. The primary purpose of the code is to realistically represent the structural and dynamic properties of large numbers of atoms on timescales ranging from picoseconds up to a microsecond. Typically the atoms form a representative sample of some material, such as an interface between polycrystalline silicon and amorphous silica. GRASP differs from other parallel molecular dynamics codes primarily in its ability to handle relatively complicated interaction potentials and its ability to use more than one interaction potential in a single simulation. Most of the computational effort goes into the calculation of interatomic forces, which depend in a complicated way on the positions of all the atoms. The forces are used to integrate the equations of motion forward in time using the so-called velocity Verlet integration scheme. Alternatively, the forces can be used to find a minimum energy configuration, in which case a modified steepest descent algorithm is used.
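
    The velocity Verlet update mentioned above is the standard two-half-kick scheme; below is a generic NumPy sketch with a made-up harmonic force, not GRASP code.

        import numpy as np

        def force(x):
            """Hypothetical force for illustration: independent harmonic wells, F = -k x."""
            return -1.0 * x

        def velocity_verlet(x, v, dt, steps, mass=1.0):
            f = force(x)
            for _ in range(steps):
                v += 0.5 * dt * f / mass   # first half kick
                x += dt * v                # drift
                f = force(x)               # recompute forces at the new positions
                v += 0.5 * dt * f / mass   # second half kick
            return x, v

        x0 = np.array([1.0, 0.5, -0.3])
        v0 = np.zeros(3)
        x1, v1 = velocity_verlet(x0.copy(), v0.copy(), dt=0.01, steps=1000)
        print(x1, v1)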

  7. Parallel Atomistic Simulations

    SciTech Connect (OSTI)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.

  8. Role of Modeling and Simulation in Scientific Discovery | Department...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    method. The ability to do 3D, large-scale simulations of supernovae, such as above, led to the discovery of an entirely new and unexpected explosion...

  9. Modeling & Simulation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Modeling & Simulation Modeling & Simulation Research into alternative forms of energy, especially energy security, is one of the major national security imperatives of this century. Get Expertise David Harradine Physical Chemistry and Applied Spectroscopy Email Josh Smith Chemistry Communications Email The inherent knowledge of transformation has beguiled sorcerers and scientists alike. Data Analysis and Modeling & Simulation for the Chemical Sciences Project Description Almos every

  10. Device Simulation Tool - JCAP

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Device Simulation Tool, one of the JCAP research resources listed alongside the Benchmarking Database and the XPS Spectral Database.

  11. Modeling & Simulation publications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Modeling & Simulation » Modeling & Simulation Publications. Research into alternative forms of energy, especially energy security, is one of the major national security imperatives of this century. D.A. Horner, F. Lambert, J.D. Kress, and L.A. Collins,

  12. Large Magnetization at Carbon Surfaces

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Magnetization at Carbon Surfaces (31 August 2011). From organic matter to pencil lead, carbon is a versatile element. Now, another use has been found: magnets. One would not expect pure carbon to be magnetic, but for more than ten years scientists have suspected that carbon can be made to be magnetic by doping it with nonmagnetic materials, changing its order ever so slightly. Years ago, the first x-ray images obtained using the

  13. House Simulation Protocols Report

    Broader source: Energy.gov [DOE]

    Building America's House Simulation Protocols report is designed to assist researchers in tracking the progress of multiyear, whole-building energy reduction against research goals for new and...

  14. Consortium for Advanced Simulation ...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... | October 2015 2 of the lower core plate tends to promote manometer effects numerically. ... iteration and for this simulation the values are considered pseudo-global extremes. ...

  15. Whole Building Energy Simulation

    Broader source: Energy.gov [DOE]

    Whole building energy simulation, also referred to as energy modeling, can and should be incorporated early during project planning to provide energy impact feedback for which design considerations...

  16. Cluster computing software for GATE simulations

    SciTech Connect (OSTI)

    Beenhouwer, Jan de; Staelens, Steven; Kruecker, Dirk; Ferrer, Ludovic; D'Asseler, Yves; Lemahieu, Ignace; Rannou, Fernando R.

    2007-06-15

    Geometry and tracking (GEANT4) is a Monte Carlo package designed for high energy physics experiments. It is used as the basis layer for Monte Carlo simulations of nuclear medicine acquisition systems in GEANT4 Application for Tomographic Emission (GATE). GATE allows the user to realistically model experiments using accurate physics models and time synchronization for detector movement through a script language contained in a macro file. The downside of this high accuracy is long computation time. This paper describes a platform independent computing approach for running GATE simulations on a cluster of computers in order to reduce the overall simulation time. Our software automatically creates fully resolved, nonparametrized macros accompanied with an on-the-fly generated cluster specific submit file used to launch the simulations. The scalability of GATE simulations on a cluster is investigated for two imaging modalities, positron emission tomography (PET) and single photon emission computed tomography (SPECT). Due to a higher sensitivity, PET simulations are characterized by relatively high data output rates that create rather large output files. SPECT simulations, on the other hand, have lower data output rates but require a long collimator setup time. Both of these characteristics hamper scalability as a function of the number of CPUs. The scalability of PET simulations is improved here by the development of a fast output merger. The scalability of SPECT simulations is improved by greatly reducing the collimator setup time. Accordingly, these two new developments result in higher scalability for both PET and SPECT simulations and reduce the computation time to more practical values.
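
    A hedged sketch of the general job-splitting idea (not the authors' actual tool): divide the work over N jobs, write one fully resolved macro per job with its own seed and output name, and emit a simple launch script. The {seed} and {output} placeholders and the launch command are illustrative, not real GATE syntax.

        from pathlib import Path

        def split_gate_jobs(template_text, n_jobs, outdir="jobs"):
            """Write one macro per job from a user-supplied template containing
            hypothetical {seed} and {output} placeholders, plus a trivial launch
            script. Generic illustration only, not the software described above."""
            out = Path(outdir)
            out.mkdir(exist_ok=True)
            lines = []
            for job in range(n_jobs):
                macro = template_text.format(seed=12345 + job, output=f"result_{job:03d}")
                macro_path = out / f"job_{job:03d}.mac"
                macro_path.write_text(macro)
                lines.append(f"run_simulation {macro_path}  # placeholder launch command")
            (out / "submit.sh").write_text("\n".join(lines) + "\n")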

  17. Building America 2014 House Simulation Protocols | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    2014 House Simulation Protocols Building America 2014 House Simulation Protocols As Building America has grown to include a large and diverse cross-section of the home building and retrofit industries, it has become more important to develop accurate, consistent analysis techniques to measure progress towards the program's goals. The House Simulation Protocol (HSP) document provides guidance to program partners and managers so they can compare energy savings for new construction and retrofit

  18. Simulation of magnetic island dynamics under resonant magnetic perturbation with the TEAR code and validation of the results on T-10 tokamak data

    SciTech Connect (OSTI)

    Ivanov, N. V.; Kakurin, A. M.

    2014-10-15

    Simulation of the magnetic island evolution under Resonant Magnetic Perturbation (RMP) in rotating T-10 tokamak plasma is presented with the intent of validating the TEAR code against experiment. In the T-10 experiment chosen for simulation, the RMP consists of a stationary error field, the magnetic field of the eddy current in the resistive vacuum vessel, and the magnetic field of the externally applied controlled halo current in the plasma scrape-off layer (SOL). The halo-current loop consists of a rail limiter, plasma SOL, vacuum vessel, and external part of the circuit. Effects of plasma resistivity, viscosity, and RMP are taken into account in the TEAR code based on the two-fluid MHD approximation. Radial distribution of the magnetic flux perturbation is calculated taking account of the externally applied RMP. A good agreement is obtained between the simulation results and experimental data for the cases of preprogrammed and feedback-controlled halo current in the plasma SOL.

  19. Radiation from Large Gas Volumes and Heat Exchange in Steam Boiler Furnaces

    SciTech Connect (OSTI)

    Makarov, A. N.

    2015-09-15

    Radiation from large cylindrical gas volumes is studied as a means of simulating the flare in steam boiler furnaces. Calculations of heat exchange in a furnace by the zonal method and by simulation of the flare with cylindrical gas volumes are described. The latter method is more accurate and yields more reliable information on heat transfer processes taking place in furnaces.

  20. Theory, Simulation, and Computation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ADTSC Theory, Simulation, and Computation Supporting the Laboratory's overarching strategy to provide cutting-edge tools to guide and interpret experiments and further our fundamental understanding and predictive capabilities for complex systems. Theory, modeling, informatics Suites of experiment data High performance computing, simulation, visualization Contacts Associate Director John Sarrao Deputy Associate Director Paul Dotson Directorate Office (505) 667-6645 Email Applying the Scientific

  1. Radiation detector spectrum simulator

    DOE Patents [OSTI]

    Wolf, Michael A.; Crowell, John M.

    1987-01-01

    A small battery operated nuclear spectrum simulator having a noise source generates pulses with a Gaussian distribution of amplitudes. A switched dc bias circuit cooperating therewith generates several nominal amplitudes of such pulses and a spectral distribution of pulses that closely simulates the spectrum produced by a radiation source such as Americium 241.

  2. Radiation detector spectrum simulator

    DOE Patents [OSTI]

    Wolf, M.A.; Crowell, J.M.

    1985-04-09

    A small battery operated nuclear spectrum simulator having a noise source generates pulses with a Gaussian distribution of amplitudes. A switched dc bias circuit cooperating therewith generates several nominal amplitudes of such pulses and a spectral distribution of pulses that closely simulates the spectrum produced by a radiation source such as Americium 241.

  3. Damselfly Network Simulator

    Energy Science and Technology Software Center (OSTI)

    2014-04-01

    Damselfly is a model-based parallel network simulator. It can simulate communication patterns of High Performance Computing applications on different network topologies. It outputs steady-state network traffic for a communication pattern, which can help in studying network congestion and its impact on performance.

  4. Radio Channel Simulator (RCSM)

    Energy Science and Technology Software Center (OSTI)

    2007-01-31

    This is a simulation package for making site specific predictions of radio signal strength. The software computes received power at discrete grid points as a function of the transmitter location and propagation environment. It is intended for use with wireless network simulation packages and to support wireless network deployments.

  5. Duct thermal performance models for large commercial buildings

    SciTech Connect (OSTI)

    Wray, Craig P.

    2003-10-01

    Despite the potential for significant energy savings by reducing duct leakage or other thermal losses from duct systems in large commercial buildings, California Title 24 has no provisions to credit energy-efficient duct systems in these buildings. A substantial reason is the lack of readily available simulation tools to demonstrate the energy-saving benefits associated with efficient duct systems in large commercial buildings. The overall goal of the Efficient Distribution Systems (EDS) project within the PIER High Performance Commercial Building Systems Program is to bridge the gaps in current duct thermal performance modeling capabilities, and to expand our understanding of duct thermal performance in California large commercial buildings. As steps toward this goal, our strategy in the EDS project involves two parts: (1) developing a whole-building energy simulation approach for analyzing duct thermal performance in large commercial buildings, and (2) using the tool to identify the energy impacts of duct leakage in California large commercial buildings, in support of future recommendations to address duct performance in the Title 24 Energy Efficiency Standards for Nonresidential Buildings. The specific technical objectives for the EDS project were to: (1) Identify a near-term whole-building energy simulation approach that can be used in the impacts analysis task of this project (see Objective 3), with little or no modification. A secondary objective is to recommend how to proceed with long-term development of an improved compliance tool for Title 24 that addresses duct thermal performance. (2) Develop an Alternative Calculation Method (ACM) change proposal to include a new metric for thermal distribution system efficiency in the reporting requirements for the 2005 Title 24 Standards. The metric will facilitate future comparisons of different system types using a common ''yardstick''. (3) Using the selected near-term simulation approach, assess the impacts of duct system improvements in California large commercial buildings, over a range of building vintages and climates. This assessment will provide a solid foundation for future efforts that address the energy efficiency of large commercial duct systems in Title 24. This report describes our work to address Objective 1, which includes a review of past modeling efforts related to duct thermal performance, and recommends near- and long-term modeling approaches for analyzing duct thermal performance in large commercial buildings.

  6. Analysis of turbulent transport and mixing in transitional Rayleigh–Taylor unstable flow using direct numerical simulation data

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Schilling, Oleg; Mueschke, Nicholas J.

    2010-10-18

    Data from a 1152 x 760 x 1280 direct numerical simulation (DNS) of a transitional Rayleigh-Taylor mixing layer modeled after a small Atwood number water channel experiment is used to comprehensively investigate the structure of mean and turbulent transport and mixing. The simulation had physical parameters and initial conditions approximating those in the experiment. The budgets of the mean vertical momentum, heavy-fluid mass fraction, turbulent kinetic energy, turbulent kinetic energy dissipation rate, heavy-fluid mass fraction variance, and heavy-fluid mass fraction variance dissipation rate equations are constructed using Reynolds averaging applied to the DNS data. The relative importance of mean and turbulent production, turbulent dissipation and destruction, and turbulent transport are investigated as a function of Reynolds number and across the mixing layer to provide insight into the flow dynamics not presently available from experiments. The analysis of the budgets supports the assumption for small Atwood number, Rayleigh-Taylor driven flows that the principal transport mechanisms are buoyancy production, turbulent production, turbulent dissipation, and turbulent diffusion (shear and mean field production are negligible). As the Reynolds number increases, the turbulent production in the turbulent kinetic energy dissipation rate equation becomes the dominant production term, while the buoyancy production plateaus. Distinctions between momentum and scalar transport are also noted, where the turbulent kinetic energy and its dissipation rate both grow in time and are peaked near the center plane of the mixing layer, while the heavy-fluid mass fraction variance and its dissipation rate initially grow and then begin to decrease as mixing progresses and reduces density fluctuations. All terms in the transport equations generally grow or decay, with no qualitative change in their profile, except for the pressure flux contribution to the total turbulent kinetic energy flux, which changes sign early in time (a countergradient effect). The production-to-dissipation ratios corresponding to the turbulent kinetic energy and heavy-fluid mass fraction variance are large and vary strongly at small evolution times, decrease with time, and nearly asymptote as the flow enters a self-similar regime. The late-time turbulent kinetic energy production-to-dissipation ratio is larger than observed in shear-driven turbulent flows. The order of magnitude estimates of the terms in the transport equations are shown to be consistent with the DNS at late time, and also confirm both the dominant terms and their evolutionary behavior. Thus, these results are useful for identifying the dynamically important terms requiring closure, and assessing the accuracy of the predictions of Reynolds-averaged Navier-Stokes and large-eddy simulation models of turbulent transport and mixing in transitional Rayleigh-Taylor instability-generated flow.
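
    For readers unfamiliar with the Reynolds-averaged quantities referred to above, a minimal numpy sketch (not the authors' analysis code) of extracting turbulent kinetic energy from instantaneous velocity fields, averaging over directions assumed statistically homogeneous:

        import numpy as np

        def turbulent_kinetic_energy(u, v, w, axes=(0, 2)):
            """k = 0.5 * <u'_i u'_i>, averaged over homogeneous axes.

            u, v, w : 3-D instantaneous velocity components
            axes    : axes treated as homogeneous for Reynolds averaging
            """
            up = u - u.mean(axis=axes, keepdims=True)   # fluctuation u' = u - <u>
            vp = v - v.mean(axis=axes, keepdims=True)
            wp = w - w.mean(axis=axes, keepdims=True)
            return 0.5 * (up**2 + vp**2 + wp**2).mean(axis=axes)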

  7. Feeding a large-scale physics application to Python

    SciTech Connect (OSTI)

    Beazley, D.M.; Lomdahl, P.S.

    1997-10-01

    The authors describe their experiences using Python with the SPaSM molecular dynamics code at Los Alamos National Laboratory. Originally developed as a large monolithic application for massive parallel processing systems, they have used Python to transform their application into a flexible, highly modular, and extremely powerful system for performing simulation, data analysis, and visualization. In addition, they describe how Python has solved a number of important problems related to the development, debugging, deployment, and maintenance of scientific software.

  8. FEM assessment of large-strain thaw consolidation

    SciTech Connect (OSTI)

    Foriero, A.; Ladanyi, B.

    1995-02-01

    Finite-element simulations using a large-strain thaw-consolidation model, formulated by the writers, are presented and compared with the data obtained from a warm-oil test pipeline at Inuvik, Northwest Territories, Canada. Nondimensional design charts generated by the method are used to predict thaw-consolidation settlements. A good agreement is found between observed and predicted settlement values, giving encouragement for future thaw-settlement predictions based on finite-element-generated nondimensional charts.

  9. Chunking of Large Multidimensional Arrays

    SciTech Connect (OSTI)

    Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar

    2007-02-28

    Data intensive scientific computations as well as on-line analytical processing applications are done on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real life data sets, show that our chunking is much more efficient than the existing approximate solutions.
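
    A rough numerical illustration of the metric described above (a sketch under simplifying assumptions, not the authors' models): for a randomly placed query of side q_i and chunks of side c_i, the query overlaps roughly q_i/c_i + 1 chunks per dimension on average, so candidate chunk shapes of a fixed volume can be compared as follows.

        import numpy as np

        def expected_chunks(query_shape, chunk_shape):
            """Expected chunks overlapped by a randomly placed hyper-rectangular
            query: roughly q/c + 1 per dimension (illustrative estimate only)."""
            return np.prod([q / c + 1.0 for q, c in zip(query_shape, chunk_shape)])

        def best_chunk_shape(query_shape, chunk_volume, candidates):
            """Pick the candidate chunk shape within the volume budget that
            minimizes the expected number of chunks retrieved."""
            feasible = [c for c in candidates if np.prod(c) <= chunk_volume]
            return min(feasible, key=lambda c: expected_chunks(query_shape, c))

        # Example: 2-D queries of 200 x 50 cells, 4096-cell chunks
        cands = [(2**i, 4096 // 2**i) for i in range(1, 12)]
        print(best_chunk_shape((200, 50), 4096, cands))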

  10. Converting DYNAMO simulations to Powersim Studio simulations

    SciTech Connect (OSTI)

    Walker, La Tonya Nicole; Malczynski, Leonard A.

    2014-02-01

    DYNAMO is a computer program for building and running 'continuous' simulation models. It was developed by the Industrial Dynamics Group at the Massachusetts Institute of Technology for simulating dynamic feedback models of business, economic, and social systems. The history of the system dynamics method since 1957 includes many classic models built in DYNAMO. It was not until the late 1980s that software was built to take advantage of the rise of personal computers and graphical user interfaces that DYNAMO was supplanted. There is much learning and insight to be gained from examining the DYNAMO models and their accompanying research papers. We believe that it is a worthwhile exercise to convert DYNAMO models to more recent software packages. We have made an attempt to make it easier to turn these models into a more current system dynamics software language, Powersim Studio produced by Powersim AS of Bergen, Norway. This guide shows how to convert DYNAMO syntax into Studio syntax.
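
    Independently of either syntax, the numerical core shared by such 'continuous' system dynamics tools is a simple Euler update of each stock (level) from its flows; a generic Python sketch of a one-stock population model (illustrative only, not DYNAMO or Studio syntax):

        def run_population_model(pop0=1000.0, birth_rate=0.03, death_rate=0.01,
                                 dt=0.25, years=50.0):
            """Euler integration of a single stock with inflow and outflow."""
            pop, t, history = pop0, 0.0, []
            while t < years:
                births = birth_rate * pop          # flow in
                deaths = death_rate * pop          # flow out
                pop += dt * (births - deaths)      # level update: L = L + DT*(in - out)
                t += dt
                history.append((t, pop))
            return history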

  11. Large Component Removal/Disposal

    SciTech Connect (OSTI)

    Wheeler, D. M.

    2002-02-27

    This paper describes the removal and disposal of the large components from Maine Yankee Atomic Power Plant. The large components discussed include the three steam generators, pressurizer, and reactor pressure vessel. Two separate Exemption Requests, which included radiological characterizations, shielding evaluations, structural evaluations and transportation plans, were prepared and issued to the DOT for approval to ship these components; the first was for the three steam generators and one pressurizer, the second was for the reactor pressure vessel. Both Exemption Requests were submitted to the DOT in November 1999. The DOT approved the Exemption Requests in May and July of 2000, respectively. The steam generators and pressurizer have been removed from Maine Yankee and shipped to the processing facility. They were removed from Maine Yankee's Containment Building, loaded onto specially designed skid assemblies, transported onto two separate barges, tied down to the barges, then shipped 2750 miles to Memphis, Tennessee for processing. The Reactor Pressure Vessel Removal Project is currently under way and scheduled to be completed by Fall of 2002. The planning, preparation and removal of these large components has required extensive efforts in planning and implementation on the part of all parties involved.

  12. Global Feedback Simulator

    Energy Science and Technology Software Center (OSTI)

    2015-10-29

    GFS is a simulation engine that is used for the characterization of Accelerator performance parameters based on the machine layout, configuration and noise sources. It combines extensively tested Feedback models with a longitudinal phase space tracking simulator along with the interaction between the two via beam-based feedback using a computationally efficient simulation engine. The models include beam instrumentation, considerations on loop delays in both the RF and beam-based feedback loops, as well as the ability to inject noise (both correlated and uncorrelated) at different points of the machine including a full characterization of the electron gun performance parameters.

  13. Global Feedback Simulator

    SciTech Connect (OSTI)

    2015-10-29

    GFS is a simulation engine that is used for the characterization of Accelerator performance parameters based on the machine layout, configuration and noise sources. It combines extensively tested Feedback models with a longitudinal phase space tracking simulator along with the interaction between the two via beam-based feedback using a computationally efficient simulation engine. The models include beam instrumentation, considerations on loop delays in both the RF and beam-based feedback loops, as well as the ability to inject noise (both correlated and uncorrelated) at different points of the machine including a full characterization of the electron gun performance parameters.

  14. Large-bore pipe decontamination

    SciTech Connect (OSTI)

    Ebadian, M.A.

    1998-01-01

    The decontamination and decommissioning (D and D) of 1200 buildings within the US Department of Energy-Office of Environmental Management (DOE-EM) Complex will require the disposition of miles of pipe. The disposition of large-bore pipe, in particular, presents difficulties in the area of decontamination and characterization. The pipe is potentially contaminated internally as well as externally. This situation requires a system capable of decontaminating and characterizing both the inside and outside of the pipe. Current decontamination and characterization systems are not designed for application to this geometry, making the direct disposal of piping systems necessary in many cases. The pipe often creates voids in the disposal cell, which requires the pipe to be cut in half or filled with a grout material. These methods are labor intensive and costly to perform on large volumes of pipe. Direct disposal does not take advantage of recycling, which could provide monetary dividends. To facilitate the decontamination and characterization of large-bore piping and thereby reduce the volume of piping required for disposal, a detailed analysis will be conducted to document the pipe remediation problem set; determine potential technologies to solve this remediation problem set; design and laboratory test potential decontamination and characterization technologies; fabricate a prototype system; provide a cost-benefit analysis of the proposed system; and transfer the technology to industry. This report summarizes the activities performed during fiscal year 1997 and describes the planned activities for fiscal year 1998. Accomplishments for FY97 include the development of the applicable and relevant and appropriate regulations, the screening of decontamination and characterization technologies, and the selection and initial design of the decontamination system.

  15. LHC: The Large Hadron Collider

    SciTech Connect (OSTI)

    Lincoln, Don

    2015-03-04

    The Large Hadron Collider (or LHC) is the world’s most powerful particle accelerator. In 2012, scientists used data taken by it to discover the Higgs boson, before pausing operations for upgrades and improvements. In the spring of 2015, the LHC will return to operations with 163% of the energy it had before and with three times as many collisions per second. It’s essentially a new and improved version of itself. In this video, Fermilab’s Dr. Don Lincoln explains both some of the absolutely amazing scientific and engineering properties of this modern scientific wonder.

  16. the Large Aperture GRB Observatory

    SciTech Connect (OSTI)

    Bertou, Xavier

    2009-04-30

    The Large Aperture GRB Observatory (LAGO) aims at the detection of high energy photons from Gamma Ray Bursts (GRB) using the single particle technique (SPT) in ground based water Cherenkov detectors (WCD). To reach a reasonable sensitivity, high altitude mountain sites have been selected in Mexico (Sierra Negra, 4550 m a.s.l.), Bolivia (Chacaltaya, 5300 m a.s.l.) and Venezuela (Merida, 4765 m a.s.l.). We report on the project's progress and the first operation at high altitude, a search for bursts in 6 months of preliminary data, as well as a search for a signal at ground level when satellites report a burst.

  17. Large Magnetization at Carbon Surfaces

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Magnetization at Carbon Surfaces Print From organic matter to pencil lead, carbon is a versatile element. Now, another use has been found: magnets. One would not expect pure carbon to be magnetic, but for more than ten years scientists have suspected that carbon can be made to be magnetic by doping it with nonmagnetic materials, changing its order ever so slightly. Years ago, the first x-ray images obtained using the scanning transmission x-ray microscope at ALS Beamline 11.0.2 provided

  4. Self-consistent klystron simulations

    SciTech Connect (OSTI)

    Carlsten, B.E.; Tallerico, P.J.

    1985-01-01

    A numerical analysis of large-signal klystron behavior based on general wave-particle interaction theory is presented. The computer code presented is tailored for the minimum amount of complexity needed in klystron simulation. The code includes self-consistent electron motion, space-charge fields, and intermediate and output fields. It also includes use of time periodicity to simplify the problem, accurate representation of the space-charge fields, accurate representation of the cavity standing-wave fields, and a sophisticated particle-pushing routine. In the paper, examples are given that show the effects of cavity detunings, of varying the magnetic field profile, of electron beam asymmetries from the gun, and of variations in external load impedance. 4 refs., 7 figs.

  5. Dynamic Power Grid Simulation

    Energy Science and Technology Software Center (OSTI)

    2015-09-14

    GridDyn is part of a power grid simulation toolkit. The code is designed using modern object oriented C++ methods utilizing C++11 and recent Boost libraries to ensure compatibility with multiple operating systems and environments.

  6. Compressible Astrophysics Simulation Code

    Energy Science and Technology Software Center (OSTI)

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  7. Energy Simulation Games Lesson

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Ken Walz Unit Title: Energy Efficiency and Renewable Energy (EERE) Subject: Physical, Env, and Social Sciences Lesson Title: Energy Simulation Games Grade Level(s): 6-12 Lesson Length: 1 hour (+ optional time outside class) Date(s): 7/14/2014 * Learning Goal(s) By the end of this lesson, students will have a deeper understanding of Energy Management, Policy, and Decision Making. * Connection to Energy/ Renewable Energy In this assignment you will be using two different energy simulation tools

  8. Xyce parallel electronic simulator.

    SciTech Connect (OSTI)

    Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is to exhaustively list, to the extent possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

  9. Advanced Simulation and Computing

    National Nuclear Security Administration (NNSA)

    NA-ASC-117R-09-Vol.1-Rev.0 Advanced Simulation and Computing PROGRAM PLAN FY09 October 2008 ASC Focal Point Robert Meisner, Director DOE/NNSA NA-121.2 202-586-0908 Program Plan Focal Point for NA-121.2 Njema Frazier DOE/NNSA NA-121.2 202-586-5789 A Publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs. Contents: Executive Summary; I. Introduction

  10. Simulation-Based Engineering

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simulation-Based Engineering Simulation-Based Engineering is focused on predicting the behavior of complex multiphase flow reactors used in fossil-energy technologies. This effort combines theory, computational modeling, experiments, and industrial input. Physics- and science-based computational models and tools are needed to support the development and deployment of advanced fossil-fuel energy devices such as gasifiers and carbon capture reactors. It is critical to develop a practical framework

  11. Theory Modeling and Simulation

    SciTech Connect (OSTI)

    Shlachter, Jack

    2012-08-23

    Los Alamos has a long history in theory, modeling and simulation. We focus on multidisciplinary teams that tackle complex problems. Theory, modeling and simulation are tools to solve problems just like an NMR spectrometer, a gas chromatograph or an electron microscope. Problems should be used to define the theoretical tools needed and not the other way around. Best results occur when theory and experiments are working together in a team.

  12. Large aperture diffractive space telescope

    DOE Patents [OSTI]

    Hyde, Roderick A.

    2001-01-01

    A large (10's of meters) aperture space telescope including two separate spacecraft--an optical primary objective lens functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart with the eyepiece directly behind the magnifying glass "aiming" at an intended target with their relative orientation determining the optical axis of the telescope and hence the targets being observed. The objective lens includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the objective lens, gathering up the incoming light and converting it to high quality images. The positions of the two spacecraft are controlled both to maintain a good optical focus and to point at desired targets which may be either earthbound or celestial.

  13. Statistical simulation of the magnetorotational dynamo

    SciTech Connect (OSTI)

    Squire, J.; Bhattacharjee, A.

    2014-08-01

    We analyze turbulence and dynamo induced by the magnetorotational instability (MRI) using quasi-linear statistical simulation methods. We find that homogeneous turbulence is unstable to a large scale dynamo instability, which saturates to an inhomogeneous equilibrium with a very strong dependence on the magnetic Prandtl number (Pm). Despite its enormously reduced nonlinearity, the quasi-linear model exhibits the same qualitative scaling of angular momentum transport with Pm as fully nonlinear turbulence. This demonstrates the relationship of recent convergence problems to the large scale dynamo and suggests possible methods for studying astrophysically relevant regimes at very low or high Pm.

  14. LHC RF System Time-Domain Simulation

    SciTech Connect (OSTI)

    Mastorides, T.; Rivetta, C.

    2010-09-14

    Non-linear time-domain simulations have been developed for the Positron-Electron Project (PEP-II) and the Large Hadron Collider (LHC). These simulations capture the dynamic behavior of the RF station-beam interaction and are structured to reproduce the technical characteristics of the system (noise contributions, non-linear elements, and more). As such, they provide useful results and insight for the development and design of future LLRF feedback systems. They are also a valuable tool for the study of diverse longitudinal beam dynamics effects such as coupled-bunch impedance driven instabilities and single bunch longitudinal emittance growth. Results from these studies and related measurements from PEP-II and LHC have been presented in multiple places. This report presents an example of the time-domain simulation implementation for the LHC.

  15. LARGE BLOCK TEST STATUS REPORT

    SciTech Connect (OSTI)

    Wilder, D. G.; Blair, S. C.; Buscheck, T.; Carloson, R. C.; Lee, K.; Meike, A.; Ramirez, J. L.; Sevougian, D.

    1997-08-26

    This report is intended to serve as a status report, which essentially transmits the data that have been collected to date on the Large Block Test (LBT). The analyses of data will be performed during FY98, and then a complete report will be prepared. This status report includes introductory material that is not needed merely to transmit data but is available at this time and therefore included. As such, this status report will serve as the template for the future report, and the information is thus preserved. The United States Department of Energy (DOE) is investigating the suitability of Yucca Mountain (YM) as a potential site for the nation's first high-level nuclear waste repository. As shown in Fig. 1-1, the site is located about 120 km northwest of Las Vegas, Nevada, in an area of uninhabited desert.

  16. Xyce parallel electronic simulator : users' guide.

    SciTech Connect (OSTI)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.

  17. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    Office of Scientific and Technical Information (OSTI)

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the ...

  18. Computer simulation | Open Energy Information

    Open Energy Info (EERE)

    Computer simulation. OpenEI Reference Library. Web Site: Computer simulation. Author: wikipedia. Published: wikipedia, 2013. DOI Not Provided...

  19. Computation & Simulation > Theory & Computation > Research >...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    In This Section: Computation & Simulation. Extensive combinatorial results and ongoing basic...

  20. Ion Beam Simulator

    Energy Science and Technology Software Center (OSTI)

    2005-11-08

    IBSimu (Ion Beam Simulator) is a computer program for making two and three dimensional ion optical simulations. The program can solve the electrostatic field in a rectangular mesh from the Poisson equation using the Finite Difference Method (FDM). The mesh can consist of a coarse and a fine part so that the calculation accuracy can be increased in critical areas of the geometry, while most of the calculation is done quickly using the coarse mesh. IBSimu can launch ion beam trajectories into the simulation from an injection surface or from a plasma. Ion beam space charge in time independent simulations can be taken into account using Vlasov iteration. Plasma is calculated by compensating space charge with electrons having a Boltzmann energy distribution. The simulation software can also be used to calculate time dependent cases if the space charge is not calculated. The software includes diagnostic tools for plotting the geometry, electric field, space charge map, ion beam trajectories, emittance data and beam profiles.
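
    As a minimal sketch of a finite-difference Poisson solve of the kind mentioned above (a generic Jacobi relaxation on a uniform 2-D mesh with zero boundary values, not IBSimu's coarse/fine mesh scheme):

        import numpy as np

        def solve_poisson_2d(rho, h, n_iter=5000):
            """Jacobi relaxation for  laplacian(phi) = -rho/eps0  on a uniform grid.

            rho : 2-D charge density array; boundary values of phi are held at 0
            h   : grid spacing
            """
            eps0 = 8.854e-12
            phi = np.zeros_like(rho, dtype=float)
            for _ in range(n_iter):
                phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                          + phi[1:-1, 2:] + phi[1:-1, :-2]
                                          + h**2 * rho[1:-1, 1:-1] / eps0)
            return phi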

  1. Simple Electric Vehicle Simulation

    Energy Science and Technology Software Center (OSTI)

    1993-07-29

    SIMPLEV2.0 is an electric vehicle simulation code which can be used with any IBM compatible personal computer. This general purpose simulation program is useful for performing parametric studies of electric and series hybrid electric vehicle performance on user input driving cycles. The program is run interactively and guides the user through all of the necessary inputs. Driveline components and the traction battery are described and defined by ASCII files which may be customized by the user. Scaling of these components is also possible. Detailed simulation results are plotted on the PC monitor and may also be printed on a printer attached to the PC.
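
    A back-of-the-envelope sketch of the kind of calculation such a tool performs (illustrative road-load physics only, not SIMPLEV's component models): integrate traction power over a drive cycle to estimate battery energy use.

        def drive_cycle_energy(speeds_mps, dt=1.0, mass=1500.0, cd_a=0.6,
                               crr=0.009, eta=0.85, rho_air=1.2, g=9.81):
            """Estimate traction energy (kWh) over a drive cycle of speed samples.

            speeds_mps : list of vehicle speeds in m/s, sampled every dt seconds
            eta        : combined driveline/battery efficiency (regen ignored)
            """
            energy_j = 0.0
            for v0, v1 in zip(speeds_mps[:-1], speeds_mps[1:]):
                v = 0.5 * (v0 + v1)
                accel = (v1 - v0) / dt
                force = (0.5 * rho_air * cd_a * v**2   # aerodynamic drag
                         + crr * mass * g               # rolling resistance
                         + mass * accel)                # inertia
                power = force * v
                if power > 0:                           # ignore regenerative braking
                    energy_j += power * dt / eta
            return energy_j / 3.6e6                     # J -> kWh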

  2. Communication Simulations for Power System Applications

    SciTech Connect (OSTI)

    Fuller, Jason C.; Ciraci, Selim; Daily, Jeffrey A.; Fisher, Andrew R.; Hauer, Matthew L.

    2013-05-29

    New smart grid technologies and concepts, such as dynamic pricing, demand response, dynamic state estimation, and wide area monitoring, protection, and control, are expected to require considerable communication resources. As the cost of retrofit can be high, future power grids will require the integration of high-speed, secure connections with legacy communication systems, while still providing adequate system control and security. While considerable work has been performed to create co-simulators for the power domain with load models and market operations, limited work has been performed in integrating communications directly into a power domain solver. The simulation of communication and power systems will become more important as the two systems become more inter-related. This paper will discuss ongoing work at Pacific Northwest National Laboratory to create a flexible, high-speed power and communication system co-simulator for smart grid applications. The framework for the software will be described, including architecture considerations for modular, high performance computing and large-scale scalability (serialization, load balancing, partitioning, cross-platform support, etc.). The current simulator supports the ns-3 (telecommunications) and GridLAB-D (distribution systems) simulators. Ongoing and future work will be described, including planned future expansions for a traditional transmission solver. A test case using the co-simulator, utilizing a transactive demand response system created for the Olympic Peninsula and AEP gridSMART demonstrations, requiring two-way communication between distributed and centralized market devices, will be used to demonstrate the value and intended purpose of the co-simulation environment.
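
    A highly simplified sketch of the lock-step exchange pattern underlying such a co-simulator (the wrapper objects and their methods here are hypothetical stand-ins, not the GridLAB-D or ns-3 APIs):

        def co_simulate(power_sim, comm_sim, t_end, dt):
            """Lock-step co-simulation: advance both simulators one interval at a
            time and exchange messages at each synchronization point.
            power_sim / comm_sim are hypothetical wrapper objects."""
            t = 0.0
            pending_msgs = []
            while t < t_end:
                grid_state = power_sim.advance(t, t + dt, pending_msgs)   # apply control messages
                measurements = grid_state.measurements()                  # e.g. prices, voltages
                pending_msgs = comm_sim.advance(t, t + dt, measurements)  # delayed/dropped messages
                t += dt
            return power_sim.results()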

  3. Large optics inspection, tilting, and washing stand

    DOE Patents [OSTI]

    Ayers, Marion Jay; Ayers, Shannon Lee

    2012-10-09

    A large optics stand provides a risk free means of safely tilting large optics with ease and a method of safely tilting large optics with ease. The optics are supported in the horizontal position by pads. In the vertical plane the optics are supported by saddles that evenly distribute the optics weight over a large area.

  4. Large optics inspection, tilting, and washing stand

    DOE Patents [OSTI]

    Ayers, Marion Jay; Ayers, Shannon Lee

    2010-08-24

    A large optics stand provides a risk free means of safely tilting large optics with ease and a method of safely tilting large optics with ease. The optics are supported in the horizontal position by pads. In the vertical plane the optics are supported by saddles that evenly distribute the optics weight over a large area.

  5. National Geo-Database for Biofuel Simulations and Regional Analysis

    SciTech Connect (OSTI)

    Izaurralde, Roberto C.; Zhang, Xuesong; Sahajpal, Ritvik; Manowitz, David H.

    2012-04-01

    The goal of this project undertaken by GLBRC (Great Lakes Bioenergy Research Center) Area 4 (Sustainability) modelers is to develop a national capability to model feedstock supply, ethanol production, and biogeochemical impacts of cellulosic biofuels. The results of this project contribute to sustainability goals of the GLBRC; i.e. to contribute to developing a sustainable bioenergy economy: one that is profitable to farmers and refiners, acceptable to society, and environmentally sound. A sustainable bioenergy economy will also contribute, in a fundamental way, to meeting national objectives on energy security and climate mitigation. The specific objectives of this study are to: (1) develop a spatially explicit national geodatabase for conducting biofuel simulation studies; (2) model biomass productivity and associated environmental impacts of annual cellulosic feedstocks; (3) simulate production of perennial biomass feedstocks grown on marginal lands; and (4) locate possible sites for the establishment of cellulosic ethanol biorefineries. To address the first objective, we developed SENGBEM (Spatially Explicit National Geodatabase for Biofuel and Environmental Modeling), a 60-m resolution geodatabase of the conterminous USA containing data on: (1) climate, (2) soils, (3) topography, (4) hydrography, (5) land cover/land use (LCLU), and (6) ancillary data (e.g., road networks, federal and state lands, national and state parks, etc.). A unique feature of SENGBEM is its 2008-2010 crop rotation data, a crucially important component for simulating productivity and biogeochemical cycles as well as land-use changes associated with biofuel cropping. We used the EPIC (Environmental Policy Integrated Climate) model to simulate biomass productivity and environmental impacts of annual and perennial cellulosic feedstocks across much of the USA on both croplands and marginal lands. We used data from LTER and eddy-covariance experiments within the study region to test the performance of EPIC and, when necessary, improve its parameterization. We investigated three scenarios. In the first, we simulated a historical (current) baseline scenario composed mainly of corn-, soybean-, and wheat-based rotations as grown on existing croplands east of the Rocky Mountains in 30 states. In the second scenario, we simulated a modified baseline in which we harvested corn and wheat residues to supply feedstocks to potential cellulosic ethanol biorefineries distributed within the study area. In the third scenario, we simulated the productivity of perennial cropping systems such as switchgrass or perennial mixtures grown on either marginal or Conservation Reserve Program (CRP) lands. In all cases we evaluated the environmental impacts (e.g., soil carbon changes, soil erosion, nitrate leaching, etc.) associated with the practices. In summary, we have reported on the development of a spatially explicit national geodatabase to conduct biofuel simulation studies and provided initial simulation results on the potential of annual and perennial cropping systems to serve as feedstocks for the production of cellulosic ethanol. To accomplish this, we have employed sophisticated spatial analysis methods in combination with the process-based biogeochemical model EPIC. This work provided the opportunity to test the hypothesis that marginal lands can serve as sources of cellulosic feedstocks and thus contribute to avoiding potential conflicts between bioenergy and food production systems. This work, we believe, opens the door for further analysis on the characteristics of cellulosic feedstocks as major contributors to the development of a sustainable bioenergy economy.

  6. Parallel Dislocation Simulator

    Energy Science and Technology Software Center (OSTI)

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  7. Battery Particle Simulation

    SciTech Connect (OSTI)

    2014-09-15

    Two simulations show the differences between a battery being drained at a slower rate, over a full hour, versus a faster rate, only six minutes (a tenth of an hour). In both cases battery particles go from being fully charged (green) to fully drained (red), but there are significant differences in the patterns of discharge based on the rate.

  8. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect (OSTI)

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  9. Parallel halo finding in N-body cosmology simulations

    SciTech Connect (OSTI)

    Pfitzner, D.W.; Salmon, J.K.

    1996-12-31

    Cosmological N-body simulations on parallel computers produce large datasets - about five hundred Megabytes at a single output time, or tens of Gigabytes over the course of a simulation. These large datasets require further analysis before they can be compared to astronomical observations. We have implemented two methods for performing halo finding, a key part of the knowledge discovery process, on parallel machines. One of these is a parallel implementation of the friends of friends (FOF) algorithm, widely used in the field of N-body cosmology. The new isodensity (ID) method has been developed to overcome some of the shortcomings of FOF. Both have been implemented on a variety of computer systems, and successfully used to extract halos from simulations with up to 256^3 (or about 16.8 million) particles, which are among the largest N-body cosmology simulations in existence.
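
    A compact, serial sketch of the friends-of-friends grouping idea (not the parallel implementation described above), assuming scipy is available:

        import numpy as np
        from scipy.spatial import cKDTree

        def friends_of_friends(positions, linking_length):
            """Group particles into halos: any two particles closer than the
            linking length belong to the same group (serial union-find sketch)."""
            parent = np.arange(len(positions))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]   # path compression
                    i = parent[i]
                return i

            tree = cKDTree(positions)
            for i, j in tree.query_pairs(linking_length):
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj                 # merge the two groups
            return np.array([find(i) for i in range(len(positions))])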

  10. Vehicle Technologies Office Merit Review 2014: Evaluation of VTO Benefits Using Large Scale Simulation

    Broader source: Energy.gov [DOE]

    Presentation given by Argonne National Laboratory at 2014 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about the evaluation of...

  11. Assessment of Vehicle Sizing, Energy Consumption and Cost Through Large Scale Simulation of Advanced Vehicle Technologies

    SciTech Connect (OSTI)

    Moawad, Ayman; Kim, Namdoo; Shidore, Neeraj; Rousseau, Aymeric

    2016-01-01

    The U.S. Department of Energy (DOE) Vehicle Technologies Office (VTO) has been developing more energy-efficient and environmentally friendly highway transportation technologies that will enable America to use less petroleum. The long-term aim is to develop "leapfrog" technologies that will provide Americans with greater freedom of mobility and energy security, while lowering costs and reducing impacts on the environment. This report reviews the results of the DOE VTO. It gives an assessment of the fuel and light-duty vehicle technologies that are most likely to be established, developed, and eventually commercialized during the next 30 years (up to 2045). Because of the rapid evolution of component technologies, this study is performed every two years to continuously update the results based on the latest state-of-the-art technologies.

  12. Particle simulation of auroral double layers. Doctoral thesis

    SciTech Connect (OSTI)

    Smith, B.L.

    1992-06-01

    Externally driven magnetic reconnection has been proposed as a possible mechanism for production of auroral electrons during magnetic substorms. Fluid simulations of magnetic reconnection lead to strong plasma flows towards the increasing magnetic field of the earth. These plasma flows must generate large scale potential drops to preserve global charge neutrality. We have examined currentless injection of plasma along a dipole magnetic field into a bounded region using both analytic techniques and particle simulation.

  13. Scalable Computation of Streamlines on Very Large Datasets

    SciTech Connect (OSTI)

    Pugmire, David; Childs, Hank; Garth, Christoph; Ahern, Sean; Weber, Gunther H.

    2009-09-01

    Understanding vector fields resulting from large scientific simulations is an important and often difficult task. Streamlines, curves that are tangential to a vector field at each point, are a powerful visualization method in this context. Application of streamline-based visualization to very large vector field data represents a significant challenge due to the non-local and data-dependent nature of streamline computation, and requires careful balancing of computational demands placed on I/O, memory, communication, and processors. In this paper we review two parallelization approaches based on established parallelization paradigms (static decomposition and on-demand loading) and present a novel hybrid algorithm for computing streamlines. Our algorithm is aimed at good scalability and performance across the widely varying computational characteristics of streamline-based problems. We perform performance and scalability studies of all three algorithms on a number of prototypical application problems and demonstrate that our hybrid scheme is able to perform well in different settings.
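
    A serial sketch of the core streamline integration step (a fixed-step classical RK4 on a callable velocity field; the parallel scheduling discussed above is the hard part and is not shown):

        import numpy as np

        def trace_streamline(velocity, seed, step=0.1, n_steps=1000):
            """Integrate a streamline from a seed point with classical RK4.

            velocity : callable mapping a 3-vector position to a 3-vector velocity
            """
            pts = [np.asarray(seed, dtype=float)]
            for _ in range(n_steps):
                x = pts[-1]
                k1 = velocity(x)
                k2 = velocity(x + 0.5 * step * k1)
                k3 = velocity(x + 0.5 * step * k2)
                k4 = velocity(x + step * k3)
                x_new = x + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
                if not np.all(np.isfinite(x_new)):
                    break                            # left the domain or hit a singularity
                pts.append(x_new)
            return np.array(pts)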

  14. Nitrogen expander cycles for large capacity liquefaction of natural gas

    SciTech Connect (OSTI)

    Chang, Ho-Myung; Park, Jae Hoon; Gwak, Kyung Hyun; Choe, Kun Hyung

    2014-01-29

    Thermodynamic study is performed on nitrogen expander cycles for large capacity liquefaction of natural gas. In order to substantially increase the capacity, a Brayton refrigeration cycle with a nitrogen expander was recently added to the cold end of the well-established propane pre-cooled mixed-refrigerant (C3-MR) process. Similar modifications with a nitrogen expander cycle are extensively investigated on a variety of cycle configurations. The existing and modified cycles are simulated with commercial process software (Aspen HYSYS) based on selected specifications. The results are compared in terms of thermodynamic efficiency, liquefaction capacity, and estimated size of heat exchangers. The combination of C3-MR with partial regeneration and pre-cooling of the nitrogen expander cycle is recommended, as it shows great potential for high efficiency and large capacity.

  15. Predictive Simulation of Engines

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Predictive Simulation of Engines - Sandia Energy

  16. Advanced Simulation Capability

    Office of Environmental Management (EM)

    The Advanced Simulation Capability for Environmental Management Initiative is funded by the U.S. Department of Energy Office of Environmental Management. Cover photo courtesy of Daniel Scott, Savannah River Ecology Laboratory: L-Lake is a 1,000-acre, man-made lake, created to disperse and cool water in L-Reactor when it was operating.

  17. Direct Numerical Simulation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Direct Numerical Simulation - Sandia Energy

  18. Advanced Wellbore Thermal Simulator

    Energy Science and Technology Software Center (OSTI)

    1992-03-04

    GEOTEMP2, which is based on the earlier GEOTEMP program, is a wellbore thermal simulator designed for geothermal well drilling and production applications. The code treats natural and forced convection and conduction within the wellbore and heat conduction within the surrounding rock matrix. A variety of well operations can be modeled, including injection, production, forward and reverse circulation with gas or liquid, gas or liquid drilling, and two-phase steam injection and production. Well completion with several different casing sizes and cement intervals can be modeled. The code allows variables, such as flow rate, to change with time, enabling a realistic treatment of well operations. Provision is made in the flow equations to allow the flow areas of the tubing to vary with depth in the wellbore. Multiple liquids can exist in GEOTEMP2 simulations. Liquid interfaces are tracked through the tubing and annulus as one liquid displaces another. GEOTEMP2, however, does not attempt to simulate displacement of liquids with a gas or two-phase steam or vice versa. This means that it is not possible to simulate an operation where the type of drilling fluid changes, e.g., mud going to air. GEOTEMP2 was designed primarily for use in predicting the behavior of geothermal wells, but it is flexible enough to handle many typical drilling, production, and injection problems in the oil industry as well. However, GEOTEMP2 does not allow the modeling of gas-filled annuli in production or injection problems. In gas or mist drilling, no radiation losses are included in the energy balance. No attempt is made to model flow in the formation. Average execution time is 50 CP seconds on a CDC CYBER 170. This edition of GEOTEMP2 is designated as Version 2.0 by the contributors.

  19. Fast Analysis and Simulation Team | NISAC

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Fast Analysis and Simulation Team

  20. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.

  1. Numerical simulation of the compressor coil of the plasma dynamic accelerator

    SciTech Connect (OSTI)

    Thomas, P.

    1997-01-01

    The plasma dynamic accelerator accelerates a plasma to very high velocities in a coaxial accelerator and then compresses it in a compressor coil, achieving high densities. The axial component of the current distribution, extending from the tip of the coaxial accelerator's center electrode to the coil turns, causes compressing forces; the radial component yields accelerating forces. The rapid change of the coil current induces azimuthal eddy currents in the plasma that interact with the coil's magnetic field, again yielding Lorentz forces. Aerodynamic compression may also be an important effect. A new two-dimensional magnetohydrodynamics code is used to investigate which of these effects are really important for the compression. The code allows one to simulate all effects mentioned separately and in combination. In a first step, only aerodynamic compression is considered. Then each electromagnetic effect is imposed on the system. Finally, a complete simulation of the compressor coil is performed. The analysis of the results provides new insights into the way the coil operates. This paper presents important aspects of the mathematical model and of the numerical implementation and reports results.

  2. A Hierarchical Evaluation of Regional Climate Simulations

    SciTech Connect (OSTI)

    Leung, Lai-Yung R.; Ringler, Todd; Collins, William D.; Taylor, Mark; Ashfaq, Moetasim

    2013-08-20

    Global climate models (GCMs) are the primary tools for predicting the evolution of the climate system. Through decades of development, GCMs have demonstrated useful skill in simulating climate at continental to global scales. However, large uncertainties remain in projecting climate change at regional scales, which limit our ability to inform decisions on climate change adaptation and mitigation. To bridge this gap, different modeling approaches including nested regional climate models (RCMs), global stretch-grid models, and global high-resolution atmospheric models have been used to provide regional climate simulations (Leung et al. 2003). In previous efforts to evaluate these approaches, isolating their relative merits was not possible because factors such as dynamical frameworks, physics parameterizations, and model resolutions were not systematically constrained. With advances in high performance computing, it is now feasible to run coupled atmosphere-ocean GCMs at horizontal resolution comparable to what RCMs use today. Global models with local refinement using unstructured grids have become available for modeling regional climate (e.g., Rauscher et al. 2012; Ringler et al. 2013). While they offer opportunities to improve climate simulations, significant efforts are needed to test their veracity for regional-scale climate simulations.

  3. Xyce parallel electronic simulator : reference guide.

    SciTech Connect (OSTI)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. It is targeted specifically to run on large-scale parallel computing platforms but also runs well on a variety of architectures, including single-processor workstations. It also aims to support a variety of devices and models specific to Sandia needs. This document is intended to complement the Xyce Users Guide. It contains comprehensive, detailed information about a number of topics pertinent to the usage of Xyce. Included in this document is a netlist reference for the input-file commands and elements supported within Xyce; a command line reference, which describes the available command line arguments for Xyce; and quick-references for users of other circuit codes, such as Orcad's PSpice and Sandia's ChileSPICE.

  4. Managing Large Capital Projects | Department of Energy

    Energy Savers [EERE]

    Presentation from the 2015 DOE National Cleanup Workshop by Ken Picha, Deputy Assistant Secretary for Tank Waste and Nuclear Material, Office of Environmental Management.

  5. Large-Scale Renewable Energy Guide Webinar

    Broader source: Energy.gov [DOE]

    Webinar introduces the “Large Scale Renewable Energy Guide." The webinar will provide an overview of this important FEMP guide, which describes FEMP's approach to large-scale renewable energy projects and provides guidance to Federal agencies and the private sector on how to develop a common process for large-scale renewable projects.

  6. Structural Simulation Toolkit. Lunch & Learn

    SciTech Connect (OSTI)

    Moore, Branden J.; Voskuilen, Gwendolyn Renae; Rodrigues, Arun F.; Hammond, Simon David; Hemmert, Karl Scott

    2015-09-01

    This is a presentation outlining a lunch and learn lecture for the Structural Simulation Toolkit, supported by Sandia National Laboratories.

  7. Plasma Simulation Program

    SciTech Connect (OSTI)

    Greenwald, Martin

    2011-10-04

    Many others in the fusion energy and advanced scientific computing communities participated in the development of this plan. The core planning team is grateful for their important contributions. This summary is meant as a quick overview of the Fusion Simulation Program's (FSP's) purpose and intentions. There are several additional documents referenced within this one, and all are supplemental or flow down from this Program Plan. The overall science goal of the DOE Office of Fusion Energy Sciences (FES) Fusion Simulation Program (FSP) is to develop predictive simulation capability for magnetically confined fusion plasmas at an unprecedented level of integration and fidelity. This will directly support and enable effective U.S. participation in International Thermonuclear Experimental Reactor (ITER) research and the overall mission of delivering practical fusion energy. The FSP will address a rich set of scientific issues together with experimental programs, producing validated integrated physics results. This is very well aligned with the mission of the ITER Organization to coordinate with its members the integrated modeling and control of fusion plasmas, including benchmarking and validation activities [1]. Initial FSP research will focus on two critical Integrated Science Application (ISA) areas: ISA1, the plasma edge; and ISA2, whole device modeling (WDM) including disruption avoidance. The first of these problems involves the narrow plasma boundary layer and its complex interactions with the plasma core and the surrounding material wall. The second requires development of a computationally tractable, but comprehensive, model that describes all equilibrium and dynamic processes at a sufficient level of detail to provide useful prediction of the temporal evolution of fusion plasma experiments. The initial driver for the whole device model will be prediction and avoidance of discharge-terminating disruptions, especially at high performance, which are a critical impediment to successful operation of machines like ITER. If disruptions prove unavoidable, their associated dynamics and effects will be addressed in the next phase of the FSP.

  8. Plasma theory and simulation research

    SciTech Connect (OSTI)

    Birdsall, C.K.

    1989-01-01

    Our research group uses both theory and simulation as tools in order to increase the understanding of instabilities, heating, diffusion, transport and other phenomena in plasmas. We also work on the improvement of simulation, both theoretically and practically. Our focus has been more and more on the plasma edge (the "sheath"), interactions with boundaries, leading to simulations of whole devices (someday a numerical tokamak).

  9. Bio-threat microparticle simulants

    DOE Patents [OSTI]

    Farquar, George Roy; Leif, Roald N

    2012-10-23

    A bio-threat simulant that includes a carrier and DNA encapsulated in the carrier. Also a method of making a simulant including the steps of providing a carrier and encapsulating DNA in the carrier to produce the bio-threat simulant.

  10. Bio-threat microparticle simulants

    DOE Patents [OSTI]

    Farquar, George Roy; Leif, Roald

    2014-09-16

    A bio-threat simulant that includes a carrier and DNA encapsulated in the carrier. Also a method of making a simulant including the steps of providing a carrier and encapsulating DNA in the carrier to produce the bio-threat simulant.

  11. Traffic Modeling and Simulation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Background: The problems facing the country's transportation system are enormous. Over 40,000 fatalities occur each year in traffic accidents. Vehicle emissions are the leading cause of air pollution. With travel demand expected to increase more than 50% by 2020, it becomes apparent that we can't just "build our way out" of the problem. We need to

  12. Distributed Sensors Simulator

    Energy Science and Technology Software Center (OSTI)

    2003-08-30

    The Distributed Sensors Simulator (DSS) is an infrastructure that allows the user to debug and test software for distributed sensor networks without the commitment inherent in using hardware. The flexibility of DSS allows developers and researchers to investigate topological, phenomenological, networking, robustness, and scaling issues; explore arbitrary algorithms for DSNs; and is particularly useful as a proof-of-concept tool. The user provides data on node location and specifications, defines event phenomena, and plugs in the application(s) to run. DSS in turn provides the virtual environmental embedding, but one that is exposed to the user in a way no true embedding could ever be.

  13. Fusion Simulation Program

    SciTech Connect (OSTI)

    Project Staff

    2012-02-29

    Under this project, General Atomics (GA) was tasked to develop the experimental validation plans for two high priority ISAs, Boundary and Pedestal and Whole Device Modeling in collaboration with the theory, simulation and experimental communities. The following sections have been incorporated into the final FSP Program Plan (www.pppl.gov/fsp), which was delivered to the US Department of Energy (DOE). Additional deliverables by GA include guidance for validation, development of metrics to evaluate success and procedures for collaboration with experiments. These are also part of the final report.

  14. Animations/simulations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Data types include numeric data, data plots and figures, genome/genetics data, interactive data maps, animations/simulations, and still images and photos. Find scientific research data resulting from DOE-funded research at www.osti.gov/dataexplorer. Explore DOE Data Explorer: view the most recently added datasets or collections, browse by titles or subjects, and discover the organizations sponsoring the data.

  15. 2014 Building America House Simulation Protocols

    SciTech Connect (OSTI)

    Wilson, E.; Engebrecht-Metzger, C.; Horowitz, S.; Hendron, R.

    2014-03-01

    As Building America (BA) has grown to include a large and diverse cross-section of the home building and retrofit industries, it has become more important to develop accurate, consistent analysis techniques to measure progress towards the program's goals. The House Simulation Protocol (HSP) document provides guidance to program partners and managers so they can compare energy savings for new construction and retrofit projects. The HSP provides the program with analysis methods that are proven to be effective and reliable in investigating the energy use of advanced energy systems and of entire houses.

  16. 2014 Building America House Simulation Protocols

    SciTech Connect (OSTI)

    Wilson, E.; Engebrecht, C. Metzger; Horowitz, S.; Hendron, R.

    2014-03-01

    As Building America has grown to include a large and diverse cross-section of the home building and retrofit industries, it has become more important to develop accurate, consistent analysis techniques to measure progress towards the program's goals. The House Simulation Protocol (HSP) document provides guidance to program partners and managers so they can compare energy savings for new construction and retrofit projects. The HSP provides the program with analysis methods that are proven to be effective and reliable in investigating the energy use of advanced energy systems and of entire houses.

  17. Laparoscopic simulation interface

    DOE Patents [OSTI]

    Rosenberg, Louis B.

    2006-04-04

    A method and apparatus for providing high bandwidth and low noise mechanical input and output for computer systems. A gimbal mechanism provides two revolute degrees of freedom to an object about two axes of rotation. A linear axis member is coupled to the gimbal mechanism at the intersection of the two axes of rotation. The linear axis member is capable of being translated along a third axis to provide a third degree of freedom. The user object is coupled to the linear axis member and is thus translatable along the third axis so that the object can be moved along all three degrees of freedom. Transducers associated with the provided degrees of freedom include sensors and actuators and provide an electromechanical interface between the object and a digital processing system. Capstan drive mechanisms transmit forces between the transducers and the object. The linear axis member can also be rotated about its lengthwise axis to provide a fourth degree of freedom, and, optionally, a floating gimbal mechanism is coupled to the linear axis member to provide fifth and sixth degrees of freedom to an object. Transducer sensors are associated with the fourth, fifth, and sixth degrees of freedom. The interface is well suited for simulations of medical procedures and simulations in which an object such as a stylus or a joystick is moved and manipulated by the user.

  18. Confidence in Numerical Simulations

    SciTech Connect (OSTI)

    Hemez, Francois M.

    2015-02-23

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to “forecast,” that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists “think.” This thought process parallels the scientific method, whereby a hypothesis is formulated, often accompanied by simplifying assumptions; then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. “Confidence” derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  19. Energy Simulator Residential Buildings

    Energy Science and Technology Software Center (OSTI)

    1992-02-24

    SERI-RES performs thermal energy analysis of residential or small commercial buildings and has the capability of modeling passive solar equipment such as rock beds, trombe walls, and phase change material. The analysis is accomplished by simulation. A thermal model of the building is created by the user and translated into mathematical form by the program. The mathematical equations are solved repeatedly at time intervals of one hour or less for the period of simulation. The mathematical representation of the building is a thermal network with nonlinear, temperature-dependent controls. A combination of forward finite differences, Jacobian iteration, and constrained optimization techniques is used to obtain a solution. An auxiliary interactive editing program, EDITOR, is included for creating building descriptions. EDITOR checks the validity of the input data and also provides facilities for storing and referencing several types of building description files. Some of the data files used by SERI-RES need to be implemented as direct-access files. Programs are included to convert sequential files to direct-access files and vice versa.
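
    To illustrate the forward finite-difference idea on a small thermal network, here is a two-node sketch (indoor air coupled to a thermal mass with a solar gain); the network topology and every parameter value are illustrative assumptions, not taken from SERI-RES.

```python
# Minimal thermal-network sketch: node temperatures of a two-node RC network
# advanced with forward (explicit) finite differences.
import numpy as np

C = np.array([5.0e5, 2.0e6])        # heat capacities [J/K]: indoor air, thermal mass
UA_out = 200.0                      # air <-> outdoors conductance [W/K]
UA_mass = 500.0                     # air <-> mass conductance [W/K]
T = np.array([20.0, 20.0])          # initial temperatures [C]
T_out, Q_solar, dt = -5.0, 800.0, 60.0   # ambient [C], solar gain to mass [W], step [s]

for step in range(24 * 60):         # simulate one day in one-minute steps
    q_air = UA_out * (T_out - T[0]) + UA_mass * (T[1] - T[0])
    q_mass = UA_mass * (T[0] - T[1]) + Q_solar
    T = T + dt * np.array([q_air / C[0], q_mass / C[1]])

print("indoor air after 24 h: %.1f C, mass: %.1f C" % (T[0], T[1]))
```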

  20. NII Simulator 1.0

    Energy Science and Technology Software Center (OSTI)

    2009-12-02

    The software listed here is a simulator for the SAIC P7500 VACIS non-intrusive inspection system. The simulator provides messages similar to those provided by this piece of equipment. To facilitate testing of the Second Line of Defense systems and similar software products from commercial software vendors, this software simulation application has been developed to simulate the P7500 that the Second Line of Defense communications software system must interface with. The primary use of this simulator is for testing of both Sandia-developed and DOE contractor-developed software.

  1. Transportation Analysis Simulation System

    Energy Science and Technology Software Center (OSTI)

    2004-08-23

    TRANSIMS version 3.1 is an integrated set of analytical and simulation models and supporting databases. The system is designed to create a virtual metropolitan region with representation of each of the region's individuals, their activities and the transportation infrastructure they use. TRANSIMS puts into practice a new, disaggregate approach to travel demand modeling using agent-based micro-simulation technology. The TRANSIMS methodology creates a virtual metropolitan region with representation of the transportation infrastructure and the population, at the level of households and individual travelers. Trips are planned to satisfy the population's activity patterns at the individual traveler level. TRANSIMS then simulates the movement of travelers and vehicles across the transportation network using multiple modes, including car, transit, bike and walk, on a second-by-second basis. Metropolitan planners must plan growth of their cities according to the stringent transportation system planning requirements of the Intermodal Surface Transportation Efficiency Act of 1991, the Clean Air Act Amendments of 1990 and other similar laws and regulations. These require each state and its metropolitan regions to work together to develop short- and long-term transportation improvement plans. The plans must (1) estimate the future transportation needs for travelers and goods movements, (2) evaluate ways to manage and reduce congestion, (3) examine the effectiveness of building new roads and transit systems, and (4) limit the environmental impact of the various strategies. The needed consistent and accurate transportation improvement plans require an analytical capability that properly accounts for travel demand, human behavior, traffic and transit operations, major investments, and environmental effects. Other existing planning tools use aggregated information and representative behavior to predict average response and average use of transportation facilities. They do not account for individual traveler response to the dynamic transportation environment. In contrast, TRANSIMS provides disaggregated information that more explicitly represents the complex nature of humans interacting with the transportation system. It first generates a synthetic population that represents individuals and their households in the metropolitan region in a statistically valid way. The demographic makeup and spatial distribution of this synthetic population is derived from census data so that it matches that of the region's real population. From survey data, a model is built of household and individual activities that may occur at home, in the workplace, school or shopping centers, for example. Trip plans including departure times, travel modes, and specific routes are created for each individual to get to his or her daily activities. TRANSIMS then simulates the movement of millions of individuals, following their trip plans throughout the transportation network, including their use of vehicles such as cars or buses, on a second-by-second basis. The virtual travel in TRANSIMS mimics the traveling and driving behavior of real people in the metropolitan region. The interactions of individual vehicles produce realistic traffic dynamics from which analysts can judge the performance of the transportation system and estimate vehicle emissions.
Los Alamos, in cooperation with the Department of Transportation, Federal Highway Administration and the local Metropolitan Planning Offices, has done TRANSIMS micro-simulations of auto traffic patterns in these two urban areas and completed associated scenario-based studies.

  2. Director's colloquium March 18 large hadron collider

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Lyndon Evans of CERN will talk about the most complex scientific instrument ever built, the Large Hadron Collider (LHC). March 10, 2010. Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico with the Jemez Mountains as a backdrop to research and innovation covering multiple disciplines, from bioscience and sustainable energy sources to plasma physics and new materials.

  3. Final Report: Quantifying Prediction Fidelity in Multiscale Multiphysics Simulations

    SciTech Connect (OSTI)

    Long, Kevin

    2014-09-30

    We have developed algorithms and software in support of uncertainty quantification in nonlinear multiphysics simulations. This work includes high-level, high-performance software for large-scale, matrix-free linear algebra and a new algorithm for fast computation of transcendental functions of stochastic variables.

  4. Idaho Power- Large Commercial Custom Efficiency Program

    Broader source: Energy.gov [DOE]

    Large commercial and industrial Idaho Power customers that reduce energy usage through more efficient electrical commercial and industrial processes may qualify for an incentive that is the lesser...

  5. Use of fine gridding in full field simulation

    SciTech Connect (OSTI)

    Greaser, G.R.; Doerr, T.C.; Chea, C.; Parvez, N.

    1995-10-01

    A full field 3D simulation study was completed for a large Saudi Arabian oilfield located in the Arabian Gulf. The subject field produced from a highly layered Arab D carbonate reservoir which exhibited a strong water drive. The objective of the study was to determine future platform locations and timing with respect to water encroachment. The large areal extent (13 x 23 km) and highly layered nature of this reservoir necessitated use of coarse grids in order to obtain a reasonable model size. The coarse grid model was constructed with 86,000 grid cells. Using the coarse model, prediction studies showed an advantage to future platform development with horizontal wells. However, these results were suspect since it was thought that the coarse cell model may not properly model water coning and encroachment around the horizontal wellbores. To improve the modeling of water movement, fine grid numerical simulation techniques were investigated. This paper discusses the use of sector and local grid refinement modeling techniques with commercially available software. Fine grid simulation studies were conducted for a proposed new platform. The fine grid simulation studies showed significantly different results compared with the coarse model predictions. The fine grid simulation results will be discussed, the two fine grid simulation techniques will be compared, and reasons presented why performance differences exist. Performance of the fine grid models on a Unix RISC-based workstation is included.

  6. VHDL Control Routing Simulator

    Energy Science and Technology Software Center (OSTI)

    1995-07-10

    The control router simulates a backplane consisting of up to 16 slots. Slot 0, reserved for a control module (cr-ctrl), generates the system clocks and provides the serial interface to the Gating Logic. The remaining 15 slots (1-15) contain routing modules (cr mod), each having up to 64 serial inputs and outputs with FIFOs. Messages to be transmitted to the Control Router are taken from text files. There are currently 17 such source files. In the model, the serial output of each source is connected to multiple receivers, so that there are 8 identical messages transmitted to the router for each message file entry.

  7. The effective field theory of cosmological large scale structures

    SciTech Connect (OSTI)

    Carrasco, John Joseph M.; Hertzberg, Mark P.; Senatore, Leonardo

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s^2 ≈ 10^-6 c^2 and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)^4. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≈ 0.24 h Mpc^-1.
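
    Schematically, the one-loop prediction in such an effective field theory adds a speed-of-sound counterterm, calibrated against N-body simulations, to the standard perturbation theory terms; the normalization conventions below vary between papers and are an assumption for illustration, not quoted from the record.

```latex
% Schematic EFT-corrected matter power spectrum at one loop: standard
% perturbation theory terms plus a counterterm proportional to c_s^2,
% with the coefficient measured from N-body simulations.
P_{\rm EFT}(k) \simeq P_{11}(k) + P_{22}(k) + P_{13}(k)
                \;-\; 2\, c_s^2 \, \frac{k^2}{k_{\rm NL}^2}\, P_{11}(k)
```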

  8. Detecting differential protein expression in large-scale population proteomics

    SciTech Connect (OSTI)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues/challenges unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values contain information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model has a robust performance in both simulated data and proteomics data from a large clinical study. Because varying patient sample quality and drifting instrument performance are unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.

  9. Design Considerations for Large Mass Ultra-Low Background Experiments

    SciTech Connect (OSTI)

    Aguayo Navarrete, Estanislao; Reid, Douglas J.; Fast, James E.; Orrell, John L.

    2011-07-01

    Summary: The objective of this document is to present the designers of the next generation of large-mass, ultra-low background experiments with lessons learned and design strategies from previous experimental work. Design issues divided by topic into mechanical, thermal and electrical requirements are addressed. Large mass low-background experiments have been recognized by the scientific community as appropriate tools to aid in the refinement of the standard model. The design of these experiments is very costly and a rigorous engineering review is required for their success. The extreme conditions that the components of the experiment must withstand (heavy shielding, vacuum/pressure and temperature gradients), in combination with unprecedented noise levels, necessitate engineering guidance to support quality construction and safe operating conditions. Physical properties and analytical results of typical construction materials are presented. Design considerations for achieving ultra-low-noise data acquisition systems are addressed. Five large-mass, low-background conceptual designs for the one-tonne scale germanium experiment are proposed and analyzed. The result is a series of recommendations for future experiment engineering and for the Majorana simulation task group to evaluate the different design approaches.

  10. LARGE-AMPLITUDE LONGITUDINAL OSCILLATIONS IN A SOLAR FILAMENT

    SciTech Connect (OSTI)

    Luna, M.

    2012-05-01

    We have developed the first self-consistent model for the observed large-amplitude oscillations along filament axes that explains the restoring force and damping mechanism. We have investigated the oscillations of multiple threads formed in long, dipped flux tubes through the thermal nonequilibrium process, and found that the oscillation properties predicted by our simulations agree with the observed behavior. We then constructed a model for the large-amplitude longitudinal oscillations that demonstrates that the restoring force is the projected gravity in the tube where the threads oscillate. Although the period is independent of the tube length and the constantly growing mass, the motions are strongly damped by the steady accretion of mass onto the threads by thermal nonequilibrium. The observations and our model suggest that a nearby impulsive event drives the existing prominence threads along their supporting tubes, away from the heating deposition site, without destroying them. The subsequent oscillations occur because the displaced threads reside in magnetic concavities with large radii of curvature. Our model yields a powerful seismological method for constraining the coronal magnetic field and radius of curvature of dips. Furthermore, these results indicate that the magnetic structure is most consistent with the sheared-arcade model for filament channels.
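
    A restoring force given by the projected gravity in a dipped tube implies a pendulum-like scaling, sketched below in terms of the radius of curvature R of the dip and the solar surface gravity; the precise form and any corrections for the growing thread mass are assumptions for illustration, not quoted from the record.

```latex
% Pendulum-like scaling for longitudinal thread oscillations in a dipped tube:
% measuring the period P then constrains the radius of curvature R of the dip.
\omega \simeq \sqrt{\frac{g_\odot}{R}}, \qquad
P \simeq 2\pi \sqrt{\frac{R}{g_\odot}}
```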

  11. Analysis of large structures in separated shear layers

    SciTech Connect (OSTI)

    Panigrahi, P.; Acharya, S.

    1999-07-01

    Large scale structures play an important role in the development of free shear layers and jets, and there is a large body of literature dealing with this subject. For meaningful interpretation of data, different analysis techniques have been used. However, these methods have been plagued with problems associated with phase jitter in the coherent modes. The primary goal of the data analysis techniques is to identify the individual modes present and to accurately determine the evolution of the amplitudes and phases of these modes. The goal of the present work is to develop suitable data analysis techniques for accurately evaluating the amplitudes and phases of the coherent structures. In this paper, a pattern recognition technique that has the potential of computing the amplitudes of the large-scale structures correctly has been developed and further extended to include the calculation of the phase jitter. The pattern recognition technique is based on characterizing the coherent components in the form of a Fourier-cosine series with each mode identified by a frequency, amplitude and phase. The series is truncated by pre-selecting the modes (based on a spectral analysis of the signal). The evaluation of the Fourier components for the different modes is then made by segmenting the whole time series into segments such that each segment contains one period of the corresponding wave. The mode corresponding to the lowest frequency is evaluated first, the coherent component corresponding to this mode is then subtracted from the signal, and then the components of the next higher mode are evaluated; the process continues until all modes have been determined. A second approach has been used in the evaluation of phase jitter, based on an extension of a method proposed by Ho and co-workers (referred to in this paper as the HZFB method). Using simulated data, the HZFB method is shown to produce inaccurate results in the presence of multiple modes and small scales. The HZFB technique is modified in this paper to eliminate the small-scale effects in the phase jitter calculation. Both the pattern recognition technique and the modified HZFB method were evaluated using simulated data and measurements for separated flow behind a rib mounted on the surface of a test plate. A hot-wire anemometer was used to collect the experimental data. The results of these techniques were compared with those obtained from a traditional Fourier analysis and the HZFB method. The superior performance of the improved data analysis techniques was clearly demonstrated both in the simulated data and in the measurements.
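
    The evaluate-and-subtract idea can be sketched in a few lines. The toy below fits each pre-selected mode as a single cosine (amplitude and phase) by least squares over the whole record, rather than segment by segment, so it is a simplified stand-in for the authors' technique; the synthetic two-mode signal and its frequencies are illustrative assumptions.

```python
# Sequential mode extraction sketch: fit the lowest pre-selected frequency as a
# cosine, subtract it from the signal, then move to the next mode.
import numpy as np

def fit_cosine(t, x, freq):
    # Least-squares fit of x(t) ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t),
    # then convert (a, b) to amplitude and phase of A*cos(2*pi*f*t + phi).
    A = np.column_stack([np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t)])
    (a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.hypot(a, b), np.arctan2(-b, a)

t = np.linspace(0.0, 2.0, 2000)
signal = (1.0 * np.cos(2 * np.pi * 5 * t + 0.3)
          + 0.5 * np.cos(2 * np.pi * 12 * t - 1.0)
          + 0.1 * np.random.default_rng(1).standard_normal(t.size))

residual = signal.copy()
for f in (5.0, 12.0):                       # pre-selected modes, lowest first
    amp, ph = fit_cosine(t, residual, f)
    print(f"f = {f:5.1f} Hz  amplitude = {amp:.3f}  phase = {ph:+.3f} rad")
    residual -= amp * np.cos(2 * np.pi * f * t + ph)
```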

  12. Mesh infrastructure for coupled multiprocess geophysical simulations

    SciTech Connect (OSTI)

    Garimella, Rao V.; Perkins, William A.; Buksas, Mike W.; Berndt, Markus; Lipnikov, Konstantin; Coon, Ethan; Moulton, John D.; Painter, Scott L.

    2014-01-01

    We have developed a sophisticated mesh infrastructure capability to support large scale multiphysics simulations such as subsurface flow and reactive contaminant transport at storage sites as well as the analysis of the effects of a warming climate on the terrestrial arctic. These simulations involve a wide range of coupled processes including overland flow, subsurface flow, freezing and thawing of ice rich soil, accumulation, redistribution and melting of snow, biogeochemical processes involving plant matter and finally, microtopography evolution due to melting and degradation of ice wedges below the surface. In addition to supporting the usual topological and geometric queries about the mesh, the mesh infrastructure adds capabilities such as identifying columnar structures in the mesh, enabling deforming of the mesh subject to constraints and enabling the simultaneous use of meshes of different dimensionality for subsurface and surface processes. The generic mesh interface is capable of using three different open source mesh frameworks (MSTK, MOAB and STKmesh) under the hood allowing the developers to directly compare them and choose one that is best suited for the application's needs. We demonstrate the results of some simulations using these capabilities as well as present a comparison of the performance of the different mesh frameworks.

  13. Mesh infrastructure for coupled multiprocess geophysical simulations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Garimella, Rao V.; Perkins, William A.; Buksas, Mike W.; Berndt, Markus; Lipnikov, Konstantin; Coon, Ethan; Moulton, John D.; Painter, Scott L.

    2014-01-01

    We have developed a sophisticated mesh infrastructure capability to support large scale multiphysics simulations such as subsurface flow and reactive contaminant transport at storage sites as well as the analysis of the effects of a warming climate on the terrestrial arctic. These simulations involve a wide range of coupled processes including overland flow, subsurface flow, freezing and thawing of ice rich soil, accumulation, redistribution and melting of snow, biogeochemical processes involving plant matter and finally, microtopography evolution due to melting and degradation of ice wedges below the surface. In addition to supporting the usual topological and geometric queries about the mesh, the mesh infrastructure adds capabilities such as identifying columnar structures in the mesh, enabling deforming of the mesh subject to constraints and enabling the simultaneous use of meshes of different dimensionality for subsurface and surface processes. The generic mesh interface is capable of using three different open source mesh frameworks (MSTK, MOAB and STKmesh) under the hood allowing the developers to directly compare them and choose one that is best suited for the application's needs. We demonstrate the results of some simulations using these capabilities as well as present a comparison of the performance of the different mesh frameworks.

  14. Generically large nongaussianity in small multifield inflation

    SciTech Connect (OSTI)

    Bramante, Joseph

    2015-07-07

    If forthcoming measurements of cosmic photon polarization restrict the primordial tensor-to-scalar ratio to r<0.01, small field inflation will be a principal candidate for the origin of the universe. Here we show that small multifield inflation, without the hybrid mechanism, typically results in large squeezed nongaussianity. Small multifield potentials contain multiple flat field directions, often identified with the gauge invariant field directions in supersymmetric potentials. We find that unless these field directions have equal slopes, large nongaussianity arises. After identifying relevant differences between large and small two-field potentials, we demonstrate that the latter naturally fulfill the Byrnes-Choi-Hall large nongaussianity conditions. Computations of the primordial power spectrum, spectral index, and squeezed bispectrum, reveal that small two-field models which otherwise match observed primordial perturbations, produce excludably large nongaussianity if the inflatons’ field directions have unequal slopes.

  15. Development of CFD-Based Simulation Tools for In-Situ Thermal...

    Office of Scientific and Technical Information (OSTI)

    The simulation tools being developed capture the relevant physical processes and data from a large-scale system. The modified in-situ application is a pilot-scale heat transfer ...

  16. Aggregate Building Simulator (ABS) Methodology Development, Application, and User Manual

    SciTech Connect (OSTI)

    Dirks, James A.; Gorrissen, Willy J.

    2011-11-30

    As the relationship between the national building stock and various global energy issues becomes a greater concern, it has been deemed necessary to develop a system for predicting the energy consumption of large groups of buildings. Ideally this system is to take advantage of the most advanced energy simulation software available, be able to execute runs quickly, and provide concise and useful results at a level of detail that meets the users' needs without inundating them with data. The resulting methodology that was developed allows the user to quickly develop and execute energy simulations of many buildings simultaneously, taking advantage of parallel processing to greatly reduce total simulation times. The results of these simulations can then be rapidly condensed and presented in a useful and intuitive manner.
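
    The fan-out pattern such a methodology relies on can be sketched with Python's standard process pool, running many independent building "simulations" concurrently and then aggregating the results; the run_building function is a hypothetical stand-in, not the ABS interface to any particular simulation engine.

```python
# Minimal sketch of running many independent building simulations in parallel
# and aggregating their annual energy use.
from concurrent.futures import ProcessPoolExecutor

def run_building(config):
    # Placeholder "simulation": annual energy = floor area * energy intensity.
    return config["floor_area_m2"] * config["intensity_kwh_per_m2"]

def simulate_stock(configs, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        annual_kwh = list(pool.map(run_building, configs))
    return sum(annual_kwh), annual_kwh

if __name__ == "__main__":
    stock = [{"floor_area_m2": 1000 + 50 * i, "intensity_kwh_per_m2": 150}
             for i in range(100)]
    total, per_building = simulate_stock(stock)
    print(f"total stock consumption: {total / 1e6:.2f} GWh/yr")
```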

  17. Parallel Power Grid Simulation Toolkit

    Energy Science and Technology Software Center (OSTI)

    2015-09-14

    ParGrid is a 'wrapper' that integrates a coupled Power Grid Simulation toolkit consisting of a library to manage the synchronization and communication of independent simulations. The included library code in ParGrid, named FSKIT, is intended to support the coupling of multiple continuous and discrete event parallel simulations. The code is designed using modern object-oriented C++ methods utilizing C++11 and current Boost libraries to ensure compatibility with multiple operating systems and environments.

  18. Welcome - Modeling and Simulation Group

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ORNL has many opportunities for students to conduct research in scientific fields through its fellowship and internship programs, including the RAMS Program. The ORNL Modeling and Simulation Group (MSG) develops

  19. The promise of quantum simulation

    SciTech Connect (OSTI)

    Muller, Richard P.; Blume-Kohout, Robin

    2015-07-21

    In this study, quantum simulations promise to be one of the primary applications of quantum computers, should one be constructed. This article briefly summarizes the history of quantum simulation in light of the recent result of Wang and co-workers, demonstrating calculation of the ground and excited states for a HeH+ molecule, and concludes with a discussion of why this and other recent progress in the field suggest that quantum simulations of quantum chemistry have a bright future.

  20. Lubricant characterization by molecular simulation

    SciTech Connect (OSTI)

    Moore, J.D.; Cui, S.T.; Cummings, P.T.; Cochran, H.D.

    1997-12-01

    The authors have reported the calculation of the kinematic viscosity index of squalane from nonequilibrium molecular dynamics simulations. This represents the first accurate quantitative prediction of this measure of lubricant performance by molecular simulation. Using the same general alkane potential model, this computational approach offers the possibility of predicting the performance of potential lubricants prior to synthesis. Consequently, molecular simulation is poised to become an important tool for future lubricant development.

  1. The promise of quantum simulation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Muller, Richard P.; Blume-Kohout, Robin

    2015-07-21

    In this study, quantum simulations promise to be one of the primary applications of quantum computers, should one be constructed. This article briefly summarizes the history of quantum simulation in light of the recent result of Wang and co-workers, demonstrating calculation of the ground and excited states for a HeH+ molecule, and concludes with a discussion of why this and other recent progress in the field suggest that quantum simulations of quantum chemistry have a bright future.

  2. BERNAS ION SOURCE DISCHARGE SIMULATION

    SciTech Connect (OSTI)

    RUDSKOY,I.; KULEVOY, T.V.; PETRENKO, S.V.; KUIBEDA, R.P.; SELEZNEV, D.N.; PERSHIN, V.I.; HERSHCOVITCH, A.; JOHNSON, B.M.; GUSHENETS, V.I.; OKS, E.M.; POOLE, H.J.

    2007-08-26

    The joint research and development program is continuing to develop a steady-state decaborane ion beam source for the ion implantation industry. The Bernas ion source is a widely used ion source in the ion implantation industry. A new simulation code was developed for simulating the Bernas ion source discharge. We present first results of the simulation for several materials of interest for semiconductors. A comparison of the simulation results with experimental data obtained at the ITEP ion source test bench is also presented.

  3. Non-detonable explosive simulators

    DOE Patents [OSTI]

    Simpson, Randall L.; Pruneda, Cesar O.

    1994-01-01

    A simulator which is chemically equivalent to an explosive but is not detonable. The simulator has particular use in the training of explosives-detecting dogs and calibrating sensitive analytical instruments. The explosive simulants may be fabricated by two different techniques: the first involves the use of standard slurry coatings to produce a material with a very high binder-to-explosive ratio without masking the explosive vapor, and the second involves coating inert beads with thin layers of explosive molecules.

  4. Non-detonable explosive simulators

    DOE Patents [OSTI]

    Simpson, R.L.; Pruneda, C.O.

    1994-11-01

    A simulator which is chemically equivalent to an explosive but is not detonable. The simulator has particular use in the training of explosives-detecting dogs and calibrating sensitive analytical instruments. The explosive simulants may be fabricated by two different techniques: the first involves the use of standard slurry coatings to produce a material with a very high binder-to-explosive ratio without masking the explosive vapor, and the second involves coating inert beads with thin layers of explosive molecules. 5 figs.

  5. Accelerating Subsurface Transport Simulation on Heterogeneous Clusters

    SciTech Connect (OSTI)

    Villa, Oreste; Gawande, Nitin A.; Tumeo, Antonino

    2013-09-23

    Reactive transport numerical models simulate chemical and microbiological reactions that occur along a flowpath. These models have to compute reactions for a large number of locations. They solve the set of ordinary differential equations (ODEs) that describes the reaction for each location through the Newton-Raphson technique. This technique involves computing a Jacobian matrix and a residual vector for each set of equations, and then solving the linearized system iteratively by performing Gaussian elimination and LU decomposition until convergence. STOMP, a well-known subsurface flow simulation tool, employs matrices with sizes on the order of 100x100 elements and, for numerical accuracy, LU factorization with full pivoting instead of the faster partial pivoting. Modern high performance computing systems are heterogeneous machines whose nodes integrate both CPUs and GPUs, exposing unprecedented amounts of parallelism. To exploit all their computational power, applications must use both types of processing elements. For the case of subsurface flow simulation, this mainly requires implementing efficient batched LU-based solvers and identifying efficient solutions for enabling load balancing among the different processors of the system. In this paper we discuss two approaches that allow scaling STOMP's performance on heterogeneous clusters. We initially identify the challenges in implementing batched LU-based solvers for small matrices on GPUs, and propose an implementation that fulfills STOMP's requirements. We compare this implementation to other existing solutions. Then, we combine the batched GPU solver with an OpenMP-based CPU solver, and present an adaptive load balancer that dynamically distributes the linear systems to solve between the two components inside a node. We show how these approaches, integrated into the full application, provide speedups of 6 to 7 times on large problems, executed on up to 16 nodes of a cluster with two AMD Opteron 6272 processors and a Tesla M2090 per node.
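
    For reference, here is a minimal NumPy sketch of dense LU factorization with full pivoting and the corresponding solve for one small system; it illustrates the kernel discussed above but is not the paper's batched GPU/OpenMP implementation or its load balancer.

```python
# LU factorization with full pivoting (PAQ = LU) and a solve for Ax = b.
# Pure NumPy, one small matrix at a time; batched variants apply the same
# factorization to many ~100x100 systems concurrently.
import numpy as np

def lu_full_pivot(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    row_perm, col_perm = np.arange(n), np.arange(n)
    for k in range(n):
        # Full pivoting: largest entry of the trailing submatrix becomes the pivot.
        sub = np.abs(A[k:, k:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        i, j = i + k, j + k
        A[[k, i], :] = A[[i, k], :]
        A[:, [k, j]] = A[:, [j, k]]
        row_perm[[k, i]] = row_perm[[i, k]]
        col_perm[[k, j]] = col_perm[[j, k]]
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return A, row_perm, col_perm   # L (unit lower) and U packed together in A

def solve(A, b):
    LU, rp, cp = lu_full_pivot(A)
    n = len(b)
    y = b[rp].astype(float)
    for k in range(n):                      # forward substitution (unit L)
        y[k] -= LU[k, :k] @ y[:k]
    x = np.zeros(n)
    for k in reversed(range(n)):            # back substitution (U)
        x[k] = (y[k] - LU[k, k+1:] @ x[k+1:]) / LU[k, k]
    out = np.empty(n)
    out[cp] = x                             # undo the column permutation
    return out

A = np.array([[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]])
b = np.array([5.0, -2.0, 9.0])
print(np.allclose(A @ solve(A, b), b))      # should print True
```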

  6. Computer interactive resistance simulator (CIRS)

    DOE Patents [OSTI]

    Mayn, Bobby G.

    1976-01-01

    A system for simulating the insertion of electric resistance values of either positive or negative quantity into an electric circuit and for cancelling drift errors therefrom.

  7. Power Plant Modeling and Simulation

    ScienceCinema (OSTI)

    None

    2010-01-08

    The National Energy Technology Laboratory's Office of Research and Development provides open source tools and expertise for modeling and simulating power plants and carbon sequestration technologies.

  8. Power Plant Modeling and Simulation

    SciTech Connect (OSTI)

    2008-07-21

    The National Energy Technology Laboratory's Office of Research and Development provides open source tools and expertise for modeling and simulating power plants and carbon sequestration technologies.

  9. Distributed Energy Technology Simulator: Microturbine Demonstration...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Distributed Energy Technology Simulator: Microturbine Demonstration, October 2001. This 2001 paper discusses the National Rural ...

  10. Experiments ✚ Simulations = Better Nuclear Power Research

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    An international collaboration of physicists is working to improve the safety and ...

  11. Building Energy Simulation & Modeling | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building Energy Simulation & Modeling. Lead Performer: Lawrence ... Development (CBERD) conducts energy efficiency research and development with a focus ...

  12. Multidimensional simulation and chemical kinetics development...

    Office of Environmental Management (EM)

    Multidimensional simulation and chemical kinetics development for high efficiency clean combustion engines Multidimensional simulation and chemical kinetics development for high ...

  13. Eddy Correlation Systems Receive Upgrade

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ANLERNL-02-05 Figure 1. ARM field technician Mike Rainwater (left) and ECOR instrument mentor Dr. Mikhail S. Pekour install new computer equipment in the ECOR shelter....

  14. Exploring a Multi-resolution Approach Using AMIP Simulations

    SciTech Connect (OSTI)

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun; Yang, Qing; Lu, Jian; Hagos, Samson M.; Rauscher, Sara; Dong, Li; Ringler, Todd; Lauritzen, P. H.

    2015-07-31

    This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.

  15. Cantera Aerosol Dynamics Simulator

    Energy Science and Technology Software Center (OSTI)

    2004-09-01

    The Cantera Aerosol Dynamics Simulator (CADS) package is a general library for aerosol modeling to address aerosol general dynamics, including formation from gas phase reactions, surface chemistry (growth and oxidation), bulk particle chemistry, transport by Brownian diffusion, thermophoresis, and diffusiophoresis with linkage to DSMC studies, and thermal radiative transport. The library is based upon Cantera, a C++ Caltech code that handles gas phase species transport, reaction, and thermodynamics. The method uses a discontinuous Galerkin formulation for the condensation and coagulation operator that conserves particles, elements, and enthalpy up to round-off error. Both 0-D and 1-D time dependent applications have been developed with the library. Multiple species in the solid phase are handled as well. The 0-D application, called Tdcads (Time Dependent CADS), is distributed with the library. Tdcads can address both constant volume and constant pressure adiabatic homogeneous problems. An extensive set of sample problems for Tdcads is also provided.

  16. MJO Simulation Diagnostics

    SciTech Connect (OSTI)

    Waliser, D; Sperber, K; Hendon, H; Kim, D; Maloney, E; Wheeler, M; Weickmann, K; Zhang, C; Donner, L; Gottschalck, J; Higgins, W; Kang, I; Legler, D; Moncrieff, M; Schubert, S; Stern, W; Vitart, F; Wang, B; Wang, W; Woolnough, S

    2008-06-02

    The Madden-Julian Oscillation (MJO) interacts with, and influences, a wide range of weather and climate phenomena (e.g., monsoons, ENSO, tropical storms, mid-latitude weather), and represents an important, and as yet unexploited, source of predictability at the subseasonal time scale. Despite the important role of the MJO in our climate and weather systems, current global circulation models (GCMs) exhibit considerable shortcomings in representing this phenomenon. These shortcomings have been documented in a number of multi-model comparison studies over the last decade. However, diagnosis of model performance has been challenging, and model progress has been difficult to track, due to the lack of a coherent and standardized set of MJO diagnostics. One of the chief objectives of the US CLIVAR MJO Working Group is the development of observation-based diagnostics for objectively evaluating global model simulations of the MJO in a consistent framework. Motivation for this activity is reviewed, and the intent and justification for a set of diagnostics is provided, along with specification for their calculation, and illustrations of their application. The diagnostics range from relatively simple analyses of variance and correlation, to more sophisticated space-time spectral and empirical orthogonal function analyses. These diagnostic techniques are used to detect MJO signals, to construct composite life-cycles, to identify associations of MJO activity with the mean state, and to describe interannual variability of the MJO.
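
    As a concrete illustration of one of the simpler diagnostic building blocks mentioned above, the sketch below computes leading empirical orthogonal functions (EOFs) of an anomaly field via an SVD. It is a generic textbook calculation, not the Working Group's prescribed diagnostic code, and the function name is invented for this example.

        import numpy as np

        def leading_eofs(field, n_modes=2):
            """Leading EOFs of a (time, space) anomaly field via SVD.
            Rows are time samples, columns are grid points."""
            anom = field - field.mean(axis=0)            # remove the time mean
            u, s, vt = np.linalg.svd(anom, full_matrices=False)
            eofs = vt[:n_modes]                          # spatial patterns
            pcs = u[:, :n_modes] * s[:n_modes]           # PC time series
            frac = s[:n_modes] ** 2 / np.sum(s ** 2)     # variance explained
            return eofs, pcs, frac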

  17. Electricity Portfolio Simulation Model

    Energy Science and Technology Software Center (OSTI)

    2005-09-01

    Stakeholders often have competing interests when selecting or planning new power plants. The purpose of developing this preliminary Electricity Portfolio Simulation Model (EPSim) is to provide a first-cut, dynamic methodology and approach to this problem, one that can subsequently be refined and validated, and that may help energy planners, policy makers, and energy students better understand the tradeoffs associated with competing electricity portfolios. EPSim allows the user to explore competing electricity portfolios annually from 2002 to 2025 in terms of five different criteria: cost, environmental impacts, energy dependence, health and safety, and sustainability. Four additional criteria (infrastructure vulnerability, service limitations, policy needs and science and technology needs) may be added in future versions of the model. Using an analytic hierarchy process (AHP) approach, users or groups of users apply weights to each of the criteria. The default energy assumptions of the model mimic the Department of Energy's (DOE) electricity portfolio to 2025 (EIA, 2005). At any time, the user can compare alternative portfolios to this reference case portfolio.
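
    As a hedged illustration of the AHP-style weighting described above (the record does not give EPSim's actual formulation, and the pairwise comparisons and criterion scores below are made up), one common approach derives criterion weights from a pairwise-comparison matrix and then scores each portfolio as a weighted sum:

        import numpy as np

        def ahp_weights(pairwise):
            """Criterion weights from an AHP pairwise-comparison matrix:
            normalized principal eigenvector."""
            vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
            w = np.real(vecs[:, np.argmax(np.real(vals))])
            return w / w.sum()

        def score_portfolios(criterion_scores, weights):
            """Weighted score per portfolio; rows are portfolios, columns are
            criteria already normalized to a common 0-1 scale."""
            return np.asarray(criterion_scores, dtype=float) @ weights

        # toy example: 3 criteria (cost, environment, energy dependence), 2 portfolios
        w = ahp_weights([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
        print(score_portfolios([[0.6, 0.7, 0.5], [0.8, 0.4, 0.6]], w))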

  18. S-SEED Simulator

    Energy Science and Technology Software Center (OSTI)

    2008-11-21

    This code simulates the transient response of two self-electrooptic-effect devices (SEEDs) connected in series to form an S-SEED pair as used in all-optical high-speed switching. Both optical beam propagation and carrier motion are assumed to be normal to the epi plane, so the code is inherently 1D in nature. For each SEED, an optical input in W/cm**2 is specified as a function of time (usually a step function input). The signal is absorbed during a double pass through the intrinsic region, with a spatially-dependent absorption coefficient that is dependent on the transient local electric field. This absorption generates electron-hole pairs that then contribute to the device current, and a transient optical output is predicted. Carriers in the semiconductor layers are generated through thermal excitation or optical absorption, move under the action of diffusion and self-consistent electric fields updated at each time step by a 1D Poisson solver, and recombine at density-dependent rates. The different epi layers are independently specified by position, thickness, doping type and density, and thus space charge effects and junction capacitance are included automatically.
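
    The 1D Poisson solve mentioned above is a standard finite-difference calculation; the sketch below shows a generic version on a uniform grid of interior points with fixed end potentials, solved with the Thomas (tridiagonal) algorithm. It is an assumption-laden illustration, not the S-SEED code itself.

        import numpy as np

        def poisson_1d(rho, dx, phi_left=0.0, phi_right=0.0, eps=8.854e-12):
            """Solve d2(phi)/dx2 = -rho/eps on a uniform 1-D grid of interior
            points with Dirichlet end potentials, using the Thomas algorithm."""
            rho = np.asarray(rho, dtype=float)
            n = rho.size
            rhs = -rho * dx * dx / eps
            rhs[0] -= phi_left                     # fold boundary values into RHS
            rhs[-1] -= phi_right
            b = np.full(n, -2.0)                   # main diagonal; off-diagonals are 1
            for i in range(1, n):                  # forward elimination
                m = 1.0 / b[i - 1]
                b[i] -= m
                rhs[i] -= m * rhs[i - 1]
            phi = np.zeros(n)
            phi[-1] = rhs[-1] / b[-1]
            for i in range(n - 2, -1, -1):         # back substitution
                phi[i] = (rhs[i] - phi[i + 1]) / b[i]
            return phi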

  19. Fading channel simulator

    DOE Patents [OSTI]

    Argo, Paul E.; Fitzgerald, T. Joseph

    1993-01-01

    Fading channel effects on a transmitted communication signal are simulated with both frequency and time variations using a channel scattering function to affect the transmitted signal. A conventional channel scattering function is converted to a series of channel realizations by multiplying the square root of the channel scattering function by a complex number whose real and imaginary parts are each independent variables. The two-dimensional inverse FFT of this complex-valued channel realization yields a matrix of channel coefficients that provides a complete frequency-time description of the channel. The transmitted radio signal is segmented to provide a series of signal segments, and each segment is subjected to an FFT to generate a series of signal coefficient matrices. The channel coefficient matrices and signal coefficient matrices are then multiplied and subjected to an inverse FFT to output a signal representing the received, channel-affected radio signal. A variety of channel scattering functions can be used to characterize the response of a transmitter-receiver system to such atmospheric effects.
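
    A toy NumPy sketch of the procedure described above is given below. The dimension conventions (a delay-by-Doppler scattering function, segment length equal to the number of channel coefficients per column) are assumptions made for the illustration, not details taken from the patent.

        import numpy as np

        rng = np.random.default_rng(0)

        def channel_realization(scattering):
            """Square root of the scattering function times a complex number with
            independent real and imaginary parts, then a 2-D inverse FFT, giving
            a matrix of frequency-time channel coefficients."""
            s = np.sqrt(np.asarray(scattering, dtype=float))
            z = rng.standard_normal(s.shape) + 1j * rng.standard_normal(s.shape)
            return np.fft.ifft2(s * z)

        def apply_channel(signal, coeffs):
            """FFT each signal segment, multiply by one column of channel
            coefficients, and inverse FFT back to get the received signal."""
            seg_len, n_seg = coeffs.shape
            out = []
            for k in range(n_seg):
                seg = signal[k * seg_len:(k + 1) * seg_len]
                if len(seg) < seg_len:
                    break                          # drop a trailing partial segment
                out.append(np.fft.ifft(np.fft.fft(seg) * coeffs[:, k]))
            return np.concatenate(out) if out else np.array([])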

  20. Coal Preparation Plant Simulation

    Energy Science and Technology Software Center (OSTI)

    1992-02-25

    COALPREP assesses the degree of cleaning obtained with different coal feeds for a given plant configuration and mode of operation. It allows the user to simulate coal preparation plants to determine an optimum plant configuration for a given degree of cleaning. The user can compare the performance of alternative plant configurations as well as determine the impact of various modes of operation for a proposed configuration. The devices that can be modelled include froth flotation devices, washers, dewatering equipment, thermal dryers, rotary breakers, roll crushers, classifiers, screens, blenders and splitters, and gravity thickeners. The user must specify the plant configuration and operating conditions and a description of the coal feed. COALPREP then determines the flowrates within the plant and a description of each flow stream (i.e. the weight distribution, percent ash, pyritic sulfur and total sulfur, moisture, BTU content, recoveries, and specific gravity of separation). COALPREP also includes a capability for calculating the cleaning cost per ton of coal.

  1. Coal Preparation Plant Simulation

    Energy Science and Technology Software Center (OSTI)

    1992-02-25

    COALPREP assesses the degree of cleaning obtained with different coal feeds for a given plant configuration and mode of operation. It allows the user to simulate coal preparation plants to determine an optimum plant configuration for a given degree of cleaning. The user can compare the performance of alternative plant configurations as well as determine the impact of various modes of operation for a proposed configuration. The devices that can be modelled include froth flotation devices, washers, dewatering equipment, thermal dryers, rotary breakers, roll crushers, classifiers, screens, blenders and splitters, and gravity thickeners. The user must specify the plant configuration and operating conditions and a description of the coal feed. COALPREP then determines the flowrates within the plant and a description of each flow stream (i.e. the weight distribution, percent ash, pyritic sulfur and total sulfur, moisture, BTU content, recoveries, and specific gravity of separation). COALPREP also includes a capability for calculating the cleaning cost per ton of coal. The IBM PC version contains two auxiliary programs, DATAPREP and FORLIST. DATAPREP is an interactive preprocessor for creating and editing COALPREP input data. FORLIST converts carriage-control characters in FORTRAN output data to ASCII line-feed (X'0A') characters.

  2. STOMP: A Software Architecture for the Design and Simulation UAV-Based Sensor Networks

    SciTech Connect (OSTI)

    Jones, E D; Roberts, R S; Hsia, T C S

    2002-10-28

    This paper presents the Simulation, Tactical Operations and Mission Planning (STOMP) software architecture and framework for simulating, controlling and communicating with unmanned air vehicles (UAVs) servicing large distributed sensor networks. STOMP provides hardware-in-the-loop capability, enabling real UAVs and sensors to feed back state information, route data, and receive command and control requests while interacting with other real or virtual objects, thereby enhancing support for simulation of dynamic and complex events.

  3. Large-Scale Spray Releases: Initial Aerosol Test Results

    SciTech Connect (OSTI)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and rectangular slots. The round holes ranged in size from 0.2 to 4.46 mm. The slots ranged from (width × length) 0.3 × 5 to 2.74 × 76.2 mm. Most slots were oriented longitudinally along the pipe, but some were oriented circumferentially. In addition, a limited number of multi-hole test pieces were tested in an attempt to assess the impact of a more complex breach. Much of the testing was conducted at pressures of 200 and 380 psi, but some tests were conducted at 100 psi. Testing the largest postulated breaches was deemed impractical because of the large size of some of the WTP equipment. The purpose of this report is to present the experimental results and analyses for the aerosol measurements obtained in the large-scale test stand. The report includes a description of the simulants used and their properties, equipment and operations, data analysis methodology, and test results. The results of tests investigating the role of slurry particles in plugging of small breaches are reported in Mahoney et al. 2012a. The results of the aerosol measurements in the small-scale test stand are reported in Mahoney et al. (2012b).

  4. Very Large System Dynamics Models - Lessons Learned

    SciTech Connect (OSTI)

    Jacob J. Jacobson; Leonard Malczynski

    2008-10-01

    This paper provides lessons learned from developing several large system dynamics (SD) models. System dynamics modeling practice emphasizes the need to keep models small so that they are manageable and understandable. This practice is generally reasonable and prudent; however, there are times when large SD models are necessary. This paper outlines two large SD projects that were done at two Department of Energy national laboratories, the Idaho National Laboratory and Sandia National Laboratories. This paper summarizes the models and then discusses some of the valuable lessons learned during these two modeling efforts.

  5. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    SciTech Connect (OSTI)

    Rubel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat,

    2008-08-22

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.

  6. Large-Scale Wind Training Program

    SciTech Connect (OSTI)

    Porter, Richard L.

    2013-07-01

    Project objective is to develop a credit-bearing wind technician program and a non-credit safety training program, train faculty, and purchase/install large wind training equipment.

  7. large-point | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Carbon Dioxide Capture from Large Point Sources Project No.: FG02-04ER83925 SBIR Commercial hollow fiber membrane cartridge [6"(D) X 17"(L)]. Compact Membrane Systems, Inc. developed and tested a carbon dioxide (CO2) removal system for flue gas streams from large point sources that offers improved mass transfer rates compared to conventional technologies. The project fabricated perfluorinated membranes on

  8. 2007 CBECS Large Hospital Building FAQs

    Gasoline and Diesel Fuel Update (EIA)

    FAQs Main Report | Methodology | FAQ | List of Tables CBECS 2007 - Release date: August 17, 2012 How were the data collected for this study? These data were collected with the 2007 Commercial Building Energy Consumption Survey (CBECS). See the 2007 CBECS Large Hospital Building Methodology Report for details. Why are you publishing estimates only for large hospitals and not the rest of the commercial building population? A majority of the 2007 CBECS buildings were sampled from a frame that used

  9. Project Profile: Improved Large Aperture Collector Manufacturing |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Concentrating Solar Power » Project Profile: Improved Large Aperture Collector Manufacturing Project Profile: Improved Large Aperture Collector Manufacturing -- This project is inactive -- Abengoa Solar, under the Solar Manufacturing Technology (SolarMat) program, will be investigating the use of an automotive-style high-rate fabrication and automated assembly techniques to achieve a substantial reduction in the deployment cost of their new SpaceTube

  10. A COMPARISON OF GADRAS SIMULATED AND MEASURED GAMMA RAY SPECTRA

    SciTech Connect (OSTI)

    Jeffcoat, R.; Salaymeh, S.

    2010-06-28

    Gamma-ray radiation detection systems are continuously being developed and improved for detecting the presence of radioactive material and for identifying isotopes present. Gamma-ray spectra, from many different isotopes and in different types and thicknesses of attenuation material and matrixes, are needed to evaluate the performance of these devices. Recently, a test and evaluation exercise was performed by the Savannah River National Laboratory that required a large number of gamma-ray spectra. Simulated spectra were used for a major portion of the testing in order to provide a pool of data large enough for the results to be statistically significant. The test data set was comprised of two types of data, measured and simulated. The measured data were acquired with a hand-held Radioisotope Identification Device (RIID) and simulated spectra were created using Gamma Detector Response and Analysis Software (GADRAS, Mitchell and Mattingly, Sandia National Laboratory). GADRAS uses a one-dimensional discrete ordinate calculation to simulate gamma-ray spectra. The measured and simulated spectra have been analyzed and compared. This paper will discuss the results of the comparison and offer explanations for spectral differences.

  11. Consortium for Advanced Simulation of Light Water Reactors (CASL...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    D.M. Vigil, T.M. Wildey, W.J. Bohnhoff, K.R. Dalbey, J.P. Eddy, K.T. Hu, L.E. Bauman and P.D. Hough, DAKOTA: A Multilevel Parallel Object-Oriented Framework for Design...

  12. Epidemilogical Simulation System, Version 2.4

    Energy Science and Technology Software Center (OSTI)

    2004-01-30

    EpiSims uses a detailed simulation of disease spread to evaluate demographically and geographically targeted biological threat reduction strategies. Abstract: EpiSims simulates the spread of disease and analyzes the consequences of intervention strategies in a large urban area at the level of individuals. The simulation combines models of three dynamical systems: urban social networks, disease transmission, and within-host progression of a disease. Validated population mobility and activity generation technology provides the social network models. Disease models are based on fusion of expert opinion and available data. EpiSims provides a previously unavailable detailed representation of the course of an outbreak in an urban area. A letter of August 16, 2002 from the Office of Homeland Security states: "Ability of EpiSims to provide comprehensive data on daily activity patterns of individuals makes it far superior to traditional SIR models — clearly had an impact on pre-attack smallpox vaccination policy." EpiSims leverages a unique Los Alamos National Laboratory resource — the population mobility and activity data developed by TRANSIMS (Transportation Analysis and SiMulation System) — to create epidemiological analyses at an unprecedented level of detail. We create models of microscopic (individual-level) physical and biological processes from which, through simulation, emerge the macroscopic (urban regional level) quantities that are the inputs to alternative models. For example, the contact patterns of individuals in different demographic groups determine the overall mixing rates of those groups. The characteristics of person-to-person transmission together with contact patterns determine the reproductive numbers — how many people will be infected on average by each case. Mixing rates and reproductive numbers are the basic parameters of other epidemiological models. Because interventions — and people’s reactions to them — are ultimately applied at the individual level, EpiSims is uniquely suited to evaluate their macroscopic consequences. For example, the debate over the logistics of targeted vaccination for smallpox, and thus the magnitude of the threat it poses, can best be resolved through an individual-based approach. EpiSims is the only available analytical tool using the individual-based approach that can scale to populations of a million or more without introducing ad-hoc assumptions about the nature of the social network. Impact: The first study commissioned for the EpiSims project was to analyze the effectiveness of targeted vaccination and isolation strategies in the aftermath of a covert release of smallpox at a crowded urban location. In particular, we compared casualties and resources required for targeted strategies with those in the case of large-scale quarantine and/or mass vaccination campaigns. We produced this analysis in a sixty-day effort, while prototype software was still under development, and delivered it to the Office of Homeland Security in June 2002. More recently, EpiSims provided casualty estimates and cost/benefit analyses for various proposed responses to an attack with pneumonic plague during the TOPOFF-2 exercise. Capabilities: EpiSims is designed to simulate human-to-human transmissible disease, but it is part of a suite of tools that naturally allow it to estimate individual exposures to air-borne or water-borne spread. Combined with data on animal density and mobility, EpiSims could simulate diseases spread by non-human vectors. EpiSims incorporates reactions of individuals, and is particularly powerful if those reactions are correlated with demographics. It provides a standard for modeling scenarios that cuts across agencies.

  13. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    SciTech Connect (OSTI)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i.e., Newtonian or non-Newtonian). The most important properties for testing with Newtonian slurries are the Archimedes number distribution and the particle concentration. For some test objectives, the shear strength is important. In the testing to collect data for CFD V and V and CFD comparison, the liquid density and liquid viscosity are important. In the high temperature testing, the liquid density and liquid viscosity are important. The Archimedes number distribution combines effects of particle size distribution, solid-liquid density difference, and kinematic viscosity. The most important properties for testing with non-Newtonian slurries are the slurry yield stress, the slurry consistency, and the shear strength. The solid-liquid density difference and the particle size are also important. It is also important to match multiple properties within the same simulant to achieve behavior representative of the waste. Other properties such as particle shape, concentration, surface charge, and size distribution breadth, as well as slurry cohesiveness and adhesiveness, liquid pH and ionic strength also influence the simulant properties either directly or through other physical properties such as yield stress.
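
    For reference, a commonly used form of the particle Archimedes number, which combines the three quantities named above (this is the standard textbook definition, not a formula quoted from the report), is

        Ar = g d^3 (rho_s - rho_l) / (rho_l * nu^2)

    where d is the particle diameter, rho_s and rho_l are the solid and liquid densities, nu is the liquid kinematic viscosity, and g is the gravitational acceleration; broadly, larger Ar corresponds to particles that settle more readily and are therefore harder to keep suspended.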

  14. Beam simulation tools for GEANT4 (and neutrino source applications)

    SciTech Connect (OSTI)

    V.Daniel Elvira, Paul Lebrun and Panagiotis Spentzouris

    2002-12-03

    Geant4 is a toolkit developed by a collaboration of physicists and computer professionals in the High Energy Physics field for simulation of the passage of particles through matter. The motivation for the development of the Beam Tools is to extend the Geant4 applications to accelerator physics. Although there are many computer programs for beam physics simulations, Geant4 is ideal for modeling a beam going through material or a system with a beam line integrated with a complex detector. There are many examples in the current international High Energy Physics programs, such as studies related to a future Neutrino Factory, a Linear Collider, and a Very Large Hadron Collider.

  15. Simulation of neutron radiation damage in silicon semiconductor devices.

    SciTech Connect (OSTI)

    Shadid, John Nicolas; Hoekstra, Robert John; Hennigan, Gary Lee; Castro, Joseph Pete Jr.; Fixel, Deborah A.

    2007-10-01

    A code, Charon, is described which simulates the effects that neutron damage has on silicon semiconductor devices. The code uses a stabilized, finite-element discretization of the semiconductor drift-diffusion equations. The mathematical model used to simulate semiconductor devices in both normal and radiation environments will be described. Modeling of defect complexes is accomplished by adding an additional drift-diffusion equation for each of the defect species. Additionally, details are given describing how Charon can efficiently solve very large problems using modern parallel computers. Comparison between Charon and experiment will be given, as well as comparison with results from commercially-available TCAD codes.

  16. Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations

    SciTech Connect (OSTI)

    Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer

    2013-09-01

    Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling, where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspiration from a topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus a local topological view of the simulation space, comparing several different strategies for adaptive sampling in both contexts. One of the most interesting models we propose attempts to marry the two by obtaining a coarse global representation using prediction models and a detailed local representation based on topology. Our methods are validated on several analytical test functions as well as a small nuclear simulation dataset modeled after a simplified Pressurized Water Reactor.
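
    The skeleton of such a surrogate-guided loop is sketched below. This is a deliberately crude illustration (random candidates plus a nearest-neighbour surrogate, choosing the point whose predicted outcome is most ambiguous); it is not the Morse-Smale-based method of the paper, and all names and defaults are invented.

        import numpy as np

        def adaptive_limit_surface(simulate, bounds, n_init=20, n_iter=30, seed=0):
            """Adaptively sample toward the limit surface of a binary simulator.

            simulate : function mapping a parameter vector to 0 (success) or 1 (failure)
            bounds   : sequence of (low, high) pairs, one per input dimension
            """
            rng = np.random.default_rng(seed)
            bounds = np.asarray(bounds, dtype=float)
            dim = len(bounds)
            draw = lambda n: rng.uniform(bounds[:, 0], bounds[:, 1], size=(n, dim))

            X = draw(n_init)
            y = np.array([simulate(x) for x in X])
            for _ in range(n_iter):
                cand = draw(1000)
                # nearest-neighbour surrogate: failure probability from 5 closest runs
                d = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2)
                p_fail = y[np.argsort(d, axis=1)[:, :5]].mean(axis=1)
                # run the expensive simulation where the surrogate is least certain,
                # i.e. near the estimated success/failure boundary
                pick = cand[np.argmin(np.abs(p_fail - 0.5))]
                X = np.vstack([X, pick])
                y = np.append(y, simulate(pick))
            return X, y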

  17. Large scale electromechanical transistor with application in mass sensing

    SciTech Connect (OSTI)

    Jin, Leisheng; Li, Lijie

    2014-12-07

    The nanomechanical transistor (NMT) has evolved from the single electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of large scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration; an external force has to be used. In this paper, a LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them being embedded in the transistor. The sensor is capable of detecting added mass using the eigenstate-shift method by reading the change of electrical current from the transistor, which has much higher sensitivity than the conventional eigenfrequency-shift approach used in classical cantilever-based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.

  18. Large-scale anisotropy in stably stratified rotating flows

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to 1024^3 grid points and Reynolds numbers of approximately 1000. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with k_perp^(-5/3), including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  19. The Xygra gun simulation tool.

    SciTech Connect (OSTI)

    Garasi, Christopher Joseph; Lamppa, Derek C.; Aubuchon, Matthew S.; Shirley, David Noyes; Robinson, Allen Conrad; Russo, Thomas V.

    2008-12-01

    Inductive electromagnetic launchers, or coilguns, use discrete solenoidal coils to accelerate a coaxial conductive armature. To date, Sandia has been using an internally developed code, SLINGSHOT, as a point-mass lumped circuit element simulation tool for modeling coilgun behavior for design and verification purposes. This code has shortcomings in terms of accurately modeling gun performance under stressful electromagnetic propulsion environments. To correct for these limitations, it was decided to attempt to closely couple two Sandia simulation codes, Xyce and ALEGRA, to develop a more rigorous simulation capability for demanding launch applications. This report summarizes the modifications made to each respective code and the path forward to completing interfacing between them.

  20. Terascale Simulation Tools and Technologies

    Energy Science and Technology Software Center (OSTI)

    2006-11-01

    The Terascale Simulation Tools and Technologies (TSTT) center is a collaboration between several universities and DOE laboratories, and is funded by the DOE Scientific Discovery through Advanced Computing (SciDAC) program. The primary objective of the TSTT center is to develop technologies that enable application scientists to easily use multiple mesh and discretization strategies within a single simulation on terascale computers. This is accomplished through the development of common functional interfaces to geometry, mesh, and other simulation data. This package is Sandia's implementation of these interfaces.

  1. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect (OSTI)

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory and its run time increases to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort made to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.

  2. Method and apparatus for extruding large honeycombs

    DOE Patents [OSTI]

    Kragle, Harry A.; Lambert, David W.; Lipp, G. Daniel

    1996-09-03

    Extrusion die apparatus and an extrusion method for extruding large-cross-section honeycomb structures from plasticized ceramic batch materials are described, the apparatus comprising a die having a support rod connected to its central portion, the support rod being anchored to support means upstream of the die. The support rod and support means act to limit die distortion during extrusion, reducing die strain and stress to levels permitting large honeycomb extrusion without die failure. Dies of optimal thickness are disclosed which reduce the maximum stresses exerted on the die during extrusion.

  3. Large volume flow-through scintillating detector

    DOE Patents [OSTI]

    Gritzo, Russ E.; Fowler, Malcolm M.

    1995-01-01

    A large-volume flow-through radiation detector for use in large air flow situations, such as incinerator stacks or building air systems, comprises a plurality of flat plates made of a scintillating material arranged parallel to the air flow. Each scintillating plate has a light guide attached which transfers light generated inside the scintillating plate to an associated photomultiplier tube. The outputs of the photomultiplier tubes are connected to electronics which can record any radiation and provide an alarm if appropriate for the application.

  4. Presentation on the Large-Scale Renewable Energy Guide | Department...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation on the Large-Scale Renewable Energy Guide Presentation on the Large-Scale Renewable Energy Guide Presentation covers the Large-Scale RE Guide: Developing Renewable ...

  5. LARGE INDUSTRIAL FACILITIES BY STATE | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    LARGE INDUSTRIAL FACILITIES BY STATE LARGE INDUSTRIAL FACILITIES BY STATE Number of Large Energy User Manufacturing Facilities by Sector and State (with Industrial Energy...

  6. Simulation of katabatic flow and mountain waves

    SciTech Connect (OSTI)

    Poulos, G.S.

    1995-05-01

    It is well-known that both mountain waves and katabatic flows frequently form in the severe relief of the Front Range of the Rocky Mountains. Occasionally these phenomena have been found to occur simultaneously. Generally, however, the large body of literature regarding them has treated each individually, seldom venturing into the regime of their potential interaction. The exceptions to this rule are Arritt and Pielke (1986), Barr and Orgill (1989), Gudiksen et al. (1992), Moriarty (1984), Orgill et al. (1992), Orgill and Schreck (1985), Neff and King (1988), Stone and Hoard (1989), Whiteman and Doran (1993), and Ying and Baopu (1993). The simulations overviewed here attempt to reproduce both atmospheric features simultaneously for two case days during the 1993 ASCOT observational program near Rocky Flats, Colorado.

  7. Simulating Afterburn with LLNL Hydrocodes

    SciTech Connect (OSTI)

    Daily, L D

    2004-06-11

    Presented here is a working methodology for adapting a Lawrence Livermore National Laboratory (LLNL) developed hydrocode, ALE3D, to simulate weapon damage effects when afterburn is a consideration in the blast propagation. Experiments have shown that afterburn is of great consequence in enclosed environments (i.e., a bomb-in-tunnel scenario, a penetrating conventional munition in a bunker, or a satchel charge placed in a deep underground facility). This empirical energy deposition methodology simulates the anticipated addition of kinetic energy that has been demonstrated by experiment (Kuhl et al. 1998), without explicitly solving the chemistry or resolving the mesh to capture small-scale vorticity. This effort is intended to complement the existing capability of either coupling ALE3D blast simulations with DYNA3D or performing fully coupled ALE3D simulations to predict building or component failure, for applications in National Security offensive strike planning as well as Homeland Defense infrastructure protection.

  8. Predictive Simulation | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Predictive Simulation Predictive Simulation Empirical To First Principle Models Computing tools currently used in nuclear industry and regulatory practice are based primarily on empirical math models to approximate, or fit, existing experimental data. Many have a pedigree reaching back to the 1970s and 1980s and were designed to support decision making and evaluate everything from behavior of individual fuel pellets to severe accident scenarios for an entire power plant. Programs like SAPHIRE,

  9. TREAT Modeling and Simulation Strategy

    SciTech Connect (OSTI)

    DeHart, Mark David

    2015-09-01

    This report summarizes a four-phase process used to describe the strategy in developing modeling and simulation software for the Transient Reactor Test Facility. The four phases of this research and development task are identified as (1) full core transient calculations with feedback, (2) experiment modeling, (3) full core plus experiment simulation and (4) quality assurance. The document describes the four phases, the relationship between these research phases, and anticipated needs within each phase.

  10. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    SciTech Connect (OSTI)

    Boring, Ronald Laurids; Shirley, Rachel Elizabeth; Joe, Jeffrey Clark; Mandelli, Diego

    2014-12-01

    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  11. Aerodynamic beam generator for large particles

    DOE Patents [OSTI]

    Brockmann, John E. (Albuquerque, NM); Torczynski, John R. (Albuquerque, NM); Dykhuizen, Ronald C. (Albuquerque, NM); Neiser, Richard A. (Albuquerque, NM); Smith, Mark F. (Albuquerque, NM)

    2002-01-01

    A new type of aerodynamic particle beam generator is disclosed. This generator produces a tightly focused beam of large material particles at velocities ranging from a few feet per second to supersonic speeds, depending on the exact configuration and operating conditions. Such generators are of particular interest for use in additive fabrication techniques.

  12. Analysis of large soil samples for actinides

    DOE Patents [OSTI]

    Maxwell, Sherrod L., III

    2009-03-24

    A method of analyzing relatively large soil samples for actinides by employing a separation process that includes cerium fluoride precipitation to remove the soil matrix and to precipitate plutonium, americium, and curium with cerium and hydrofluoric acid, followed by separation of these actinides using chromatography cartridges.

  13. Global Alignment System for Large Genomic Sequencing

    Energy Science and Technology Software Center (OSTI)

    2002-03-01

    AVID is a global alignment system tailored for the alignment of large genomic sequences up to megabases in length. Features include the possibility of one sequence being in draft form, fast alignment, robustness and accuracy. The method is an anchor based alignment using maximal matches derived from suffix trees.

  14. Simulating Blade-Strike on Fish passing through Marine Hydrokinetic Turbines

    SciTech Connect (OSTI)

    Romero Gomez, Pedro DJ; Richmond, Marshall C.

    2014-06-16

    The study reported here evaluated the occurrence, frequency, and intensity of blade strike of fish on an axial-flow marine hydrokinetic turbine by using two modeling approaches: a conventional kinematic formulation and a proposed Lagrangian particle-based scheme. The kinematic model included simplifying assumptions of fish trajectories such as distribution and velocity. The proposed method overcame the need for such simplifications by integrating the following components into a computational fluid dynamics (CFD) model: (i) advanced eddy-resolving flow simulation, (ii) generation of ambient turbulence based on field data, (iii) moving turbine blades in highly transient flows, and (iv) Lagrangian particles to mimic the potential fish pathways. The test conditions used to evaluate the blade-strike probability and fish survival rate were: (i) the turbulent environment, (ii) the fish size, and (iii) the approaching flow velocity. The proposed method offered the ability to produce potential fish trajectories and their interaction with the rotating turbine. Depending upon the scenario, the percentage of particles that registered a collision event ranged from 6% to 19% of the released sample size. Next, by using a set of experimental correlations of the exposure-response of living fish colliding with moving blades, the simulated collision data were used as input variables to estimate the survival rate of fish passing through the operating turbine. The resulting survival rates were greater than 96% in all scenarios, which is comparable to or better than known survival rates for conventional hydropower turbines. The strike probability and mortality rate were larger under the kinematic model. The proposed method offers the advantage of extending the evaluation to other mechanisms of stress and injury on fish derived from hydrokinetic turbines and related devices.

  15. Structural simulations of nanomaterials self-assembled from ionic macrocycles.

    SciTech Connect (OSTI)

    van Swol, Frank B.; Medforth, Craig John

    2010-10-01

    Recent research at Sandia has discovered a new class of organic binary ionic solids with tunable optical, electronic, and photochemical properties. These nanomaterials, consisting of a novel class of organic binary ionic solids, are currently being developed at Sandia for applications in batteries, supercapacitors, and solar energy technologies. They are composed of self-assembled oligomeric arrays of very large anions and large cations, but their crucial internal arrangement is thus far unknown. This report describes (a) the development of a relevant model of nonconvex particles decorated with ions interacting through short-ranged Yukawa potentials, and (b) the results of initial Monte Carlo simulations of the self-assembly of these binary ionic solids.
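
    For readers unfamiliar with the short-ranged interaction mentioned above, the sketch below evaluates the total pair energy of a set of charged sites under the standard screened-Coulomb (Yukawa) form U(r) = A q_i q_j exp(-kappa r) / r. The parameters and function name are illustrative only; they are not taken from the Sandia model.

        import numpy as np

        def yukawa_energy(positions, charges, kappa=1.0, prefactor=1.0):
            """Total pairwise Yukawa (screened-Coulomb) energy:
            U(r) = prefactor * q_i * q_j * exp(-kappa * r) / r."""
            pos = np.asarray(positions, dtype=float)
            q = np.asarray(charges, dtype=float)
            energy = 0.0
            for i in range(len(q)):
                for j in range(i + 1, len(q)):
                    r = np.linalg.norm(pos[i] - pos[j])
                    energy += prefactor * q[i] * q[j] * np.exp(-kappa * r) / r
            return energy

        # toy usage: two opposite charges separated by one screening length
        print(yukawa_energy([[0, 0, 0], [1, 0, 0]], [+1, -1], kappa=1.0))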

  16. 2016-05-13 Energy Conservation Standards for Small, Large, and Very Large

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Air-Cooled Commercial Package Air Conditioning and Heating Equipment and Commercial Warm Air Furnaces; Notice of Effective Date and Compliance Dates for Direct Final Rule | Department of Energy 5-13 Energy Conservation Standards for Small, Large, and Very Large Air-Cooled Commercial Package Air Conditioning and Heating Equipment and Commercial Warm Air Furnaces; Notice of Effective Date and Compliance Dates for Direct Final Rule 2016-05-13 Energy Conservation Standards for Small, Large, and

  17. Building Simulation Modelers are we big-data ready?

    SciTech Connect (OSTI)

    Sanyal, Jibonananda; New, Joshua Ryan

    2014-01-01

    Recent advances in computing and sensor technologies have pushed the amount of data we collect or generate to limits previously unheard of. Sub-minute resolution data from dozens of channels is becoming increasingly common and is expected to increase with the prevalence of non-intrusive load monitoring. Experts are running larger building simulation experiments and are faced with an increasingly complex data set to analyze and derive meaningful insight. This paper focuses on the data management challenges that building modeling experts may face in data collected from a large array of sensors, or generated from running a large number of building energy/performance simulations. The paper highlights the technical difficulties that were encountered and overcome in order to run 3.5 million EnergyPlus simulations on supercomputers and generating over 200 TBs of simulation output. This extreme case involved development of technologies and insights that will be beneficial to modelers in the immediate future. The paper discusses different database technologies (including relational databases, columnar storage, and schema-less Hadoop) in order to contrast the advantages and disadvantages of employing each for storage of EnergyPlus output. Scalability, analysis requirements, and the adaptability of these database technologies are discussed. Additionally, unique attributes of EnergyPlus output are highlighted which make data-entry non-trivial for multiple simulations. Practical experience regarding cost-effective strategies for big-data storage is provided. The paper also discusses network performance issues when transferring large amounts of data across a network to different computing devices. Practical issues involving lag, bandwidth, and methods for synchronizing or transferring logical portions of the data are presented. A cornerstone of big-data is its use for analytics; data is useless unless information can be meaningfully derived from it. In addition to technical aspects of managing big data, the paper details design of experiments in anticipation of large volumes of data. The cost of re-reading output into an analysis program is elaborated and analysis techniques that perform analysis in-situ with the simulations as they are run are discussed. The paper concludes with an example and elaboration of the tipping point where it becomes more expensive to store the output than re-running a set of simulations.

  18. Fully kinetic simulations of megajoule-scale dense plasma focus

    SciTech Connect (OSTI)

    Schmidt, A.; Link, A.; Tang, V.; Halvorson, C.; May, M.; Welch, D.; Meehan, B. T.; Hagen, E. C.

    2014-10-15

    Dense plasma focus (DPF) Z-pinch devices are sources of copious high energy electrons and ions, x-rays, and neutrons. Megajoule-scale DPFs can generate 10^12 neutrons per pulse in deuterium gas through a combination of thermonuclear and beam-target fusion. However, the details of the neutron production are not fully understood and past optimization efforts of these devices have been largely empirical. Previously, we reported on the first fully kinetic simulations of a kilojoule-scale DPF and demonstrated that both kinetic ions and kinetic electrons are needed to reproduce experimentally observed features, such as charged-particle beam formation and anomalous resistivity. Here, we present the first fully kinetic simulation of a megajoule-scale DPF, with predicted ion and neutron spectra, neutron anisotropy, neutron spot size, and time history of neutron production. The total yield predicted by the simulation is in agreement with measured values, validating the kinetic model in a second energy regime.

  19. Hierarchical Petascale Simulation Framework for Stress Corrosion Cracking

    SciTech Connect (OSTI)

    Vashishta, Priya

    2014-12-01

    Reaction Dynamics in Energetic Materials: Detonation is a prototype of mechanochemistry, in which mechanically and thermally induced chemical reactions far from equilibrium exhibit vastly different behaviors. It is also one of the hardest multiscale physics problems, in which diverse length and time scales play important roles. The CACS group has performed multimillion-atom reactive MD simulations to reveal a novel two-stage reaction mechanism during the detonation of cyclotrimethylenetrinitramine (RDX) crystal. Rapid production of N2 and H2O within ~10 ps is followed by delayed production of CO molecules within ~ 1 ns. They found that further decomposition towards the final products is inhibited by the formation of large metastable C- and O-rich clusters with fractal geometry. The CACS group has also simulated the oxidation dynamics of close-packed aggregates of aluminum nanoparticles passivated by oxide shells. Their simulation results suggest an unexpectedly active role of the oxide shell as a nanoreactor.

  20. Offshore Wind Market and Economic Analysis

    Energy Savers [EERE]

    Offshore Wind Farm Model Development - Upcoming Release of the University of Minnesota's Virtual Wind Simulator (September 16, 2015). Large-eddy simulation of wind farms with parameterization of wind turbines is emerging as a powerful tool for improving the performance and lowering the maintenance cost of

  1. Variability of Load and Net Load in Case of Large Scale Distributed Wind Power

    SciTech Connect (OSTI)

    Holttinen, H.; Kiviluoma, J.; Estanqueiro, A.; Gomez-Lazaro, E.; Rawn, B.; Dobschinski, J.; Meibom, P.; Lannoye, E.; Aigner, T.; Wan, Y. H.; Milligan, M.

    2011-01-01

    Large-scale wind power production and its variability are among the major inputs to wind integration studies. This paper analyses measured data from large-scale wind power production. Comparisons of variability are made across several variables: time scale (10-60 minute ramp rates), number of wind farms, and simulated vs. modeled data. Ramp rates for wind power production, load (total system load), and net load (load minus wind power production) demonstrate how wind power increases net load variability. Wind power will also change the timing of daily ramps.
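
    A minimal sketch of the ramp-rate comparison described in this record, assuming 10-minute time series of system load and aggregate wind production in megawatts; the synthetic series below are placeholders, not the measured data analysed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 7 * 24 * 6                       # one week at 10-minute resolution
        load = 3000 + 500 * np.sin(np.linspace(0, 14 * np.pi, n)) + rng.normal(0, 30, n)
        wind = np.clip(400 + 0.05 * rng.normal(0, 120, n).cumsum(), 0, 800)

        net_load = load - wind               # load minus wind power production

        def ramp_rates(series, step):
            """Changes over 'step' intervals (step=1 -> 10-min ramps, step=6 -> 60-min)."""
            return series[step:] - series[:-step]

        for step, label in [(1, "10-min"), (6, "60-min")]:
            print(f"{label} ramp std: load {ramp_rates(load, step).std():6.1f} MW, "
                  f"net load {ramp_rates(net_load, step).std():6.1f} MW")

    With any added wind variability the net load ramp statistics exceed those of load alone, which is the effect the paper quantifies from measured data.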

  2. LyMAS: Predicting large-scale Lyα forest statistics from the dark matter density field

    SciTech Connect (OSTI)

    Peirani, Sébastien; Colombi, Stéphane; Dubois, Yohan; Pichon, Christophe; Weinberg, David H.; Blaizot, Jérémy

    2014-03-20

    We describe the Lyα Mass Association Scheme (LyMAS), a method of predicting clustering statistics in the Lyα forest on large scales from moderate-resolution simulations of the dark matter (DM) distribution, with calibration from high-resolution hydrodynamic simulations of smaller volumes. We use the 'Horizon-MareNostrum' simulation, a 50 h⁻¹ Mpc comoving volume evolved with the adaptive mesh hydrodynamic code RAMSES, to compute the conditional probability distribution P(F_s | δ_s) of the transmitted flux F_s, smoothed (one-dimensionally, 1D) over the spectral resolution scale, on the DM density contrast δ_s, smoothed (three-dimensionally, 3D) over a similar scale. In this study we adopt the spectral resolution of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS) at z = 2.5, and we find optimal results for a DM smoothing length of 0.3 h⁻¹ Mpc (comoving). In its simplest form, LyMAS draws randomly from the hydro-calibrated P(F_s | δ_s) to convert DM skewers into Lyα forest pseudo-spectra, which are then used to compute cross-sightline flux statistics. In extended form, LyMAS exactly reproduces both the 1D power spectrum and one-point flux distribution of the hydro simulation spectra. Applied to the MareNostrum DM field, LyMAS accurately predicts the two-point conditional flux distribution and flux correlation function of the full hydro simulation for transverse sightline separations as small as 1 h⁻¹ Mpc, including redshift-space distortion effects. It is substantially more accurate than a deterministic density-flux mapping (the 'Fluctuating Gunn-Peterson Approximation'), often used for large-volume simulations of the forest. With the MareNostrum calibration, we apply LyMAS to 1024³ N-body simulations of a 300 h⁻¹ Mpc and a 1.0 h⁻¹ Gpc cube to produce large, publicly available catalogs of mock BOSS spectra that probe a large comoving volume. LyMAS will be a powerful tool for interpreting 3D Lyα forest data, thereby transforming measurements from BOSS and other massive quasar absorption surveys into constraints on dark energy, DM, space geometry, and intergalactic medium physics.
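
    The core LyMAS sampling step, drawing a flux value for each pixel from the hydro-calibrated conditional distribution P(F_s | δ_s), can be sketched as follows; the calibration table here is a random placeholder standing in for the Horizon-MareNostrum measurements.

        import numpy as np

        rng = np.random.default_rng(1)

        # Placeholder calibration: for each smoothed-density bin, a set of flux
        # values that a hydro simulation produced for pixels in that bin.
        n_bins = 50
        density_edges = np.linspace(-1.0, 4.0, n_bins + 1)
        flux_samples = [np.clip(rng.beta(2.0, 2.0 + 0.1 * i, 500), 0.0, 1.0)
                        for i in range(n_bins)]

        def skewer_to_flux(delta_skewer):
            """Draw each pixel's flux from P(F|delta) to build a pseudo-spectrum."""
            idx = np.clip(np.digitize(delta_skewer, density_edges) - 1, 0, n_bins - 1)
            return np.array([rng.choice(flux_samples[i]) for i in idx])

        delta = rng.normal(0.5, 0.8, 1000)   # mock smoothed DM density skewer
        flux = skewer_to_flux(delta)
        print(flux[:5])

    The extended form of LyMAS additionally enforces the 1D flux power spectrum and one-point flux distribution of the calibrating hydro simulation, which this sketch omits.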

  3. Simulation of pyroshock environments using a tunable resonant fixture

    DOE Patents [OSTI]

    Davie, Neil T.

    1996-01-01

    Disclosed are a method and apparatus for simulating pyrotechnic shock for the purpose of qualifying electronic components for use in weapons, satellite, and aerospace applications. According to the invention, a single resonant bar fixture has an adjustable resonant frequency in order to exhibit a desired shock response spectrum upon mechanical impact. The invention eliminates the need for availability of a large number of different fixtures, capable of exhibiting a range of shock response characteristics, in favor of a single tunable system.
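
    For readers unfamiliar with the shock response spectrum (SRS) used to characterize such fixtures, the sketch below computes a maximax SRS: the peak absolute-acceleration response of a bank of single-degree-of-freedom oscillators to a base-acceleration input. The input pulse, damping ratio, and frequency range are synthetic placeholders, not data from the patented fixture.

        import numpy as np
        from scipy.signal import lsim, TransferFunction

        fs = 100_000.0                                   # sample rate, Hz
        t = np.arange(0.0, 0.02, 1.0 / fs)
        accel_in = np.exp(-t / 0.002) * np.sin(2 * np.pi * 2000.0 * t)  # decaying burst

        def srs(natural_freqs_hz, base_accel, time, damping=0.05):
            """Maximax SRS: peak |absolute acceleration| of each SDOF oscillator."""
            peaks = []
            for fn in natural_freqs_hz:
                wn = 2 * np.pi * fn
                # absolute-acceleration transfer function of a damped SDOF system
                system = TransferFunction([2 * damping * wn, wn ** 2],
                                          [1.0, 2 * damping * wn, wn ** 2])
                _, response, _ = lsim(system, base_accel, time)
                peaks.append(np.max(np.abs(response)))
            return np.array(peaks)

        freqs = np.logspace(2, 4, 20)                    # 100 Hz to 10 kHz
        print(np.round(srs(freqs, accel_in, t), 1))

    Tuning the fixture's resonant frequency shifts where the delivered SRS peaks, which is how a single fixture can cover a range of qualification requirements.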

  4. Simulation of pyroshock environments using a tunable resonant fixture

    DOE Patents [OSTI]

    Davie, N.T.

    1996-10-15

    Disclosed are a method and apparatus for simulating pyrotechnic shock for the purpose of qualifying electronic components for use in weapons, satellite, and aerospace applications. According to the invention, a single resonant bar fixture has an adjustable resonant frequency in order to exhibit a desired shock response spectrum upon mechanical impact. The invention eliminates the need for availability of a large number of different fixtures, capable of exhibiting a range of shock response characteristics, in favor of a single tunable system. 32 figs.

  5. Tropical anvil cirrus evolution from observations and numerical simulations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Deng, Min (University of Utah); Mace, Gerald (University of Utah). Category: Modeling. The tropical anvil cirrus formation and maintenance mechanism evolves during the life cycle of mesoscale complexes. The large heating-rate gradients within the cloud may induce dynamical responses that tend to lift and spread the anvils (Ackerman, 1988). Radiative heating can act as a source of turbulence and affect the anvil

  6. Simulation of pyroshock environments using a tunable resonant fixture

    SciTech Connect (OSTI)

    Davie, N.T.

    1993-09-30

    Disclosed are a method and apparatus for simulating pyrotechnic shock for the purpose of qualifying electronic components for use in weapons, satellite, and aerospace applications. According to the invention, a single resonant bar fixture has an adjustable resonant frequency in order to exhibit a desired shock response spectrum upon mechanical impact. The invention eliminates the need for availability of a large number of different fixtures, capable of exhibiting a range of shock response characteristics, in favor of a single tunable system.

  7. Synergia: a modern tool for accelerator physics simulation

    SciTech Connect (OSTI)

    Spentzouris, P.; Amundson, J. (Fermilab)

    2004-10-01

    High-precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. Synergia is a high-fidelity parallel beam dynamics simulation package with fully three-dimensional space-charge capabilities and a higher-order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles.

  8. Sandia National Laboratories: Advanced Simulation and Computing: Physics & Engineering Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Physics & Engineering Models program provides the models and databases used in simulations supporting the U.S. stockpile. These models and databases describe a large variety of physical and engineering processes that occur during the operation of a nuclear weapon. In addition to supporting the stockpile, a number of other national security missions use Physics & Engineering Models. Sandia's contributions and

  9. Xyce Parallel Electronic Simulator : users' guide, version 4.1.

    SciTech Connect (OSTI)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2009-02-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers. (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.

  10. Xyce parallel electronic simulator : users' guide. Version 5.1.

    SciTech Connect (OSTI)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2009-11-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers. (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.

  11. Sample variance in weak lensing: How many simulations are required?

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Petri, Andrea; May, Morgan; Haiman, Zoltan

    2016-03-24

    Constraining cosmology using weak gravitational lensing consists of comparing a measured feature vector of dimension Nb with its simulated counterpart. An accurate estimate of the Nb × Nb feature covariance matrix C is essential to obtain accurate parameter confidence intervals. When C is measured from a set of simulations, an important question is how large this set should be. To answer this question, we construct different ensembles of Nr realizations of the shear field, using a common randomization procedure that recycles the outputs from a smaller number Ns ≤ Nr of independent ray-tracing N-body simulations. We study parameter confidence intervals as a function of (Ns, Nr) in the range 1 ≤ Ns ≤ 200 and 1 ≤ Nr ≲ 10⁵. Previous work [S. Dodelson and M. D. Schneider, Phys. Rev. D 88, 063537 (2013)] has shown that Gaussian noise in the feature vectors (from which the covariance is estimated) leads, at quadratic order, to an O(1/Nr) degradation of the parameter confidence intervals. Using a variety of lensing features measured in our simulations, including shear-shear power spectra and peak counts, we show that cubic and quartic covariance fluctuations lead to an additional O(1/Nr²) error degradation that is not negligible when Nr is only a factor of a few larger than Nb. We study the large-Nr limit, and find that a single 240 Mpc/h, 512³-particle N-body simulation (Ns = 1) can be repeatedly recycled to produce as many as Nr = a few × 10⁴ shear maps whose power spectra and high-significance peak counts can be treated as statistically independent. Lastly, a small number of simulations (Ns = 1 or 2) is sufficient to forecast parameter confidence intervals at percent accuracy.
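
    To make the covariance-estimation question in this record concrete, the sketch below forms the sample covariance of a feature vector from Nr realizations and applies the standard Hartlap de-biasing factor to its inverse; the synthetic Gaussian "features" stand in for the shear power spectra and peak counts used in the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        n_b, n_r = 30, 200                        # feature dimension, realizations
        true_cov = np.diag(np.linspace(1.0, 2.0, n_b))
        features = rng.multivariate_normal(np.zeros(n_b), true_cov, size=n_r)

        c_hat = np.cov(features, rowvar=False)    # N_b x N_b sample covariance
        hartlap = (n_r - n_b - 2) / (n_r - 1)     # de-biasing factor for the inverse
        precision = hartlap * np.linalg.inv(c_hat)

        # The leading fractional degradation of parameter confidence intervals from
        # covariance noise scales roughly as N_b / N_r (the O(1/N_r) term above).
        print(f"N_b/N_r = {n_b / n_r:.2f}, Hartlap factor = {hartlap:.3f}")

    Repeating such a forecast while varying Nr shows when the additional O(1/Nr²) terms discussed in the paper stop being negligible.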

  12. A Comparison of Short Rayleigh Range FEL Performance with Simulations

    SciTech Connect (OSTI)

    Benson, Stephen; Evtushenko, Pavel; Shinn, Michelle D.; Neil, George; Blau, Joe; Burggraff, D.; Colson, William; Crooker, P.P.; Sans Aguilar, J.

    2007-08-01

    One approach to attaining very high power in a free-electron laser (FEL) is to operate with a Rayleigh range much smaller than the wiggler length. Previously, 3D simulations of FEL oscillators showed that FEL gain does not fall off with Rayleigh range as predicted by one-dimensional simulations*. They also predict that the angular tolerance for the mirrors is much larger than simplistic theory predicts. Using the IR Upgrade laser at Jefferson Lab, lasing at 935 nm, we have studied the performance of an FEL with a very short Rayleigh range. We also examined the angular sensitivity for several different Rayleigh ranges. We find very good agreement between simulations and measured gain and angular sensitivities. Surprisingly, the gain continues to rise as the Rayleigh range is shortened and continues to grow even when the resonator becomes geometrically unstable. The same behavior is seen in both the experiment and the simulations. We also find that, even for large Rayleigh r

  13. Dynamics of Molecular Clouds: Observations, Simulations, and...

    Office of Scientific and Technical Information (OSTI)

    Title: Dynamics of Molecular Clouds: Observations, Simulations, and NIF Experiments

  14. First trillion particle cosmological simulation completed

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A paper describes the research and the public data release. Significance of the research: The Dark Sky Simulations are an ongoing series of cosmological simulations...

  15. Clot Busting Simulations Test Potential Stroke Treatment

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    September 24, 2013. Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov ...

  16. Energy Choice Simulator | Open Energy Information

    Open Energy Info (EERE)

    Tool Summary - Name: Energy Choice Simulator. Agency/Company/Organization: Great Plains Institute. Sector: Energy. Focus Area: ...

  17. Climate Change Simulations with CCSM & CESM

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Key Challenges: Perform fundamental research on the processes that influence the natural...

  18. Mesoscale Simulations of Coarsening in GB Networks

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mukul Kumar is the Principal Investigator for Mesoscale Simulations of Coarsening in GB Networks, an LLNL BES Programs highlight. The...

  19. Decades of Wind Turbine Load Simulation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Matthew Barone, Joshua Paquette, Brian ... was used to simulate ninety-six years of operation of a five-megawatt wind turbine. ...

  20. Dynamics of Molecular Clouds: Observations, Simulations, and...

    Office of Scientific and Technical Information (OSTI)

    Title: Dynamics of Molecular Clouds: Observations, Simulations, and NIF Experiments. Authors: Kane, J ...