National Library of Energy BETA

Sample records for model based predictive

  1. An approach to model validation and model-based prediction -- polyurethane foam case study.

    SciTech Connect (OSTI)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical analyses and hypothesis tests as a part of the validation step to provide feedback to analysts and modelers. Decisions on how to proceed in making model-based predictions are made based on these analyses together with the application requirements. Updating, modifying, and understanding the boundaries associated with the model are also assisted through this feedback. (4) We include a "model supplement term" when model problems are indicated. This term provides a (bias) correction to the model so that it will better match the experimental results and more accurately account for uncertainty. Presumably, as the models continue to develop and are used for future applications, the causes for these apparent biases will be identified and the need for this supplementary modeling will diminish. (5) We use a response-modeling approach for our predictions that allows for general types of prediction and for assessment of prediction uncertainty. This approach is demonstrated through a case study supporting the assessment of a weapon's response when subjected to a hydrocarbon fuel fire. The foam decomposition model provides an important element of the response of a weapon system in this abnormal thermal environment. Rigid foam is used to encapsulate critical components in the weapon system, providing the needed mechanical support as well as thermal isolation. Because the foam begins to decompose at temperatures above 250 C, modeling the decomposition is critical to assessing a weapon's response. 
    In the validation analysis it is indicated that the model tends to "exaggerate" the effect of temperature changes when compared to the experimental results. The data, however, are too few and too restricted in terms of experimental design to make confident statements regarding modeling problems. For illustration, we assume these indications are correct and compensate for this apparent bias by constructing a model supplement term for use in the model-based predictions. Several hypothetical prediction problems are created and addressed. Hypothetical problems are used because no guidance was provided concerning ...
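
    A minimal, generic sketch of the "model supplement term" idea in point (4) above, not the report's foam analysis: a simple bias correction is fitted to validation residuals and added to the simulator output when making model-based predictions. The simulator, the validation data, and the linear form of the correction below are all hypothetical.

      import numpy as np

      def simulator(temperature):
          # stand-in computational model; deliberately exaggerates the temperature effect
          return 1.2 * (temperature - 300.0)

      # hypothetical validation measurements at a few temperatures (K)
      temps = np.array([320.0, 350.0, 380.0, 410.0])
      measured = 1.0 * (temps - 300.0) + np.random.default_rng(7).normal(0.0, 2.0, temps.size)

      # the "model supplement term": a simple trend fitted to the validation residuals
      residuals = measured - simulator(temps)
      supplement = np.polynomial.Polynomial.fit(temps, residuals, deg=1)

      def corrected_prediction(temperature):
          # model-based prediction = model output + bias correction
          return simulator(temperature) + supplement(temperature)

      print("uncorrected:", round(simulator(395.0), 1), " corrected:", round(corrected_prediction(395.0), 1))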

  2. Simulation of complex glazing products; from optical data measurements to model based predictive controls

    SciTech Connect (OSTI)

    Kohler, Christian

    2012-08-01

    Complex glazing systems such as venetian blinds, fritted glass and woven shades require more detailed optical and thermal input data for their components than specular, non-light-redirecting glazing systems. Various methods for measuring these data sets are described in this paper. These data sets are used in multiple simulation tools to model the thermal and optical properties of complex glazing systems. The output from these tools can be used to generate simplified rating values or as an input to other simulation tools such as whole-building annual energy programs or lighting analysis tools. I also describe some of the challenges of creating a rating system for these products and which factors affect this rating. A potential future direction of simulation and building operations is model-based predictive controls, where detailed computer models are run in real-time, receiving data from an actual building and providing control input to building elements such as shades.

  3. Battery Life Predictive Model

    Energy Science and Technology Software Center (OSTI)

    2009-12-31

    The Software consists of a model used to predict battery capacity fade and resistance growth for arbitrary cycling and temperature profiles. It allows the user to extrapolate from experimental data to predict actual life cycle.

  4. In silico prediction of toxicity of non-congeneric industrial chemicals using ensemble learning based modeling approaches

    SciTech Connect (OSTI)

    Singh, Kunwar P.; Gupta, Shikha

    2014-03-15

    Ensemble-learning-based decision treeboost (DTB) and decision tree forest (DTF) models are introduced in order to establish a quantitative structure–toxicity relationship (QSTR) for the prediction of the toxicity of 1450 diverse chemicals. Eight non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals was evaluated using the Tanimoto similarity index. DTB and DTF models, supplemented with stochastic gradient boosting and bagging algorithms, were constructed for classification and function optimization problems using the toxicity end-point in T. pyriformis. Special attention was paid to the prediction ability and robustness of the models, investigated in both external and 10-fold cross-validation processes. On the complete data, the optimal DTB and DTF models rendered accuracies of 98.90% and 98.83% in two-category and 98.14% and 98.14% in four-category toxicity classifications. Both models further yielded classification accuracies of 100% on the external T. pyriformis toxicity data. The constructed regression models (DTB and DTF) using five descriptors yielded correlation coefficients (RÂČ) of 0.945 and 0.944 between the measured and predicted toxicities, with mean squared errors (MSEs) of 0.059 and 0.064 in the complete T. pyriformis data. The T. pyriformis regression models (DTB and DTF) applied to the external toxicity data sets yielded RÂČ and MSE values of 0.637, 0.655; 0.534, 0.507 (marine bacteria) and 0.741, 0.691; 0.155, 0.173 (algae). The results suggest wide applicability of the inter-species models in predicting toxicity of new chemicals for regulatory purposes. These approaches provide a useful strategy and robust tools for screening the ecotoxicological risk or environmental hazard potential of chemicals. - Graphical abstract: Importance of input variables in DTB and DTF classification models for (a) two-category, and (b) four-category toxicity intervals in T. pyriformis data. Generalization and predictive abilities of the constructed (c) DTB and (d) DTF regression models to predict the T. pyriformis toxicity of diverse chemicals. - Highlights: ‱ Ensemble learning (EL) based models constructed for toxicity prediction of chemicals. ‱ Predictive models used a few simple non-quantum mechanical molecular descriptors. ‱ EL-based DTB/DTF models successfully discriminated toxic and non-toxic chemicals. ‱ DTB/DTF regression models precisely predicted toxicity of chemicals in multiple species. ‱ Proposed EL-based models can be used as tools to predict the toxicity of new chemicals.
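
    A minimal sketch of the ensemble-tree idea, not the authors' DTB/DTF implementations: scikit-learn's gradient-boosted trees and a bagged random forest stand in for the treeboost and tree-forest models, trained on placeholder descriptor data (the descriptor values and labels below are synthetic, not the 1450-chemical dataset).

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(1450, 8))                   # placeholder molecular descriptors
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # placeholder toxic/non-toxic labels

      dtb_like = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
      dtf_like = RandomForestClassifier(n_estimators=200)

      for name, model in [("boosted trees", dtb_like), ("bagged tree forest", dtf_like)]:
          acc = cross_val_score(model, X, y, cv=10).mean()   # 10-fold CV, as in the abstract
          print(f"{name}: mean 10-fold CV accuracy = {acc:.3f}")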

  5. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

    SciTech Connect (OSTI)

    Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

    2006-10-01

    Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
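
    A minimal sketch of the kind of sampling-based evidence-theory propagation described above (not the authors' algorithm): the input is described by focal intervals with basic probability assignments, each focal element is sampled to bound the model output, and belief and plausibility of an output event are accumulated. The model, intervals, masses, and threshold are all illustrative.

      import numpy as np

      def f(x):
          # stand-in for a computationally expensive model
          return np.sin(x) + 0.1 * x**2

      # focal elements of the input: (interval, basic probability assignment); masses sum to 1
      focal_elements = [((0.0, 1.0), 0.3), ((0.5, 2.0), 0.5), ((1.5, 3.0), 0.2)]
      threshold = 1.0                               # event of interest: y > threshold
      rng = np.random.default_rng(0)

      belief, plausibility = 0.0, 0.0
      for (lo, hi), mass in focal_elements:
          y = f(rng.uniform(lo, hi, size=200))      # sample within the focal element
          if y.min() > threshold:                   # whole element implies the event
              belief += mass
          if y.max() > threshold:                   # element is consistent with the event
              plausibility += mass

      print(f"Bel(y > {threshold}) ~ {belief:.2f}, Pl(y > {threshold}) ~ {plausibility:.2f}")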

  6. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    SciTech Connect (OSTI)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming; Elizondo, Marcelo A.

    2013-01-07

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind make reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering the equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem, and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties and variability in the system.
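
    A minimal sketch (illustrative numbers, not the paper's formulation) of one look-ahead dispatch problem of the kind an MPC controller would re-solve at every step, coordinating a diesel generator and battery storage against wind and load forecasts:

      import numpy as np
      from scipy.optimize import linprog

      H = 4                                     # look-ahead horizon (hours)
      load = np.array([4.0, 5.0, 6.0, 5.0])     # forecast load (MW)
      wind = np.array([2.0, 1.0, 3.0, 4.0])     # forecast wind (MW)
      net = load - wind                         # power the diesel + battery must supply

      fuel_cost = 50.0                          # $/MWh of diesel output (assumed)
      d_max, b_max = 6.0, 2.0                   # diesel and battery power limits (MW)
      soc0, soc_min, soc_max = 4.0, 1.0, 8.0    # battery state of charge (MWh), 1 h steps

      # decision vector x = [diesel_1..H, batt_1..H]; batt > 0 means discharging
      c = np.concatenate([fuel_cost * np.ones(H), np.zeros(H)])
      A_eq = np.hstack([np.eye(H), np.eye(H)])  # power balance: diesel_t + batt_t = net_t
      b_eq = net

      L = np.tril(np.ones((H, H)))              # cumulative-sum operator for the SOC limits
      A_ub = np.vstack([np.hstack([np.zeros((H, H)),  L]),    #  cumsum(batt) <= soc0 - soc_min
                        np.hstack([np.zeros((H, H)), -L])])   # -cumsum(batt) <= soc_max - soc0
      b_ub = np.concatenate([(soc0 - soc_min) * np.ones(H),
                             (soc_max - soc0) * np.ones(H)])

      bounds = [(0.0, d_max)] * H + [(-b_max, b_max)] * H
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

      diesel, batt = res.x[:H], res.x[H:]
      print("first-hour dispatch (MPC applies this, then re-solves with updated forecasts):")
      print(f"  diesel = {diesel[0]:.2f} MW, battery = {batt[0]:.2f} MW")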

  7. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    SciTech Connect (OSTI)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming; Elizondo, Marcelo A.

    2013-04-03

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind make reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering the equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem, and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties and variability in the system.

  8. Prediction of Lumen Output and Chromaticity Shift in LEDs Using Kalman Filter and Extended Kalman Filter Based Models

    SciTech Connect (OSTI)

    Lall, Pradeep; Wei, Junchao; Davis, J Lynn

    2014-06-24

    Solid-state lighting (SSL) luminaires containing light-emitting diodes (LEDs) have the potential of seeing excessive temperatures when being transported across country or being stored in non-climate-controlled warehouses. They are also being used in outdoor applications in desert environments that see little or no humidity but will experience extremely high temperatures during the day. This makes it important to increase our understanding of what effects high temperature exposure for a prolonged period of time will have on the usability and survivability of these devices. Traditional light sources “burn out” at end-of-life. For an incandescent bulb, the lamp life is defined by B50 life. However, LEDs have no filament to “burn”. LEDs continually degrade, and the light output eventually decreases below useful levels, causing failure. Presently, the TM-21 test standard is used to predict the L70 life of LEDs from LM-80 test data. Several failure mechanisms may be active in an LED at a single time, causing lumen depreciation. The underlying TM-21 model may not capture the failure physics in the presence of multiple failure mechanisms. Correlation of lumen maintenance with the underlying physics of degradation at the system level is needed. In this paper, Kalman Filter (KF) and Extended Kalman Filter (EKF) based models have been used to develop a 70-percent lumen maintenance life prediction model for LEDs used in SSL luminaires. Ten-thousand-hour LM-80 test data for various LEDs have been used for model development. The system state at each future time has been computed based on the state space at the preceding time step, the system dynamics matrix, control vector, control matrix, measurement matrix, measured vector, process noise and measurement noise. The future state of the lumen depreciation has been estimated based on a second-order Kalman Filter model and a Bayesian framework. L70 life predictions for the LEDs used in SSL luminaires from the KF- and EKF-based models have been compared with the TM-21 model predictions and experimental data.
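
    A minimal sketch of the Kalman-filter part of the approach (not the paper's calibrated second-order model): a two-state linear filter tracks lumen maintenance and its decay rate from synthetic LM-80-style data, and the filtered trend is extrapolated to the 70% threshold. The noise levels and the synthetic data are assumptions.

      import numpy as np

      dt = 1000.0                                   # hours between measurements
      F = np.array([[1.0, dt], [0.0, 1.0]])         # state: [lumen maintenance, decay rate]
      Hm = np.array([[1.0, 0.0]])                   # only lumen maintenance is measured
      Q = np.diag([1e-6, 1e-10])                    # process noise (assumed)
      R = np.array([[1e-4]])                        # measurement noise (assumed)

      x = np.array([1.0, -1e-5])                    # initial guess: 100% output, slow decay
      P = np.diag([1e-2, 1e-8])

      t = np.arange(0.0, 10001.0, dt)               # a 10,000-hour test, as in LM-80
      z = np.exp(-3e-5 * t) + np.random.default_rng(2).normal(0.0, 0.01, t.size)  # synthetic data

      for zk in z:
          x = F @ x                                 # predict
          P = F @ P @ F.T + Q
          S = Hm @ P @ Hm.T + R                     # update with the new measurement
          K = P @ Hm.T @ np.linalg.inv(S)
          x = x + K @ (np.array([zk]) - Hm @ x)
          P = (np.eye(2) - K @ Hm) @ P

      hours_to_L70 = t[-1] + (0.7 - x[0]) / x[1]    # extrapolate the filtered linear trend
      print(f"filtered lumen = {x[0]:.3f}, decay = {x[1]:.2e}/h, L70 estimate ~ {hours_to_L70:.0f} h")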

  9. Midtemperature solar systems test facility predictions for thermal performance based on test data. Polisolar Model POL solar collector with glass reflector surface

    SciTech Connect (OSTI)

    Harrison, T.D.

    1981-05-01

    Thermal performance predictions based on test data are presented for the Polisolar Model POL solar collector, with glass reflector surfaces, for three output temperatures at five cities in the United States.

  10. SIMPLIFIED PREDICTIVE MODELS FOR CO₂ SEQUESTRATION PERFORMANCE ASSESSMENT RESEARCH TOPICAL REPORT ON TASK #3 STATISTICAL LEARNING BASED MODELS

    SciTech Connect (OSTI)

    Mishra, Srikanta; Schuetter, Jared

    2014-11-01

    We compare two approaches for building a statistical proxy model (metamodel) for CO₂ geologic sequestration from the results of full-physics compositional simulations. The first approach involves a classical Box-Behnken or Augmented Pairs experimental design with a quadratic polynomial response surface. The second approach uses a space-filling maximin Latin Hypercube sampling or maximum entropy design with the choice of five different meta-modeling techniques: quadratic polynomial, kriging with constant and quadratic trend terms, multivariate adaptive regression spline (MARS) and additivity and variance stabilization (AVAS). Simulation results for CO₂ injection into a reservoir-caprock system with 9 design variables (and 97 samples) were used to generate the data for developing the proxy models. The fitted models were validated using an independent data set and a cross-validation approach for three different performance metrics: total storage efficiency, CO₂ plume radius and average reservoir pressure. The Box-Behnken–quadratic polynomial metamodel performed the best, followed closely by the maximin LHS–kriging metamodel.
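
    A minimal sketch of the second (space-filling) approach, with a synthetic stand-in for the full-physics simulator rather than the report's compositional runs: draw a Latin Hypercube design over the 9 design variables, fit a quadratic-polynomial proxy, and check it on an independent design.

      import numpy as np
      from scipy.stats import qmc
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score

      n_vars, n_samples = 9, 97                       # 9 design variables, 97 samples, as in the text
      X = qmc.LatinHypercube(d=n_vars, seed=0).random(n=n_samples)   # space-filling design in [0, 1]^9

      def full_physics_simulator(X):
          # placeholder for the compositional simulation output (e.g. storage efficiency)
          return 0.3 + X[:, 0] * X[:, 1] - 0.5 * X[:, 2] ** 2 + 0.1 * X[:, 3]

      y = full_physics_simulator(X)
      proxy = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
      proxy.fit(X, y)

      # validate against an independent design, as the report does with a separate data set
      X_test = qmc.LatinHypercube(d=n_vars, seed=1).random(n=50)
      print("R^2 on independent test set:", round(r2_score(full_physics_simulator(X_test), proxy.predict(X_test)), 3))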

  11. Elevated carbon dioxide is predicted to promote coexistence among competing species in a trait-based model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Ali, Ashehad A.; Medlyn, Belinda E.; Aubier, Thomas G.; Crous, Kristine Y.; Reich, Peter B.

    2015-10-06

    Differential species responses to atmospheric CO2 concentration (Ca) could lead to quantitative changes in competition among species and community composition, with flow-on effects for ecosystem function. However, there has been little theoretical analysis of how elevated Ca (eCa) will affect plant competition, or how composition of plant communities might change. Such theoretical analysis is needed for developing testable hypotheses to frame experimental research. Here, we investigated theoretically how plant competition might change under eCa by implementing two alternative competition theories, resource use theory and resource capture theory, in a plant carbon and nitrogen cycling model. The model makes several novel predictions for the impact of eCa on plant community composition. Using resource use theory, the model predicts that eCa is unlikely to change species dominance in competition, but is likely to increase coexistence among species. Using resource capture theory, the model predicts that eCa may increase community evenness. Collectively, both theories suggest that eCa will favor coexistence and hence that species diversity should increase with eCa. Our theoretical analysis leads to a novel hypothesis for the impact of eCa on plant community composition. In this study, the hypothesis has potential to help guide the design and interpretation of eCa experiments.

  12. Elevated carbon dioxide is predicted to promote coexistence among competing species in a trait-based model

    SciTech Connect (OSTI)

    Ali, Ashehad A.; Medlyn, Belinda E.; Aubier, Thomas G.; Crous, Kristine Y.; Reich, Peter B.

    2015-10-06

    Differential species responses to atmospheric CO2 concentration (Ca) could lead to quantitative changes in competition among species and community composition, with flow-on effects for ecosystem function. However, there has been little theoretical analysis of how elevated Ca (eCa) will affect plant competition, or how composition of plant communities might change. Such theoretical analysis is needed for developing testable hypotheses to frame experimental research. Here, we investigated theoretically how plant competition might change under eCa by implementing two alternative competition theories, resource use theory and resource capture theory, in a plant carbon and nitrogen cycling model. The model makes several novel predictions for the impact of eCa on plant community composition. Using resource use theory, the model predicts that eCa is unlikely to change species dominance in competition, but is likely to increase coexistence among species. Using resource capture theory, the model predicts that eCa may increase community evenness. Collectively, both theories suggest that eCa will favor coexistence and hence that species diversity should increase with eCa. Our theoretical analysis leads to a novel hypothesis for the impact of eCa on plant community composition. In this study, the hypothesis has potential to help guide the design and interpretation of eCa experiments.

  13. predictive-models | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Predictive Models. BPO Staff. February 1988. 76 pp. NTIS Order No. DE89001204. FORTRAN source code and executable programs for the five EOR Predictive Models shown below...

  14. predictive modeling | National Nuclear Security Administration

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Fukushima: Five Years Later. After the March 11, 2011, Japan earthquake, tsunami, and ...

  15. Prediction of pure water stress corrosion cracking (PWSCC) in nickel base alloys using crack growth rate models

    SciTech Connect (OSTI)

    Thompson, C.D.; Krasodomski, H.T.; Lewis, N.; Makar, G.L.

    1995-02-22

    The Ford/Andresen slip dissolution SCC model, originally developed for stainless steel components in BWR environments, has been applied to Alloy 600 and Alloy X-750 tested in deaerated pure water chemistry. A method is described whereby the crack growth rates measured in compact tension specimens can be used to estimate crack growth in a component. Good agreement was found between model prediction and measured SCC in X-750 threaded fasteners over a wide range of temperatures, stresses, and material condition. Most data support the basic assumption of this model that cracks initiate early in life. The evidence supporting a particular SCC mechanism is mixed. Electrochemical repassivation data and estimates of oxide fracture strain indicate that the slip dissolution model can account for the observed crack growth rates, provided primary rather than secondary creep rates are used. However, approximately 100 cross-sectional TEM foils of SCC cracks including crack tips reveal no evidence of enhanced plasticity or unique dislocation patterns at the crack tip or along the crack to support a classic slip dissolution mechanism. No voids, hydrides, or microcracks are found in the vicinity of the crack tips creating doubt about classic hydrogen related mechanisms. The bulk oxide films exhibit a surface oxide which is often different than the oxides found within a crack. Although bulk chromium concentration affects the rate of SCC, analytical data indicates the mechanism does not result from chromium depletion at the grain boundaries. The overall findings support a corrosion/dissolution mechanism but not one necessarily related to slip at the crack tip.

  16. A predictive standard model for heavy electron systems (Conference)

    Office of Scientific and Technical Information (OSTI)

    We propose a predictive standard model for heavy electron systems based on a detailed phenomenological two-fluid description of existing experimental data. It leads to a new phase diagram that replaces the Doniach picture, describes the emergent anomalous scaling behavior of the ...

  17. Midtemperature Solar Systems Test Facility predictions for thermal performance based on test data. Alpha Solarco Model 104 solar collector with 0.125-inch Schott low-iron glass reflector surface

    SciTech Connect (OSTI)

    Harrison, T.D.

    1981-04-01

    Thermal performance predictions based on test data are presented for the Alpha Solarco Model 104 solar collector, with 0.125-inch Schott low-iron glass reflector surface, for three output temperatures at five cities in the United States.

  18. Progress towards a PETN Lifetime Prediction Model

    SciTech Connect (OSTI)

    Burnham, A K; Overturf III, G E; Gee, R; Lewis, P; Qiu, R; Phillips, D; Weeks, B; Pitchimani, R; Maiti, A; Zepeda-Ruiz, L; Hrousis, C

    2006-09-11

    Dinegar (1) showed that decreases in PETN surface area cause EBW detonator function times to increase. Thermal aging causes PETN to agglomerate, shrink, and densify, indicating a "sintering" process. It has long been a concern that the formation of a gap between the PETN and the bridgewire may lead to EBW detonator failure. These concerns have led us to develop a model to predict the rate of coarsening that occurs with age for thermally driven PETN powder (50% TMD). To understand PETN contributions to detonator aging we need three things: (1) Curves describing function time dependence on specific surface area, density, and gap. (2) A measurement of the critical gap distance for no-fire as a function of density and surface area for various wire configurations. (3) A model describing how specific surface area, density and gap change with time and temperature. We've had good success modeling high temperature surface area reduction and function time increase using a phenomenological deceleratory kinetic model based on a distribution of parallel nth-order reactions having evenly spaced activation energies, where the weighting factors of the reactions follow a Gaussian distribution about the reaction with the mean activation energy (Figure 1). Unfortunately, the mean activation energy derived from this approach is high (typically about 75 kcal/mol), so that negligible sintering is predicted for temperatures below 40 C. To make more reliable predictions, we've established a three-part effort to understand PETN mobility. First, we've measured the rates of step movement and pit nucleation as a function of temperature from 30 to 50 C for single crystals. Second, we've measured the evaporation rate from single crystals and powders from 105 to 135 C to obtain an activation energy for evaporation. Third, we've pursued mechanistic kinetic modeling of surface mobility, evaporation, and ripening.
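
    A minimal sketch of the phenomenological deceleratory model described above: parallel nth-order reactions with evenly spaced activation energies, weighted by a Gaussian about the mean activation energy. The pre-exponential factor, reaction order, Gaussian width, and the quantity tracked are illustrative assumptions, not the calibrated PETN values.

      import numpy as np

      R = 1.987e-3                   # gas constant, kcal/(mol K)
      E_mean, E_spread = 75.0, 3.0   # mean activation energy (kcal/mol) and Gaussian width (assumed)
      A = 1e40                       # pre-exponential factor (1/s), illustrative
      n = 1.5                        # reaction order, illustrative
      E = E_mean + np.linspace(-3.0, 3.0, 25) * E_spread    # evenly spaced activation energies
      w = np.exp(-0.5 * ((E - E_mean) / E_spread) ** 2)     # Gaussian weighting factors
      w /= w.sum()

      def fraction_remaining(T_kelvin, hours):
          """Weighted sum of the parallel nth-order channels (e.g. normalized surface area)."""
          t = hours * 3600.0
          k = A * np.exp(-E / (R * T_kelvin))                   # rate constant of each channel
          x = (1.0 + (n - 1.0) * k * t) ** (-1.0 / (n - 1.0))   # nth-order decay, x(0) = 1
          return float(np.dot(w, x))

      for T_C in (40, 60, 80):
          print(f"{T_C} C, one year: {fraction_remaining(T_C + 273.15, 8760.0):.3f} remaining")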

  19. Standardized Software for Wind Load Forecast Error Analyses and Predictions Based on Wavelet-ARIMA Models - Applications at Multiple Geographically Distributed Wind Farms

    SciTech Connect (OSTI)

    Hou, Zhangshuan; Makarov, Yuri V.; Samaan, Nader A.; Etingov, Pavel V.

    2013-03-19

    Given the multi-scale variability and uncertainty of wind generation and forecast errors, it is a natural choice to use time-frequency representation (TFR) as a view of the corresponding time series represented over both time and frequency. Here we use the wavelet transform (WT) to expand the signal in terms of wavelet functions which are localized in both time and frequency. Each WT component is more stationary and has a consistent auto-correlation pattern. We combined wavelet analyses with time series forecast approaches such as ARIMA, and tested the approach at three different wind farms located far away from each other. The prediction capability is satisfactory -- the day-ahead predictions of errors match the original error values very well, including the patterns. The observations are well located within the predictive intervals. Integrating our wavelet-ARIMA (‘stochastic’) model with the weather forecast model (‘deterministic’) will significantly improve our ability to predict wind power generation and reduce predictive uncertainty.
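
    A minimal sketch of the wavelet-plus-ARIMA idea, not the standardized software itself: a synthetic forecast-error series is split into additive wavelet components (each closer to stationary), each component is forecast with a small ARIMA model, and the component forecasts are summed. PyWavelets and statsmodels are assumed to be available; the series, wavelet, and ARIMA orders are illustrative.

      import numpy as np
      import pywt
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(3)
      t = np.arange(512)
      errors = (np.sin(2 * np.pi * t / 96) + 0.3 * np.sin(2 * np.pi * t / 12)
                + 0.2 * rng.normal(size=t.size))       # synthetic hourly wind forecast errors

      coeffs = pywt.wavedec(errors, "db4", level=3)    # [cA3, cD3, cD2, cD1]

      # reconstruct the additive time-domain component carried by each wavelet band
      components = []
      for i in range(len(coeffs)):
          kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
          components.append(pywt.waverec(kept, "db4")[: errors.size])

      horizon = 24                                     # day-ahead, hourly steps assumed
      forecast = np.zeros(horizon)
      for comp in components:
          fit = ARIMA(comp, order=(2, 0, 1)).fit()     # per-band time series model
          forecast += np.asarray(fit.forecast(horizon))

      print("day-ahead forecast of the error series, first 6 hours:", np.round(forecast[:6], 3))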

  20. SIMPLIFIED PREDICTIVE MODELS FOR CO2 SEQUESTRATION PERFORMANCE ASSESSMENT RESEARCH TOPICAL REPORT ON TASK #4 REDUCED-ORDER METHOD (ROM) BASED MODELS

    SciTech Connect (OSTI)

    Mishra, Srikanta; Jin, Larry; He, Jincong; Durlofsky, Louis

    2015-06-30

    Reduced-order models provide a means for greatly accelerating the detailed simulations that will be required to manage CO2 storage operations. In this work, we investigate the use of one such method, POD-TPWL, which has previously been shown to be effective in oil reservoir simulation problems. This method combines trajectory piecewise linearization (TPWL), in which the solution to a new (test) problem is represented through a linearization around the solution to a previously-simulated (training) problem, with proper orthogonal decomposition (POD), which enables solution states to be expressed in terms of a relatively small number of parameters. We describe the application of POD-TPWL for CO2-water systems simulated using a compositional procedure. Stanford’s Automatic Differentiation-based General Purpose Research Simulator (AD-GPRS) performs the full-order training simulations and provides the output (derivative matrices and system states) required by the POD-TPWL method. A new POD-TPWL capability introduced in this work is the use of horizontal injection wells that operate under rate (rather than bottom-hole pressure) control. Simulation results are presented for CO2 injection into a synthetic aquifer and into a simplified model of the Mount Simon formation. Test cases involve the use of time-varying well controls that differ from those used in training runs. Results of reasonable accuracy are consistently achieved for relevant well quantities. Runtime speedups of around a factor of 370 relative to full-order AD-GPRS simulations are achieved, though the preprocessing needed for POD-TPWL model construction corresponds to the computational requirements for about 2.3 full-order simulation runs. A preliminary treatment for POD-TPWL modeling in which test cases differ from training runs in terms of geological parameters (rather than well controls) is also presented. Results in this case involve only small differences between training and test runs, though they do demonstrate that the approach is able to capture basic solution trends. The impact of some of the detailed numerical treatments within the POD-TPWL formulation is considered in an Appendix.
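
    A minimal sketch of the POD step alone (not the full POD-TPWL procedure, and with synthetic snapshots standing in for the full-order simulator output): the snapshot matrix is factored with an SVD, a small basis retaining most of the energy is kept, and a new state is projected and reconstructed.

      import numpy as np

      rng = np.random.default_rng(4)
      n_cells, n_snapshots = 2000, 60
      xs = np.linspace(0.0, 1.0, n_cells)
      spatial_modes = np.stack([np.sin((k + 1) * np.pi * xs) for k in range(5)], axis=1)
      # synthetic "training" snapshots: a few smooth fields plus noise
      snapshots = (spatial_modes @ rng.normal(size=(5, n_snapshots))
                   + 0.01 * rng.normal(size=(n_cells, n_snapshots)))

      mean = snapshots.mean(axis=1, keepdims=True)
      U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

      energy = np.cumsum(s**2) / np.sum(s**2)
      r = int(np.searchsorted(energy, 0.999)) + 1      # modes retaining 99.9% of the energy
      Phi = U[:, :r]                                   # POD basis

      test_state = spatial_modes @ rng.normal(size=5)  # a "test" full-order state
      z = Phi.T @ (test_state - mean.ravel())          # r reduced coordinates
      reconstruction = mean.ravel() + Phi @ z
      err = np.linalg.norm(reconstruction - test_state) / np.linalg.norm(test_state)
      print(f"retained {r} of {n_snapshots} modes; relative reconstruction error = {err:.2e}")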

  1. Statistical surrogate models for prediction of high-consequence climate change (Technical Report)

    Office of Scientific and Technical Information (OSTI)

    In safety engineering, performance metrics are defined using probabilistic risk assessments focused on the low-probability, high-consequence tail of the distribution of possible events, as opposed to best estimates based on ...

  2. Satellite Collision Modeling with Physics-Based Hydrocodes: Debris Generation Predictions of the Iridium-Cosmos Collision Event and Other Impact Events

    SciTech Connect (OSTI)

    Springer, H K; Miller, W O; Levatin, J L; Pertica, A J; Olivier, S S

    2010-09-06

    Satellite collision debris poses risks to existing space assets and future space missions. Predictive models of debris generated from these hypervelocity collisions are critical for developing accurate space situational awareness tools and effective mitigation strategies. Hypervelocity collisions involve complex phenomenon that spans several time- and length-scales. We have developed a satellite collision debris modeling approach consisting of a Lagrangian hydrocode enriched with smooth particle hydrodynamics (SPH), advanced material failure models, detailed satellite mesh models, and massively parallel computers. These computational studies enable us to investigate the influence of satellite center-of-mass (CM) overlap and orientation, relative velocity, and material composition on the size, velocity, and material type distributions of collision debris. We have applied our debris modeling capability to the recent Iridium 33-Cosmos 2251 collision event. While the relative velocity was well understood in this event, the degree of satellite CM overlap and orientation was ill-defined. In our simulations, we varied the collision CM overlap and orientation of the satellites from nearly maximum overlap to partial overlap on the outermost extents of the satellites (i.e, solar panels and gravity boom). As expected, we found that with increased satellite overlap, the overall debris cloud mass and momentum (transfer) increases, the average debris size decreases, and the debris velocity increases. The largest predicted debris can also provide insight into which satellite components were further removed from the impact location. A significant fraction of the momentum transfer is imparted to the smallest debris (< 1-5mm, dependent on mesh resolution), especially in large CM overlap simulations. While the inclusion of the smallest debris is critical to enforcing mass and momentum conservation in hydrocode simulations, there seems to be relatively little interest in their disposition. Based on comparing our results to observations, it is unlikely that the Iridium 33-Cosmos 2251 collision event was a large mass-overlap collision. We also performed separate simulations studying the debris generated by the collision of 5 and 10 cm spherical projectiles on the Iridium 33 satellite at closing velocities of 5, 10, and 15 km/s. It is important to understand the vulnerability of satellites to small debris threats, given their pervasiveness in orbit. These studies can also be merged with probabilistic conjunction analysis to better understand the risk to space assets. In these computational studies, we found that momentum transfer, kinetic energy losses due to dissipative mechanisms (e.g., fracture), fragment number, and fragment velocity increases with increasing velocity for a fixed projectile size. For a fixed velocity, we found that the smaller projectile size more efficiently transfers momentum to the satellite. This latter point has an important implication: Eight (spaced) 5 cm debris objects can impart more momentum to the satellite, and likely cause more damage, than a single 10 cm debris object at the same velocity. Further studies are required to assess the satellite damage induced by 1-5 cm sized debris objects, as well as multiple debris objects, in this velocity range.
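
    A small worked example (assuming solid aluminum spheres; purely illustrative) of the mass scaling behind the closing comparison: a 10 cm sphere has eight times the mass of a 5 cm sphere, so eight 5 cm objects arrive with the same total momentum, and the larger imparted momentum reported above reflects the more efficient per-impact transfer found for the smaller projectiles.

      import math

      rho = 2700.0                                  # kg/m^3, solid aluminum (assumed)
      v = 10.0e3                                    # m/s, a 10 km/s closing velocity

      def sphere_mass(diameter_cm):
          r = diameter_cm / 100.0 / 2.0             # diameter in cm -> radius in m
          return rho * (4.0 / 3.0) * math.pi * r**3

      m5, m10 = sphere_mass(5.0), sphere_mass(10.0)
      print(f"mass: 5 cm = {m5:.3f} kg, 10 cm = {m10:.3f} kg (ratio {m10 / m5:.0f})")
      print(f"incoming momentum: eight 5 cm = {8 * m5 * v:.0f} kg m/s vs one 10 cm = {m10 * v:.0f} kg m/s")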

  3. PNNL: Mechanistic-Based Ductility Prediction for Complex Mg Castings...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    PNNL: Mechanistic-Based Ductility Prediction for Complex Mg Castings. 2012 DOE Hydrogen and Fuel Cells Program ...

  4. In silico modeling to predict drug-induced phospholipidosis

    SciTech Connect (OSTI)

    Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G.; Sadrieh, Nakissa

    2013-06-01

    Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications on the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and opportunity to analyze properties and structures of drugs with the histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches has shown promise with a large dataset of drugs but included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis, new algorithms and predictive technologies, and in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases above 80%, leading to the desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL.

  5. Comparison of Uncertainty of Two Precipitation Prediction Models...

    Office of Scientific and Technical Information (OSTI)

    Prediction Models at Los Alamos National Lab Technical Area 54 Citation Details In-Document Search Title: Comparison of Uncertainty of Two Precipitation Prediction Models ...

  6. Statistical surrogate models for prediction of high-consequence...

    Office of Scientific and Technical Information (OSTI)

    Statistical surrogate models for prediction of high-consequence climate change. Citation Details In-Document Search Title: Statistical surrogate models for prediction of ...

  7. Statistical surrogate models for prediction of high-consequence...

    Office of Scientific and Technical Information (OSTI)

    surrogate models for prediction of high-consequence climate change. Citation Details In-Document Search Title: Statistical surrogate models for prediction of high-consequence ...

  8. Predictive Models for Target Response During Penetration (Technical...

    Office of Scientific and Technical Information (OSTI)

    Predictive Models for Target Response During Penetration Citation Details In-Document Search Title: Predictive Models for Target Response During Penetration You are accessing a...

  9. New climate model predicts likelihood of Greenland ice melt,...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    New climate model predicts likelihood of Greenland ice melt New climate model predicts likelihood of Greenland ice melt, sea level rise and dangerous temperatures A new computer ...

  10. Simplified Protein Models: Predicting Folding Pathways and Structure...

    Office of Scientific and Technical Information (OSTI)

    Simplified Protein Models: Predicting Folding Pathways and Structure Using Amino Acid Sequences Title: Simplified Protein Models: Predicting Folding Pathways and Structure Using ...

  11. Predictive Models of Li-ion Battery Lifetime (Presentation) ...

    Office of Scientific and Technical Information (OSTI)

    Predictive Models of Li-ion Battery Lifetime (Presentation) Citation Details In-Document Search Title: Predictive Models of Li-ion Battery Lifetime (Presentation) You are ...

  12. Eulerian CFD Models to Predict Thermophoretic Deposition of Soot...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Eulerian CFD Models to Predict Thermophoretic Deposition of Soot Particles in EGR Coolers Eulerian CFD Models to Predict Thermophoretic Deposition of Soot Particles in EGR Coolers ...

  13. Predictive Models of Li-ion Battery Lifetime

    SciTech Connect (OSTI)

    Smith, Kandler; Wood, Eric; Santhanagopalan, Shriram; Kim, Gi-heon; Shi, Ying; Pesaran, Ahmad

    2015-06-15

    It remains an open question how best to predict real-world battery lifetime based on accelerated calendar and cycle aging data from the laboratory. Multiple degradation mechanisms due to (electro)chemical, thermal, and mechanical coupled phenomena influence Li-ion battery lifetime, each with different dependence on time, cycling and thermal environment. The standardization of life predictive models would benefit the industry by reducing test time and streamlining development of system controls.

  14. Testing model for predicting spillway cavitation damage

    SciTech Connect (OSTI)

    Lee, W.; Hoopes, J.A.

    1995-12-31

    Using fuzzy mathematics, a comprehensive model has been developed to predict the time, location and level (intensity) of spillway cavitation damage. Five damage levels and four factors affecting damage are used. Membership functions express the degree to which each factor affects damage, and weights express the relative importance of each factor. The model has been calibrated and tested with operating data and experience from the Glen Canyon Dam left tunnel spillway, which had major cavitation damage in 1983. An error analysis for the Glen Canyon Dam left tunnel spillway gave the best ranges for model weights. Prediction of damage at other spillways (4 tunnels, 3 chutes) with functions and parameters as for the Glen Canyon Dam left tunnel spillway gave reasonable predictions of damage intensity and location and poor estimates of occurrence time in the tunnels. Chute predictions were in poor agreement with observations, indicating the need for different parameter values. Finally, two membership functions with constant or time-varying parameters are compared with observed results from the Glen Canyon Dam left tunnel spillway.
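
    A minimal, generic sketch of the weighted fuzzy evaluation described above, not the calibrated Glen Canyon model: each factor contributes a membership vector over the five damage levels, factor weights express relative importance, and the composite assessment is their weighted combination. The factor names, membership values, and weights are hypothetical.

      import numpy as np

      damage_levels = ["none", "minor", "moderate", "major", "severe"]

      # rows: factors; columns: degree to which that factor indicates each damage level (one scenario)
      membership = np.array([
          [0.0, 0.1, 0.4, 0.4, 0.1],   # flow velocity
          [0.0, 0.0, 0.2, 0.5, 0.3],   # cavitation index
          [0.1, 0.3, 0.4, 0.2, 0.0],   # hours of operation at this discharge
          [0.2, 0.4, 0.3, 0.1, 0.0],   # surface irregularity
      ])
      weights = np.array([0.35, 0.35, 0.15, 0.15])    # relative importance of each factor

      composite = weights @ membership                # weighted fuzzy evaluation
      prediction = damage_levels[int(np.argmax(composite))]
      print(dict(zip(damage_levels, np.round(composite, 2))), "->", prediction)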

  15. GIS-BASED PREDICTION OF HURRICANE FLOOD INUNDATION

    SciTech Connect (OSTI)

    JUDI, DAVID; KALYANAPU, ALFRED; MCPHERSON, TIMOTHY; BERSCHEID, ALAN

    2007-01-17

    A simulation environment is being developed for the prediction and analysis of the inundation consequences for infrastructure systems from extreme flood events. This decision support architecture includes a GIS-based environment for model input development, simulation integration tools for meteorological, hydrologic, and infrastructure system models and damage assessment tools for infrastructure systems. The GIS-based environment processes digital elevation models (30-m from the USGS), land use/cover (30-m NLCD), stream networks from the National Hydrography Dataset (NHD) and soils data from the NRCS (STATSGO) to create stream network, subbasins, and cross-section shapefiles for drainage basins selected for analysis. Rainfall predictions are made by a numerical weather model and ingested in gridded format into the simulation environment. Runoff hydrographs are estimated using Green-Ampt infiltration excess runoff prediction and a 1D diffusive wave overland flow routing approach. The hydrographs are fed into the stream network and integrated in a dynamic wave routing module using the EPA's Storm Water Management Model (SWMM) to predict flood depth. The flood depths are then transformed into inundation maps and exported for damage assessment. Hydrologic/hydraulic results are presented for Tropical Storm Allison.
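
    A minimal sketch of the infiltration-excess runoff step named above, with assumed soil parameters rather than the STATSGO-derived values: Green-Ampt infiltration capacity f = K(1 + psi*dtheta/F) is evaluated each time step, and rainfall in excess of that capacity becomes runoff.

      import numpy as np

      K = 0.65          # saturated hydraulic conductivity (cm/h), assumed
      psi = 11.0        # wetting-front suction head (cm), assumed
      dtheta = 0.25     # soil moisture deficit (-), assumed
      dt = 0.1          # time step (h)
      # a simple 6-hour design hyetograph (cm/h), illustrative only
      rain = np.interp(np.arange(0.0, 6.0, dt), [0, 1, 2, 3, 6], [0.5, 3.0, 5.0, 1.0, 0.2])

      F = 1e-3          # cumulative infiltration (cm); small seed avoids division by zero
      runoff = []
      for i_rate in rain:
          f_cap = K * (1.0 + psi * dtheta / F)      # current infiltration capacity (cm/h)
          f_act = min(i_rate, f_cap)                # cannot infiltrate more than it rains
          F += f_act * dt
          runoff.append(max(i_rate - f_act, 0.0))   # infiltration-excess runoff (cm/h)

      print(f"total rainfall = {np.sum(rain) * dt:.2f} cm, total runoff = {np.sum(runoff) * dt:.2f} cm")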

  16. Illustrating the future prediction of performance based on computer...

    Office of Scientific and Technical Information (OSTI)

    Illustrating the future prediction of performance based on computer code, physical ... Citation Details In-Document Search Title: Illustrating the future prediction of ...

  17. LLNL-TR-411072: A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model. A. E. Koniges, N. D. Masters, A. C. Fisher, R. W. Anderson, D. C. Eder, D. Benson, T. B. Kaiser, B. T. Gunney, P. Wang, B. R. Maddox, J. F. Hansen, D. H. Kalantar, P. Dixit, H. Jarmakani, M. A. Meyers. March 5, 2009.

  18. New model accurately predicts reformate composition

    SciTech Connect (OSTI)

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, has led to rapid evolution of the process, including reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and the revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  19. An Anisotropic Hardening Model for Springback Prediction

    SciTech Connect (OSTI)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-05

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closures panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbead during sheet metal forming process. This model accounts for material anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  20. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    SciTech Connect (OSTI)

    Edeling, W.N.; Cinnella, P.; Dwight, R.P.

    2014-10-15

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
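
    A minimal sketch of the averaging step only (synthetic numbers, not the paper's boundary-layer calibrations): per-model, per-scenario posterior predictions of the QoI, summarized here by a mean and variance, are collated into one stochastic estimate using model probabilities and sensor-based scenario weights.

      import numpy as np

      # rows: closure models; columns: calibration scenarios (all values illustrative)
      qoi_mean = np.array([[0.90, 0.95, 1.02],
                           [1.10, 1.05, 1.00],
                           [0.98, 1.00, 0.97]])
      qoi_var = np.full_like(qoi_mean, 0.02 ** 2)

      p_model = np.array([0.5, 0.2, 0.3])         # posterior model probabilities (illustrative)
      w_scenario = np.array([0.6, 0.3, 0.1])      # scenario weights from the similarity sensor (illustrative)
      w = np.outer(p_model, w_scenario)           # joint weights, summing to 1

      bmsa_mean = np.sum(w * qoi_mean)
      # law of total variance: within-component variance plus spread of the component means
      bmsa_var = np.sum(w * qoi_var) + np.sum(w * (qoi_mean - bmsa_mean) ** 2)
      print(f"BMSA estimate of the QoI: {bmsa_mean:.3f} +/- {np.sqrt(bmsa_var):.3f}")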

  1. Stimulation Prediction Models | Open Energy Information

    Open Energy Info (EERE)

    Predictive Simulator for Enhanced Geothermal Systems California Science Applications International Corporation Recovery Act: Enhanced Geothermal Systems Component Research and...

  2. Roadmap Toward a Predictive Performance-based Commercial Energy Code

    SciTech Connect (OSTI)

    Rosenberg, Michael I.; Hart, Philip R.

    2014-10-01

    Energy codes have provided significant increases in building efficiency over the last 38 years, since the first national energy model code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. The current focus on prescriptive codes has limitations including significant variation in actual energy performance depending on which prescriptive options are chosen, a lack of flexibility for designers and developers, and the inability to handle control optimization that is specific to building type and use. This paper provides a high level review of different options for energy codes, including prescriptive, prescriptive packages, EUI Target, outcome-based, and predictive performance approaches. This paper also explores a next generation commercial energy code approach that places a greater emphasis on performance-based criteria. A vision is outlined to serve as a roadmap for future commercial code development. That vision is based on code development being led by a specific approach to predictive energy performance combined with building specific prescriptive packages that are designed to be both cost-effective and to achieve a desired level of performance. Compliance with this new approach can be achieved by either meeting the performance target as demonstrated by whole building energy modeling, or by choosing one of the prescriptive packages.

  3. A Piezoelectric Sensor Based Smart-Die Structure for Predicting...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Piezoelectric Sensor Based Smart-Die Structure for Predicting the Onset of Failure During Die Casting Operations Colorado School of Mines Contact CSM About This Technology...

  4. Prediction of interest rate using CKLS model with stochastic parameters

    SciTech Connect (OSTI)

    Ying, Khor Chia; Hin, Pooi Ah

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing the spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated by the j-th window, which is defined as the set consisting of the observed interest rates at the j′-th time point where j ≀ j′ ≀ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), 
, φ^(j−1) and the interest rate r_(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_(j+n+1) of the interest rate at the next time point when the value r_(j+n) of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_(j+n+d) at the next d-th (d ≄ 2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have better ability of covering the observed future interest rates when compared with those based on the model with fixed parameters.
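
    A minimal sketch of the underlying CKLS short-rate dynamics with fixed, illustrative parameters (not the paper's stochastic-parameter extension or its power-normal prediction intervals): Euler-Maruyama simulation of dr = (alpha + beta*r) dt + sigma * r^gamma dW, with an empirical prediction interval read off the simulated paths.

      import numpy as np

      alpha, beta, sigma, gamma = 0.04, -0.5, 0.3, 1.2   # illustrative CKLS parameters
      r0, dt, n_paths = 0.05, 1.0 / 252.0, 20000         # current rate, daily step, sample paths
      rng = np.random.default_rng(5)

      r = np.full(n_paths, r0)
      for _ in range(5):                                 # simulate d = 5 steps ahead
          dW = rng.normal(0.0, np.sqrt(dt), n_paths)
          r = r + (alpha + beta * r) * dt + sigma * np.maximum(r, 0.0) ** gamma * dW
          r = np.maximum(r, 0.0)                         # keep the simulated rate non-negative

      lo, hi = np.percentile(r, [2.5, 97.5])
      print(f"95% prediction interval for r after 5 steps: [{lo:.4f}, {hi:.4f}]")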

  5. THE EFFECT OF UNCERTAINTY IN MODELING COEFFICIENTS USED TO PREDICT...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    UNCERTAINTY IN MODELING COEFFICIENTS USED TO PREDICT ENERGY PRODUCTION USING THE SANDIA ARRAY ... relating voltage and current to solar irradiance, for crystalline silicon modules. ...

  6. Predictive Models of Li-ion Battery Lifetime (Presentation) Smith...

    Office of Scientific and Technical Information (OSTI)

    Predictive Models of Li-ion Battery Lifetime (Presentation) Smith, K.; Wood, E.; Santhanagopalan, S.; Kim, G.; Shi, Y.; Pesaran, A. 25 ENERGY STORAGE; 33 ADVANCED PROPULSION...

  7. A predictive standard model for heavy electron systems

    SciTech Connect (OSTI)

    Yang, Yifeng; Curro, N J; Fisk, Z; Pines, D

    2010-01-01

    We propose a predictive standard model for heavy electron systems based on a detailed phenomenological two-fluid description of existing experimental data. It leads to a new phase diagram that replaces the Doniach picture, describes the emergent anomalous scaling behavior of the heavy electron (Kondo) liquid measured below the lattice coherence temperature, T*, seen by many different experimental probes, that marks the onset of collective hybridization, and enables one to obtain important information on quantum criticality and the superconducting/antiferromagnetic states at low temperatures. Because T* is approximately JÂČρ/2, the nearest-neighbor RKKY interaction, a knowledge of the single-ion Kondo coupling, J, to the background conduction electron density of states, ρ, makes it possible to predict Kondo liquid behavior, and to estimate its maximum superconducting transition temperature in both existing and newly discovered heavy electron families.

  8. Model-based tomographic reconstruction

    DOE Patents [OSTI]

    Chambers, David H.; Lehman, Sean K.; Goodman, Dennis M.

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.
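
    A generic sketch of the "minimize the mean-square error between the predicted signal and the data" step, not the patented layer-stripping and stacking processing: a wall reflector's position is estimated from noisy round-trip delays measured at several known array positions by nonlinear least squares against a simple two-way travel-time model. The geometry, noise level, and point-reflector simplification are assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      c = 0.2998                                   # propagation speed, m/ns (free space)
      array_x = np.linspace(-1.0, 1.0, 9)          # antenna positions along the array (m)
      wall_true = np.array([0.3, 4.0])             # hypothetical wall point (x, y), metres

      def predicted_delays(params, xs):
          wx, wy = params
          return 2.0 * np.sqrt((xs - wx) ** 2 + wy ** 2) / c   # two-way delay (ns)

      rng = np.random.default_rng(6)
      measured = predicted_delays(wall_true, array_x) + rng.normal(0.0, 0.05, array_x.size)

      fit = least_squares(lambda p: predicted_delays(p, array_x) - measured, x0=[0.0, 3.0])
      print("estimated wall point (x, y) =", np.round(fit.x, 3), "m")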

  9. Demonstrating the improvement of predictive maturity of a computational model

    SciTech Connect (OSTI)

    Hemez, Francois M; Unal, Cetin; Atamturktur, Huriye S

    2010-01-01

    We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to Beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.

  10. Predicting laser weld reliability with stochastic reduced-order models. Predicting laser weld reliability

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Emery, John M.; Field, Richard V.; Foulk, James W.; Karlson, Kyle N.; Grigoriu, Mircea D.

    2015-05-26

    Laser welds are prevalent in complex engineering systems and they frequently govern failure. The weld process often results in partial penetration of the base metals, leaving sharp crack-like features with a high degree of variability in the geometry and material properties of the welded structure. Furthermore, accurate finite element predictions of the structural reliability of components containing laser welds requires the analysis of a large number of finite element meshes with very fine spatial resolution, where each mesh has different geometry and/or material properties in the welded region to address variability. We found that traditional modeling approaches could not be efficiently employed. Consequently, a method is presented for constructing a surrogate model, based on stochastic reduced-order models, and is proposed to represent the laser welds within the component. Here, the uncertainty in weld microstructure and geometry is captured by calibrating plasticity parameters to experimental observations of necking as, because of the ductility of the welds, necking – and thus peak load – plays the pivotal role in structural failure. The proposed method is exercised for a simplified verification problem and compared with the traditional Monte Carlo simulation with rather remarkable results.

  11. Predicting laser weld reliability with stochastic reduced-order models

    SciTech Connect (OSTI)

    Emery, John M.; Field, Richard V.; Foulk, James W.; Karlson, Kyle N.; Grigoriu, Mircea D.

    2015-05-26

    Laser welds are prevalent in complex engineering systems and they frequently govern failure. The weld process often results in partial penetration of the base metals, leaving sharp crack-like features with a high degree of variability in the geometry and material properties of the welded structure. Furthermore, accurate finite element predictions of the structural reliability of components containing laser welds requires the analysis of a large number of finite element meshes with very fine spatial resolution, where each mesh has different geometry and/or material properties in the welded region to address variability. We found that traditional modeling approaches could not be efficiently employed. Consequently, a method is presented for constructing a surrogate model, based on stochastic reduced-order models, and is proposed to represent the laser welds within the component. Here, the uncertainty in weld microstructure and geometry is captured by calibrating plasticity parameters to experimental observations of necking as, because of the ductility of the welds, necking – and thus peak load – plays the pivotal role in structural failure. The proposed method is exercised for a simplified verification problem and compared with the traditional Monte Carlo simulation with rather remarkable results.

  12. Illustrating the future prediction of performance based on computer code,

    Office of Scientific and Technical Information (OSTI)

    physical experiments, and critical performance parameter samples (Journal Article) | SciTech Connect. In this paper, we present a generic example to illustrate various points

  14. LHC diphoton Higgs signal predicted by little Higgs models

    SciTech Connect (OSTI)

    Wang Lei; Yang Jinmin

    2011-10-01

    Little Higgs theory naturally predicts a light Higgs boson whose most important discovery channel at the LHC is the diphoton signal pp{yields}h{yields}{gamma}{gamma}. In this work, we perform a comparative study for this signal in some typical little Higgs models, namely, the littlest Higgs model, two littlest Higgs models with T-parity (named LHT-I and LHT-II), and the simplest little Higgs models. We find that compared with the standard model prediction, the diphoton signal rate is always suppressed and the extent of the suppression can be quite different for different models. The suppression is mild (< or approx. 10%) in the littlest Higgs model but can be quite severe ({approx_equal}90%) in the other three models. This means that discovering the light Higgs boson predicted by the little Higgs theory through the diphoton channel at the LHC will be more difficult than discovering the standard model Higgs boson.

  15. Dynamic model predicts well bore surge and swab pressures

    SciTech Connect (OSTI)

    Bing, Z.; Kaiji, Z.

    1996-12-30

    A dynamic well control model predicts surge and swab pressures more accurately than a steady-state model, thereby providing better estimates of pressure fluctuations when pipe is tripped. Pressure fluctuations from tripping pipe into a well can contribute to lost circulation, kicks, and well control problems. This dynamic method of predicting surge and swab pressures was verified in a full-scale test well in the Zhong Yuan oil field in China. Both the dynamic model and the steady-state model were checked against the test data. The test data showed the dynamic model can correctly predict downhole pressures from running or pulling pipe in a well; steady-state models may result in relatively large prediction errors, especially in deeper wells.

  16. Comparison of Uncertainty of Two Precipitation Prediction Models at Los

    Office of Scientific and Technical Information (OSTI)

    Alamos National Lab Technical Area 54 (Technical Report) | SciTech Connect. Meteorological inputs are an important part of subsurface flow and transport modeling. The choice of source for meteorological data used as inputs has

  17. A predictive ocean oil spill model

    SciTech Connect (OSTI)

    Sanderson, J.; Barnette, D.; Papodopoulos, P.; Schaudt, K.; Szabo, D.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). Initially, the project focused on creating an ocean oil spill model and working with the major oil companies to compare their data with the Los Alamos global ocean model. As a result of this initial effort, Los Alamos worked closely with the Eddy Joint Industry Project (EJIP), a consortium of oil and gas producing companies in the US. The central theme of the project was to use output produced from LANL's global ocean model to look in detail at ocean currents in selected geographic areas of the world of interest to consortium members. Once ocean currents are well understood, this information could be used to create oil spill models, improve offshore exploration and drilling equipment, and aid in the design of semi-permanent offshore production platforms.

  18. SimTable helps firefighters model and predict fire direction

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    In 2009, SimTable received $100,000 from the LANS Venture Acceleration Fund to improve the user interface and seed firefighting academies with customized setups. April 3, 2012. Stephen Guerin (L) and Chip Garner (R) with SimTable, a Santa Fe company helping firefighters model and predict where a fire is most likely to spread, received support

  19. Project Profile: Predictive Physico-Chemical Modeling of Intrinsic Degradation Mechanisms for Advanced Reflector Materials

    Broader source: Energy.gov [DOE]

    NREL, under the Physics of Reliability: Evaluating Design Insights for Component Technologies in Solar (PREDICTS) program, will develop a physics-based computational degradation model to assess kinetic oxidation rates, realistic modeling of light attenuation and transport, and multi-layer treatment with variable properties, using simulation-based experimental design.

  20. Using a Simple Binomial Model to Assess Improvement in Predictive...

    Office of Scientific and Technical Information (OSTI)

    simulation codes and uses a simple binomial model for the probability, theta, that, in an experiment chosen at random, the new code will provide a better prediction than the old. ...
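
    The snippet above is truncated, but the binomial setup it describes can be illustrated directly: theta is the probability that, in a randomly chosen experiment, the new code out-predicts the old. A minimal sketch with invented counts (not data from the report) follows.

```python
from scipy import stats

# Invented data: in n paired experiments the new code gave the better
# prediction k times.
n, k = 30, 21

theta_hat = k / n  # point estimate of theta

# Exact (Clopper-Pearson) 95% confidence interval for theta
alpha = 0.05
lower = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
upper = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0

# One-sided test of "no improvement" (theta <= 0.5): P(X >= k | theta = 0.5)
p_value = stats.binom.sf(k - 1, n, 0.5)

print(f"theta_hat = {theta_hat:.2f}, 95% CI = ({lower:.2f}, {upper:.2f}), p = {p_value:.3f}")
```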

  1. Predictive models of circulating fluidized bed combustors

    SciTech Connect (OSTI)

    Gidaspow, D.

    1992-07-01

    Steady flows influenced by walls cannot be described by inviscid models. Flows in circulating fluidized beds have significant wall effects. Particles in the form of clusters or layers can be seen to run down the walls. Hence modeling of circulating fluidized beds (CFB) without a viscosity is not possible. However, in interpreting Equations (8-1) and (8-2) it must be kept in mind that CFB or most other two phase flows are never in a true steady state. Then the viscosity in Equations (8-1) and (8-2) may not be the true fluid viscosity to be discussed next, but an eddy-type viscosity caused by two phase flow oscillations usually referred to as turbulence. In view of the transient nature of two-phase flow, the drag and the boundary layer thickness may not be proportional to the square root of the intrinsic viscosity but depend upon it to a much smaller extent. As another example, in liquid-solid flow and settling of colloidal particles in a lamella electrosettler, the settling process is only moderately affected by viscosity. Inviscid flow with settling is a good first approximation to this electric field driven process. The physical meaning of the particulate phase viscosity is described in detail in the chapter on kinetic theory. Here the conventional derivation presented in single phase fluid mechanics is generalized to multiphase flow.

  2. Simplified Protein Models: Predicting Folding Pathways and Structure Using

    Office of Scientific and Technical Information (OSTI)

    Amino Acid Sequences (Journal Article) | DOE PAGES. Authors: Adhikari, Aashish N.; Freed, Karl F.; Sosnick, Tobin R. Publication Date: 2013-07-11. OSTI Identifier: 1103786. Type: Publisher's Accepted Manuscript. Journal Name: Physical Review Letters.

  3. LIFETIME PREDICTION FOR MODEL 9975 O-RINGS IN KAMS

    SciTech Connect (OSTI)

    Hoffman, E.; Skidmore, E.

    2009-11-24

    The Savannah River Site (SRS) is currently storing plutonium materials in the K-Area Materials Storage (KAMS) facility. The materials are packaged per the DOE 3013 Standard and transported and stored in KAMS in Model 9975 shipping packages, which include double containment vessels sealed with dual O-rings made of Parker Seals compound V0835-75 (based on Viton{reg_sign} GLT). The outer O-ring of each containment vessel is credited for leaktight containment per ANSI N14.5. O-ring service life depends on many factors, including the failure criterion, environmental conditions, overall design, fabrication quality and assembly practices. A preliminary life prediction model has been developed for the V0835-75 O-rings in KAMS. The conservative model is based primarily on long-term compression stress relaxation (CSR) experiments and Arrhenius accelerated-aging methodology. For model development purposes, seal lifetime is defined as a 90% loss of measurable sealing force. Thus far, CSR experiments have only reached this target level of degradation at temperatures {ge} 300 F. At lower temperatures, relaxation values are more tolerable. Using time-temperature superposition principles, the conservative model predicts a service life of approximately 20-25 years at a constant seal temperature of 175 F. This represents a maximum payload package at a constant ambient temperature of 104 F, the highest recorded in KAMS to date. This is considered a highly conservative value as such ambient temperatures are only reached on occasion and for short durations. The presence of fiberboard in the package minimizes the impact of such temperature swings, with many hours to several days required for seal temperatures to respond proportionately. At 85 F ambient, a more realistic but still conservative value, bounding seal temperatures are reduced to {approx}158 F, with an estimated seal lifetime of {approx}35-45 years. The actual service life for O-rings in a maximum wattage package likely lies higher than the estimates due to the conservative assumptions used for the model. For lower heat loads at similar ambient temperatures, seal lifetime is further increased. The preliminary model is based on several assumptions that require validation with additional experiments and longer exposures at more realistic conditions. The assumption of constant exposure at peak temperature is believed to be conservative. Cumulative damage at more realistic conditions will likely be less severe but is more difficult to assess based on available data. Arrhenius aging behavior is expected, but non-Arrhenius behavior is possible. Validation of Arrhenius behavior is ideally determined from longer tests at temperatures closer to actual service conditions. CSR experiments will therefore continue at lower temperatures to validate the model. Ultrasensitive oxygen consumption analysis has been shown to be useful in identifying non-Arrhenius behavior within reasonable test periods. Therefore, additional experiments are recommended and planned to validate the model.
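
    The Arrhenius time-temperature superposition used for these O-ring life estimates can be illustrated with a small calculation. The activation energy and the test-condition life below are invented placeholders, not values from the report; the sketch only shows how a life measured in accelerated compression stress relaxation testing would be scaled to a lower service temperature.

```python
import numpy as np

R = 8.314  # J/(mol*K), gas constant

def fahrenheit_to_kelvin(t_f):
    return (t_f - 32.0) * 5.0 / 9.0 + 273.15

def arrhenius_acceleration(t_service_K, t_test_K, activation_energy_J):
    """Acceleration factor between an accelerated-aging test temperature
    and the service temperature, assuming Arrhenius behavior."""
    return np.exp(activation_energy_J / R * (1.0 / t_service_K - 1.0 / t_test_K))

# Hypothetical inputs: life-to-90%-sealing-force-loss observed at 300 F in CSR
# tests, extrapolated to a 175 F bounding seal temperature.
Ea = 90e3                  # J/mol, assumed activation energy (placeholder)
life_at_test_years = 1.0   # assumed test-condition life (placeholder)

af = arrhenius_acceleration(fahrenheit_to_kelvin(175.0),
                            fahrenheit_to_kelvin(300.0), Ea)
print(f"acceleration factor ~ {af:.1f}, "
      f"extrapolated service life ~ {life_at_test_years * af:.0f} years")
```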

  4. Project Profile: Predictive Physico-Chemical Modeling of Intrinsic...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    will be developing a physics-based computational degradation model to assess the ... conditions, material properties and device geometry into lifetime performance/cost models. ...

  5. The selection of turbulence models for prediction of room airflow

    SciTech Connect (OSTI)

    Nielsen, P.V.

    1998-10-01

    The airflow in buildings involves a combination of many different flow elements. It is, therefore, difficult to find an adequate, all-round turbulence model covering all aspects. Consequently, it is appropriate and economical to choose turbulence models according to the situation that is to be predicted. This paper discusses the use of different turbulence models and their advantages in given situations. As an example, it is shown that a simple zero-equation model can be used for the prediction of special situations such as flow with a low level of turbulence. A zero-equation model with compensation for room dimensions and velocity level is also discussed. A {kappa}-{epsilon} model expanded by damping functions is used to improve the prediction of the flow in a room ventilated by displacement ventilation. The damping functions especially take into account the turbulence level and the vertical temperature gradient. Low Reynolds number models (LRN models) are used to improve the prediction of evaporation-controlled emissions from building material, which is shown by an example. Finally, large eddy simulation (LES) of room airflow is discussed and demonstrated.

  6. A comparison of Unified creep-plasticity and conventional creep models for rock salt based on predictions of creep behavior measured in several in situ and bench-scale experiments

    SciTech Connect (OSTI)

    Morgan, H.S.; Krieg, R.D.

    1988-04-01

    A unified creep-plasticity (UCP) model, a conventional elastic-secondary creep (ESC) model, and an elastic-secondary creep model with greatly reduced elastic moduli (RESC model) are used to compute creep responses for five experimental configurations in which rock salt is subjected to several different complex loadings. The UCP model is exercised with three sets of model parameters. Two sets are for salt from the site of the Waste Isolation Pilot Plant (WIPP) in southeastern New Mexico, and the third is for salt from Avery Island, Louisiana. The WIPP reference secondary creep parameters are used in both the ESC and RESC models. The WIPP reference values for the elastic moduli are also used in the ESC model. These moduli are divided by 12.5 in the RESC model. The geometrical configurations include the South Drift at the WIPP site, a hypothetical shaft in rock salt, a large hollow cylinder of rock salt subjected to external pressure while still in the floor of a drift at Avery Island, Louisiana, a laboratory-scale hollow cylinder subjected to external pressure, and a model pillar of salt subjected to axial load. Measured creep responses are available for all of these experiments except the hypothetical shaft. In all cases, deformations computed with the UCP model are much larger than the ESC predictions and are in better agreement with the data. The RESC model also produces larger deformations than the ESC model, and for the South Drift, the RESC predictions agree well with measured closures. 46 refs., 19 figs., 2 tabs.

  7. Estimating vehicle roadside encroachment frequency using accident prediction models

    SciTech Connect (OSTI)

    Miaou, S.-P.

    1996-07-01

    The existing data to support the development of roadside encroachment-based accident models are extremely limited and largely outdated. Under the sponsorship of the Federal Highway Administration and Transportation Research Board, several roadside safety projects have attempted to address this issue by providing rather comprehensive data collection plans and conducting pilot data collection efforts. It is clear from the results of these studies that the required field data collection efforts will be expensive. Furthermore, the validity of any field-collected encroachment data may be questionable because of the technical difficulty of distinguishing intentional from unintentional encroachments. This paper proposes an alternative method for estimating basic roadside encroachment data without actually collecting them in the field. The method is developed by exploring the probabilistic relationships between a roadside encroachment event and a run-off-the-road event. With some mild assumptions, the method is capable of providing a wide range of basic encroachment data from conventional accident prediction models. To illustrate the concept and use of such a method, some basic encroachment data are estimated for rural two-lane undivided roads. In addition, the estimated encroachment data are compared with the existing collected data. The illustration shows that the method described in this paper can be a viable approach to estimating basic encroachment data without actually collecting them, which can be very costly.
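
    The paper's identity is not reproduced in the abstract, but the probabilistic relationship it exploits can be sketched: if an accident prediction model supplies the expected run-off-the-road accident frequency and one assumes a conditional probability that an encroachment leads to such an accident, the encroachment frequency follows by division. The relation and both numbers below are illustrative assumptions only.

```python
def encroachment_frequency(ror_accidents_per_mile_year, p_accident_given_encroachment):
    """Estimate roadside encroachment frequency from a run-off-the-road (ROR)
    accident frequency predicted by a conventional accident model, under
    E[ROR accidents] = E[encroachments] * P(accident | encroachment)."""
    return ror_accidents_per_mile_year / p_accident_given_encroachment

# Illustrative values for a rural two-lane undivided road segment (assumed):
predicted_ror = 0.05          # ROR accidents per mile-year from an accident model
p_acc_given_encroach = 0.08   # assumed conditional probability

print(f"estimated encroachments per mile-year: "
      f"{encroachment_frequency(predicted_ror, p_acc_given_encroach):.2f}")
```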

  8. Sandia's ice sheet modeling of Greenland, Antarctica helps predict

    National Nuclear Security Administration (NNSA)

    sea-level rise | National Nuclear Security Administration. Wednesday, March 2, 2016. Sandia California researchers Irina Tezaur and Ray Tuminaro analyze a model of Antarctica. They are part of a Sandia team working to improve the reliability and efficiency of computational models that describe ice sheet behavior and dynamics. The Greenland and Antarctic ice sheets will make a dominant contribution to

  9. Microsoft Word - NRAP-TRS-I-005-2014_Use of Science-Based Prediction...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Use of Science-Based Prediction to Characterize Reservoir Behavior as a Function of ... H.; Zhang, Y.; Guthrie, G. Use of Science-Based Prediction to Characterize ...

  10. Product component genealogy modeling and field-failure prediction

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    King, Caleb; Hong, Yili; Meeker, William Q.

    2016-04-13

    Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.

  11. The origins of computer weather prediction and climate modeling

    SciTech Connect (OSTI)

    Lynch, Peter [Meteorology and Climate Centre, School of Mathematical Sciences, University College Dublin, Belfield (Ireland)], E-mail: Peter.Lynch@ucd.ie

    2008-03-20

    Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.

  12. Principles of models based engineering

    SciTech Connect (OSTI)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  13. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

    SciTech Connect (OSTI)

    Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab; Armstrong, Robert C.; Vanderveen, Keith

    2008-09-01

    The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.

  14. Predictive based monitoring of nuclear plant component degradation using support vector regression

    SciTech Connect (OSTI)

    Agarwal, Vivek; Alamaniotis, Miltiadis; Tsoukalas, Lefteri H.

    2015-02-01

    Nuclear power plants (NPPs) are large installations comprised of many active and passive assets. Degradation monitoring of all these assets is an expensive (labor cost) and highly demanding task. In this paper a framework based on Support Vector Regression (SVR) for online surveillance of critical parameter degradation of NPP components is proposed. In this case, on-time replacement or maintenance of components will prevent potential plant malfunctions and reduce the overall operational cost. In the current work, we apply SVR equipped with a Gaussian kernel function to monitor components. Monitoring includes the one-step-ahead prediction of the component's respective operational quantity using the SVR model, while the SVR model is trained using a set of previously recorded degradation histories of similar components. Predictive capability of the model is evaluated upon arrival of a sensor measurement, which is compared to the component failure threshold. A maintenance decision is based on a fuzzy inference system that utilizes three parameters: (i) the prediction evaluation in the previous steps, (ii) the predicted value of the current step, and (iii) the difference between the current predicted value and the component's failure threshold. The proposed framework will be tested on turbine blade degradation data.
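
    As a rough illustration of this surveillance scheme (not the authors' implementation), a Gaussian-kernel SVR can be trained on lagged values of a degradation signal and used for one-step-ahead prediction, with the prediction compared against a failure threshold. The synthetic signal, hyperparameters, and threshold below are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic degradation history: slow drift plus noise, standing in for
# recorded histories of similar components.
t = np.arange(200)
signal = 0.01 * t + 0.05 * rng.standard_normal(t.size)

# Build lagged features: predict x[k] from the previous `lags` samples.
lags = 5
X = np.column_stack([signal[i:len(signal) - lags + i] for i in range(lags)])
y = signal[lags:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

# One-step-ahead prediction from the most recent window, compared to a
# hypothetical failure threshold.
next_value = model.predict(signal[-lags:].reshape(1, -1))[0]
FAILURE_THRESHOLD = 2.2  # assumed
print(f"predicted next value: {next_value:.3f}, "
      f"exceeds threshold: {next_value > FAILURE_THRESHOLD}")
```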

  15. NREL: Transportation Research - NREL's Battery Life Predictive Model Helps

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Companies Take Charge. October 26, 2015. A series of batteries hooked together next to a monitor: an example of a stationary, grid-connected battery from the NREL project with Erigo/EaglePicher Technologies, LLC. Inverters and nickel cadmium batteries inside a utility-scale 300 kW battery storage system will support Department of Defense micro-grids. Photo by Dennis Schroeder / NREL. Companies that rely on

  16. Parametric Adaptive Model Based Diagnostics

    Broader source: Energy.gov [DOE]

    A model-based adaptive, robust technology is presented for on-board diagnostics of failure of diesel engine emission control devices and ethanol estimation of flex-fuel vehicles.

  17. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    SciTech Connect (OSTI)

    Shiraishi, Satomi; Moore, Kevin L.; Tan, Jun; Olsen, Lindsey A.

    2015-02-15

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V{sub 10Gy} (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through the initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, ΔQM = QM{sub clin} − QM{sub pred}, and a coefficient of determination, R{sup 2}. For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are stratified based on proximity to OARs and their PTV volume sizes. Volumes are categorized into small (V{sub PTV} < 2 cm{sup 3}), medium (2 cm{sup 3} < V{sub PTV} < 25 cm{sup 3}), and large (25 cm{sup 3} < V{sub PTV}). The unfiltered models demonstrate the ability to predict GMs to ~1 mm and fractional brain V{sub 10Gy} to ~25% for plans with large V{sub PTV} and critical OAR involvements. Increased accuracy and precision of QM predictions are obtained when high quality plans are selected for the model training. For the small and medium V{sub PTV} plans without critical OAR involvement, predictive ability was evaluated using the refined model. For training plans, the model predicted GM to an accuracy of 0.2 ± 0.3 mm and fractional brain V{sub 10Gy} to 0.04 ± 0.12, suggesting highly accurate predictive ability. For excluded plans, the average ΔGM was 1.1 mm and fractional brain V{sub 10Gy} was 0.20. These ΔQM are significantly greater than those of the model training plans (p < 0.001). For CI, predictions are close to clinical values and no significant difference was observed between the training and excluded plans (p = 0.19). Twenty outliers with ΔGM > 1.35 mm were identified as potentially suboptimal, and replanning these cases using predicted target objectives demonstrates significant improvements on QMs: on average, 1.1 mm reduction in GM (p < 0.001) and 23% reduction in brain V{sub 10Gy} (p < 0.001). After replanning, the difference of the ΔGM distribution between the 20 replans and the refined model training plans was marginal.
Conclusions: The results demonstrate the ability to predict SRS QMs precisely and to identify suboptimal plans. Furthermore, the knowledge-based DVH predictions were directly used as target optimization objectives and allowed a standardized planning process that bettered the clinically approved plans. Full clinical application of this methodology can improve consistency of SRS plan quality in a wide range of PTV volume and proximity to OARs and facilitate automated treatment planning for this critical treatment site.
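
    The gradient measure and conformity index referenced above have several conventions in the SRS literature; the helper below sketches one common choice (GM as the difference of effective radii of the half-prescription and prescription isodose volumes, CI as the ratio of prescription isodose volume to target volume), which may differ in detail from the definitions used in this study. The plan volumes are invented.

```python
import numpy as np

def effective_radius(volume_cc):
    """Radius (cm) of a sphere with the given volume (cm^3)."""
    return (3.0 * volume_cc / (4.0 * np.pi)) ** (1.0 / 3.0)

def gradient_measure(v_half_prescription_cc, v_prescription_cc):
    """GM = r_eff(50% isodose volume) - r_eff(100% isodose volume), in cm."""
    return effective_radius(v_half_prescription_cc) - effective_radius(v_prescription_cc)

def conformity_index(v_prescription_cc, target_volume_cc):
    """Simple (RTOG-style) CI = prescription isodose volume / target volume."""
    return v_prescription_cc / target_volume_cc

# Illustrative plan values (assumed, in cm^3)
v100, v50, tv = 3.1, 12.0, 2.8
print(f"GM = {gradient_measure(v50, v100):.2f} cm, CI = {conformity_index(v100, tv):.2f}")
```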

  18. An Energy Based Fatigue Life Prediction Framework for In-Service Structural Components

    SciTech Connect (OSTI)

    H. Ozaltun; M. H.H. Shen; T. George; C. Cross

    2011-06-01

    An energy based fatigue life prediction framework has been developed for calculation of the remaining fatigue life of in-service gas turbine materials. The purpose of the life prediction framework is to account for the aging effect caused by cyclic loadings on the fatigue strength of gas turbine engine structural components, which are usually designed for very long life. Previous studies indicate the total strain energy dissipated during a monotonic fracture process and a cyclic process is a material property that can be determined by measuring the area underneath the monotonic true stress-strain curve and the sum of the area within each hysteresis loop in the cyclic process, respectively. The energy-based fatigue life prediction framework consists of the following entities: (1) development of a testing procedure to achieve plastic energy dissipation per life cycle and (2) incorporation of an energy-based fatigue life calculation scheme to determine the remaining fatigue life of in-service gas turbine materials. The accuracy of the remaining fatigue life prediction method was verified by comparison between model approximation and experimental results for Aluminum 6061-T6. The comparison shows promising agreement, thus validating the capability of the framework to produce accurate fatigue life predictions.
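
    The framework's central idea, that total dissipated strain energy acts as a material constant, suggests a simple cycle-count estimate: divide the monotonic fracture energy (area under the true stress-strain curve) by the plastic energy dissipated per cycle (hysteresis loop area). The sketch below is a hedged reading of that idea with invented numbers, not the authors' calibrated scheme.

```python
import numpy as np

def area_under_curve(strain, stress):
    """Energy density (area under a stress-strain curve) via the trapezoid rule."""
    return np.trapz(stress, strain)

def cycles_to_failure(monotonic_energy, energy_per_cycle):
    """Energy-based life estimate: N_f ~ W_monotonic / W_cycle."""
    return monotonic_energy / energy_per_cycle

# Invented monotonic true stress-strain curve (MPa vs. strain) and an assumed
# hysteresis-loop energy per cycle for an aluminum-like alloy.
strain = np.linspace(0.0, 0.12, 50)
stress = 300.0 * (1.0 - np.exp(-strain / 0.01))        # toy hardening curve
w_monotonic = area_under_curve(strain, stress)          # MJ/m^3 (MPa * strain)
w_cycle = 0.35                                          # assumed MJ/m^3 per cycle

print(f"W_monotonic ~ {w_monotonic:.1f} MJ/m^3, "
      f"N_f ~ {cycles_to_failure(w_monotonic, w_cycle):,.0f} cycles")
```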

  19. Mining Behavior Based Safety Data to Predict Safety Performance

    SciTech Connect (OSTI)

    Jeffrey C. Joe

    2010-06-01

    The Idaho National Laboratory (INL) operates a behavior based safety program called Safety Observations Achieve Results (SOAR). This peer-to-peer observation program encourages employees to perform in-field observations of each other's work practices and habits (i.e., behaviors). The underlying premise of conducting these observations is that more serious accidents are prevented from occurring because lower level “at risk” behaviors are identified and corrected before they can propagate into culturally accepted “unsafe” behaviors that result in injuries or fatalities. Although the approach increases employee involvement in safety, the premise of the program has not been subject to sufficient empirical evaluation. The INL now has a significant amount of SOAR data on these lower level “at risk” behaviors. This paper describes the use of data mining techniques to analyze these data to determine whether they can predict if and when a more serious accident will occur.

  20. Model Predictive Control of Integrated Gasification Combined Cycle Power Plants

    SciTech Connect (OSTI)

    B. Wayne Bequette; Priyadarshi Mahapatra

    2010-08-31

    The primary project objectives were to understand how the process design of an integrated gasification combined cycle (IGCC) power plant affects the dynamic operability and controllability of the process. Steady-state and dynamic simulation models were developed to predict the process behavior during typical transients that occur in plant operation. Advanced control strategies were developed to improve the ability of the process to follow changes in the power load demand, and to improve performance during transitions between power levels. Another objective of the proposed work was to educate graduate and undergraduate students in the application of process systems and control to coal technology. Educational materials were developed for use in engineering courses to further broaden this exposure to many students. ASPENTECH software was used to perform steady-state and dynamic simulations of an IGCC power plant. Linear systems analysis techniques were used to assess the steady-state and dynamic operability of the power plant under various plant operating conditions. Model predictive control (MPC) strategies were developed to improve the dynamic operation of the power plants. MATLAB and SIMULINK software were used for systems analysis and control system design, and the SIMULINK functionality in ASPEN DYNAMICS was used to test the control strategies on the simulated process. Project funds were used to support a Ph.D. student to receive education and training in coal technology and the application of modeling and simulation techniques.

  1. Predictive modeling of reactive wetting and metal joining.

    SciTech Connect (OSTI)

    van Swol, Frank B.

    2013-09-01

    The performance, reproducibility and reliability of metal joints are complex functions of the detailed history of physical processes involved in their creation. Prediction and control of these processes constitutes an intrinsically challenging multi-physics problem involving heating and melting a metal alloy and reactive wetting. Understanding this process requires coupling strong molecular-scale chemistry at the interface with microscopic (diffusion) and macroscopic mass transport (flow) inside the liquid followed by subsequent cooling and solidification of the new metal mixture. The final joint displays compositional heterogeneity and its resulting microstructure largely determines the success or failure of the entire component. At present there exists no computational tool at Sandia that can predict the formation and success of a braze joint, as current capabilities lack the ability to capture surface/interface reactions and their effect on interface properties. This situation precludes us from implementing a proactive strategy to deal with joining problems. Here, we describe what is needed to arrive at a predictive modeling and simulation capability for multicomponent metals with complicated phase diagrams for melting and solidification, incorporating dissolutive and composition-dependent wetting.

  2. Statistical model selection for better prediction and discovering science mechanisms that affect reliability

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.

    2015-08-19

    Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs, then second organizing data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. As a result, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.
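
    Steps three and four of this process, fitting models over combinations of candidate inputs and comparing them with flexible metrics, can be illustrated with ordinary least squares and AIC. The synthetic age/usage/exposure data and the choice of metric are assumptions, not the authors' analysis.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stockpile data: candidate inputs and a reliability summary.
n = 120
data = {
    "age": rng.uniform(0, 30, n),
    "usage": rng.uniform(0, 1000, n),
    "exposure": rng.uniform(0, 1, n),
}
reliability = (0.98 - 0.004 * data["age"] - 0.02 * data["exposure"]
               + 0.01 * rng.standard_normal(n))

def fit_ols_aic(inputs):
    """Fit reliability ~ selected inputs by least squares and return AIC."""
    X = np.column_stack([np.ones(n)] + [data[name] for name in inputs])
    beta, *_ = np.linalg.lstsq(X, reliability, rcond=None)
    resid = reliability - X @ beta
    sigma2 = np.mean(resid ** 2)
    k = X.shape[1] + 1                      # coefficients + noise variance
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return 2 * k - 2 * log_lik

# Compare every non-empty combination of candidate inputs.
candidates = list(data)
ranked = sorted(
    (fit_ols_aic(combo), combo)
    for r in range(1, len(candidates) + 1)
    for combo in itertools.combinations(candidates, r)
)
for aic, combo in ranked:
    print(f"AIC {aic:8.1f}  inputs: {combo}")
```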

  3. Statistical model selection for better prediction and discovering science mechanisms that affect reliability

    SciTech Connect (OSTI)

    Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.

    2015-08-19

    Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs, then second organizing data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. As a result, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.

  4. Collaborative Research. Separating Forced and Unforced Decadal Predictability in Models and Observations

    SciTech Connect (OSTI)

    DelSole, Timothy

    2015-08-31

    The purpose of the proposed research was to identify unforced predictable components on decadal time scales, distinguish these components from forced predictable components, and to assess the reliability of model predictions of these components. The question of whether anthropogenic forcing changes decadal predictability, or gives rise to new forms of decadal predictability, also will be

  5. Development of a land ice core for the Model for Prediction Across...

    Office of Scientific and Technical Information (OSTI)

    for the Model for Prediction Across Scales (MPAS) Citation Details In-Document Search Title: Development of a land ice core for the Model for Prediction Across Scales (MPAS) No ...

  6. Development of a land ice core for the Model for Prediction Across...

    Office of Scientific and Technical Information (OSTI)

    for the Model for Prediction Across Scales (MPAS) Citation Details In-Document Search Title: Development of a land ice core for the Model for Prediction Across Scales (MPAS) ...

  7. Optimal Control of Distributed Energy Resources using Model Predictive Control

    SciTech Connect (OSTI)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Elizondo, Marcelo A.; Zhang, Wei; Lu, Shuai; Samaan, Nader A.; Butler-Purry, Karen

    2012-07-22

    In an isolated power system (rural microgrid), Distributed Energy Resources (DERs) such as renewable energy resources (wind, solar), energy storage and demand response can be used to complement fossil fueled generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation. The problem is formulated as a multi-objective optimization problem with the goals of minimizing fuel costs and changes in power output of diesel generators, minimizing costs associated with low battery life of energy storage and maintaining system frequency at the nominal operating value. Two control modes are considered for controlling the energy storage to compensate either net load variability or wind variability. Model predictive control (MPC) is used to solve the aforementioned problem and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies using high and low wind profiles, as well as different MPC prediction horizons, demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties in wind and demand.

  8. RESIDUA UPGRADING EFFICIENCY IMPROVEMENT MODELS: COKE FORMATION PREDICTABILITY MAPS

    SciTech Connect (OSTI)

    John F. Schabron; A. Troy Pauli; Joseph F. Rovani Jr.

    2002-05-01

    The dispersed particle solution model of petroleum residua structure was used to develop predictors for pyrolytic coke formation. Coking Indexes were developed in prior years that measure how near a pyrolysis system is to coke formation during the coke formation induction period. These have been demonstrated to be universally applicable for residua regardless of the source of the material. Coking onset is coincidental with the destruction of the ordered structure and the formation of a multiphase system. The amount of coke initially formed appears to be a function of the free solvent volume of the original residua. In the current work, three-dimensional coke make predictability maps were developed at 400 C, 450 C, and 500 C (752 F, 842 F, and 932 F). These relate residence time and free solvent volume to the amount of coke formed at a particular pyrolysis temperature. Activation energies for two apparent types of zero-order coke formation reactions were estimated. The results provide a new tool for ranking residua, gauging proximity to coke formation, and predicting initial coke make tendencies.

  9. Adaptive model predictive process control using neural networks

    DOE Patents [OSTI]

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.

  10. Adaptive model predictive process control using neural networks

    DOE Patents [OSTI]

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  11. Results from baseline tests of the SPRE I and comparison with code model predictions

    SciTech Connect (OSTI)

    Cairelli, J.E.; Geng, S.M.; Skupinski, R.C.

    1994-09-01

    The Space Power Research Engine (SPRE), a free-piston Stirling engine with linear alternator, is being tested at the NASA Lewis Research Center as part of the Civil Space Technology Initiative (CSTI) as a candidate for high capacity space power. This paper presents results of base-line engine tests at design and off-design operating conditions. The test results are compared with code model predictions.

  12. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    SciTech Connect (OSTI)

    Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.

    2014-06-19

    The aim of this paper is to present the reliability analysis and prediction of mixed mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. The reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring the failure to increase the design life, eliminate or reduce the likelihood of failures and safety risk. The mechanical failures of the crankshaft are due to high bending and torsion stress concentration from high-cycle rotating bending and torsional stress. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsion stress. Most investigations reveal that bending stress is much more severe than torsional stress; therefore the probability criteria for the bending state would be higher compared to the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model were illustrated in the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model has the ability to generate data close to the field data with a minimal percentage of error, and for a practical application the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed mode loading.
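
    A two-state Markov chain of the kind described can be sketched directly; the transition and per-cycle failure probabilities below are invented and simply encode the paper's observation that the bending state is the more damaging one.

```python
import numpy as np

# States: 0 = bending-dominated loading, 1 = torsion-dominated loading.
# Transition matrix rows sum to 1; the values are illustrative assumptions.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Per-cycle failure probability in each state; bending is assumed more
# damaging than torsion, as the abstract suggests.
p_fail = np.array([2e-4, 5e-5])

def survival_curve(P, p_fail, start=np.array([1.0, 0.0]), steps=20000):
    """Probability of surviving each cycle under the two-state load process."""
    state = start.copy()
    reliability = np.empty(steps)
    r = 1.0
    for k in range(steps):
        r *= 1.0 - state @ p_fail   # chance of surviving this cycle
        reliability[k] = r
        state = state @ P           # evolve the loading-state distribution
    return reliability

R = survival_curve(P, p_fail)
print(f"reliability after 10,000 cycles: {R[9999]:.3f}")
```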

  13. Lithium-ion battery cell-level control using constrained model predictive control and equivalent circuit models

    SciTech Connect (OSTI)

    Xavier, MA; Trimboli, MS

    2015-07-01

    This paper introduces a novel application of model predictive control (MPC) to cell-level charging of a lithium-ion battery utilizing an equivalent circuit model of battery dynamics. The approach employs a modified form of the MPC algorithm that caters for direct feed-through signals in order to model near-instantaneous battery ohmic resistance. The implementation utilizes a 2nd-order equivalent circuit discrete-time state-space model based on actual cell parameters; the control methodology is used to compute a fast charging profile that respects input, output, and state constraints. Results show that MPC is well-suited to the dynamics of the battery control problem and further suggest significant performance improvements might be achieved by extending the result to electrochemical models. (C) 2015 Elsevier B.V. All rights reserved.
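
    A condensed sketch of the setup described, not the paper's implementation: a 2nd-order equivalent-circuit model written as a discrete-time state-space system, with a short-horizon constrained charging problem solved as a convex program. All parameter values are invented, and the cvxpy dependency is a convenience choice.

```python
import numpy as np
import cvxpy as cp

# --- 2nd-order equivalent circuit model (invented parameters) ----------------
dt = 1.0                      # s
Q = 2.5 * 3600.0              # cell capacity, As
R0 = 0.015                    # ohmic resistance (direct feed-through term)
R1, C1 = 0.02, 2000.0         # first RC pair
R2, C2 = 0.03, 20000.0        # second RC pair

# States: [SOC, v_RC1, v_RC2]; input: charging current i (A, positive = charge)
A = np.diag([1.0, np.exp(-dt / (R1 * C1)), np.exp(-dt / (R2 * C2))])
B = np.array([dt / Q,
              R1 * (1 - np.exp(-dt / (R1 * C1))),
              R2 * (1 - np.exp(-dt / (R2 * C2)))])

def ocv(soc):                 # toy linear open-circuit voltage
    return 3.2 + 0.9 * soc

# --- short-horizon constrained fast-charging problem -------------------------
N = 30                                    # prediction horizon (steps)
x = cp.Variable((3, N + 1))
u = cp.Variable(N)

x0 = np.array([0.2, 0.0, 0.0])            # initial SOC and RC voltages
soc_target, v_max, i_max = 0.8, 4.2, 5.0

cost = cp.sum_squares(x[0, 1:] - soc_target) + 1e-3 * cp.sum_squares(u)
constraints = [x[:, 0] == x0]
for k in range(N):
    constraints += [
        x[:, k + 1] == A @ x[:, k] + u[k] * B,
        # terminal voltage limit, including the R0 feed-through term
        ocv(x[0, k]) + x[1, k] + x[2, k] + R0 * u[k] <= v_max,
        0 <= u[k], u[k] <= i_max,
    ]

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first charging current of the plan (A):", float(u.value[0]))
```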

  14. Physics-based models of the plasmasphere

    SciTech Connect (OSTI)

    Jordanova, Vania K.; Pierrard, Viviane; Goldstein, Jerry; André, Nicolas; Lemaire, Joseph F.; Liemohn, Mike W.; Matsui, H.

    2008-01-01

    We describe recent progress in physics-based models of the plasmasphere using the fluid and the kinetic approaches. Global modeling of the dynamics and influence of the plasmasphere is presented. Results from global plasmasphere simulations are used to understand and quantify (i) the electric potential pattern and evolution during geomagnetic storms, and (ii) the influence of the plasmasphere on the excitation of electromagnetic ion cyclotron (EMIC) waves and precipitation of energetic ions in the inner magnetosphere. The interactions of the plasmasphere with the ionosphere and the other regions of the magnetosphere are pointed out. We show the results of simulations for the formation of the plasmapause and discuss the influence of plasmaspheric wind and of ultra low frequency (ULF) waves on the transport of plasmaspheric material. Theoretical formulations used to model the electric field and plasma distribution in the plasmasphere are given. Model predictions are compared to recent CLUSTER and IMAGE observations, but also to results of earlier models and satellite observations.

  15. DEFINING THE PLAYERS IN HIGHER-ORDER NETWORKS: PREDICTIVE MODELING FOR REVERSE ENGINEERING FUNCTIONAL INFLUENCE NETWORKS

    SciTech Connect (OSTI)

    McDermott, Jason E.; Costa, Michelle N.; Stevens, S.L.; Stenzel-Poore, Mary; Sanfilippo, Antonio P.

    2011-01-20

    A difficult problem that is currently growing rapidly due to the sharp increase in the amount of high-throughput data available for many systems is that of determining useful and informative causative influence networks. These networks can be used to predict behavior given observation of a small number of components, predict behavior at a future time point, or identify components that are critical to the functioning of the system under particular conditions. In these endeavors incorporating observations of systems from a wide variety of viewpoints can be particularly beneficial, but has often been undertaken with the objective of inferring networks that are generally applicable. The focus of the current work is to integrate both general observations and measurements taken for a particular pathology, that of ischemic stroke, to provide improved ability to produce useful predictions of systems behavior. A number of hybrid approaches have recently been proposed for network generation in which the Gene Ontology is used to filter or enrich network links inferred from gene expression data through reverse engineering methods. These approaches have been shown to improve the biological plausibility of the inferred relationships determined, but still treat knowledge-based and machine-learning inferences as incommensurable inputs. In this paper, we explore how further improvements may be achieved through a full integration of network inference insights achieved through application of the Gene Ontology and reverse engineering methods with specific reference to the construction of dynamic models of transcriptional regulatory networks. We show that integrating two approaches to network construction, one based on reverse-engineering from conditional transcriptional data, one based on reverse-engineering from in situ hybridization data, and another based on functional associations derived from Gene Ontology, using probabilities can improve results of clustering as evaluated by a predictive model of transcriptional expression levels.

  16. Modelling hepatitis C therapy—predicting effects of treatment

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Perelson, Alan S.; Guedj, Jeremie

    2015-06-30

    Mathematically modelling changes in HCV RNA levels measured in patients who receive antiviral therapy has yielded many insights into the pathogenesis and effects of treatment on the virus. By determining how rapidly HCV is cleared when viral replication is interrupted by a therapy, one can deduce how rapidly the virus is produced in patients before treatment. This knowledge, coupled with estimates of the HCV mutation rate, enables one to estimate the frequency with which drug resistant variants arise. Modelling HCV also permits the deduction of the effectiveness of an antiviral agent at blocking HCV replication from the magnitude of the initial viral decline. One can also estimate the lifespan of an HCV-infected cell from the slope of the subsequent viral decline and determine the duration of therapy needed to cure infection. The original understanding of HCV RNA decline under interferon-based therapies obtained by modelling needed to be revised in order to interpret the HCV RNA decline kinetics seen when using direct-acting antiviral agents (DAAs). In addition, there also exist unresolved issues involving understanding therapies with combinations of DAAs, such as the presence of detectable HCV RNA at the end of therapy in patients who nonetheless have a sustained virologic response.
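
    The biphasic viral-kinetics picture summarized here is usually written as a small ODE system; the sketch below integrates one common form (target cells T, infected cells I, virus V, with treatment effectiveness epsilon blocking virion production) using invented parameter values. It is a generic illustration of the modelling approach, not any specific model from the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative placeholder parameters (not fitted values from the article).
beta = 1e-8      # infection rate
delta = 0.14     # infected-cell death rate (1/day)
p_prod = 10.0    # virion production per infected cell per day
c = 6.0          # virion clearance rate (1/day)
s, d = 1e4, 0.01 # target-cell supply and death rates
epsilon = 0.95   # treatment effectiveness at blocking production

def hcv(t, y):
    T, I, V = y
    dT = s - d * T - beta * V * T
    dI = beta * V * T - delta * I
    dV = (1.0 - epsilon) * p_prod * I - c * V
    return [dT, dI, dV]

# Rough pre-treatment initial condition: virion production ~ clearance.
T0 = 1e7                 # assumed target-cell density
V0 = 1e6                 # assumed pre-treatment viral load
I0 = c * V0 / p_prod     # infected cells balancing production and clearance
sol = solve_ivp(hcv, (0.0, 14.0), [T0, I0, V0])

print("log10 viral load after 14 days of therapy:",
      float(np.log10(sol.y[2, -1])))
```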

  17. Mechanism-based classification of PAH mixtures to predict carcinogenic potential

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Tilton, Susan C.; Siddens, Lisbeth K.; Krueger, Sharon K.; Larkin, Andrew J.; Löhr, Christiane V.; Williams, David E.; Baird, William M.; Waters, Katrina M.

    2015-04-22

    We have previously shown that relative potency factors and DNA adduct measurements are inadequate for predicting carcinogenicity of certain polycyclic aromatic hydrocarbons (PAHs) and PAH mixtures, particularly those that function through alternate pathways or exhibit greater promotional activity compared to benzo[a]pyrene (BaP). Therefore, we developed a pathway based approach for classification of tumor outcome after dermal exposure to PAH/mixtures. FVB/N mice were exposed to dibenzo[def,p]chrysene (DBC), BaP or environmental PAH mixtures (Mix 1-3) following a two-stage initiation/promotion skin tumor protocol. Resulting tumor incidence could be categorized by carcinogenic potency as DBC>>BaP=Mix2=Mix3>Mix1=Control, based on statistical significance. Gene expression profiles measured in skin of mice collected 12 h post-initiation were compared to tumor outcome for identification of short-term bioactivity profiles. A Bayesian integration model was utilized to identify biological pathways predictive of PAH carcinogenic potential during initiation. Integration of probability matrices from four enriched pathways (p<0.05) for DNA damage, apoptosis, response to chemical stimulus and interferon gamma signaling resulted in the highest classification accuracy with leave-one-out cross validation. This pathway-driven approach was successfully utilized to distinguish early regulatory events during initiation prognostic for tumor outcome and provides proof-of-concept for using short-term initiation studies to classify carcinogenic potential of environmental PAH mixtures. As a result, these data further provide a ‘source-to outcome’ model that could be used to predict PAH interactions during tumorigenesis and provide an example of how mode-of-action based risk assessment could be employed for environmental PAH mixtures.

  18. Mechanism-based classification of PAH mixtures to predict carcinogenic potential

    SciTech Connect (OSTI)

    Tilton, Susan C.; Siddens, Lisbeth K.; Krueger, Sharon K.; Larkin, Andrew J.; Löhr, Christiane V.; Williams, David E.; Baird, William M.; Waters, Katrina M.

    2015-04-22

    We have previously shown that relative potency factors and DNA adduct measurements are inadequate for predicting carcinogenicity of certain polycyclic aromatic hydrocarbons (PAHs) and PAH mixtures, particularly those that function through alternate pathways or exhibit greater promotional activity compared to benzo[a]pyrene (BaP). Therefore, we developed a pathway-based approach for classification of tumor outcome after dermal exposure to PAH/mixtures. FVB/N mice were exposed to dibenzo[def,p]chrysene (DBC), BaP or environmental PAH mixtures (Mix 1-3) following a two-stage initiation/promotion skin tumor protocol. Resulting tumor incidence could be categorized by carcinogenic potency as DBC>>BaP=Mix2=Mix3>Mix1=Control, based on statistical significance. Gene expression profiles measured in skin of mice collected 12 h post-initiation were compared to tumor outcome for identification of short-term bioactivity profiles. A Bayesian integration model was utilized to identify biological pathways predictive of PAH carcinogenic potential during initiation. Integration of probability matrices from four enriched pathways (p<0.05) for DNA damage, apoptosis, response to chemical stimulus and interferon gamma signaling resulted in the highest classification accuracy with leave-one-out cross validation. This pathway-driven approach was successfully utilized to distinguish early regulatory events during initiation prognostic for tumor outcome and provides proof-of-concept for using short-term initiation studies to classify carcinogenic potential of environmental PAH mixtures. As a result, these data further provide a ‘source-to-outcome’ model that could be used to predict PAH interactions during tumorigenesis and provide an example of how mode-of-action based risk assessment could be employed for environmental PAH mixtures.

  20. Prediction of rodent carcinogenic potential of naturally occurring chemicals in the human diet using high-throughput QSAR predictive modeling

    SciTech Connect (OSTI)

    Valerio, Luis G., E-mail: luis.valerio@FDA.HHS.gov; Arvidson, Kirk B.; Chanderbhan, Ronald F.; Contrera, Joseph F.

    2007-07-01

    Consistent with the U.S. Food and Drug Administration (FDA) Critical Path Initiative, predictive toxicology software programs employing quantitative structure-activity relationship (QSAR) models are currently under evaluation for regulatory risk assessment and scientific decision support for highly sensitive endpoints such as carcinogenicity, mutagenicity and reproductive toxicity. At the FDA's Center for Food Safety and Applied Nutrition's Office of Food Additive Safety and the Center for Drug Evaluation and Research's Informatics and Computational Safety Analysis Staff (ICSAS), the use of computational SAR tools for both qualitative and quantitative risk assessment applications is being developed and evaluated. One tool of current interest is MDL-QSAR predictive discriminant analysis modeling of rodent carcinogenicity, which has been previously evaluated for pharmaceutical applications by the FDA ICSAS. The study described in this paper aims to evaluate the utility of this software to estimate the carcinogenic potential of small, organic, naturally occurring chemicals found in the human diet. In addition, a group of 19 known synthetic dietary constituents that were positive in rodent carcinogenicity studies served as a control group. In the test group of naturally occurring chemicals, 101 were found to be suitable for predictive modeling using this software's discriminant analysis modeling approach. Predictions performed on these compounds were compared to published experimental evidence of each compound's carcinogenic potential. Experimental evidence included relevant toxicological studies such as rodent cancer bioassays, rodent anti-carcinogenicity studies, genotoxic studies, and the presence of chemical structural alerts. Statistical indices of predictive performance were calculated to assess the utility of the predictive modeling method. Results revealed good predictive performance using this software's rodent carcinogenicity module, which covers over 1200 chemicals, comprised primarily of pharmaceutical, industrial and some natural products, developed under an FDA-MDL cooperative research and development agreement (CRADA). The predictive performance for this group of dietary natural products and the control group was 97% sensitivity and 80% concordance. Specificity was marginal at 53%. This study finds that the in silico QSAR analysis employing this software's rodent carcinogenicity database is capable of identifying the rodent carcinogenic potential of naturally occurring organic molecules found in the human diet with a high degree of sensitivity. It is the first study to demonstrate successful QSAR predictive modeling of naturally occurring carcinogens found in the human diet using an external validation test. Further test validation of this software and expansion of the training data set for dietary chemicals will help to support the future use of such QSAR methods for screening and prioritizing the risk of dietary chemicals when actual animal data are inadequate, equivocal, or absent.
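
    For reference, the performance indices quoted in evaluations like this one (sensitivity, specificity, concordance) reduce to simple ratios over a 2x2 confusion table. The counts in the sketch below are invented for illustration and are not the study's data.

```python
# Sketch of the performance indices quoted in such evaluations, computed from
# a 2x2 confusion table.  Counts are made up for illustration only.
def performance(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)                      # true positives correctly flagged
    specificity = tn / (tn + fp)                      # true negatives correctly cleared
    concordance = (tp + tn) / (tp + fn + tn + fp)     # overall agreement
    return sensitivity, specificity, concordance

sens, spec, conc = performance(tp=68, fn=2, tn=16, fp=14)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} concordance={conc:.0%}")
```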

  1. Impact of heterogeneous chemistry on model predictions of ozone changes

    SciTech Connect (OSTI)

    Granier, C.; Brasseur, G. )

    1992-11-20

    A two-dimensional chemical/transport model of the middle atmosphere is used to assess the importance of chemical heterogeneous processes in the polar regions (on polar stratospheric clouds (PSCs)) and at other latitudes (on sulfate aerosols). When conversion on type I and type II PSCs of N₂O₅ into HNO₃ and of ClONO₂ into reactive forms of chlorine is taken into account, enhanced ClO concentrations lead to the formation of a springtime ozone hole over the Antarctic continent; no such major reduction in the ozone column is found in the Arctic region. When conversion of nitrogen and chlorine compounds is assumed to occur on sulfate particles in the lower stratosphere, significant perturbations in the chemistry are also found. For background aerosol conditions, the concentration of nitric acid is enhanced and agrees with observed values, while that of nitrogen oxides is reduced and agrees less well with observations than when heterogeneous processes are ignored in the calculations. The concentration of the OH radical is significantly increased. Ozone number density appears to become larger between 16 and 30 km but smaller below 16 km, especially at high latitudes. The ozone column is only slightly modified, except at high latitudes where it is substantially reduced if the ClONO₂ conversion into reactive chlorine is considered. After a large volcanic eruption these changes are further exacerbated. The ozone budget in the lower stratosphere becomes less affected by nitrogen oxides but is largely controlled by the ClOₓ and HOₓ chemistries. A substantial decrease in the ozone column is predicted as a result of the Pinatubo volcanic eruption, mostly in winter at middle and high latitudes. 62 refs., 18 figs., 3 tabs.

  2. PNNL: Mechanistic-Based Ductility Prediction for Complex Mg Castings

    Broader source: Energy.gov (indexed) [DOE]

  3. In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation

    SciTech Connect (OSTI)

    G. R. Odette; G. E. Lucas

    2005-11-15

    This final report on "In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation" (DE-FG03-01ER54632) consists of a series of summaries of work that has been published, or presented at meetings, or both. It briefly describes results on the following topics: 1) A Transport and Fate Model for Helium and Helium Management; 2) Atomistic Studies of Point Defect Energetics, Dynamics and Interactions; 3) Multiscale Modeling of Fracture consisting of: 3a) A Micromechanical Model of the Master Curve (MC) Universal Fracture Toughness-Temperature Curve Relation, KJc(T - To), 3b) An Embrittlement DTo Prediction Model for the Irradiation Hardening Dominated Regime, 3c) Non-hardening Irradiation Assisted Thermal and Helium Embrittlement of 8Cr Tempered Martensitic Steels: Compilation and Analysis of Existing Data, 3d) A Model for the KJc(T) of a High Strength NFA MA957, 3e) Cracked Body Size and Geometry Effects of Measured and Effective Fracture Toughness-Model Based MC and To Evaluations of F82H and Eurofer 97, 3-f) Size and Geometry Effects on the Effective Toughness of Cracked Fusion Structures; 4) Modeling the Multiscale Mechanics of Flow Localization-Ductility Loss in Irradiation Damaged BCC Alloys; and 5) A Universal Relation Between Indentation Hardness and True Stress-Strain Constitutive Behavior. Further details can be found in the cited references or presentations that generally can be accessed on the internet, or provided upon request to the authors. Finally, it is noted that this effort was integrated with our base program in fusion materials, also funded by the DOE OFES.

  4. Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model

    SciTech Connect (OSTI)

    Fok, Alex

    2013-10-30

    The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high-temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.

  5. Characterization and validation of an in silico toxicology model to predict the mutagenic potential of drug impurities*

    SciTech Connect (OSTI)

    Valerio, Luis G.; Cross, Kevin P.

    2012-05-01

    Control and minimization of human exposure to potential genotoxic impurities found in drug substances and products is an important part of preclinical safety assessments of new drug products. The FDA's 2008 draft guidance on genotoxic and carcinogenic impurities in drug substances and products allows use of computational quantitative structure–activity relationships (QSAR) to identify structural alerts for known and expected impurities present at levels below qualified thresholds. This study provides the information necessary to establish the practical use of a new in silico toxicology model for predicting Salmonella t. mutagenicity (Ames assay outcome) of drug impurities and other chemicals. We describe the model's chemical content and toxicity fingerprint in terms of compound space, molecular and structural toxicophores, and have rigorously tested its predictive power using both cross-validation and external validation experiments, as well as case studies. Consistent with desired regulatory use, the model performs with high sensitivity (81%) and high negative predictivity (81%) based on external validation with 2368 compounds foreign to the model and having known mutagenicity. A database of drug impurities was created from proprietary FDA submissions and the public literature which found significant overlap between the structural features of drug impurities and training set chemicals in the QSAR model. Overall, the model's predictive performance was found to be acceptable for screening drug impurities for Salmonella mutagenicity. -- Highlights: • We characterize a new in silico model to predict mutagenicity of drug impurities. • The model predicts Salmonella mutagenicity and will be useful for safety assessment. • We examine toxicity fingerprints and toxicophores of this Ames assay model. • We compare these attributes to those found in drug impurities known to FDA/CDER. • We validate the model and find it has a desired predictive performance.

  6. Modeling the Number of Ignitions Following an Earthquake: Developing Prediction Limits for Overdispersed Count Data

    Broader source: Energy.gov [DOE]

    Modeling the Number of Ignitions Following an Earthquake: Developing Prediction Limits for Overdispersed Count Data Elizabeth J. Kelly and Raymond N. Tell

  7. Predictive Theory and Modeling | U.S. DOE Office of Science (SC...

    Office of Science (SC) Website

    Research Leading to Predictive Theory and Modeling for Materials and Chemical Sciences BES ... biosciences - are those that discover new materials and design new chemical processes. ...

  8. Prediction of global solar irradiance based on time series analysis: Application to solar thermal power plants energy production planning

    SciTech Connect (OSTI)

    Martin, Luis; Marchante, Ruth; Cony, Marco; Zarzalejo, Luis F.; Polo, Jesus; Navarro, Ana

    2010-10-15

    Due to the strong increase of solar power generation, predictions of incoming solar energy are acquiring more importance. Photovoltaic and solar thermal are the main sources of electricity generation from solar energy. In the case of solar thermal energy plants with an energy storage system, their management and operation need reliable predictions of solar irradiance with the same temporal resolution as the temporal capacity of the back-up system. These plants can work like a conventional power plant and compete in the energy stock market, avoiding intermittence in electricity production. This work presents a comparison of statistical models based on time series applied to predict half-daily values of global solar irradiance with a temporal horizon of 3 days. Half-daily values consist of accumulated hourly global solar irradiance from sunrise to solar noon and from solar noon until sunset for each day. The dataset of ground solar radiation used belongs to stations of the Spanish National Weather Service (AEMet). The models tested are autoregressive, neural network and fuzzy logic models. Because the half-daily solar irradiance time series is non-stationary, it has been necessary to transform it into two new stationary variables (clearness index and lost component) which are used as input to the predictive models. Improvement in terms of RMSD of the models tested is compared against the model based on persistence. The validation process shows that all models tested improve on persistence. The best approach to forecast half-daily values of solar irradiance is the neural network model with the lost component as input, except at the Lerida station, where models based on the clearness index have less uncertainty because this magnitude has a more linear behaviour and is easier for the models to simulate. (author)
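
    A minimal sketch of the evaluation logic described above: convert half-daily irradiance to a clearness index, build the persistence baseline, and compare RMSD. The arrays, the extraterrestrial reference values, and the stand-in "model" forecast are placeholders, not AEMet data or the paper's models.

```python
# Minimal sketch: clearness-index transform, persistence baseline, RMSD comparison.
# All values below are placeholders for illustration.
import numpy as np

def rmsd(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

ghi = np.array([2.8, 3.1, 2.9, 3.4, 3.3, 3.6])       # measured half-daily GHI (kWh/m2)
ghi_ext = np.array([4.0, 4.0, 4.1, 4.1, 4.2, 4.2])   # extraterrestrial reference values
kt = ghi / ghi_ext                                    # clearness index (closer to stationary)

persistence = kt[:-1]                                 # persistence forecast: next = current
observed = kt[1:]
model = observed + np.random.default_rng(1).normal(0, 0.02, observed.size)  # stand-in model output

improvement = 1 - rmsd(model, observed) / rmsd(persistence, observed)
print(f"improvement over persistence: {improvement:.1%}")
```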

  9. Bulalo field, Philippines: Reservoir modeling for prediction of limits to sustainable generation

    SciTech Connect (OSTI)

    Strobel, Calvin J.

    1993-01-28

    The Bulalo geothermal field, located in Laguna province, Philippines, supplies 12% of the electricity on the island of Luzon. The first 110 MWe power plant was on line May 1979; current 330 MWe (gross) installed capacity was reached in 1984. Since then, the field has operated at an average plant factor of 76%. The National Power Corporation plans to add 40 MWe base load and 40 MWe standby in 1995. A numerical simulation model for the Bulalo field has been created that matches historic pressure changes, enthalpy and steam flash trends and cumulative steam production. Gravity modeling provided independent verification of mass balances and time rate of change of liquid desaturation in the rock matrix. Gravity modeling, in conjunction with reservoir simulation provides a means of predicting matrix dry out and the time to limiting conditions for sustainable levelized steam deliverability and power generation.

  10. Agent-based Infrastructure Interdependency Model

    Energy Science and Technology Software Center (OSTI)

    2003-10-01

    The software is used to analyze infrastructure interdependencies. Agent-based modeling is used for the analysis.

  11. Development of a land ice core for the Model for Prediction Across...

    Office of Scientific and Technical Information (OSTI)

    Conference: Development of a land ice core for the Model for Prediction Across Scales (MPAS).

  12. Eulerian CFD Models to Predict Thermophoretic Deposition of Soot Particles in EGR Coolers

    Broader source: Energy.gov (indexed) [DOE]

    This paper describes an Eulerian axisymmetric method in Fluent(R) to predict the overall heat transfer reduction of a surrogate tube due to thermophoretic deposition of submicron particles.

  13. Kitaev models based on unitary quantum groupoids

    SciTech Connect (OSTI)

    Chang, Liang, E-mail: liangchang@math.tamu.edu [Department of Mathematics, Texas A and M University, College Station, Texas 77843-3368 (United States)]

    2014-04-15

    We establish a generalization of Kitaev models based on unitary quantum groupoids. In particular, when inputting a Kitaev-Kong quantum groupoid H_C, we show that the ground state manifold of the generalized model is canonically isomorphic to that of the Levin-Wen model based on a unitary fusion category C. Therefore, the generalized Kitaev models provide realizations of the target space of the Turaev-Viro topological quantum field theory based on C.

  14. Comparison of Uncertainty of Two Precipitation Prediction Models...

    Office of Scientific and Technical Information (OSTI)

    subsurface flow and transport modeling. The choice of source for meteorological data used as inputs has significant impacts on the results of subsurface flow and transport studies. ...

  15. Sandia's ice sheet modeling of Greenland, Antarctica helps predict...

    National Nuclear Security Administration (NNSA)

    They are part of a Sandia team working to improve the reliability and efficiency of ... researchers has been improving the reliability and efficiency of computational models ...

  16. SimTable helps firefighters model and predict fire direction

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  17. Performance of corrosion inhibiting admixtures for structural concrete -- assessment methods and predictive modeling

    SciTech Connect (OSTI)

    Yunovich, M.; Thompson, N.G.

    1998-12-31

    During the past fifteen years corrosion inhibiting admixtures (CIAs) have become increasingly popular for protection of reinforced components of highway bridges and other structures from damage induced by chlorides. However, there remains considerable debate about the benefits of CIAs in concrete. A variety of testing methods to assess the performance of CIAs have been reported in the literature, ranging from tests in simulated pore solutions to long-term exposures of concrete slabs. The paper reviews the published techniques and recommends the methods which would make up a comprehensive CIA effectiveness testing program. The results of this set of tests would provide the data which can be used to rank the presently commercially available CIAs and future candidate formulations utilizing a proposed predictive model. The model is based on relatively short-term laboratory testing and considers several phases of a service life of a structure (corrosion initiation, corrosion propagation without damage, and damage to the structure).

  18. Prediction of hydrocarbon-bearing structures based on remote sensing

    SciTech Connect (OSTI)

    Smirnova, I.; Gololobov, Yu.; Rusanova, A. )

    1993-09-01

    The technology we developed is based on the use of remotely sensed data and has proved to be effective for identification of structures that appear promising for oil and gas, in particular, reefs in the hydrocarbon-bearing basin of central Asia (Turkmenistan and Uzbekistan). It implements the “geoindication” concept, the main idea being that landscape components (geoindicators) and subsurface geological features are correlated and depend on each other. Subsurface features (uplifts, depressions, faults, reefs, and other lithological and structural heterogeneities) cause physical and chemical alterations in overlying rocks up to the land surface; thus, they are reflected in distribution of landscape components and observed on airborne and satellite images as specific patterns. The following identified geoindicators are related to different subsurface geological features: definite formations, anticlines, and reefs (barrier, atoll, and bioherm). The geoindicators are extracted from images either visually or by using computer systems. Specially developed software is applied to analyze geoindicator distribution and calculate their characteristics. In the course of processing, it is possible to distinguish folds from reefs. Distribution of geoindicator characteristics is examined on the well-studied reefs, and from the regularities established, promising areas with reefs are revealed. When applying the technology in central Asia, the results were successfully verified by field work, seismic methods, and drilling.

  19. Vehicle Technologies Office Merit Review 2014: Mechanistic-based Ductility Prediction for Complex Mg Castings

    Broader source: Energy.gov [DOE]

    Presentation given by USAMP at 2014 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about mechanistic-based ductility prediction...

  20. Simplified predictive models for CO2 sequestration performance assessment

    SciTech Connect (OSTI)

    Mishra, Srikanta; Ganesh, Priya; Schuetter, Jared; He, Jincong; Jin, Zhaoyang; Durlofsky, Louis J.

    2015-09-30

    CO2 sequestration in deep saline formations is increasingly being considered as a viable strategy for the mitigation of greenhouse gas emissions from anthropogenic sources. In this context, detailed numerical simulation based models are routinely used to understand key processes and parameters affecting pressure propagation and buoyant plume migration following CO2 injection into the subsurface. As these models are data and computation intensive, the development of computationally-efficient alternatives to conventional numerical simulators has become an active area of research. Such simplified models can be valuable assets during preliminary CO2 injection project screening, serve as a key element of probabilistic system assessment modeling tools, and assist regulators in quickly evaluating geological storage projects. We present three strategies for the development and validation of simplified modeling approaches for CO2 sequestration in deep saline formations: (1) simplified physics-based modeling, (2) statistical-learning-based modeling, and (3) reduced-order-method-based modeling. In the first category, a set of full-physics compositional simulations is used to develop correlations for dimensionless injectivity as a function of the slope of the CO2 fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Furthermore, the dimensionless average pressure buildup after the onset of boundary effects can be correlated to dimensionless time, CO2 plume footprint, and storativity contrast between the reservoir and caprock. In the second category, statistical “proxy models” are developed using the simulation domain described previously with two approaches: (a) classical Box-Behnken experimental design with a quadratic response surface, and (b) maximin Latin Hypercube sampling (LHS) based design with a multidimensional kriging metamodel fit. For roughly the same number of simulations, the LHS-based metamodel yields a more robust predictive model, as verified by a k-fold cross-validation approach (with data split into training and test sets) as well as by validation with an independent dataset. In the third category, a reduced-order modeling procedure is utilized that combines proper orthogonal decomposition (POD) for reducing problem dimensionality with trajectory-piecewise linearization (TPWL) in order to represent system response at new control settings from a limited number of training runs. Significant savings in computational time are observed with reasonable accuracy from the POD-TPWL reduced-order model for both vertical and horizontal well problems, which could be important in the context of history matching, uncertainty quantification and optimization problems. The simplified physics and statistical learning based models are also validated using an uncertainty analysis framework. Reference cumulative distribution functions of key model outcomes (i.e., plume radius and reservoir pressure buildup) generated using a 97-run full-physics simulation are successfully validated against the CDF from 10,000-sample probabilistic simulations using the simplified models. The main contribution of this research project is the development and validation of a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formations.
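
    A hedged sketch of the second (statistical-learning) strategy: a Latin hypercube design, a Gaussian-process (kriging-type) metamodel, and a k-fold cross-validation check. The three inputs, their bounds and the toy response below are placeholders, not the report's simulation variables.

```python
# Sketch of an LHS design + kriging-type metamodel with k-fold validation.
# Inputs, bounds and the toy response are placeholders for illustration only.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
sampler = qmc.LatinHypercube(d=3, seed=7)
X = qmc.scale(sampler.random(n=60),                 # 60 "training simulations"
              l_bounds=[10, 0.1, 1.0],              # e.g. permeability (mD), porosity, injection rate
              u_bounds=[500, 0.3, 5.0])
y = np.log(X[:, 0]) * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 60)  # toy "plume radius"

gp = GaussianProcessRegressor(normalize_y=True)     # kriging-style metamodel
scores = cross_val_score(gp, X, y, cv=5, scoring="r2")   # k-fold check of the fit
print("5-fold R^2:", scores.round(3))
```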

  1. Injection-Molded Long-Fiber Thermoplastic Composites: From Process Modeling to Prediction of Mechanical Properties

    SciTech Connect (OSTI)

    Nguyen, Ba Nghiep; Kunc, Vlastimil; Jin, Xiaoshi; Tucker III, Charles L.; Costa, Franco

    2013-12-18

    This article illustrates the predictive capabilities for long-fiber thermoplastic (LFT) composites that first simulate the injection molding of LFT structures by Autodesk® Simulation Moldflow® Insight (ASMI) to accurately predict fiber orientation and length distributions in these structures. After validating fiber orientation and length predictions against the experimental data, the predicted results are used by ASMI to compute distributions of elastic properties in the molded structures. In addition, local stress-strain responses and damage accumulation under tensile loading are predicted by an elastic-plastic damage model of EMTA-NLA, a nonlinear analysis tool implemented in ABAQUS® via user-subroutines using an incremental Eshelby-Mori-Tanaka approach. Predicted stress-strain responses up to failure and damage accumulations are compared to the experimental results to validate the model.

  2. Predictive Modeling of Terrestrial Radiation Exposure from Geologic Materials

    SciTech Connect (OSTI)

    Malchow, Russell L.; Haber, Daniel [University of Nevada, Las Vegas]; Burnley, Pamela; Marsac, Kara; Hausrath, Elisabeth; Adcock, Christopher

    2015-01-01

    Aerial gamma ray surveys are important for those working in nuclear security and industry for determining locations of both anthropogenic radiological sources and natural occurrences of radionuclides. During an aerial gamma ray survey, a low flying aircraft, such as a helicopter, flies in a linear pattern across the survey area while measuring the gamma emissions with a sodium iodide (NaI) detector. Currently, if a gamma ray survey is being flown in an area, the only way to correct for geologic sources of gamma rays is to have flown the area previously. This is prohibitively expensive and would require complete national coverage. This project’s goal is to model the geologic contribution to radiological backgrounds using published geochemical data, GIS software, remote sensing, calculations, and modeling software. K, U and Th are the three major gamma emitters in geologic material. U and Th are assumed to be in secular equilibrium with their daughter isotopes. If K, U, and Th abundance values are known for a given geologic unit the expected gamma ray exposure rate can be calculated using the Grasty equation or by modeling software. Monte Carlo N-Particle Transport software (MCNP), developed by Los Alamos National Laboratory, is modeling software designed to simulate particles and their interactions with matter. Using this software, models have been created that represent various lithologies. These simulations randomly generate gamma ray photons at energy levels expected from natural radiologic sources. The photons take a random path through the simulated geologic media and deposit their energy at the end of their track. A series of nested spheres have been created and filled with simulated atmosphere to record energy deposition. Energies deposited are binned in the same manner as the NaI detectors used during an aerial survey. These models are used in place of the simplistic Grasty equation as they take into account absorption properties of the lithology which the simplistic equation ignores.
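
    The Grasty-type conversion mentioned above is, in its simplest form, a linear combination of K, U and Th abundances. The coefficients in the sketch below are commonly quoted air-dose conversion factors and are assumed here purely for illustration; the project described above would replace them with MCNP-derived responses for each lithology.

```python
# Sketch of a Grasty-type conversion from K, U and Th abundances to a
# terrestrial dose rate.  The coefficients (nGy/h per unit concentration)
# are assumed illustrative values; a production workflow would take them
# from survey calibration or from the MCNP models described above.
def terrestrial_dose_rate(k_percent, u_ppm, th_ppm):
    """Air-absorbed dose rate 1 m above ground, nGy/h, secular equilibrium assumed."""
    return 13.078 * k_percent + 5.675 * u_ppm + 2.494 * th_ppm

# Example: an illustrative upper-crustal composition.
print(f"{terrestrial_dose_rate(k_percent=2.0, u_ppm=2.7, th_ppm=10.5):.1f} nGy/h")
```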

  3. A Geometric Rendezvous-Based Domain Model

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Geometric Rendezvous-Based Domain Model for Data Transfer... (University of Wisconsin - Madison, sslattery@wisc.edu, March 20, 2013)

  4. Prediction of turbulent buoyant flow using an RNG κ-ε model

    SciTech Connect (OSTI)

    Gan, G.

    1998-02-06

    Buoyant flows occur in various engineering practices such as heating, ventilation, and air-conditioning of buildings. This phenomenon is particularly important in rooms with displacement ventilation, where supply air velocities are generally very low (< 0.2 m/s) so that the predominant indoor airflow is largely due to thermal buoyancy created by internal heat sources such as occupants and equipment. This type of ventilation system has been shown to be an effective means to remove excess heat and achieve good indoor air quality. Here, numerical predictions were carried out for turbulent natural convection in two tall air cavities. The standard and RNG κ-ε turbulence models were used for the predictions. The predicted results were compared with experimental data from the literature, and good agreement between prediction and measurement was obtained. Improved prediction was achieved using the RNG κ-ε model in comparison with the standard κ-ε model. The principal parameters for the improvement were investigated.

  5. Incorporating Single-nucleotide Polymorphisms Into the Lyman Model to Improve Prediction of Radiation Pneumonitis

    SciTech Connect (OSTI)

    Tucker, Susan L., E-mail: sltucker@mdanderson.org [Department of Bioinformatics and Computational Biology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)]; Li Minghuan [Department of Radiation Oncology, Shandong Cancer Hospital, Jinan, Shandong (China)]; Xu Ting; Gomez, Daniel [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)]; Yuan Xianglin [Department of Oncology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan (China)]; Yu Jinming [Department of Radiation Oncology, Shandong Cancer Hospital, Jinan, Shandong (China)]; Liu Zhensheng; Yin Ming; Guan Xiaoxiang; Wang Lie; Wei Qingyi [Department of Epidemiology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)]; Mohan, Radhe [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)]; Vinogradskiy, Yevgeniy [University of Colorado School of Medicine, Aurora, Colorado (United States)]; Martel, Mary [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)]; Liao Zhongxing [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)]

    2013-01-01

    Purpose: To determine whether single-nucleotide polymorphisms (SNPs) in genes associated with DNA repair, cell cycle, transforming growth factor-β, tumor necrosis factor and receptor, folic acid metabolism, and angiogenesis can significantly improve the fit of the Lyman-Kutcher-Burman (LKB) normal-tissue complication probability (NTCP) model of radiation pneumonitis (RP) risk among patients with non-small cell lung cancer (NSCLC). Methods and Materials: Sixteen SNPs from 10 different genes (XRCC1, XRCC3, APEX1, MDM2, TGFβ, TNFα, TNFR, MTHFR, MTRR, and VEGF) were genotyped in 141 NSCLC patients treated with definitive radiation therapy, with or without chemotherapy. The LKB model was used to estimate the risk of severe (grade ≥3) RP as a function of mean lung dose (MLD), with SNPs and patient smoking status incorporated into the model as dose-modifying factors. Multivariate analyses were performed by adding significant factors to the MLD model in a forward stepwise procedure, with significance assessed using the likelihood-ratio test. Bootstrap analyses were used to assess the reproducibility of results under variations in the data. Results: Five SNPs were selected for inclusion in the multivariate NTCP model based on MLD alone. SNPs associated with an increased risk of severe RP were in genes for TGFβ, VEGF, TNFα, XRCC1 and APEX1. With smoking status included in the multivariate model, the SNPs significantly associated with increased risk of RP were in genes for TGFβ, VEGF, and XRCC3. Bootstrap analyses selected a median of 4 SNPs per model fit, with the 6 genes listed above selected most often. Conclusions: This study provides evidence that SNPs can significantly improve the predictive ability of the Lyman MLD model. With a small number of SNPs, it was possible to distinguish cohorts with >50% risk vs <10% risk of RP when they were exposed to high MLDs.
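
    The LKB/MLD structure referred to above can be written compactly as NTCP = Phi((D_eff - TD50)/(m * TD50)), with dose-modifying factors scaling the mean lung dose. The sketch below is a generic illustration; TD50, m and the DMF values are hypothetical, not this study's fitted parameters.

```python
# Minimal sketch of the Lyman-Kutcher-Burman NTCP model with dose-modifying
# factors (DMFs).  TD50, m and the DMF values are hypothetical.
import math

def lkb_ntcp(mld_gy, td50=30.0, m=0.4, dmfs=()):
    """NTCP = Phi((D_eff - TD50) / (m * TD50)), with D_eff = MLD * product(DMFs)."""
    d_eff = mld_gy
    for dmf in dmfs:                                  # e.g. one factor per risk allele carried
        d_eff *= dmf
    t = (d_eff - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # standard normal CDF

print(lkb_ntcp(20.0))                          # baseline risk at MLD = 20 Gy
print(lkb_ntcp(20.0, dmfs=(1.2, 1.15)))        # same MLD with two hypothetical risk factors
```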

  6. The effects of digital elevation model resolution on the calculation and predictions of topographic wetness indices.

    SciTech Connect (OSTI)

    Drover, Damion Ryan

    2011-12-01

    One of the largest exports in the Southeast U.S. is forest products. Interest in biofuels using forest biomass has increased recently, leading to more research into better forest management BMPs. The USDA Forest Service, along with the Oak Ridge National Laboratory, University of Georgia and Oregon State University are researching the impacts of intensive forest management for biofuels on water quality and quantity at the Savannah River Site in South Carolina. Surface runoff of saturated areas, transporting excess nutrients and contaminants, is a potential water quality issue under investigation. Detailed maps of variable source areas and soil characteristics would therefore be helpful prior to treatment. The availability of remotely sensed and computed digital elevation models (DEMs) and spatial analysis tools make it easy to calculate terrain attributes. These terrain attributes can be used in models to predict saturated areas or other attributes in the landscape. With laser altimetry, an area can be flown to produce very high resolution data, and the resulting data can be resampled into any resolution of DEM desired. Additionally, there exist many maps that are in various resolutions of DEM, such as those acquired from the U.S. Geological Survey. Problems arise when using maps derived from different resolution DEMs. For example, saturated areas can be under or overestimated depending on the resolution used. The purpose of this study was to examine the effects of DEM resolution on the calculation of topographic wetness indices used to predict variable source areas of saturation, and to find the best resolutions to produce prediction maps of soil attributes like nitrogen, carbon, bulk density and soil texture for low-relief, humid-temperate forested hillslopes. Topographic wetness indices were calculated based on the derived terrain attributes, slope and specific catchment area, from five different DEM resolutions. The DEMs were resampled from LiDAR, which is a laser altimetry remote sensing method, obtained from the USDA Forest Service at Savannah River Site. The specific DEM resolutions were chosen because they are common grid cell sizes (10m, 30m, and 50m) used in mapping for management applications and in research. The finer resolutions (2m and 5m) were chosen for the purpose of determining how finer resolutions performed compared with coarser resolutions at predicting wetness and related soil attributes. The wetness indices were compared across DEMs and with each other in terms of quantile and distribution differences, then in terms of how well they each correlated with measured soil attributes. Spatial and non-spatial analyses were performed, and predictions using regression and geostatistics were examined for efficacy relative to each DEM resolution. Trends in the raw data and analysis results were also revealed.
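
    The wetness index being compared across resolutions is commonly computed as TWI = ln(a / tan(beta)), with a the specific catchment area and beta the local slope. The sketch below uses small synthetic grids (not the Savannah River Site LiDAR data) simply to show how coarser, smoother terrain derivatives shift the index distribution.

```python
# Sketch of the topographic wetness index, TWI = ln(a / tan(beta)).
# The grids are synthetic stand-ins, not derived DEM products.
import numpy as np

def twi(specific_catchment_area, slope_rad):
    tan_b = np.maximum(np.tan(slope_rad), 1e-6)       # avoid division by zero on flats
    return np.log(np.maximum(specific_catchment_area, 1e-6) / tan_b)

rng = np.random.default_rng(3)
a_2m = rng.lognormal(mean=3.0, sigma=1.0, size=(50, 50))    # finer DEM: patchier catchment areas
a_30m = rng.lognormal(mean=5.0, sigma=0.6, size=(50, 50))   # coarser DEM: smoother, larger cells
slope = np.deg2rad(rng.uniform(0.5, 8.0, size=(50, 50)))    # low-relief hillslopes

print("median TWI  2 m:", np.median(twi(a_2m, slope)).round(2))
print("median TWI 30 m:", np.median(twi(a_30m, slope)).round(2))
```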

  7. Rate-based degradation modeling of lithium-ion cells

    SciTech Connect (OSTI)

    E.V. Thomas; I. Bloom; J.P. Christophersen; V.S. Battaglia

    2012-05-01

    Accelerated degradation testing is commonly used as the basis to characterize battery cell performance over a range of stress conditions (e.g., temperatures). Performance is measured by some response that is assumed to be related to the state of health of the cell (e.g., discharge resistance). Often, the ultimate goal of such testing is to predict cell life at some reference stress condition, where cell life is defined to be the point in time where performance has degraded to some critical level. These predictions are based on a degradation model that expresses the expected performance level versus the time and conditions under which a cell has been aged. Usually, the degradation model relates the accumulated degradation to the time at a constant stress level. The purpose of this article is to present an alternative framework for constructing a degradation model that focuses on the degradation rate rather than the accumulated degradation. One benefit of this alternative approach is that prediction of cell life is greatly facilitated in situations where the temperature exposure is not isothermal. This alternative modeling framework is illustrated via a family of rate-based models and experimental data acquired during calendar-life testing of high-power lithium-ion cells.
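
    A hedged sketch of the rate-based idea: integrate a temperature-dependent degradation rate over an arbitrary temperature history instead of fitting accumulated fade at fixed temperatures. The Arrhenius rate form and every parameter value below are assumptions for illustration, not the article's fitted model.

```python
# Rate-based degradation sketch: accumulated fade = integral of k(T(t)) dt,
# which handles non-isothermal histories naturally.  Parameters are assumed.
import numpy as np

def accumulated_degradation(time_days, temp_c, a=5e4, ea=50e3, r_gas=8.314):
    """Integrate an assumed Arrhenius rate k(T) over the temperature history."""
    temp_k = np.asarray(temp_c) + 273.15
    rate = a * np.exp(-ea / (r_gas * temp_k))                  # per-day degradation rate
    dt = np.diff(np.asarray(time_days))
    return float(np.sum(0.5 * (rate[1:] + rate[:-1]) * dt))    # trapezoidal integral

t = np.linspace(0.0, 365.0, 366)                               # one year, daily resolution
seasonal = 25.0 + 15.0 * np.sin(2 * np.pi * t / 365.0)         # 10-40 C seasonal swing
print("seasonal profile:", round(accumulated_degradation(t, seasonal), 3))
print("isothermal 25 C :", round(accumulated_degradation(t, np.full_like(t, 25.0)), 3))
```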

  8. Method for quantifying the prediction uncertainties associated with water quality models

    SciTech Connect (OSTI)

    Summers, J.K.; Wilson, H.T.; Kou, J.

    1993-01-01

    Many environmental regulatory agencies depend on models to organize, understand, and utilize the information for regulatory decision making. A general analytical protocol was developed to quantify prediction error associated with commonly used surface water quality models. Its application is demonstrated by comparing water quality models configured to represent different levels of spatial, temporal, and mechanistic complexity. This comparison can be accomplished by fitting the models to a benchmark data set. Once the models are successfully fitted to the benchmark data, the prediction errors associated with each application can be quantified using the Monte Carlo simulation techniques.
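
    The Monte Carlo step at the heart of such a protocol can be illustrated with a deliberately simple surrogate: propagate uncertain parameters through a first-order decay relation and read off a prediction interval. The model and the distributions below are stand-ins, not the surface water quality models compared in the paper.

```python
# Monte Carlo propagation of parameter uncertainty through a simple
# first-order decay relation (illustrative stand-in only).
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
k = rng.lognormal(mean=np.log(0.3), sigma=0.2, size=n)   # decay rate (1/day), uncertain
c0 = rng.normal(12.0, 1.0, size=n)                       # upstream concentration (mg/L), uncertain
travel_time = 2.5                                        # days to the downstream station

c_pred = c0 * np.exp(-k * travel_time)                   # predicted downstream concentration
lo, med, hi = np.percentile(c_pred, [2.5, 50, 97.5])
print(f"downstream C: {med:.2f} mg/L (95% interval {lo:.2f}-{hi:.2f})")
```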

  9. Fragment-based ¹³C nuclear magnetic resonance chemical shift predictions in molecular crystals: An alternative to planewave methods

    SciTech Connect (OSTI)

    Hartman, Joshua D.; Beran, Gregory J. O.; Monaco, Stephen; Schatschneider, Bohdan

    2015-09-14

    We assess the quality of fragment-based ab initio isotropic ¹³C chemical shift predictions for a collection of 25 molecular crystals with eight different density functionals. We explore the relative performance of cluster, two-body fragment, combined cluster/fragment, and the planewave gauge-including projector augmented wave (GIPAW) models relative to experiment. When electrostatic embedding is employed to capture many-body polarization effects, the simple and computationally inexpensive two-body fragment model predicts both isotropic ¹³C chemical shifts and the chemical shielding tensors as well as both cluster models and the GIPAW approach. Unlike the GIPAW approach, hybrid density functionals can be used readily in a fragment model, and all four hybrid functionals tested here (PBE0, B3LYP, B3PW91, and B97-2) predict chemical shifts in noticeably better agreement with experiment than the four generalized gradient approximation (GGA) functionals considered (PBE, OPBE, BLYP, and BP86). A set of recommended linear regression parameters for mapping between calculated chemical shieldings and observed chemical shifts is provided based on these benchmark calculations. Statistical cross-validation procedures are used to demonstrate the robustness of these fits.
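
    The regression step mentioned above maps calculated isotropic shieldings (sigma) to observed shifts (delta) through delta = a*sigma + b, with cross-validation used to check robustness. The sketch below uses synthetic shielding/shift pairs and assumed slope/intercept values, not the benchmark set's regression parameters.

```python
# Sketch of the shielding-to-shift linear regression with leave-one-out
# cross-validation.  The data pairs are synthetic, not the benchmark values.
import numpy as np

rng = np.random.default_rng(5)
sigma = rng.uniform(20, 180, size=40)                     # calculated 13C shieldings (ppm)
delta = -0.97 * sigma + 172.0 + rng.normal(0, 1.5, 40)    # "observed" shifts (ppm), assumed mapping

def fit(s, d):
    a, b = np.polyfit(s, d, 1)                            # delta = a*sigma + b
    return a, b

errors = []
for i in range(sigma.size):                               # leave-one-out cross-validation
    mask = np.arange(sigma.size) != i
    a, b = fit(sigma[mask], delta[mask])
    errors.append(delta[i] - (a * sigma[i] + b))
print("LOO RMS error (ppm):", round(float(np.sqrt(np.mean(np.square(errors)))), 2))
```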

  10. Modeling uranium transport in acidic contaminated groundwater with base addition

    SciTech Connect (OSTI)

    Zhang, Fan; Luo, Wensui; Parker, Jack C.; Brooks, Scott C; Watson, David B; Jardine, Philip; Gu, Baohua

    2011-01-01

    This study investigates reactive transport modeling in a column of uranium(VI)-contaminated sediments with base additions in the circulating influent. The groundwater and sediment exhibit oxic conditions with low pH, high concentrations of NO₃⁻, SO₄²⁻, U and various metal cations. Preliminary batch experiments indicate that additions of strong base induce rapid immobilization of U for this material. In the column experiment that is the focus of the present study, effluent groundwater was titrated with NaOH solution in an inflow reservoir before reinjection to gradually increase the solution pH in the column. An equilibrium hydrolysis, precipitation and ion exchange reaction model developed through simulation of the preliminary batch titration experiments predicted faster reduction of aqueous Al than observed in the column experiment. The model was therefore modified to consider reaction kinetics for the precipitation and dissolution processes which are the major mechanism for Al immobilization. The combined kinetic and equilibrium reaction model adequately described variations in pH, aqueous concentrations of metal cations (Al, Ca, Mg, Sr, Mn, Ni, Co), sulfate and U(VI). The experimental and modeling results indicate that U(VI) can be effectively sequestered with controlled base addition due to sorption by slowly precipitated Al with pH-dependent surface charge. The model may prove useful to predict field-scale U(VI) sequestration and remediation effectiveness.

  11. Predictive modeling of synergistic effects in nanoscale ion track formation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zarkadoula, Eva; Pakarinen, Olli H.; Xue, Haizhou; Zhang, Yanwen; Weber, William J.

    2015-08-05

    Molecular dynamics techniques and the inelastic thermal spike model are used to study the coupled effects of inelastic energy loss due to 21 MeV Ni ion irradiation and pre-existing defects in SrTiO3. We determine the dependence on pre-existing defect concentration of nanoscale track formation occurring from the synergy between the inelastic energy loss and the pre-existing atomic defects. We show that the nanoscale ion tracks’ size can be controlled by the concentration of pre-existing disorder. This work identifies a major gap in fundamental understanding concerning the role played by defects in electronic energy dissipation and electron–lattice coupling.

  12. Model predictive control system and method for integrated gasification combined cycle power generation

    DOE Patents [OSTI]

    Kumar, Aditya; Shi, Ruijie; Kumar, Rajeeva; Dokucu, Mustafa

    2013-04-09

    Control system and method for controlling an integrated gasification combined cycle (IGCC) plant are provided. The system may include a controller coupled to a dynamic model of the plant to process a prediction of plant performance and determine a control strategy for the IGCC plant over a time horizon subject to plant constraints. The control strategy may include control functionality to meet a tracking objective and control functionality to meet an optimization objective. The control strategy may be configured to prioritize the tracking objective over the optimization objective based on a coordinate transformation, such as an orthogonal or quasi-orthogonal projection. A plurality of plant control knobs may be set in accordance with the control strategy to generate a sequence of coordinated multivariable control inputs to meet the tracking objective and the optimization objective subject to the prioritization resulting from the coordinate transformation.

  13. Models for prediction of temperature difference and ventilation effectiveness with displacement ventilation

    SciTech Connect (OSTI)

    Yuan, X.; Chen, Q.; Glicksman, L.R.

    1999-07-01

    Displacement ventilation may provide better indoor air quality than mixing ventilation. Proper design of displacement ventilation requires information concerning the air temperature difference between the head and foot level of a sedentary person and the ventilation effectiveness at the breathing level. This paper presents models to predict the air temperature difference and the ventilation effectiveness, based on a database of 56 cases with displacement ventilation. The database was generated by using a validated CFD program and covers four different types of US buildings: small offices, large offices with partitions, classrooms, and industrial workshops under different thermal and flow boundary conditions. Both the maximum cooling load that can be removed by displacement ventilation and the ventilation effectiveness are shown to depend on the heat source type and ventilation rate in a room.

  14. The Effect of the Contact Model on Predicting Impact-Vibration Response.

    Office of Scientific and Technical Information (OSTI)

    Conference paper. Author: Brake, Matthew Robert. Publication date: 2012-06-01. OSTI Identifier: 1064253. Report Number: SAND2012-5215C. DOE Contract Number: AC04-94AL85000. Proposed for presentation at ASME 2012.

  15. Predictive modeling of synergistic effects in nanoscale ion track formation

    SciTech Connect (OSTI)

    Zarkadoula, Eva; Pakarinen, Olli H.; Xue, Haizhou; Zhang, Yanwen; Weber, William J.

    2015-08-05

    Molecular dynamics techniques and the inelastic thermal spike model are used to study the coupled effects of inelastic energy loss due to 21 MeV Ni ion irradiation and pre-existing defects in SrTiO3. We determine the dependence on pre-existing defect concentration of nanoscale track formation occurring from the synergy between the inelastic energy loss and the pre-existing atomic defects. We show that the nanoscale ion tracks’ size can be controlled by the concentration of pre-existing disorder. This work identifies a major gap in fundamental understanding concerning the role played by defects in electronic energy dissipation and electron–lattice coupling.

  16. Predicting carcinogenicity of diverse chemicals using probabilistic neural network modeling approaches

    SciTech Connect (OSTI)

    Singh, Kunwar P.; Gupta, Shikha; Rai, Premanjali

    2013-10-15

    Robust global models capable of discriminating positive and non-positive carcinogens; and predicting carcinogenic potency of chemicals in rodents were developed. The dataset used, 834 structurally diverse chemicals extracted from the Carcinogenic Potency Database (CPDB), contained 466 positive and 368 non-positive carcinogens. Twelve non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals and nonlinearity in the data were evaluated using Tanimoto similarity index and Brock–Dechert–Scheinkman statistics. Probabilistic neural network (PNN) and generalized regression neural network (GRNN) models were constructed for classification and function optimization problems using the carcinogenicity end point in rat. Validation of the models was performed using the internal and external procedures employing a wide series of statistical checks. PNN constructed using five descriptors rendered classification accuracy of 92.09% in complete rat data. The PNN model rendered classification accuracies of 91.77%, 80.70% and 92.08% in mouse, hamster and pesticide data, respectively. The GRNN constructed with nine descriptors yielded correlation coefficient of 0.896 between the measured and predicted carcinogenic potency with mean squared error (MSE) of 0.44 in complete rat data. The rat carcinogenicity model (GRNN) applied to the mouse and hamster data yielded correlation coefficient and MSE of 0.758, 0.71 and 0.760, 0.46, respectively. The results suggest wide applicability of the inter-species models in predicting carcinogenic potency of chemicals. Both the PNN and GRNN (inter-species) models constructed here can be useful tools in predicting the carcinogenicity of new chemicals for regulatory purposes. - Graphical abstract: Figure (a) shows classification accuracies (positive and non-positive carcinogens) in rat, mouse, hamster, and pesticide data yielded by optimal PNN model. Figure (b) shows generalization and predictive abilities of the interspecies GRNN model to predict the carcinogenic potency of diverse chemicals. - Highlights: • Global robust models constructed for carcinogenicity prediction of diverse chemicals. • Tanimoto/BDS test revealed structural diversity of chemicals and nonlinearity in data. • PNN/GRNN successfully predicted carcinogenicity/carcinogenic potency of chemicals. • Developed interspecies PNN/GRNN models for carcinogenicity prediction. • Proposed models can be used as tool to predict carcinogenicity of new chemicals.

  17. Advanced Models and Controls for Prediction and Extension of Battery Lifetime (Presentation)

    SciTech Connect (OSTI)

    Smith, K.; Wood, E.; Santhanagopalan, S.; Kim, G.; Pesaran, A.

    2014-02-01

    Predictive models of capacity and power fade must consider a multiplicity of degradation modes experienced by Li-ion batteries in the automotive environment. Lacking accurate models and tests, lifetime uncertainty must presently be absorbed by overdesign and excess warranty costs. To reduce these costs and extend life, degradation models are under development that predict lifetime more accurately and with less test data. The lifetime models provide engineering feedback for cell, pack and system designs and are being incorporated into real-time control strategies.

  18. Improving models to predict phenological responses to global change

    SciTech Connect (OSTI)

    Richardson, Andrew D.

    2015-11-25

    The term phenology describes both the seasonal rhythms of plants and animals, and the study of these rhythms. Plant phenological processes, including, for example, when leaves emerge in the spring and change color in the autumn, are highly responsive to variation in weather (e.g. a warm vs. cold spring) as well as longer-term changes in climate (e.g. warming trends and changes in the timing and amount of rainfall). We conducted a study to investigate the phenological response of northern peatland communities to global change. Field work was conducted at the SPRUCE experiment in northern Minnesota, where we installed 10 digital cameras. Imagery from the cameras is being used to track shifts in plant phenology driven by elevated carbon dioxide and elevated temperature in the different SPRUCE experimental treatments. Camera imagery and derived products (“greenness”) is being posted in near-real time on a publicly available web page (http://phenocam.sr.unh.edu/webcam/gallery/). The images will provide a permanent visual record of the progression of the experiment over the next 10 years. Integrated with other measurements collected as part of the SPRUCE program, this study is providing insight into the degree to which phenology may mediate future shifts in carbon uptake and storage by peatland ecosystems. In the future, these data will be used to develop improved models of vegetation phenology, which will be tested against ground observations collected by a local collaborator.

  19. Modeling of fluidized-bed combustion of coal: Phase II, final reports. Volume III. Model predictions and results

    SciTech Connect (OSTI)

    Louis, J.F.; Tung, S.E.

    1980-10-01

    This document is the third of a seven-volume series comprising our Phase II Final Report. This volume deals with parametric studies carried out using the FBC model; a comparison with pilot-plant data is included where such data are available. In essence, this volume documents model performance, describing predictions of bubble growth, combustion characteristics, sulfur capture, heat transfer, and related parameters. The model has approximately forty input variables at the disposal of the user, who has the option to change a few or all of them. In the parametric studies reported here, the large number of input variables whose variation is less critical to the predicted results were held constant at their default values. On the other hand, the parameters whose selection is most important in the design and operation of FBCs were varied over suitable operating regions. Chief among these parameters are: bed temperature, coal feed size distribution (2 parameters), average bed-sorbent size, calcium-to-sulfur molar ratio, superficial velocity, excess air fraction, and bed weight (or bed height). The computations for obtaining the parametric relationships are based on a selected combustor geometry: the bed cross-section is 6' x 6', the bed height is 4', and the freeboard height is 16'. The heat transfer tubes have a 2'' OD and a 10'' pitch and are located on an equilateral-triangle pattern. The air distributor is a perforated plate with 0.1''-diameter holes on a rectangular grid with 0.75'' center-to-center spacing.

  20. Global warming and climate change - predictive models for temperate and tropical regions

    SciTech Connect (OSTI)

    Malini, B.H.

    1997-12-31

    Based on the assumption of a 4°C increase in global temperature by the turn of the 21st century due to the accumulation of greenhouse gases, an attempt is made to study the possible variations in different climatic regimes. The predictive climatic water balance model for Hokkaido island of Japan (a temperate zone) indicates the possible occurrence of water deficit for two to three months, an unknown phenomenon in this region at present. Similarly, India, which represents the tropical region, will also experience much drier climates with increased water deficit conditions. As a consequence, the thermal regime of Hokkaido, which at present is mostly Tundra and Microthermal, will change into a Mesothermal category. Similarly, the moisture regime, which at present supports perhumid (A2, A3 and A4) and humid (B4) climates, can support A1, B4, B3, B2 and B1 climates, indicating a shift towards the drier side of the climatic spectrum. Further, the predictive models for both regions indicate increased evapotranspiration rates. Although there is not much change in the overall thermal characteristics of the Indian region, the moisture regime indicates a clear shift towards aridity in the country.

  1. Model Predictive Control of HVAC Systems: Implementation and Testing at the University of California, Merced

    SciTech Connect (OSTI)

    Haves, Phillip; Hencey, Brandon; Borrell, Francesco; Elliot, John; Ma, Yudong; Coffey, Brian; Bengea, Sorin; Wetter, Michael

    2010-06-29

    A Model Predictive Control algorithm was developed for the UC Merced campus chilled water plant. Model predictive control (MPC) is an advanced control technology that has proven successful in the chemical process industry and other industries. The main goal of the research was to demonstrate the practical and commercial viability of MPC for optimization of building energy systems. The control algorithms were developed and implemented in MATLAB, allowing for rapid development and assessment of performance and robustness. The UC Merced chilled water plant includes three water-cooled chillers and a two-million-gallon chilled water storage tank. The tank is charged during the night to minimize on-peak electricity consumption and take advantage of the lower ambient wet bulb temperature. The control algorithms determined the optimal chilled water plant operation, including the chilled water supply (CHWS) temperature set-point, the condenser water supply (CWS) temperature set-point, and the charging start and stop times, to minimize a cost function that includes energy consumption and peak electrical demand over a 3-day prediction horizon. A detailed model of the chilled water plant and simplified models of the buildings served by the plant were developed using the equation-based modeling language Modelica. Steady-state models of the chillers, cooling towers and pumps were developed based on manufacturers' performance data and calibrated using measured data collected and archived by the control system. A detailed dynamic model of the chilled water storage tank was also developed and calibrated. Simple, semi-empirical models were developed to predict the temperature and flow rate of the chilled water returning to the plant from the buildings. These models were then combined and simplified for use in a model predictive control algorithm that determines the optimal chiller start and stop times and set-points for the condenser water temperature and the chilled water supply temperature. The report describes the development and testing of the algorithm and evaluates the resulting performance, concluding with a discussion of next steps in further research. The experimental results show a small improvement in COP over the baseline policy, but it is difficult to draw any strong conclusions about the energy savings potential for MPC with this system because only four days of suitable experimental data were obtained once correct operation of the MPC system had been achieved. These data show an improvement in COP of 3.1% ± 2.2% relative to a baseline established immediately prior to the period when the MPC was run in its final form. This baseline includes control policy improvements that the plant operators learned by observing the earlier implementations of MPC, including increasing the temperature of the water supplied to the chiller condensers from the cooling towers. The process of data collection and model development, necessary for any MPC project, resulted in the team uncovering various problems with the chilled water system. Although it is difficult to quantify the energy savings resulting from these problems being remedied, they were likely of the same order as the energy savings from the MPC itself. Although the types of problems uncovered and the level of energy savings may differ significantly in other projects, some of the benefits of detecting and diagnosing problems are expected from the use of MPC for any chilled water plant. The degree of chiller loading was found to be a key factor for efficiency: it is more efficient to operate the chillers at or near full load. To maximize the chiller load, one would maximize the temperature difference across the chillers and the chilled water flow rate through the chillers. Thus, the CHWS set-point and the chilled water flow rate can be used to limit the chiller loading to prevent chiller surging. Since the flow rate has an upper bound and the CHWS set-point has a lower bound, the chiller loading is constrained and often determined by the chilled water return (CHWR) temperature.
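
    To make the cost-over-a-horizon idea concrete, the sketch below sets up a toy receding-horizon optimization for a chiller plus storage tank: minimize energy cost plus a peak-demand penalty over 72 hourly steps. All loads, tariffs, COP, and tank limits are invented placeholders; this is a generic illustration, not the UC Merced controller.

```python
# Toy chilled-water-storage MPC horizon problem (hypothetical parameters, not the LBNL/UC Merced code).
import cvxpy as cp
import numpy as np

H = 72                                                        # 3-day horizon, hourly steps
load = 2.0 + np.sin(np.linspace(0, 6 * np.pi, H))             # assumed campus cooling load [MW_th]
hour = np.arange(H) % 24
price = np.where((hour > 11) & (hour < 19), 0.15, 0.07)       # assumed on-/off-peak tariff [$/kWh_e]
cop = 5.0                                                     # assumed constant chiller COP

q_chiller = cp.Variable(H, nonneg=True)                       # chiller thermal output [MW_th]
soc = cp.Variable(H + 1, nonneg=True)                         # tank state of charge [MWh_th]

constraints = [soc[0] == 4.0, soc <= 8.0, q_chiller <= 4.0]
for t in range(H):
    constraints.append(soc[t + 1] == soc[t] + q_chiller[t] - load[t])   # tank energy balance

electric_kw = q_chiller * 1000 / cop                          # electrical input each hour [kW_e]
cost = cp.sum(cp.multiply(price, electric_kw))                # energy cost over the horizon
objective = cp.Minimize(cost + 0.02 * cp.max(electric_kw))    # crude peak-demand penalty

cp.Problem(objective, constraints).solve()
print("Optimal horizon cost: $%.0f" % cost.value)
```

    In a receding-horizon implementation only the first hour of the solution would be applied before re-solving with updated load and weather forecasts.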

  2. the-schedule-based-transit-model

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Schedule-Based Transit Model of the Chicago Metropolitan Area Vadim Sokolov Transportation Research and Analysis Computing Center Argonne National Laboratory List of Authors ================ Vadim Sokolov Transportation Research and Analysis Computing Center Argonne National Laboratory 277 International Drive West Chicago, IL 60185 Abstract ========= Public transit systems are usually modeled using a so-called frequency-based approach, in which transit route times are defined in terms of

  3. Mechanism-based Representative Volume Elements (RVEs) for Predicting Property Degradations in Multiphase Materials

    SciTech Connect (OSTI)

    Xu, Wei; Sun, Xin; Li, Dongsheng; Ryu, Seun; Khaleel, Mohammad A.

    2013-02-01

    Quantitative understanding of the evolving thermal-mechanical properties of a multi-phase material hinges upon the availability of quantitative, statistically representative microstructure descriptions. Questions then arise as to whether a two-dimensional (2D) or a three-dimensional (3D) representative volume element (RVE) should be considered as the statistically representative microstructure. Although 3D models are generally more representative than 2D models, they are usually computationally expensive and difficult to reconstruct. In this paper, we evaluate the accuracy of a 2D RVE in predicting the property degradations induced by different degradation mechanisms, using the multiphase solid oxide fuel cell (SOFC) anode material as an example. Both 2D and 3D microstructure RVEs of the anodes are adopted to quantify the effects of two different degradation mechanisms: humidity-induced electrochemical degradation and phosphorus-poisoning-induced structural degradation. The predictions of the 2D model are then compared with the available experimental measurements and the results from the 3D model. It is found that the 2D model, limited by its inability to reproduce realistic electrical percolation, cannot accurately predict the degradation of thermo-electrical properties. On the other hand, for the phosphorus-poisoning-induced structural degradation, both 2D and 3D microstructures yield similar results, indicating that the 2D model is capable of providing computationally efficient yet accurate results for studying structural degradation within the anodes.

  4. Combining Traditional Cyber Security Audit Data with Psychosocial Data: Towards Predictive Modeling for Insider Threat Mitigation

    SciTech Connect (OSTI)

    Greitzer, Frank L.; Frincke, Deborah A.

    2010-09-01

    The purpose of this chapter is to motivate the combination of traditional cyber security audit data with psychosocial data, so as to move from an insider threat detection stance to one that enables prediction of potential insider presence. Two distinctive aspects of the approach are the objective of predicting or anticipating potential risks and the use of organizational data in addition to cyber data to support the analysis. The chapter describes the challenges of this endeavor and progress in defining a usable set of predictive indicators, developing a framework for integrating the analysis of organizational and cyber security data to yield predictions about possible insider exploits, and developing the knowledge base and reasoning capability of the system. We also outline the types of errors that one expects in a predictive system versus a detection system and discuss how those errors can affect the usefulness of the results.

  5. Model-Based Design and Integration of Large Li-ion Battery Systems

    SciTech Connect (OSTI)

    Smith, Kandler; Kim, Gi-Heon; Santhanagopalan, Shriram; Shi, Ying; Pesaran, Ahmad; Mukherjee, Partha; Barai, Pallab; Maute, Kurt; Behrou, Reza; Patil, Chinmaya

    2015-11-17

    This presentation introduces physics-based models of batteries and software toolsets, including those developed by the U.S. Department of Energy's (DOE) Computer-Aided Engineering for Electric-Drive Vehicle Batteries Program (CAEBAT). The presentation highlights achievements and gaps in model-based tools for materials-to-systems design, lifetime prediction and control.

  6. A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model

    SciTech Connect (OSTI)

    Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A

    2009-03-03

    Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR, which relaxes the requirement that material boundaries must lie along mesh boundaries.

  7. Microstructure-based approach for predicting crack initiation and early growth in metals.

    SciTech Connect (OSTI)

    Cox, James V.; Emery, John M.; Brewer, Luke N.; Reedy, Earl David, Jr.; Puskar, Joseph David; Bartel, Timothy James; Dingreville, Remi P. M.; Foulk, James W., III; Battaile, Corbett Chandler; Boyce, Brad Lee

    2009-09-01

    Fatigue cracking in metals has long been an area of great importance to the science and technology of structural materials. The earliest stages of fatigue crack nucleation and growth are dominated by the microstructure, and yet few models are able to predict the fatigue behavior during these stages because of a lack of microstructural physics in the models. This program has developed several new simulation tools to increase the microstructural physics available for fatigue prediction. In addition, this program has extended and developed microscale experimental methods to allow the validation of new microstructural models for deformation in metals. We have applied these developments to fatigue experiments in metals where the microstructure has been intentionally varied.

  8. Land-ice modeling for sea-level prediction (Technical Report) | SciTech

    Office of Scientific and Technical Information (OSTI)

    Connect Technical Report: Land-ice modeling for sea-level prediction Citation Details In-Document Search Title: Land-ice modeling for sea-level prediction Authors: Lipscomb, William H [1] + Show Author Affiliations Los Alamos National Laboratory Publication Date: 2010-06-11 OSTI Identifier: 1172858 Report Number(s): LA-UR-10-04049; LA-UR-10-4049 DOE Contract Number: AC52-06NA25396 Resource Type: Technical Report Research Org: Los Alamos National Laboratory (LANL) Sponsoring Org: DOE Country

  9. NEAR FIELD MODELING OF SPE1 EXPERIMENT AND PREDICTION OF THE SECOND SOURCE

    Office of Scientific and Technical Information (OSTI)

    PHYSICS EXPERIMENTS (SPE2) (Technical Report) | SciTech Connect NEAR FIELD MODELING OF SPE1 EXPERIMENT AND PREDICTION OF THE SECOND SOURCE PHYSICS EXPERIMENTS (SPE2) Citation Details In-Document Search Title: NEAR FIELD MODELING OF SPE1 EXPERIMENT AND PREDICTION OF THE SECOND SOURCE PHYSICS EXPERIMENTS (SPE2) Motion along joints and fractures in the rock has been proposed as one of the sources of near-source shear wave generation, and demonstrating the validity of this hypothesis is a focal

  10. Development of a land ice core for the Model for Prediction Across Scales

    Office of Scientific and Technical Information (OSTI)

    (MPAS) (Conference) | SciTech Connect of a land ice core for the Model for Prediction Across Scales (MPAS) Citation Details In-Document Search Title: Development of a land ice core for the Model for Prediction Across Scales (MPAS) No abstract prepared. Authors: Hoffman, Matthew J [1] + Show Author Affiliations Los Alamos National Laboratory Publication Date: 2012-06-25 OSTI Identifier: 1044843 Report Number(s): LA-UR-12-22469 TRN: US201214%%525 DOE Contract Number: AC52-06NA25396 Resource

  11. Mass-transport models to predict toxicity of inhaled gases in the upper respiratory tract

    SciTech Connect (OSTI)

    Hubal, E.A.C.; Fedkiw, P.S.; Kimbell, J.S. [North Carolina State Univ., Raleigh, NC (United States)

    1996-04-01

    Mass-transport (the movement of a chemical species) plays an important role in determining toxic responses of the upper respiratory tract (URT) to inhaled chemicals. Mathematical dosimetry models incorporate physical characteristics of mass transport and are used to predict quantitative uptake (absorption rate) and distribution of inhaled gases and vapors in the respiratory tract. Because knowledge of dose is an essential component of quantitative risk assessment, dosimetry modeling plays an important role in extrapolation of animal study results to humans. A survey of existing mathematical dosimetry models for the URT is presented, limitations of current models are discussed, and adaptations of existing models to produce a generally applicable model are suggested. Reviewed URT dosimetry models are categorized as early, lumped-parameter, and distributed-parameter models. Specific examples of other relevant modeling work are also presented. 35 refs., 11 figs., 1 tab.

  12. Model-Predictive Cascade Mitigation in Electric Power Systems With Storage and Renewables-Part II: Case-Study

    SciTech Connect (OSTI)

    Almassalkhi, MR; Hiskens, IA

    2015-01-01

    The novel cascade-mitigation scheme developed in Part I of this paper is implemented within a receding-horizon model predictive control (MPC) scheme with a linear controller model. The present paper illustrates the MPC strategy with a case study based on the IEEE RTS-96 network, augmented with energy storage and renewable generation. It is shown that the MPC strategy alleviates temperature overloads on transmission lines by rescheduling generation, energy storage, and other network elements, while taking into account ramp-rate limits and network limitations. Resilient performance is achieved despite the use of a simplified linear controller model. The MPC scheme is compared against a base case that seeks to emulate human operator behavior.

  13. Predicting ecological roles in the rhizosphere using metabolome and transportome modeling

    SciTech Connect (OSTI)

    Larsen, Peter E.; Collart, Frank R.; Dai, Yang; Blanchard, Jeffrey L.

    2015-09-02

    The ability to obtain complete genome sequences from bacteria in environmental samples, such as soil samples from the rhizosphere, has highlighted the microbial diversity and complexity of environmental communities. New algorithms to analyze genome sequence information in the context of community structure are needed to enhance our understanding of the specific ecological roles of these organisms in soil environments. We present a machine learning approach using sequenced Pseudomonad genomes coupled with outputs of metabolic and transportomic computational models for identifying the most predictive molecular mechanisms indicative of a Pseudomonad’s ecological role in the rhizosphere: a biofilm, biocontrol agent, promoter of plant growth, or plant pathogen. Computational predictions of ecological niche were highly accurate overall, with models trained on transportomic model output being the most accurate (Leave One Out Validation F-scores between 0.82 and 0.89). The strongest predictive molecular mechanism features for rhizosphere ecological niche overlap with many previously reported analyses of Pseudomonad interactions in the rhizosphere, suggesting that this approach successfully informs a system-scale level understanding of how Pseudomonads sense and interact with their environments. The observation that an organism’s transportome is highly predictive of its ecological niche is a novel discovery and may have implications for our understanding of microbial ecology. The framework developed here can be generalized to the analysis of any bacteria across a wide range of environments and ecological niches, making this approach a powerful tool for providing insights into functional predictions from bacterial genomic data.
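
    The leave-one-out F-score evaluation described above can be reproduced in outline with standard tooling. The sketch below uses random stand-in features and labels and a random-forest classifier as a generic placeholder; it illustrates the validation scheme, not the actual Pseudomonad models or data.

```python
# Leave-one-out validation with an F-score, on simulated stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))            # e.g., transporter-flux features per genome (hypothetical)
y = rng.integers(0, 2, size=60)          # e.g., plant pathogen vs. not (hypothetical labels)

pred = cross_val_predict(RandomForestClassifier(n_estimators=200, random_state=0),
                         X, y, cv=LeaveOneOut())
print("LOO F-score: %.2f" % f1_score(y, pred))
```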

  14. Subtask 2.4 - Integration and Synthesis in Climate Change Predictive Modeling

    SciTech Connect (OSTI)

    Jaroslav Solc

    2009-06-01

    The Energy & Environmental Research Center (EERC) completed a brief evaluation of the existing status of predictive modeling to assess options for integrating our previous paleohydrologic reconstructions and synthesizing them with current global climate scenarios. Results of our research indicate that the short-term data series available from modern instrumental records are not sufficient to reconstruct past hydrologic events or predict future ones. On the contrary, reconstruction of paleoclimate phenomena provided credible information on past climate cycles and confirmed that their integration in the context of regional climate history is possible. Similar to ice cores and other paleo proxies, the acquired data represent an objective, credible tool for model calibration and for validation of currently observed trends. It remains a subject of future research whether further refinement of our results and their synthesis with regional and global climate observations could improve the credibility of climate predictions on regional and global scales.

  15. Project Profile: Physics-Based Reliability Models for Supercritical...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    GE, under the Physics of Reliability: Evaluating Design Insights for Component Technologies in Solar (PREDICTS) Program, will be leveraging internally developed models ...

  16. Prediction of Liver Function by Using Magnetic Resonance-based Portal Venous Perfusion Imaging

    SciTech Connect (OSTI)

    Cao Yue; Wang Hesheng; Johnson, Timothy D.; Pan, Charlie; Hussain, Hero; Balter, James M.; Normolle, Daniel; Ben-Josef, Edgar; Ten Haken, Randall K.; Lawrence, Theodore S.; Feng, Mary

    2013-01-01

    Purpose: To evaluate whether liver function can be assessed globally and spatially by using volumetric dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to potentially aid in adaptive treatment planning. Methods and Materials: Seventeen patients with intrahepatic cancer undergoing focal radiation therapy (RT) were enrolled in institutional review board-approved prospective studies to obtain DCE-MRI (to measure regional perfusion) and indocyanine green (ICG) clearance rates (to measure overall liver function) prior to, during, and at 1 and 2 months after treatment. The volumetric distribution of portal venous perfusion in the whole liver was estimated for each scan. We assessed the correlation between mean portal venous perfusion in the nontumor volume of the liver and overall liver function measured by ICG before, during, and after RT. The dose response for regional portal venous perfusion to RT was determined using a linear mixed effects model. Results: There was a significant correlation between the ICG clearance rate and mean portal venous perfusion in the functioning liver parenchyma, suggesting that portal venous perfusion could be used as a surrogate for function. Reduction in regional venous perfusion 1 month after RT was predicted by the locally accumulated biologically corrected dose at the end of RT (P<.0007). Regional portal venous perfusion measured during RT was a significant predictor for regional venous perfusion assessed 1 month after RT (P<.00001). Global hypovenous perfusion pre-RT was observed in 4 patients (3 patients with hepatocellular carcinoma and cirrhosis), 3 of whom had recovered from hypoperfusion, except in the highest-dose regions, post-RT. In addition, 3 patients who had normal perfusion pre-RT had marked hypervenous perfusion or reperfusion in low-dose regions post-RT. Conclusions: This study suggests that MR-based volumetric hepatic perfusion imaging may be a biomarker for the spatial distribution of liver function, which could aid in individualizing therapy, particularly for patients at risk for liver injury after RT.

  17. Model-Based Sampling and Inference

    U.S. Energy Information Administration (EIA) Indexed Site

    Model-Based Sampling, Inference and Imputation James R. Knaub, Jr., Energy Information Administration, EI-53.1 James.Knaub@eia.doe.gov Key Words: Survey statistics, Randomization, Conditionality, Random sampling, Cutoff sampling Abstract: Picking a sample through some randomization mechanism, such as random sampling within groups (stratified random sampling), or, say, sampling every fifth item (systematic random sampling), may be familiar to a lot of people. These are design-based samples.
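
    Cutoff sampling with a model-based (ratio) estimator, one of the approaches contrasted with design-based sampling in the abstract above, can be sketched in a few lines. The data, the cutoff rule, and the simple ratio model below are illustrative assumptions, not the EIA methodology itself.

```python
# Cutoff sampling with a simple ratio (model-based) estimator, on simulated establishment data.
import numpy as np

rng = np.random.default_rng(3)
x = rng.lognormal(3, 1, 1000)                      # known auxiliary size measure for every unit
y = 2.5 * x * (1 + 0.1 * rng.normal(size=1000))    # survey variable, observed only for sampled units

cutoff = np.quantile(x, 0.8)                       # sample the largest 20% of units (cutoff sample)
sampled = x >= cutoff
b = y[sampled].sum() / x[sampled].sum()            # ratio model y ≈ b*x fit on the sampled units
total_estimate = y[sampled].sum() + b * x[~sampled].sum()   # predict the unsampled remainder
print(total_estimate, y.sum())                     # compare with the (normally unknown) true total
```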

  18. Long-Fiber Thermoplastic Injection Molded Composites: from Process Modeling to Property Prediction

    SciTech Connect (OSTI)

    Nguyen, Ba Nghiep; Holbery, Jim D.; Johnson, Kenneth I.; Smith, Mark T.

    2005-09-01

    Recently, long-fiber-filled thermoplastics have become of great interest to the automotive industry, since these materials offer much better property performance (e.g., elastic moduli, strength, durability…) than their short-fiber analogues, and they can be processed through injection molding with some specific tool design. However, for long-fiber thermoplastic injection-molded composites to be used efficiently in automotive applications, there is a tremendous need to develop process and constitutive models, as well as computational tools, to predict the microstructure of the as-formed composite and its resulting properties and macroscopic responses from processing to the final product. The microstructure and properties of such a composite are governed by (i) flow-induced fiber orientation, (ii) fiber breakage during injection molding, and (iii) processing conditions (e.g., pressure, mold and melt temperatures, mold geometries, injection speed, etc.). This paper highlights our efforts to address these challenging issues. The work is an integrated part of a research program supported by the US Department of Energy, which includes • the development of process models for long-fiber-filled thermoplastics, • the construction of an interface between process modeling and property prediction, as well as the development of new constitutive models to perform linear and nonlinear structural analyses, and • experimental characterization of model parameters and verification of the model predictions.

  19. A MULTISCALE, CELL-BASED FRAMEWORK FOR MODELING CANCER DEVELOPMENT

    SciTech Connect (OSTI)

    JIANG, YI

    2007-01-16

    Cancer remains one of the leading causes of death from disease. We use a systems approach that combines mathematical modeling, numerical simulation, and in vivo and in vitro experiments to develop a predictive model that medical researchers can use to study and treat cancerous tumors. The multiscale, cell-based model includes intracellular regulation, cellular-level dynamics and intercellular interactions, and extracellular-level chemical dynamics. The intracellular protein regulation and signaling pathways are described by Boolean networks. The cellular-level growth and division dynamics, cellular adhesion, and interaction with the extracellular matrix are described by a lattice Monte Carlo model (the Cellular Potts Model). The extracellular dynamics of the signaling molecules and metabolites are described by a system of reaction-diffusion equations. All three levels of the model are integrated through a hybrid parallel scheme into a high-performance simulation tool. The simulation results reproduce experimental data for both avascular tumors and tumor angiogenesis. By combining the model with experimental data to construct biologically accurate simulations of tumors and their vascular systems, this model will enable medical researchers to gain a deeper understanding of the cellular and molecular interactions associated with cancer progression and treatment.
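
    The extracellular level of such a model is a reaction-diffusion system. As a minimal sketch of that component, the code below advances a 1D nutrient field with diffusion and first-order uptake using an explicit finite-difference scheme; all coefficients and the geometry are illustrative, not values from the record.

```python
# 1D explicit reaction-diffusion sketch: nutrient diffusion with first-order cellular uptake.
import numpy as np

def diffuse_consume(c, D=1e-9, k=1e-4, dx=1e-5, dt=0.01, steps=1000):
    for _ in range(steps):
        lap = (np.roll(c, 1) + np.roll(c, -1) - 2 * c) / dx**2   # periodic discrete Laplacian
        c = c + dt * (D * lap - k * c)                            # diffusion minus uptake
    return c

c0 = np.ones(100)
c0[40:60] = 0.2                    # nutrient dip where cells are assumed dense
print(diffuse_consume(c0).min())   # stable since D*dt/dx**2 = 0.1 < 0.5
```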

  20. A DISLOCATION-BASED CLEAVAGE INITIATION MODEL FOR PRESSURE VESSEL

    SciTech Connect (OSTI)

    Cochran, Kristine B; Erickson, Marjorie A; Williams, Paul T; Klasky, Hilda B; Bass, Bennett Richard

    2012-01-01

    Efforts are under way to develop a theoretical, multi-scale model for the prediction of fracture toughness of ferritic steels in the ductile-to-brittle transition temperature (DBTT) region that accounts for temperature, irradiation, strain rate, and material condition (chemistry and heat treatment) effects. This new model is intended to address difficulties associated with existing empirically-derived models of the DBTT region that cannot be extrapolated to conditions for which data are unavailable. Dislocation distribution equations, derived from the theories of Yokobori et al., are incorporated to account for the local stress state prior to and following initiation of a microcrack from a second-phase particle. The new model is the basis for the DISlocation-based FRACture (DISFRAC) computer code being developed at the Oak Ridge National Laboratory (ORNL). The purpose of this code is to permit fracture safety assessments of ferritic structures with only tensile properties required as input. The primary motivation for the code is to assist in the prediction of radiation effects on nuclear reactor pressure vessels, in parallel with the EURATOM PERFORM 60 project.

  1. Monte Carlo and analytical model predictions of leakage neutron exposures from passively scattered proton therapy

    SciTech Connect (OSTI)

    Pérez-Andújar, Angélica [Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 (United States)]; Zhang, Rui; Newhauser, Wayne [Department of Radiation Physics, Unit 1202, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and The University of Texas Graduate School of Biomedical Sciences at Houston, 6767 Bertner Avenue, Houston, Texas 77030 (United States)]

    2013-12-15

    Purpose: Stray neutron radiation is of concern after radiation therapy, especially in children, because of the high risk it might carry for secondary cancers. Several previous studies predicted the stray neutron exposure from proton therapy, mostly using Monte Carlo simulations. Promising attempts to develop analytical models have also been reported, but these were limited to only a few proton beam energies. The purpose of this study was to develop an analytical model to predict leakage neutron equivalent dose from passively scattered proton beams in the 100-250 MeV interval. Methods: To develop and validate the analytical model, the authors used values of equivalent dose per therapeutic absorbed dose (H/D) predicted with Monte Carlo simulations. The authors also characterized the behavior of the mean neutron radiation-weighting factor, w_R, as a function of depth in a water phantom and distance from the beam central axis. Results: The simulated and analytical predictions agreed well. On average, the percentage difference between the analytical model and the Monte Carlo simulations was 10% for the energies and positions studied. The authors found that w_R was highest at the shallowest depth and decreased with depth until around 10 cm, where it started to increase slowly with depth. This was consistent among all energies. Conclusion: Simple analytical methods are promising alternatives to complex and slow Monte Carlo simulations for predicting H/D values. The authors' results also provide improved understanding of the behavior of w_R, which strongly depends on depth but is nearly independent of lateral distance from the beam central axis.

  2. Predicting ecological roles in the rhizosphere using metabolome and transportome modeling

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Larsen, Peter E.; Collart, Frank R.; Dai, Yang; Blanchard, Jeffrey L.

    2015-09-02

    The ability to obtain complete genome sequences from bacteria in environmental samples, such as soil samples from the rhizosphere, has highlighted the microbial diversity and complexity of environmental communities. New algorithms to analyze genome sequence information in the context of community structure are needed to enhance our understanding of the specific ecological roles of these organisms in soil environments. We present a machine learning approach using sequenced Pseudomonad genomes coupled with outputs of metabolic and transportomic computational models for identifying the most predictive molecular mechanisms indicative of a Pseudomonad’s ecological role in the rhizosphere: a biofilm, biocontrol agent, promoter of plant growth, or plant pathogen. Computational predictions of ecological niche were highly accurate overall, with models trained on transportomic model output being the most accurate (Leave One Out Validation F-scores between 0.82 and 0.89). The strongest predictive molecular mechanism features for rhizosphere ecological niche overlap with many previously reported analyses of Pseudomonad interactions in the rhizosphere, suggesting that this approach successfully informs a system-scale level understanding of how Pseudomonads sense and interact with their environments. The observation that an organism’s transportome is highly predictive of its ecological niche is a novel discovery and may have implications for our understanding of microbial ecology. The framework developed here can be generalized to the analysis of any bacteria across a wide range of environments and ecological niches, making this approach a powerful tool for providing insights into functional predictions from bacterial genomic data.

  3. An Elastic-Plastic and Strength Prediction Model for Injection-Molded Long-Fiber Thermoplastics

    SciTech Connect (OSTI)

    Nguyen, Ba Nghiep; Kunc, Vlastimil; Phelps, Jay; Tucker III, Charles L.; Bapanapalli, Satish K.

    2008-09-01

    This paper applies a recently developed model to predict the elastic-plastic stress/strain response and strength of injection-molded long-fiber thermoplastics (LFTs). The model combines a micro-macro constitutive modeling approach with experimental characterization and modeling of the composite microstructure to determine the composite stress/strain response and strength. Specifically, it accounts for elastic fibers embedded in a thermoplastic resin that exhibits elastic-plastic behavior obeying the Ramberg-Osgood relation and the J-2 deformation theory of plasticity. It also accounts for the fiber length, orientation and volume fraction distributions in the composite formed by the injection-molding process. Injection-molded long-glass-fiber/polypropylene (PP) specimens were prepared for mechanical characterization and testing. Fiber length, orientation, and volume fraction distributions were then measured at selected locations for use in the computation. Fiber orientations in these specimens were also predicted using an anisotropic rotary diffusion model developed for LFTs. The stress-strain response of the as-formed composite was computed by an incremental procedure that uses Eshelby's equivalent inclusion method, the Mori-Tanaka assumption and a fiber orientation averaging technique. The model has been validated against the experimental stress-strain results obtained for these long-glass-fiber/PP specimens.
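
    The Ramberg-Osgood relation cited above expresses total strain as an elastic term plus a power-law plastic term. The sketch below evaluates one common form of that relation; the modulus, reference stress, and hardening exponent are placeholder values, not the polypropylene matrix data from the study.

```python
# Ramberg-Osgood stress-strain sketch: strain = sigma/E + alpha*(sigma0/E)*(sigma/sigma0)**n.
# Material constants below are illustrative placeholders.
import numpy as np

def ramberg_osgood_strain(sigma, E=1.5e9, sigma0=30e6, alpha=0.4, n=6.0):
    return sigma / E + alpha * (sigma0 / E) * (sigma / sigma0) ** n

sig = np.linspace(0, 40e6, 5)          # applied stress levels [Pa]
print(ramberg_osgood_strain(sig))      # total strain at each stress level
```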

  4. A comparison of general circulation model predictions to sand drift and dune orientations

    SciTech Connect (OSTI)

    Blumberg, D.G.; Greeley, R.

    1996-12-01

    The growing concern over climate change and desertification stresses the importance of predicting aeolian processes. In this paper, the use of a general circulation model (GCM) to predict current aeolian features is examined. A GCM developed at NASA/Goddard Space Flight Center was used in conjunction with White's aeolian sand flux model to produce a global potential aeolian transport map. Surface wind shear stress predictions were taken from the output of a GCM simulation performed as part of the Atmospheric Model Intercomparison Project on 1979 climate conditions. The spatial resolution of this study (as driven by the GCM) is 4° x 5°; instantaneous 6-hourly wind stress data were saved by the GCM and used in this report. A global map showing potential sand transport was compared to drift potential directions as inferred from Landsat images from the 1980s for several sand seas and a coastal dune field. Generally, the results show a good correlation between the simulated sand drift direction and the drift direction inferred from dune forms. Discrepancies between the drift potential and the drift inferred from images were found in the North American deserts and the Arabian peninsula. An attempt to predict the type of dune that would form in specific regions was not successful. The model could probably be further improved by incorporating soil moisture, surface roughness, and vegetation information for a better assessment of sand threshold conditions. The correlation may permit use of a GCM to analyze "fossil" dunes or to forecast aeolian processes. 48 refs., 8 figs.

  5. Ductile Tearing of Thin Aluminum Plates Under Blast Loading. Predictions with Fully Coupled Models and Biaxial Material Response Characterization

    SciTech Connect (OSTI)

    Corona, Edmundo; Gullerud, Arne S.; Haulenbeek, Kimberly K.; Reu, Phillip L.

    2015-06-01

    The work presented in this report concerns the response and failure of thin 2024-T3 aluminum alloy circular plates subjected to a blast load produced by the detonation of a nearby spherical charge. The plates were fully clamped around the circumference, and the explosive charge was located centrally with respect to the plate. The principal objective was to conduct a numerical model validation study by comparing the results of predictions to experimental measurements of plate deformation and failure for charges with masses in the vicinity of the threshold between no tearing and tearing of the plates. Stereo digital image correlation data were acquired for all tests to measure the deflection and strains in the plates. The size of the virtual strain gage in the measurements, however, was relatively large, so the strain measurements have to be interpreted as lower bounds on the actual strains in the plate and on the severity of the strain gradients. A fully coupled interaction model between the blast and the deflection of the structure was considered. The results of the validation exercise indicated that the model predicted the deflection of the plates reasonably accurately, as well as the distribution of strain on the plate. The estimation of the threshold charge based on a critical value of equivalent plastic strain measured in a bulge test, however, was not accurate, despite efforts to determine the failure strain of the aluminum sheet under biaxial stress conditions. Further work is needed to be able to predict plate tearing with some degree of confidence. Given the current technology, at least one test under the actual blast conditions in which the plate tears is needed to calibrate the value of equivalent plastic strain at which failure occurs in the numerical model. Once that has been determined, the question of the explosive mass at the threshold could be addressed with more confidence.

  6. MM-Estimator and Adjusted Super Smoother based Simultaneous Prediction Confidence

    Energy Science and Technology Software Center (OSTI)

    2002-07-19

    A novel application of regression analysis (MM-estimator) with simultaneous prediction confidence intervals (SPCIs) is proposed to detect up- or down-regulated genes, which appear as outliers in scatter plots of log-transformed red (Cy5 fluorescent dye) versus green (Cy3 fluorescent dye) intensities. Advantages of the application: 1) the robust and resistant MM-estimator is a reliable method for building a linear regression in the presence of outliers; 2) exploratory data analysis tools (boxplots, averaged shifted histograms, quantile-quantile normal plots and scatter plots) are used to visually test the underlying assumptions of linearity and contaminated normality in microarray data; 3) simultaneous prediction confidence intervals (SPCIs) guarantee a desired confidence level across the whole range of data points used for the scatter plots. The result of the outlier detection procedure is a set of significantly differentially expressed genes extracted from the employed microarray data set. A scatter plot smoother (super smoother or locally weighted regression) is used to quantify heteroscedasticity in the residual variance (which commonly occurs in the lower- and higher-intensity regions). The set of differentially expressed genes is quantified using interval estimates of p-values as a probabilistic measure of being an outlier by chance. Monte Carlo simulations are used to adjust the super-smoother-based SPCIs.
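
    A rough sketch of the robust-regression-plus-outlier-flagging workflow is shown below on simulated intensities. Note the hedges: statsmodels' RLM is an M-estimator (Tukey biweight) used here as a stand-in for the MM-estimator, and the flag is a crude pointwise residual cutoff, not the simultaneous prediction intervals described in the record.

```python
# Robust regression of log(Cy5) on log(Cy3) with a simple outlier flag; simulated data,
# M-estimation stand-in for the MM-estimator described above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
log_green = rng.uniform(6, 14, 500)
log_red = log_green + rng.normal(0, 0.15, 500)
log_red[:10] += 2.0                                    # inject some "up-regulated" outliers

X = sm.add_constant(log_green)
fit = sm.RLM(log_red, X, M=sm.robust.norms.TukeyBiweight()).fit()
resid = log_red - fit.predict(X)
scale = sm.robust.scale.mad(resid)                     # robust residual scale (MAD)
outliers = np.abs(resid) > 3 * scale                   # genes flagged as differentially expressed
print("flagged:", outliers.sum())
```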

  7. Reduced-Order Model Based Feedback Control For Modified Hasegawa...

    Office of Scientific and Technical Information (OSTI)

    Reduced-Order Model Based Feedback Control For Modified Hasegawa-Wakatani Model Citation Details In-Document Search Title: Reduced-Order Model Based Feedback Control For Modified ...

  8. Developing algorithms for predicting protein-protein interactions of homology modeled proteins.

    SciTech Connect (OSTI)

    Martin, Shawn Bryan; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Roe, Diana C.

    2006-01-01

    The goal of this project was to examine the protein-protein docking problem, especially as it relates to homology-based structures, identify the key bottlenecks in current software tools, and evaluate and prototype new algorithms that may be developed to address these bottlenecks. This report describes the current challenges in the protein-protein docking problem: correctly predicting the binding site for the protein-protein interaction and correctly placing the sidechains. Two different and complementary approaches are taken that can help with the protein-protein docking problem. The first approach is to predict interaction sites prior to docking, using bioinformatics studies of protein-protein interactions. The second approach is to improve validation of predicted complexes after docking, using an improved scoring function that incorporates a solvation term for evaluating proposed docked poses. This scoring function demonstrates significant improvement over current state-of-the-art functions. Initial studies of both approaches are promising and argue for full development of these algorithms.

  9. A Screening Model to Predict Microalgae Biomass Growth in Photobioreactors and Raceway Ponds

    SciTech Connect (OSTI)

    Huesemann, Michael H.; Van Wagenen, Jonathan M.; Miller, Tyler W.; Chavis, Aaron R.; Hobbs, Watts B.; Crowe, Braden J.

    2013-06-01

    A microalgae biomass growth model was developed for screening novel strains for their potential to exhibit high biomass productivities under nutrient-replete conditions in photobioreactors or outdoor ponds. Growth is modeled by first estimating the light attenuation by biomass according to the Beer-Lambert law, and then calculating the specific growth rate in discretized culture volume slices that receive declining light intensities due to attenuation. The model requires only two physical and two species-specific biological input parameters, all of which are relatively easy to determine: incident light intensity, culture depth, the biomass light absorption coefficient, and the specific growth rate as a function of light intensity. Roux bottle culture experiments were performed with Nannochloropsis salina at constant temperature (23 °C) at seven incident light intensities (5, 10, 25, 50, 100, 250, and 850 µmol/m²·s) to determine both the specific growth rate under non-shading conditions and the biomass light absorption coefficient as a function of light intensity. The model was successful in predicting the biomass growth rate in these Roux bottle cultures during the light-limited linear phase at different incident light intensities. Model predictions were moderately sensitive to minor variations in the values of the input parameters. The model was also successful in predicting the growth performance of Chlorella sp. cultured in LED-lighted 800 L raceway ponds operated at constant temperature (30 °C) and constant light intensity (1650 µmol/m²·s). Measurements of oxygen concentrations as a function of time demonstrated that, following exposure to darkness, it takes at least 5 seconds for cells to initiate dark respiration. As a result, biomass loss due to dark respiration in the aphotic zone of a culture is unlikely to occur in highly mixed small-scale photobioreactors where cells move rapidly in and out of the light. By contrast, as also supported by the growth model, biomass loss due to dark respiration does occur in the dark zones of the relatively less well mixed pond cultures. In addition to screening novel microalgae strains for high biomass productivities, the model can also be used to optimize pond design and operation. Additional research is needed to validate the biomass growth model for other microalgae species and for the more realistic case of the fluctuating temperatures and light intensities observed in outdoor pond cultures.
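
    The two-parameter calculation described above, Beer-Lambert attenuation through depth slices combined with a growth-versus-irradiance response, can be sketched as follows. The absorption coefficient, maximum growth rate, half-saturation irradiance, and the saturating form of the growth curve are illustrative assumptions, not the fitted Nannochloropsis salina parameters from the study.

```python
# Light-limited growth sketch: attenuate incident light down discretized depth slices,
# evaluate a saturating growth-irradiance response per slice, and average over the culture.
import numpy as np

def mean_specific_growth(I0, depth_m, X_g_per_m3, ka=0.1, mu_max=1.2, Ik=60.0, slices=200):
    z = np.linspace(0, depth_m, slices)
    I = I0 * np.exp(-ka * X_g_per_m3 * z)     # Beer-Lambert attenuation by biomass with depth
    mu = mu_max * I / (I + Ik)                # assumed saturating growth-irradiance curve [1/day]
    return mu.mean()                          # volume-averaged specific growth rate

print(mean_specific_growth(I0=850.0, depth_m=0.25, X_g_per_m3=300.0))
```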

  10. Rolling Process Modeling Report: Finite-Element Prediction of Roll Separating Force and Rolling Defects

    SciTech Connect (OSTI)

    Soulami, Ayoub; Lavender, Curt A.; Paxton, Dean M.; Burkes, Douglas

    2014-04-23

    Pacific Northwest National Laboratory (PNNL) has been investigating manufacturing processes for the uranium-10% molybdenum (U-10Mo) alloy plate-type fuel for the U.S. high-performance research reactors. This work supports the Convert Program of the U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA) Global Threat Reduction Initiative. This report documents modeling results of PNNL’s efforts to perform finite-element simulations to predict roll separating forces and rolling defects. Simulations were performed using a finite-element model developed using the commercial code LS-Dyna. Simulations of the hot rolling of U-10Mo coupons encapsulated in low-carbon steel have been conducted following two different schedules. Model predictions of the roll-separation force and roll-pack thicknesses at different stages of the rolling process were compared with experimental measurements. This report discusses various attributes of the rolled coupons revealed by the model (e.g., dog-boning and thickness non-uniformity).

  11. Model based control of a coke battery

    SciTech Connect (OSTI)

    Stone, P.M.; Srour, J.M.; Zulli, P.; Cunningham, R.; Hockings, K.

    1997-12-31

    This paper describes a model-based strategy for coke battery control at BHP Steel's operations in Pt Kembla, Australia. The strategy uses several models describing the battery thermal and coking behavior. A prototype controller has been installed on the Pt Kembla No. 6 Battery (PK6CO). In trials, the new controller has been well accepted by operators and has resulted in a clear improvement in battery thermal stability, with a halving of the standard deviation of average battery temperature. Along with other improvements to that battery's operations, this implementation has contributed to a 10% decrease in specific battery energy consumption. A number of enhancements to the low-level control systems on that battery are currently being undertaken in order to realize further benefits.

  12. Depositional sequence analysis and sedimentologic modeling for improved prediction of Pennsylvanian reservoirs

    SciTech Connect (OSTI)

    Watney, W.L.

    1994-12-01

    Reservoirs in the Lansing-Kansas City limestone result from complex interactions among paleotopography (deposition and concurrent structural deformation), sea level, and diagenesis. Analysis of reservoirs and of surface and near-surface analogs has led to development of a "strandline grainstone model" in which relative sea level stabilized during regressions, resulting in accumulation of multiple grainstone buildups along depositional strike. The resulting stratigraphy in these carbonate units is generally predictable, correlating with inferred topographic elevation along the shelf. This model is a valuable predictive tool for (1) locating favorable reservoirs for exploration and (2) anticipating internal properties of the reservoir for field development. Reservoirs in the Lansing-Kansas City limestones are developed in both oolitic and bioclastic grainstones; however, re-analysis of oomoldic reservoirs provides the greatest opportunity for developing bypassed oil. A new technique, the "Super" Pickett crossplot (formation resistivity vs. porosity), and its use in an integrated petrophysical characterization have been developed to evaluate the extractable oil remaining in these reservoirs. The manual method, in combination with 3-D visualization and modeling, can help to target production-limiting heterogeneities in these complex reservoirs and, moreover, to compute critical parameters for the field such as bulk volume water. Application of this technique indicates that 6-9 million barrels of Lansing-Kansas City oil remain behind pipe in the Victory-Northeast Lemon Fields. Petroleum geologists are challenged to quantify inferred processes to aid in developing rational, geologically consistent models of sedimentation so that acceptable levels of prediction can be obtained.

  13. Biologically based multistage modeling of radiation effects

    SciTech Connect (OSTI)

    William Hazelton; Suresh Moolgavkar; E. Georg Luebeck

    2005-08-30

    This past year we have made substantial progress in modeling the contribution of homeostatic regulation to low-dose radiation effects and carcinogenesis. We have worked to refine and apply our multistage carcinogenesis models to explicitly incorporate cell cycle states, simple and complex damage, checkpoint delay, slow and fast repair, differentiation, and apoptosis to study the effects of low-dose ionizing radiation in mouse intestinal crypts, as well as in other tissues. We have one paper accepted for publication in "Advances in Space Research", and another manuscript in preparation describing this work. I also wrote a chapter describing our combined cell-cycle and multistage carcinogenesis model that will be published in a book on stochastic carcinogenesis models edited by Wei-Yuan Tan. In addition, we organized and held a workshop on "Biologically Based Modeling of Human Health Effects of Low Dose Ionizing Radiation", July 28-29, 2005, at the Fred Hutchinson Cancer Research Center in Seattle, Washington. We had over 20 participants, including Mary Helen Barcellos-Hoff as keynote speaker, talks by most of the low-dose modelers in the DOE low-dose program, experimentalists including Les Redpath (and Mary Helen), Noelle Metting from DOE, and Tony Brooks. It appears that homeostatic regulation may be central to understanding low-dose radiation phenomena. The primary effects of ionizing radiation (IR) are cell killing, delayed cell cycling, and induction of mutations. However, homeostatic regulation causes cells that are killed or damaged by IR to eventually be replaced. Cells with an initiating mutation may have a replacement advantage, leading to clonal expansion of these initiated cells. Thus we have focused particularly on modeling effects that disturb homeostatic regulation as early steps in the carcinogenic process. There are two primary considerations that support our focus on homeostatic regulation. First, a number of epidemiologic studies using multistage carcinogenesis models that incorporate the "initiation, promotion, and malignant conversion" paradigm of carcinogenesis indicate that promotion of initiated cells is the most important cellular mechanism driving the shape of the age-specific hazard for many types of cancer. Second, we have realized that many of the genes that are modified in early stages of the carcinogenic process contribute to one or more of four general cellular pathways that confer a promotional advantage on cells when these pathways are disrupted.

  14. Threshold Values for Identification of Contamination Predicted by Reduced-Order Models

    SciTech Connect (OSTI)

    Last, George V.; Murray, Christopher J.; Bott, Yi-Ju; Brown, Christopher F.

    2014-12-31

    The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts on underground sources of drinking water (USDWs) if CO2 or brine leaks from deep CO2 storage reservoirs. Threshold values, below which there would be no predicted impacts, were determined for portions of two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities.
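
    One common way to construct an interwell background threshold of the general kind discussed above is a normal upper prediction limit of the form mean + t * s * sqrt(1 + 1/n). The sketch below evaluates that formula on made-up background concentrations; it is a generic illustration of the statistical idea, not the specific NRAP reduced-order-model thresholds or the EPA Unified Guidance procedure.

```python
# Generic interwell background threshold as a normal upper prediction limit; illustrative data.
import numpy as np
from scipy import stats

background = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.3, 4.2, 4.4])   # hypothetical background concentrations
n = background.size
upl = (background.mean()
       + stats.t.ppf(0.99, n - 1) * background.std(ddof=1) * np.sqrt(1 + 1 / n))
print("99% upper prediction limit:", round(upl, 2))   # new samples above this suggest impact
```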

  15. Controlling bimetallic nanostructures by the microemulsion method with subnanometer resolution using a prediction model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Buceta, David; Tojo, Concha; Vukmirovic, Miomir B.; Deepak, F. Leonard; Lopez-Quintela, M. Arturo

    2015-06-02

    In this study, we present a theoretical model to predict the atomic structure of Au/Pt nanoparticles synthesized in microemulsions. Excellent concordance with the experimental results shows that the structure of the nanoparticles can be controlled at sub-nanometer resolution simply by changing the reactant concentrations. The results of this study not only offer a better understanding of the complex mechanisms governing reactions in microemulsions, but also open up a simple new way to synthesize bimetallic nanoparticles with ad hoc controlled nanostructures.

  16. Threshold Values for Identification of Contamination Predicted by Reduced-Order Models

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Last, George V.; Murray, Christopher J.; Bott, Yi-Ju; Brown, Christopher F.

    2014-12-31

    The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts on underground sources of drinking water (USDWs) if CO2 or brine leaks from deep CO2 storage reservoirs. Threshold values, below which there would be no predicted impacts, were determined for portions of two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities.

  17. NCAR Contribution to A U.S. National Multi-Model Ensemble (NMME) ISI Prediction System

    SciTech Connect (OSTI)

    Tribbia, Joseph

    2015-11-25

    NCAR brought the latest version of the Community Earth System Model (version 1, CESM1) into the mix of models in the NMME effort. This new version uses our newest atmospheric model, CAM5, and produces a coupled climate and ENSO that are generally as good as or better than those of the Community Climate System Model version 4 (CCSM4). Compared to CCSM4, the new coupled model has a superior climate response with respect to low clouds in both the subtropical stratus regimes and the Arctic. However, CESM1 has been run to date using a prognostic aerosol model that more than doubles its computational cost. We are currently evaluating a version of the new model using prescribed aerosols and expect it will be ready for integrations in summer 2012. Because of this, NCAR has not been able to complete the hindcast integrations using the NCAR loosely coupled ensemble Kalman filter assimilation method, nor has it contributed to the current (Stage I) NMME operational utilization. The expectation is that this model will be included in the NMME in late 2012 or early 2013. The initialization method will utilize the ensemble Kalman filter assimilation methods developed at NCAR using the Data Assimilation Research Testbed (DART), in conjunction with Jeff Anderson’s team in CISL. This methodology has been used in our decadal prediction contributions to CMIP5. During the course of this project, NCAR has set up and performed all the needed hindcast and forecast simulations and provided the requested fields to our collaborators. In addition, NCAR researchers have participated fully in research themes (i) and (ii). Specifically, (i) we have begun to evaluate and optimize our system in hindcast mode, focusing on the optimal number of ensemble members, methodologies to recalibrate individual dynamical models, and assessment of our forecasts across multiple time scales, i.e., beyond two weeks; and (ii) we have begun investigating the role of different ocean initial conditions in seasonal forecasts. The completion of the calibration hindcasts for seasonal-to-interannual (SI) predictions and the maintenance of the data archive associated with the NCAR portion of this effort have been the responsibility of the Project Scientist I (Alicia Karspeck), who was partially supported by this project.

  18. Bayesian probabilistic model for life prediction and fault mode classification of solid state luminaires

    SciTech Connect (OSTI)

    Lall, Pradeep; Wei, Junchao; Sakalaukus, Peter

    2014-06-22

    A new method has been developed for assessing the onset of degradation in solid state luminaires and for classifying failure mechanisms using metrics beyond the lumen degradation currently used for identification of failure. Luminous flux output and correlated color temperature data on Philips LED lamps were gathered under 85°C/85%RH until lamp failure. Failure modes of the test population of lamps have been studied to understand the failure mechanisms in the 85°C/85%RH accelerated test. Results indicate that the dominant failure mechanism is discoloration of the LED encapsulant inside the lamps, which is the likely cause of the luminous flux degradation and the color shift. The acquired data have been used in conjunction with Bayesian probabilistic models to identify luminaires at the onset of degradation well before failure, through identification of decision boundaries in the feature space between lamps with accrued damage and lamps beyond the failure threshold. In addition, luminaires with different failure modes have been classified separately from healthy, pristine luminaires. The α-λ plots have been used to evaluate the robustness of the proposed methodology. Results show that the predicted degradation tracks the true degradation observed during the 85°C/85%RH accelerated life test fairly closely, within the ±20% confidence bounds. Correlation of model predictions with experimental results indicates that the presented methodology allows early identification of the onset of failure well before complete failure distributions develop, and it can be used for assessing the damage state of SSLs in fairly large deployments. It is expected that the new prediction technique will allow the development of failure distributions without testing to L70 life for the manifestation of failure.
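
    To make the decision-boundary idea concrete, the sketch below trains a simple two-feature classifier separating "pristine" from "degrading" lamps. The features (normalized luminous flux and chromaticity shift), the data, and the use of a Gaussian naive Bayes classifier are all illustrative assumptions; this is not the paper's Bayesian probabilistic model.

      # Illustrative two-feature classifier, loosely analogous to the decision-boundary
      # idea described above. Features and data are synthetic placeholders.
      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(0)
      # Feature 1: normalized luminous flux (1.0 = initial); Feature 2: chromaticity shift
      pristine = np.column_stack([rng.normal(0.98, 0.01, 50), rng.normal(0.001, 0.0005, 50)])
      degraded = np.column_stack([rng.normal(0.85, 0.04, 50), rng.normal(0.007, 0.002, 50)])
      X = np.vstack([pristine, degraded])
      y = np.r_[np.zeros(50), np.ones(50)]          # 0 = pristine, 1 = onset of degradation

      clf = GaussianNB().fit(X, y)
      # Probability that a new lamp with these features shows onset of degradation
      print(clf.predict_proba([[0.90, 0.004]]))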

  19. Comparison of limited measurements of the OTEC-1 plume with analytical-model predictions

    SciTech Connect (OSTI)

    Paddock, R.A.; Ditmars, J.D.

    1981-07-01

    Ocean Thermal Energy Conversion (OTEC) requires significant amounts of warm surface waters and cold deep waters for power production. Because these waters are returned to the ocean as effluents, their behavior may affect plant operation and impact the environment. The OTEC-1 facility tested 1-MWe heat exchangers aboard the vessel Ocean Energy Converter moored off the island of Hawaii. The warm and cold waters used by the OTEC-1 facility were combined prior to discharge from the vessel to create a mixed discharge condition. A limited field survey of the mixed discharge plume using fluorescent dye as a tracer was conducted on April 11, 1981, as part of the environmental studies at OTEC-1 coordinated by the Marine Sciences Group at Lawrence Berkeley Laboratory. Results of that survey were compared with analytical model predictions of plume behavior. Although the predictions were in general agreement with the results of the plume survey, inherent limitations in the field measurements precluded complete description of the plume or detailed evaluation of the models.

  20. Model-Based Transient Calibration Optimization for Next Generation...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Model-Based Transient Calibration Optimization for Next Generation Diesel Engines, 2005 Diesel Engine...

  1. Mathematical model for predicting the probability of acute mortality in a human population exposed to accidentally released airborne radionuclides. Final report for Phase I

    SciTech Connect (OSTI)

    Filipy, R.E.; Borst, F.J.; Cross, F.T.; Park, J.F.; Moss, O.R.; Roswell, R.L.; Stevens, D.L.

    1980-05-01

    A mathematical model was constructed for the purpose of predicting the fraction of human population which would die within 1 year of an accidental exposure to airborne radionuclides. The model is based on data from laboratory experiments with rats, dogs and baboons, and from human epidemiological data. Doses from external, whole-body irradiation and from inhaled, alpha- and beta-emitting radionuclides are calculated for several organs. The probabilities of death from radiation pneumonitis and from bone marrow irradiation are predicted from doses accumulated within 30 days of exposure to the radioactive aerosol. The model is compared with existing similar models under hypothetical exposure conditions. Suggestions for further experiments with inhaled radionuclides are included. 25 refs., 16 figs., 13 tabs.

  2. Investigation of the effect of chemistry models on the numerical predictions of the supersonic combustion of hydrogen

    SciTech Connect (OSTI)

    Kumaran, K.; Babu, V.

    2009-04-15

    In this numerical study, the influence of chemistry models on the predictions of supersonic combustion in a model combustor is investigated. To this end, 3D, compressible, turbulent, reacting flow calculations with a detailed chemistry model (with 37 reactions and 9 species) and the Spalart-Allmaras turbulence model have been carried out. These results are compared with earlier results obtained using single-step chemistry. Hydrogen is used as the fuel and three fuel injection schemes, namely, strut, staged (i.e., strut and wall) and wall injection, are considered to evaluate the impact of the chemistry models on the flow field predictions. Predictions of the mass fractions of major species, minor species, dimensionless stagnation temperature, dimensionless static pressure rise and thrust percentage along the combustor length are presented and discussed. Overall performance metrics such as mixing efficiency and combustion efficiency are used to draw inferences on the nature (whether mixing- or kinetic-controlled) and the completeness of the combustion process. The predicted values of the dimensionless wall static pressure are compared with experimental data reported in the literature. The calculations show that multi-step chemistry predicts higher and more widespread heat release than single-step chemistry. In addition, multi-step chemistry also predicts intricate details of the combustion process such as the ignition distance and induction distance. (author)

  3. Modeling of stagnation-line nonequilibrium flows by means of quantum based collisional models

    SciTech Connect (OSTI)

    Munafò, A.; Magin, T. E.

    2014-09-15

    The stagnation-line flow over re-entry bodies is analyzed by means of a quantum based collisional model which accounts for dissociation and energy transfer in N{sub 2}-N interactions. The physical model is based on a kinetic database developed at NASA Ames Research Center. The reduction of the kinetic mechanism is achieved by lumping the rovibrational energy levels of the N{sub 2} molecule in energy bins. The energy bins are treated as separate species, thus allowing for non-Boltzmann distributions of their populations. The governing equations are discretized in space by means of the Finite Volume method. A fully implicit time-integration is used to obtain steady-state solutions. The results show that the population of the energy bins strongly deviate from a Boltzmann distribution close to the shock wave and across the boundary layer. The sensitivity analysis to the number of energy bins reveals that accurate estimation of flow quantities (such as chemical composition and wall heat flux) can be obtained by using only 10 energy bins. A comparison with the predictions obtained by means of conventional multi-temperature models indicates that the former can lead to an overestimation of the wall heat flux, due to an inaccurate modeling of recombination in the boundary layer.

  4. Commercial Buildings Sector Agent-Based Model | Open Energy Informatio...

    Open Energy Info (EERE)

    OpenEI Keyword(s): EERE tool, Commercial Buildings Sector Agent-Based Model. Language: English. References: Building Efficiency: Development of an Agent-based Model of the US...

  5. Demonstrating and Validating a Next Generation Model-Based Controller...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Demonstrating and Validating a Next Generation Model-Based Controller for Fuel Efficient, Low Emissions Diesel Engines: a fully model-based, practically mapless engine control concept is viable (deer09allain.pdf)...

  6. Improving Thermal Model Prediction Through Statistical Analysis of Irradiation and Post-Irradiation Data from AGR Experiments

    SciTech Connect (OSTI)

    Binh T. Pham; Grant L. Hawkes; Jeffrey J. Einerson

    2014-05-01

    As part of the High Temperature Reactors (HTR) R&D program, a series of irradiation tests, designated as Advanced Gas-cooled Reactor (AGR), have been defined to support development and qualification of fuel design, fabrication process, and fuel performance under normal operation and accident conditions. The AGR tests employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule and instrumented with thermocouples (TC) embedded in graphite blocks enabling temperature control. While not possible to obtain by direct measurements in the tests, crucial fuel conditions (e.g., temperature, neutron fast fluence, and burnup) are calculated using core physics and thermal modeling codes. This paper is focused on AGR test fuel temperature predicted by the ABAQUS code's finite element-based thermal models. The work follows up on a previous study, in which several statistical analysis methods were adapted, implemented in the NGNP Data Management and Analysis System (NDMAS), and applied for qualification of AGR-1 thermocouple data. Abnormal trends in measured data revealed by the statistical analysis are traced to either measuring instrument deterioration or physical mechanisms in capsules that may have shifted the system thermal response. The main thrust of this work is to exploit the variety of data obtained in irradiation and post-irradiation examination (PIE) for assessment of modeling assumptions. As an example, the uneven reduction of the control gas gap in Capsule 5 found in the capsule metrology measurements in PIE helps identify mechanisms other than TC drift causing the decrease in TC readings. This suggests a more physics-based modification of the thermal model that leads to a better fit with experimental data, thus reducing model uncertainty and increasing confidence in the calculated fuel temperatures of the AGR-1 test.

  7. Validation of model based active control of combustion instability

    SciTech Connect (OSTI)

    Fleifil, M.; Ghoneim, Z.; Ghoniem, A.F.

    1998-07-01

    The demand for efficient, compact, and clean combustion systems has spurred research into the fundamental mechanisms governing their performance and into means of interactively changing their performance characteristics. Thermoacoustic instability is frequently observed in combustion systems with high power density, when burning close to the lean flammability limit, or when using exhaust gas recirculation to meet more stringent emissions regulations. Its occurrence, and passive means to mitigate it, lead to performance degradation such as reduced combustion efficiency, high local heat transfer rates, an increase in the mixture equivalence ratio, or system failure due to structural damage. This paper reports on a study of the origin of thermoacoustic instability, its dependence on system parameters, and the means of actively controlling it. The authors have developed an analytical model of thermoacoustic instability in premixed combustors. The model combines a heat release dynamics model, constructed using the kinematics of a premixed flame stabilized behind a perforated plate, with the linearized conservation equations governing the system acoustics. This formulation allows model-based controller design. In order to test the performance of the analytical model, a numerical solution of the partial differential equations governing the system has been carried out using the principle of harmonic separation and focusing on the dominant unstable mode. This leads to a system of ODEs governing the thermofluid variables. Analytical predictions of the frequency and growth rate of the unstable mode are shown to be in good agreement with the numerical simulations as well as with those obtained using experimental identification techniques when applied to a laboratory combustor. The authors use these results to confirm the validity of the assumptions used in formulating the analytical model. A controller based on the minimization of a cost function using the LQR technique has been designed using the analytical model and implemented on a bench-top laboratory combustor. The authors show that the controller is capable of suppressing the pressure oscillations in the combustor with a settling time much shorter than what had been attained before and without exciting secondary peaks.

  8. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOE Patents [OSTI]

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient close-formed parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  9. Water and Heat Balance Model for Predicting Drainage Below the Plant Root Zone

    Energy Science and Technology Software Center (OSTI)

    1989-11-01

    UNSAT-H Version 2.0 is a one-dimensional model that simulates the dynamic processes of infiltration, drainage, redistribution, surface evaporation, and the uptake of water from soil by plants. The model was developed for assessing the water dynamics of arid sites used or proposed for near-surface waste disposal. In particular, the model is used for simulating the water balance of cover systems over buried waste and for estimating the recharge rate (i.e., the drainage rate beneath the plant root zone when a sizable vadose zone is present). The mathematical basis of the model is Richards' equation for water flow, Fick's law for vapor diffusion, and Fourier's law for heat flow. The simulated profile can be homogeneous or layered. The boundary conditions can be controlled as either constant (potential or temperature) or flux conditions to reflect actual conditions at a given site.
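
    For reference, the governing relations named in the abstract are commonly written as follows. This is the standard textbook statement (a one-dimensional, head-based Richards' equation with a plant-uptake sink, plus Fick's law for vapor flux and Fourier's law for heat flux), not necessarily the exact formulation coded in UNSAT-H:

      \[ \frac{\partial \theta}{\partial t} = \frac{\partial}{\partial z}\!\left[ K(h)\left(\frac{\partial h}{\partial z} + 1\right)\right] - S(z,t), \qquad q_v = -D_v \frac{\partial \rho_v}{\partial z}, \qquad q_h = -\lambda_T \frac{\partial T}{\partial z} \]

    where θ is volumetric water content, h matric head, K(h) unsaturated hydraulic conductivity, S a root-uptake sink, q_v the vapor flux driven by the vapor density gradient, and q_h the conductive heat flux.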

  10. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    SciTech Connect (OSTI)

    Duffy, Stephen

    2013-09-09

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.

  11. Development of a model for predicting transient hydrogen venting in 55-gallon drums

    SciTech Connect (OSTI)

    Apperson, Jason W; Clemmons, James S; Garcia, Michael D; Sur, John C; Zhang, Duan Z; Romero, Michael J

    2008-01-01

    Remote drum venting was performed on a population of unvented high activity drums (HAD) in the range of 63 to 435 plutonium equivalent Curies (PEC). These 55-gallon Transuranic (TRU) drums will eventually be shipped to the Waste Isolation Pilot Plant (WIPP). As a part of this process, a calculational model was required to predict the transient hydrogen concentration response of the head space and polyethylene liner (if present) within the 55-gallon drum. The drum and liner were vented using a Remote Drum Venting System (RDVS) that provided a vent sampling path for measuring flammable hydrogen vapor concentrations and allowed hydrogen to diffuse below lower flammability limit (LFL) concentrations. One key application of the model was to determine the transient behavior of hydrogen in the head space and within the liner, and its sensitivity to the number of holes made in the liner or the number of filters. First-order differential mass transport equations were solved using Laplace transformations and also numerically to verify the results. The Mathematica 6.0 computing tool was also used as a validation tool and for examining systems with more than two chambers. Results are shown for a variety of configurations, including 85-gallon and 110-gallon overpack drums. The model was also validated against hydrogen vapor concentration assay measurements.
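
    A minimal sketch of a two-chamber, first-order transport system of the kind described above (liner head space exchanging with drum head space, which vents through a filter) is shown below. The generation rate, exchange coefficients, and gas inventories are hypothetical placeholders, and the equations are a generic two-compartment balance rather than the Los Alamos model itself.

      # Minimal two-chamber sketch of first-order hydrogen transport: liner -> drum
      # head space -> ambient through a filter. All parameter values are hypothetical.
      import numpy as np
      from scipy.integrate import solve_ivp

      G = 1.0e-8      # H2 generation rate into the liner, mol/s (hypothetical)
      k12 = 5.0e-6    # liner -> head space exchange coefficient, mol/s per mole fraction (hypothetical)
      k2v = 2.0e-6    # head space -> ambient through the filter (hypothetical)
      n1, n2 = 0.5, 1.5   # moles of gas in liner and drum head space (hypothetical)

      def rhs(t, x):
          x1, x2 = x                            # H2 mole fractions in liner and head space
          dx1 = (G - k12 * (x1 - x2)) / n1
          dx2 = (k12 * (x1 - x2) - k2v * x2) / n2
          return [dx1, dx2]

      sol = solve_ivp(rhs, (0.0, 30 * 24 * 3600.0), [0.05, 0.01], max_step=3600.0)
      print("mole fractions after 30 days:", sol.y[:, -1])   # compare against the 4% LFL for H2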

  12. Vehicle Technologies Office Merit Review 2014: Trip Prediction and Route-Based Vehicle Energy Management

    Broader source: Energy.gov [DOE]

    Presentation given by Argonne National Laboratory at 2014 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about trip prediction...

  13. Depositional sequence analysis and sedimentologic modeling for improved prediction of Pennsylvanian reservoirs (Annex 1)

    SciTech Connect (OSTI)

    Watney, W.L.

    1992-01-01

    Interdisciplinary studies of the Upper Pennsylvanian Lansing and Kansas City groups have been undertaken in order to improve the geologic characterization of petroleum reservoirs and to develop a quantitative understanding of the processes responsible for formation of associated depositional sequences. To this end, concepts and methods of sequence stratigraphy are being used to define and interpret the three-dimensional depositional framework of the Kansas City Group. The investigation includes characterization of reservoir rocks in oil fields in western Kansas, description of analog equivalents in near-surface and surface sites in southeastern Kansas, and construction of regional structural and stratigraphic framework to link the site specific studies. Geologic inverse and simulation models are being developed to integrate quantitative estimates of controls on sedimentation to produce reconstructions of reservoir-bearing strata in an attempt to enhance our ability to predict reservoir characteristics.

  14. Predictive Treatment Management: Incorporating a Predictive Tumor Response Model Into Robust Prospective Treatment Planning for Non-Small Cell Lung Cancer

    SciTech Connect (OSTI)

    Zhang, Pengpeng; Yorke, Ellen; Hu, Yu-Chi; Mageras, Gig; Rimner, Andreas; Deasy, Joseph O.

    2014-02-01

    Purpose: We hypothesized that a treatment planning technique that incorporates predicted lung tumor regression into optimization, predictive treatment planning (PTP), could allow dose escalation to the residual tumor while maintaining coverage of the initial target without increasing dose to surrounding organs at risk (OARs). Methods and Materials: We created a model to estimate the geometric presence of residual tumors after radiation therapy using planning computed tomography (CT) and weekly cone beam CT scans of 5 lung cancer patients. For planning purposes, we modeled the dynamic process of tumor shrinkage by morphing the original planning target volume (PTV{sub orig}) in 3 equispaced steps to the predicted residue (PTV{sub pred}). Patients were treated with a uniform prescription dose to PTV{sub orig}. By contrast, PTP optimization started with the same prescription dose to PTV{sub orig} but linearly increased the dose at each step, until reaching the highest dose achievable to PTV{sub pred} consistent with OAR limits. This method is compared with midcourse adaptive replanning. Results: Initial parenchymal gross tumor volume (GTV) ranged from 3.6 to 186.5 cm{sup 3}. On average, the primary GTV and PTV decreased by 39% and 27%, respectively, at the end of treatment. The PTP approach gave PTV{sub orig} at least the prescription dose, and it increased the mean dose of the true residual tumor by an average of 6.0 Gy above the adaptive approach. Conclusions: PTP, incorporating a tumor regression model from the start, represents a new approach to increase tumor dose without increasing toxicities, and reduce clinical workload compared with the adaptive approach, although model verification using per-patient midcourse imaging would be prudent.

  15. Improving Thermal Model Prediction Through Statistical Analysis of Irradiation and Post-Irradiation Data from AGR Experiments

    SciTech Connect (OSTI)

    Dr. Binh T. Pham; Grant L. Hawkes; Jeffrey J. Einerson

    2012-10-01

    As part of the Research and Development program for Next Generation High Temperature Reactors (HTR), a series of irradiation tests, designated as Advanced Gas-cooled Reactor (AGR), have been defined to support development and qualification of fuel design, fabrication process, and fuel performance under normal operation and accident conditions. The AGR tests employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule and instrumented with thermocouples (TC) embedded in graphite blocks enabling temperature control. The data representing the crucial test fuel conditions (e.g., temperature, neutron fast fluence, and burnup), which are impossible to obtain from direct measurements, are calculated by physics and thermal models. The irradiation and post-irradiation examination (PIE) experimental data are used in the model calibration effort to reduce the inherent uncertainty of simulation results. This paper is focused on fuel temperature predicted by the ABAQUS code's finite element-based thermal models. The work follows up on a previous study, in which several statistical analysis methods were adapted, implemented in the NGNP Data Management and Analysis System (NDMAS), and applied to improve qualification of AGR-1 thermocouple data. The present work exercises the idea that abnormal trends in measured data observed from statistical analysis may be caused by either measuring instrument deterioration or physical mechanisms in capsules that may have shifted the system thermal response. As an example, the uneven reduction of the control gas gap in Capsule 5 revealed by the capsule metrology measurements in PIE helps justify attributing the reduction in TC readings to a physical mechanism rather than TC drift. This in turn prompts a modification of the thermal model to better fit the experimental data, thereby increasing confidence in, and reducing the uncertainties of, the thermal simulation results of the AGR-1 test.

  16. Development of a model for predicting intergranular stress corrosion cracking of Alloy 600 tubes in PWR primary water. Final report

    SciTech Connect (OSTI)

    Garud, Y.S.

    1985-01-01

    A preliminary mathematical model developed in this study may make it possible to predict stress corrosion cracking on the primary side of PWR steam generator tubing. The study outlines a comprehensive testing program that will provide the operational and experimental data to further develop and verify the model.

  17. Coupling a Mesoscale Numerical Weather Prediction Model with Large-Eddy Simulation for Realistic Wind Plant Aerodynamics Simulations (Poster)

    SciTech Connect (OSTI)

    Draxl, C.; Churchfield, M.; Mirocha, J.; Lee, S.; Lundquist, J.; Michalakes, J.; Moriarty, P.; Purkayastha, A.; Sprague, M.; Vanderwende, B.

    2014-06-01

    Wind plant aerodynamics are influenced by a combination of microscale and mesoscale phenomena. Incorporating mesoscale atmospheric forcing (e.g., diurnal cycles and frontal passages) into wind plant simulations can lead to a more accurate representation of microscale flows, aerodynamics, and wind turbine/plant performance. Our goal is to couple a numerical weather prediction model that can represent mesoscale flow [specifically the Weather Research and Forecasting model] with a microscale LES model (OpenFOAM) that can predict microscale turbulence and wake losses.

  18. A predictive analytic model for the solar modulation of cosmic rays

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Cholis, Ilias; Hooper, Dan; Linden, Tim

    2016-02-23

    An important factor limiting our ability to understand the production and propagation of cosmic rays pertains to the effects of heliospheric forces, commonly known as solar modulation. The solar wind is capable of generating time- and charge-dependent effects on the spectrum and intensity of low-energy (≲10 GeV) cosmic rays reaching Earth. Previous analytic treatments of solar modulation have utilized the force-field approximation, in which a simple potential is adopted whose amplitude is selected to best fit the cosmic-ray data taken over a given period of time. Making use of recently available cosmic-ray data from the Voyager 1 spacecraft, along with measurements of the heliospheric magnetic field and solar wind, we construct a time-, charge- and rigidity-dependent model of solar modulation that can be directly compared to data from a variety of cosmic-ray experiments. Here, we provide a simple analytic formula that can be easily utilized in a variety of applications, allowing us to better predict the effects of solar modulation and reduce the number of free parameters involved in cosmic-ray propagation models.
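
    For context, the force-field approximation mentioned above is conventionally written as the mapping below. This is the standard textbook form that the paper builds on, not the new time-, charge- and rigidity-dependent formula derived in it:

      \[ \frac{J_{\oplus}(E_{\oplus})}{E_{\oplus}^{2} - m^{2}} \;=\; \frac{J_{\mathrm{IS}}(E_{\mathrm{IS}})}{E_{\mathrm{IS}}^{2} - m^{2}}, \qquad E_{\mathrm{IS}} = E_{\oplus} + |Z|\,e\,\Phi \]

    where J_⊕ and J_IS are the differential intensities at Earth and in interstellar space, E is the total particle energy, m the particle mass, Z the particle charge, and Φ the modulation potential.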

  19. Impact of Pilot Light Modeling on the Predicted Annual Performance of Residential Gas Water Heaters: Preprint

    SciTech Connect (OSTI)

    Maguire, J.; Burch, J.

    2013-08-01

    Modeling residential water heaters with dynamic simulation models can provide accurate estimates of their annual energy consumption, if the units' characteristics and use conditions are known. Most gas storage water heaters (GSWHs) include a standing pilot light. It is generally assumed that the pilot light energy will help make up standby losses and has no impact on the predicted annual energy consumption. However, that is not always the case. The gas input rate and conversion efficiency of a pilot light for a GSWH were determined from laboratory data. The data were used in simulations of a typical GSWH with and without a pilot light, for two cases: 1) the GSWH is used alone; and 2) the GSWH is the second tank in a solar water heating (SWH) system. The sensitivity of wasted pilot light energy to annual hot water use, climate, and installation location was examined. The GSWH used alone in unconditioned space in a hot climate had a slight increase in energy consumption. The GSWH with a pilot light used as a backup to an SWH used up to 80% more auxiliary energy than one without a pilot light in hot, sunny locations, owing to increased tank losses.

  20. VALIDATION AND RESULTS OF A PSEUDO-MULTI-ZONE COMBUSTION TRAJECTORY PREDICTION MODEL FOR CAPTURING SOOT AND NOX FORMATION ON A MEDIUM DUTY DIESEL ENGINE

    SciTech Connect (OSTI)

    Bittle, Joshua A.; Gao, Zhiming; Jacobs, Timothy J.

    2013-01-01

    A pseudo-multi-zone phenomenological model has been created with the ultimate goal of supporting efforts to enable broader commercialization of low temperature combustion modes in diesel engines. The benefits of low temperature combustion are the simultaneous reduction in soot and nitric oxide emissions and increased engine efficiency, if combustion is properly controlled. Determining what qualifies as low temperature combustion for any given engine can be difficult without expensive emissions analysis equipment. This determination can be made off-line using computer models or through factory calibration procedures. The process could potentially be simplified if a real-time prediction model could be implemented to run on any engine platform; this is the motivation for this study. The major benefit of this model is its ability to predict the combustion trajectory, i.e., the local temperature and equivalence ratio in the burning zones. The model successfully captures all the expected trends based on the experimental data and even highlights an opportunity to use the average reaction temperature and equivalence ratio alone as an indicator of emissions levels, without solving formation sub-models. This general type of modeling effort is not new, but a major effort was made to minimize the calculation duration to enable implementation as an input to a real-time, next-cycle engine controller. Instead of simply using the predicted engine-out soot and NOx levels, control decisions could be made based on the trajectory. This has the potential to save large amounts of calibration time because, with minor tuning (the model has only one automatically determined constant), it is hoped that the control algorithm would be generally applicable.

  1. REVIEW OF MECHANISTIC UNDERSTANDING AND MODELING AND UNCERTAINTY ANALYSIS METHODS FOR PREDICTING CEMENTITIOUS BARRIER PERFORMANCE

    SciTech Connect (OSTI)

    Langton, C.; Kosson, D.

    2009-11-30

    Cementitious barriers for nuclear applications are one of the primary controls for preventing or limiting radionuclide release into the environment. At the present time, performance and risk assessments do not fully incorporate the effectiveness of engineered barriers because the processes that influence performance are coupled and complicated. Better understanding of the behavior of cementitious barriers is necessary to evaluate and improve the design of materials and structures used for radioactive waste containment, life extension of current nuclear facilities, and design of future nuclear facilities, including those needed for nuclear fuel storage and processing, nuclear power production and waste management. The focus of the Cementitious Barriers Partnership (CBP) literature review is to document the current level of knowledge with respect to: (1) mechanisms and processes that directly influence the performance of cementitious materials, (2) methodologies for modeling the performance of these mechanisms and processes, and (3) approaches to addressing and quantifying uncertainties associated with performance predictions. This will serve as an important reference document for the professional community responsible for the design and performance assessment of cementitious materials in nuclear applications. This review also provides a multi-disciplinary foundation for the identification, research, development and demonstration of improvements in conceptual understanding, measurements and performance modeling that would lead to significant reductions in uncertainties and improved confidence in estimating the long-term performance of cementitious materials in nuclear applications. This report identifies: (1) technology gaps that may be filled by the CBP project and (2) information and computational methods that are currently being applied in related fields but have not yet been incorporated into performance assessments of cementitious barriers. The various chapters contain both a description of the mechanisms and a discussion of the current approaches to modeling the phenomena.

  2. Reduced order models for prediction of groundwater quality impacts from CO₂ and brine leakage

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zheng, Liange; Carroll, Susan; Bianchi, Marco; Mansoor, Kayyum; Sun, Yunwei; Birkholzer, Jens

    2014-12-31

    A careful assessment of the risk associated with geologic CO₂ storage is critical to the deployment of large-scale storage projects. A potential risk is the deterioration of groundwater quality caused by the leakage of CO₂ and brine from deep subsurface reservoirs. In probabilistic risk assessment studies, numerical modeling is the primary tool employed to assess risk. However, the application of traditional numerical models to fully evaluate the impact of CO₂ leakage on groundwater can be computationally complex, demanding large processing times and resources, and involving large uncertainties. As an alternative, reduced order models (ROMs) can be used as highly efficient surrogates for the complex process-based numerical models. In this study, we represent the complex hydrogeological and geochemical conditions in a heterogeneous aquifer and the subsequent risk by developing and using two separate ROMs. The first ROM is derived from a model that accounts for the heterogeneous flow and transport conditions in the presence of complex leakage functions for CO₂ and brine. The second ROM is obtained from models that feature similar, but simplified, flow and transport conditions and allow for a more complex representation of all relevant geochemical reactions. To quantify possible impacts to groundwater aquifers, the basic risk metric is taken as the aquifer volume in which the water quality of the aquifer may be affected by an underlying CO₂ storage project. The integration of the two ROMs provides an estimate of the impacted aquifer volume taking into account uncertainties in flow, transport and chemical conditions. These two ROMs can be linked in a comprehensive system-level model for quantitative risk assessment of the deep storage reservoir, wellbore leakage, and shallow aquifer impacts to assess the collective risk of CO₂ storage projects.

  3. Reduced order models for prediction of groundwater quality impacts from CO₂ and brine leakage

    SciTech Connect (OSTI)

    Zheng, Liange; Carroll, Susan; Bianchi, Marco; Mansoor, Kayyum; Sun, Yunwei; Birkholzer, Jens

    2014-12-31

    A careful assessment of the risk associated with geologic CO₂ storage is critical to the deployment of large-scale storage projects. A potential risk is the deterioration of groundwater quality caused by the leakage of CO₂ and brine from deep subsurface reservoirs. In probabilistic risk assessment studies, numerical modeling is the primary tool employed to assess risk. However, the application of traditional numerical models to fully evaluate the impact of CO₂ leakage on groundwater can be computationally complex, demanding large processing times and resources, and involving large uncertainties. As an alternative, reduced order models (ROMs) can be used as highly efficient surrogates for the complex process-based numerical models. In this study, we represent the complex hydrogeological and geochemical conditions in a heterogeneous aquifer and the subsequent risk by developing and using two separate ROMs. The first ROM is derived from a model that accounts for the heterogeneous flow and transport conditions in the presence of complex leakage functions for CO₂ and brine. The second ROM is obtained from models that feature similar, but simplified, flow and transport conditions and allow for a more complex representation of all relevant geochemical reactions. To quantify possible impacts to groundwater aquifers, the basic risk metric is taken as the aquifer volume in which the water quality of the aquifer may be affected by an underlying CO₂ storage project. The integration of the two ROMs provides an estimate of the impacted aquifer volume taking into account uncertainties in flow, transport and chemical conditions. These two ROMs can be linked in a comprehensive system-level model for quantitative risk assessment of the deep storage reservoir, wellbore leakage, and shallow aquifer impacts to assess the collective risk of CO₂ storage projects.

  4. Numerical Prediction of Experimentally Observed Behavior of a Scale Model of an Offshore Wind Turbine Supported by a Tension-Leg Platform: Preprint

    SciTech Connect (OSTI)

    Prowell, I.; Robertson, A.; Jonkman, J.; Stewart, G. M.; Goupee, A. J.

    2013-01-01

    Recognizing the critical importance of the role physical experimental tests play in understanding the dynamics of floating offshore wind turbines, the DeepCwind consortium conducted a one-fiftieth-scale model test program in which several floating wind platforms were subjected to a variety of wind and wave loading conditions at the Maritime Research Institute Netherlands wave basin. This paper describes the observed behavior of a tension-leg platform, one of three platforms tested, and the systematic effort to predict the measured response with the FAST simulation tool using a model primarily based on consensus geometric and mass properties of the test specimen.

  5. Modeling Heavy/Medium-Duty Fuel Consumption Based on Drive Cycle Properties

    SciTech Connect (OSTI)

    Wang, Lijuan; Duran, Adam; Gonder, Jeffrey; Kelly, Kenneth

    2015-10-13

    This paper presents multiple methods for predicting heavy/medium-duty vehicle fuel consumption based on driving cycle information. A polynomial model, a black-box artificial neural network model, a polynomial neural network model, and a multivariate adaptive regression splines (MARS) model were developed and verified using data collected from chassis testing performed on a parcel delivery diesel truck operating over the Heavy Heavy-Duty Diesel Truck (HHDDT), City Suburban Heavy Vehicle Cycle (CSHVC), New York Composite Cycle (NYCC), and hydraulic hybrid vehicle (HHV) drive cycles. Each model was trained using one of the four drive cycles as a training cycle and the other three as testing cycles. By comparing the training and testing results, a representative training cycle was chosen and used to further tune each method. HHDDT as the training cycle gave the best predictive results, because HHDDT contains a variety of drive characteristics, such as high speed, acceleration, idling, and deceleration. Among the four model approaches, MARS gave the best predictive performance, with an average absolute percent error of -1.84% over the four chassis dynamometer drive cycles. To further evaluate the accuracy of the predictive models, the approaches were then applied to real-world data. MARS outperformed the other three approaches, providing an average absolute percent error of -2.2% over four real-world road segments. The MARS model performance over the HHDDT, CSHVC, NYCC, and HHV drive cycles was then compared with that of the Future Automotive System Technology Simulator (FASTSim). The results indicated that the MARS method achieved predictive performance comparable to FASTSim's.
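
    As a small illustration of the general approach (regressing fuel consumption on aggregate drive-cycle properties), the sketch below fits a multivariate quadratic model, i.e., the "polynomial model" family named above. The cycle descriptors, coefficients, and data are synthetic placeholders; this is not the trained model from the paper.

      # Sketch of a polynomial regression of fuel consumption on drive-cycle descriptors.
      # Features and data are synthetic placeholders.
      import numpy as np

      rng = np.random.default_rng(1)
      # Hypothetical cycle descriptors: mean speed (mph), std of speed (mph), stops per mile
      X = rng.uniform([5.0, 2.0, 0.2], [55.0, 15.0, 4.0], size=(200, 3))
      y = 18.0 - 0.12 * X[:, 0] + 0.25 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 0.2, 200)  # gal/100mi, synthetic

      # Quadratic design matrix: [1, x1, x2, x3, x1^2, x2^2, x3^2]
      A = np.column_stack([np.ones(len(X)), X, X ** 2])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)

      x_new = np.array([30.0, 8.0, 1.5])            # descriptors for a new cycle
      pred = np.concatenate([[1.0], x_new, x_new ** 2]) @ coef
      print(f"predicted fuel consumption: {pred:.2f} gal/100mi")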

  6. Empirical and physics based mathematical models of uranium hydride decomposition kinetics with quantified uncertainties.

    SciTech Connect (OSTI)

    Salloum, Maher N.; Gharagozloo, Patricia E.

    2013-10-01

    Metal particle beds have recently become a major technique for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Secondly, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during the decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty, and we evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed occurring during the decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede the hydrogen extraction.
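
    The empirical-kinetics step described above amounts to fitting a rate law to scattered literature data and propagating the parameter uncertainty. A minimal sketch of that kind of fit, assuming a simple Arrhenius form and wholly synthetic rate data (not the UH3 measurements compiled in the report), is shown below.

      # Sketch of fitting an Arrhenius rate law to scattered decomposition-rate data.
      # All data values are synthetic placeholders.
      import numpy as np
      from scipy.optimize import curve_fit

      R = 8.314  # gas constant, J/(mol K)

      T_data = np.array([500., 600., 700., 800., 900.])            # K, synthetic
      k_data = np.array([2.1e-5, 4.3e-4, 4.0e-3, 2.4e-2, 1.0e-1])  # 1/s, synthetic

      # Fit ln k = ln A - Ea/(R T); linear in the parameters, so robust to the
      # wide dynamic range of the rate data
      def ln_arrhenius(inv_T, ln_A, Ea):
          return ln_A - Ea * inv_T / R

      (ln_A, Ea), pcov = curve_fit(ln_arrhenius, 1.0 / T_data, np.log(k_data))
      ln_A_err, Ea_err = np.sqrt(np.diag(pcov))                     # 1-sigma uncertainties
      print(f"A = {np.exp(ln_A):.3g} 1/s, Ea = {Ea/1e3:.1f} +/- {Ea_err/1e3:.1f} kJ/mol")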

  7. A coarse-grained model with implicit salt for RNAs: Predicting 3D structure, stability and salt effect

    SciTech Connect (OSTI)

    Shi, Ya-Zhou; Wang, Feng-Hua; Wu, Yuan-Yan; Tan, Zhi-Jie

    2014-09-14

    To bridge the gap between the sequences and 3-dimensional (3D) structures of RNAs, several computational models have been proposed for predicting RNA 3D structures. However, existing models seldom consider conditions departing from room/body temperature and high salt (1 M NaCl), and thus they generally cannot predict thermodynamics and salt effects. In this study, we propose a coarse-grained model with implicit salt for RNAs to predict 3D structures, stability, and salt effects. Combined with a Monte Carlo simulated annealing algorithm and a coarse-grained force field, the model folds 46 tested RNAs (≤45 nt), including pseudoknots, into their native-like structures from their sequences, with an overall mean RMSD of 3.5 Å and an overall minimum RMSD of 1.9 Å from the experimental structures. For 30 RNA hairpins, the present model also gives reliable predictions of stability and salt effects, with a mean deviation of about 1.0 °C in melting temperatures, as compared with the extensive experimental data. In addition, the model can provide an ensemble of possible 3D structures for a short RNA at a given temperature/salt condition.
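
    The folding strategy named above is Metropolis Monte Carlo with simulated annealing. The generic skeleton of that procedure is sketched below; the "energy" and "perturb" functions are trivial placeholders standing in for the paper's coarse-grained force field and conformational move set.

      # Generic Metropolis simulated-annealing loop; scoring and moves are placeholders.
      import math, random

      def energy(conf):                      # placeholder scoring function
          return sum(x * x for x in conf)

      def perturb(conf):                     # placeholder move: jitter one coordinate
          new = list(conf)
          i = random.randrange(len(new))
          new[i] += random.uniform(-0.5, 0.5)
          return new

      def anneal(conf, t_start=10.0, t_end=0.01, steps=20000):
          t, e = t_start, energy(conf)
          cooling = (t_end / t_start) ** (1.0 / steps)
          for _ in range(steps):
              cand = perturb(conf)
              e_new = energy(cand)
              if e_new < e or random.random() < math.exp(-(e_new - e) / t):
                  conf, e = cand, e_new      # Metropolis acceptance
              t *= cooling                   # geometric cooling schedule
          return conf, e

      best, e_best = anneal([random.uniform(-2, 2) for _ in range(10)])
      print("final energy:", round(e_best, 4))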

  8. Model-Based Sampling and Inference

    U.S. Energy Information Administration (EIA) Indexed Site

    ... Sarndal, C.-E., Swensson, B. and Wretman, J. (1992), Model Assisted Survey Sampling, Springer-Verlag. Steel, P.M. and Shao, J. (1997), "Estimation of Variance Due to Imputation in ...

  9. Commercial Implementation of Model-Based Manufacturing of Nanostructured Metals

    SciTech Connect (OSTI)

    Lowe, Terry C.

    2012-07-24

    Computational modeling is an essential tool for commercial production of nanostructured metals. Strength is limited by imperfections at the high strength levels achievable in nanostructured metals, so processing to achieve homogeneity at the micro- and nano-scales is critical. Manufacturing of nanostructured metal products requires computer control, monitoring, and modeling. Large-scale manufacturing of bulk nanostructured metals by severe plastic deformation is intrinsically a multi-scale problem, and computational modeling at all scales is essential. Multiple scales of modeling must be integrated to predict and control nanostructural, microstructural, and macrostructural product characteristics and production processes.

  10. A nonlocal, ordinary, state-based plasticity model for peridynamics.

    Office of Scientific and Technical Information (OSTI)

    An implicit time integration algorithm for a non-local, state-based peridynamics plasticity model is developed. The flow rule was proposed in [3] without an integration strategy or yield criterion. This report addresses both of these issues and thus establishes the first...

  11. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

    SciTech Connect (OSTI)

    Yock, Adam D. Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

    2014-05-15

    Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
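
    To illustrate the "power fit relationship between the daily and initial tumor volumes" mentioned above, the sketch below fits a power law relating volumes measured on one treatment day to the initial volumes across patients. The volumes are synthetic and the fit is a simplified stand-in; the paper's actual general linear models are not reproduced here.

      # Sketch of a power-law fit relating day-20 tumor volume to initial volume.
      # Volumes are synthetic placeholders.
      import numpy as np
      from scipy.optimize import curve_fit

      v_initial = np.array([3.6, 12.0, 25.0, 60.0, 120.0, 186.5])   # cm^3, synthetic
      v_day20   = np.array([2.4, 8.0, 17.5, 40.0, 86.0, 131.0])     # cm^3, synthetic

      power = lambda v0, a, b: a * v0 ** b
      (a, b), _ = curve_fit(power, v_initial, v_day20, p0=(1.0, 1.0))
      print(f"V_day20 ~= {a:.2f} * V_initial^{b:.2f}")
      print("predicted day-20 volume for a 50 cm^3 tumor:", round(power(50.0, a, b), 1), "cm^3")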

  12. Identifying at-risk employees: A behavioral model for predicting potential insider threats

    SciTech Connect (OSTI)

    Greitzer, Frank L.; Kangas, Lars J.; Noonan, Christine F.; Dalton, Angela C.

    2010-09-01

    A psychosocial model was developed to assess an employee's behavior associated with an increased risk of insider abuse. The model is based on case studies and research literature on factors/correlates associated with precursor behavioral manifestations of individuals committing insider crimes. In many of these crimes, managers and other coworkers observed that the offenders had exhibited signs of stress, disgruntlement, or other issues, but no alarms were raised. Barriers to using such psychosocial indicators include the inability to recognize the signs and the failure to record the behaviors so that they can be assessed by a person experienced in psychosocial evaluations. We have developed a model using a Bayesian belief network with the help of human resources staff experienced in evaluating employee behaviors. We conducted an experiment to assess its agreement with human resources and management professionals, with positive results. If implemented in an operational setting, the model would be part of a set of management tools for employee assessment that can raise an alarm about employees who pose higher insider threat risks. In separate work, we combine this psychosocial model's assessment with computer workstation behavior to raise the efficacy of recognizing an insider crime in the making.
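
    As a toy illustration of how behavioral indicators can be combined into a risk posterior, the sketch below performs a naive Bayesian update over a few binary indicators. The indicator names and every probability are hypothetical and have no connection to the belief network in the paper.

      # Toy Bayesian update over binary behavioral indicators; all values hypothetical.
      priors = {"elevated_risk": 0.02, "baseline": 0.98}

      # P(indicator observed | class), hypothetical values
      likelihoods = {
          "disgruntlement":   {"elevated_risk": 0.60, "baseline": 0.05},
          "policy_violation": {"elevated_risk": 0.40, "baseline": 0.03},
          "stress_signs":     {"elevated_risk": 0.70, "baseline": 0.20},
      }

      def posterior(observed):
          scores = {}
          for cls, prior in priors.items():
              p = prior
              for ind, seen in observed.items():
                  p_obs = likelihoods[ind][cls]
                  p *= p_obs if seen else (1.0 - p_obs)   # naive conditional independence
              scores[cls] = p
          total = sum(scores.values())
          return {cls: p / total for cls, p in scores.items()}

      print(posterior({"disgruntlement": True, "policy_violation": True, "stress_signs": False}))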

  13. A Stochastic Reactor Based Virtual Engine Model Employing Detailed...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    A Stochastic Reactor Based Virtual Engine Model Employing Detailed Chemistry for Kinetic Studies of In-Cylinder Combustion and Exhaust Aftertreatment ...

  14. Experiment-Based Model for the Chemical Interactions between...

    Broader source: Energy.gov (indexed) [DOE]

    Experiment-Based Model for the Chemical Interactions between Geothermal Rocks, ... Enhanced Geothermal Systems (EGS) with CO2 as Heat Transmission Fluid, Chemical Impact of ...

  15. A Model-Based Approach to Scintillator/Photomultiplier System...

    Office of Scientific and Technical Information (OSTI)

    A Model-Based Approach to Scintillator/Photomultiplier System Characterization ...

  16. Physics-Based Constraints in the Forward Modeling Analysis of...

    Office of Scientific and Technical Information (OSTI)

    Physics-Based Constraints in the Forward Modeling Analysis of Time-Correlated Image Data (Long Version) ...

  17. Physics-Based Constraints in the Forward Modeling Analysis of...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Physics-Based Constraints in the Forward Modeling Analysis of Time-Correlated Image Data (Long Version) ...

  18. Physics-based constraints in the forward modeling analysis of...

    Office of Scientific and Technical Information (OSTI)

    Conference: Physics-based constraints in the forward modeling analysis of time-correlated image data ...

  19. Model for the Prediction of the Hydriding Thermodynamics of Pd-Rh-Co Ternary Alloys

    SciTech Connect (OSTI)

    Teter, D.F.; Thoma, D.J.

    1999-03-01

    A dilute solution model (with respect to the substitutional alloying elements) has been developed that accurately predicts the hydride formation and decomposition thermodynamics and the storage capacities of dilute ternary Pd-Rh-Co alloys. The effect of varying the rhodium and cobalt compositions on the thermodynamics of hydride formation and decomposition and on the hydrogen capacity of several palladium-rhodium-cobalt ternary alloys has been investigated using pressure-composition (PC) isotherms. Alloying in the dilute regime (<10 at.%) causes the enthalpy of hydride formation to decrease linearly with increasing alloying content. Cobalt has a stronger effect on the reduction in enthalpy than rhodium for equivalent alloying amounts. Cobalt also reduces the hydrogen storage capacity with increasing alloying content. The plateau thermodynamics are strongly linked to the lattice parameters of the alloys. A near-linear dependence of the enthalpy of hydride formation on the lattice parameter was observed for both the binary Pd-Rh and Pd-Co alloys, as well as for the ternary Pd-Rh-Co alloys. The Pd-5Rh-3Co (at. %) alloy was found to have plateau thermodynamics similar to those of a Pd-10Rh alloy; however, the ternary alloy had a diminished hydrogen storage capacity relative to Pd-10Rh.
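
    For reference, plateau thermodynamics of this kind are conventionally summarized with the van't Hoff relation, and the dilute-alloying trend described above amounts to a composition-linear enthalpy term. This is the standard textbook form, not necessarily the exact parameterization used in the report, and the coefficients a_Rh and a_Co are hypothetical fit parameters:

      \[ \ln\!\left(\frac{P_{\mathrm{eq}}}{P_{0}}\right) \;=\; \frac{\Delta H}{R\,T} \;-\; \frac{\Delta S}{R}, \qquad \Delta H(x_{\mathrm{Rh}}, x_{\mathrm{Co}}) \;\approx\; \Delta H_{\mathrm{Pd}} + a_{\mathrm{Rh}}\,x_{\mathrm{Rh}} + a_{\mathrm{Co}}\,x_{\mathrm{Co}} \]

    where P_eq is the plateau pressure, ΔH and ΔS are the enthalpy and entropy of hydride formation per mole of H2, and, per the results above, the cobalt coefficient would carry the larger magnitude in the dilute regime.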

  20. Using calibrated engineering models to predict energy savings in large-scale geothermal heat pump projects

    SciTech Connect (OSTI)

    Shonder, J.A.; Hughes, P.J.; Thornton, J.W.

    1998-10-01

    Energy savings performance contracting (ESPC) is now receiving greater attention as a means of implementing large-scale energy conservation projects in housing. Opportunities for such projects exist for military housing, federally subsidized low-income housing, and planned communities (condominiums, townhomes, senior centers), to name a few. Accurate prior (to construction) estimates of the energy savings in these projects reduce risk, decrease financing costs, and help avoid post-construction disputes over performance contract baseline adjustments. This paper demonstrates an improved method of estimating energy savings before construction takes place. Using an engineering model calibrated to pre-construction energy-use data collected in the field, this method is able to predict actual energy savings to a high degree of accuracy. This is verified with post-construction energy-use data from a geothermal heat pump ESPC at Fort Polk, Louisiana. This method also allows determination of the relative impact of the various energy conservation measures installed in a comprehensive energy conservation project. As an example, the breakout of savings at Fort Polk for the geothermal heat pumps, desuperheaters, lighting retrofits, and low-flow hot water outlets is provided.

  1. Using Calibrated Engineering Models To Predict Energy Savings In Large-Scale Geothermal Heat Pump Projects

    SciTech Connect (OSTI)

    Shonder, John A; Hughes, Patrick; Thornton, Jeff W.

    1998-01-01

    Energy savings performance contracting (ESPC) is now receiving greater attention as a means of implementing large-scale energy conservation projects in housing. Opportunities for such projects exist for military housing, federally subsidized low-income housing, and planned communities (condominiums, townhomes, senior centers), to name a few. Accurate prior (to construction) estimates of the energy savings in these projects reduce risk, decrease financing costs, and help avoid post-construction disputes over performance contract baseline adjustments. This paper demonstrates an improved method of estimating energy savings before construction takes place. Using an engineering model calibrated to pre-construction energy-use data collected in the field, this method is able to predict actual energy savings to a high degree of accuracy. This is verified with post-construction energy-use data from a geothermal heat pump ESPC at Fort Polk, Louisiana. This method also allows determination of the relative impact of the various energy conservation measures installed in a comprehensive energy conservation project. As an example, the breakout of savings at Fort Polk for the geothermal heat pumps, desuperheaters, lighting retrofits, and low-flow hot water outlets is provided.

  2. NEAR FIELD MODELING OF SPE1 EXPERIMENT AND PREDICTION OF THE...

    Office of Scientific and Technical Information (OSTI)

    AND PREDICTION OF THE SECOND SOURCE PHYSICS EXPERIMENTS ... as Russian and French nuclear test data in granitic rocks. ...

  3. Predictive Geosciences

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Researchers in the Predictive Geosciences competency develop and calibrate efficient tools and quantitative relationships for the science-based prediction of the behavior of engineered-natural systems. Research includes fluid-rock geochemistry, fluid-rock geophysics, and geochemical engineering, specifically Fluid-Rock Geochemistry: pursuing geomaterials science as it relates to the chemical interaction between subsurface fluids and solid materials (both natural and ...

  4. Prediction of subsurface fracture in mining zone of Papua using passive seismic tomography based on Fresnel zone

    SciTech Connect (OSTI)

    Setiadi, Herlan; Nurhandoko, Bagus Endar B.; Wely, Woen; Riyanto, Erwin

    2015-04-16

    Fracture prediction in a block cave of an underground mine is very important for monitoring fracture structures that can be harmful to mining activities. Several methods can be used to obtain such information, such as TDR (Time Domain Reflectometry) and open-hole measurements; both have limitations in measurement range. Passive seismic tomography is one of the subsurface imaging methods. It has advantages in terms of measurement coverage and cost, and it is rich in rock-physics information. This passive seismic tomography study uses the Fresnel zone to model the wavepath by means of a frequency parameter. The Fresnel-zone approach was developed by Nurhandoko in 2000. The result of this study is a tomography of P- and S-wave velocities that can predict the position of fractures. The study also used the summation of wavefronts to obtain the position and time of seismic event occurrence. Fresnel zone tomography and the wavefront summation can also predict the location of geological structures in the mine area.
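
    For context, the quantity that ties the frequency parameter to the wavepath width is the Fresnel zone radius about the ray between source and receiver; the standard definition is shown below, and it is not necessarily the exact parameterization of Nurhandoko (2000):

      \[ r_{n} = \sqrt{\frac{n\,\lambda\, d_{1} d_{2}}{d_{1} + d_{2}}}, \qquad \lambda = \frac{v}{f} \]

    where d_1 and d_2 are the distances from the source and the receiver to the point on the ray, v is the seismic velocity, and f the frequency; the first zone (n = 1) sets the effective width of the "fat ray" used in Fresnel-zone tomography.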

  5. Development and Application of a Statistical Methodology to Evaluate the Predictive Accuracy of Building Energy Baseline Models

    SciTech Connect (OSTI)

    Granderson, Jessica; Price, Phillip N

    2014-02-21

    This paper documents the development and application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole-building energy savings. The methodology complements the principles addressed in resources such as ASHRAE Guideline 14 and the International Performance Measurement and Verification Protocol. It requires fitting a baseline model to data from a "training period" and using the model to predict total electricity consumption during a subsequent "prediction period." We illustrate the methodology by evaluating five baseline models using data from 29 buildings. The training period and prediction period were varied, and model predictions of daily, weekly, and monthly energy consumption were compared to meter data to determine model accuracy. Several metrics were used to characterize the accuracy of the predictions, and in some cases the best-performing model as judged by one metric was not the best performer when judged by another metric.
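
    For readers unfamiliar with the train/predict workflow described above, the sketch below fits an ordinary least-squares baseline to a training period and scores its predictions with CV(RMSE) and NMBE, two accuracy metrics commonly used alongside ASHRAE Guideline 14. The predictors (outdoor temperature and a weekday flag) and all data are hypothetical; this illustrates the general workflow, not the paper's five baseline models.

```python
import numpy as np

def fit_baseline(X_train, y_train):
    """Ordinary least-squares baseline with an intercept."""
    A = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return coef

def predict(coef, X):
    A = np.column_stack([np.ones(len(X)), X])
    return A @ coef

def accuracy_metrics(y_true, y_pred, n_params):
    """CV(RMSE) and NMBE, two common M&V accuracy metrics."""
    resid = y_true - y_pred
    n = len(y_true)
    cv_rmse = np.sqrt(np.sum(resid**2) / (n - n_params)) / np.mean(y_true)
    nmbe = np.sum(resid) / ((n - n_params) * np.mean(y_true))
    return cv_rmse, nmbe

# Hypothetical daily data: outdoor temperature and a weekday flag as predictors.
rng = np.random.default_rng(0)
temp = rng.uniform(0, 30, 365)
weekday = (np.arange(365) % 7 < 5).astype(float)
load = 120 + 4.0 * temp + 25 * weekday + rng.normal(0, 8, 365)

X = np.column_stack([temp, weekday])
train, test = slice(0, 270), slice(270, 365)      # training period vs prediction period
coef = fit_baseline(X[train], load[train])
cv_rmse, nmbe = accuracy_metrics(load[test], predict(coef, X[test]), n_params=3)
print(f"CV(RMSE) = {cv_rmse:.3f}, NMBE = {nmbe:.3f}")
```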

  6. A test of an expert-based bird-habitat relationship model in South Carolina.

    SciTech Connect (OSTI)

    Kilgo, John, C.; Gartner, David, L.; Chapman, Brian, R.; Dunning, John, B., Jr.; Franzreb, Kathleen, E.; Gauthreaux, Sidney, A.; Greenberg, Catheryn, H.; Levey, Douglas, J.; Miller, Karl, V.; Pearson, Scott, F.

    2002-01-01

    Wildlife-habitat relationship models are used widely by land managers to provide information on which species are likely to occur in an area of interest and may be impacted by a proposed management activity. Few such models have been tested. Recent avian census data from the Savannah River Site, South Carolina, were used to validate BIRDHAB, a geographic information system (GIS) model developed by United States Forest Service resource managers to predict relative habitat quality for birds at the stand level on national forests in the southeastern United States. BIRDHAB is based on the species-habitat matrices presented by Hamel (1992).

  7. Review and model-based analysis of factors influencing soil carbon sequestration beneath switchgrass (Panicum virgatum)

    SciTech Connect (OSTI)

    Garten Jr, Charles T [ORNL

    2012-01-01

    A simple, multi-compartment model was developed to predict soil carbon sequestration beneath switchgrass (Panicum virgatum) plantations in the southeastern United States. Soil carbon sequestration is an important component of sustainable switchgrass production for bioenergy because soil organic matter promotes water retention, nutrient supply, and soil properties that minimize erosion. A literature review was included for the purpose of model parameterization, and five model-based experiments were conducted to predict how changes in environment (temperature) or crop management (cultivar, fertilization, and harvest efficiency) might affect soil carbon storage and nitrogen losses. Predictions of soil carbon sequestration were most sensitive to changes in annual biomass production, the ratio of belowground to aboveground biomass production, and temperature. Predictions of ecosystem nitrogen loss were most sensitive to changes in annual biomass production, the soil C/N ratio, and nitrogen remobilization efficiency (i.e., nitrogen cycling within the plant). Model-based experiments indicated that 1) soil carbon sequestration can be highly site specific depending on initial soil carbon stocks, temperature, and the amount of annual nitrogen fertilization, 2) response curves describing switchgrass yield as a function of annual nitrogen fertilization were important to model predictions, 3) plant improvements leading to greater belowground partitioning of biomass could increase soil carbon sequestration, 4) improvements in harvest efficiency had no indicated effect on soil carbon and nitrogen but improved cumulative biomass yield, and 5) plant improvements that reduce organic matter decomposition rates could also increase soil carbon sequestration, even though the latter may not be consistent with desired improvements in plant tissue chemistry to maximize yields of cellulosic ethanol.
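
    The abstract does not reproduce the model equations; the sketch below is a generic two-pool (fast/slow) soil carbon balance with a Q10 temperature modifier, intended only to illustrate why predictions of this kind are sensitive to annual biomass input, belowground allocation, and temperature. All pool sizes and rate constants are hypothetical.

```python
import numpy as np

def simulate_soil_carbon(years, npp, root_fraction, temp_c,
                         k_fast=0.30, k_slow=0.02, to_slow=0.15, q10=2.0):
    """Annual time-step, two-pool soil carbon balance (Mg C/ha). Hypothetical parameters."""
    f_temp = q10 ** ((temp_c - 15.0) / 10.0)        # temperature modifier on decay rates
    fast, slow = 1.0, 30.0                          # initial pool sizes
    history = []
    for _ in range(years):
        litter = npp * root_fraction                # belowground C input to the soil
        decay_fast = k_fast * f_temp * fast
        decay_slow = k_slow * f_temp * slow
        fast += litter - decay_fast
        slow += to_slow * decay_fast - decay_slow   # humification of fast-pool losses
        history.append(fast + slow)
    return np.array(history)

# Same biomass production and root allocation, warmer vs cooler site (hypothetical values)
print(simulate_soil_carbon(30, npp=8.0, root_fraction=0.4, temp_c=18.0)[-1])
print(simulate_soil_carbon(30, npp=8.0, root_fraction=0.4, temp_c=14.0)[-1])
```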

  8. Kinetic data base for combustion modeling

    SciTech Connect (OSTI)

    Tsang, W.; Herron, J.T.

    1993-12-01

    The aim of this work is to develop a set of evaluated rate constants for use in the simulation of hydrocarbon combustion. The approach has been to begin with the small molecules and then introduce larger species with the various structural elements that can be found in all hydrocarbon fuels and decomposition products. Currently, the data base contains most of the species present in combustion systems with up to four carbon atoms. Thus, practically all of the structural groupings found in aliphatic compounds have now been captured. The direction of future work is the addition of aromatic compounds to the data base.

  9. Lattice and off-lattice side chain models of protein folding: Linear time structure prediction better than 86% of optimal

    SciTech Connect (OSTI)

    Hart, W.E.; Istrail, S. [Sandia National Labs., Albuquerque, NM (United States). Algorithms and Discrete Mathematics Dept.

    1996-08-09

    This paper considers the protein structure prediction problem for lattice and off-lattice protein folding models that explicitly represent side chains. Lattice models of proteins have proven extremely useful tools for reasoning about protein folding in unrestricted continuous space through analogy. This paper provides the first illustration of how rigorous algorithmic analyses of lattice models can lead to rigorous algorithmic analyses of off-lattice models. The authors consider two side chain models: a lattice model that generalizes the HP model (Dill 85) to explicitly represent side chains on the cubic lattice, and a new off-lattice model, the HP Tangent Spheres Side Chain model (HP-TSSC), that generalizes this model further by representing the backbone and side chains of proteins with tangent spheres. They describe algorithms for both of these models with mathematically guaranteed error bounds. In particular, the authors describe a linear time performance guaranteed approximation algorithm for the HP side chain model that constructs conformations whose energy is better than 86% of optimal in a face centered cubic lattice, and they demonstrate how this provides a 70% performance guarantee for the HP-TSSC model. This is the first algorithm in the literature for off-lattice protein structure prediction that has a rigorous performance guarantee. The analysis of the HP-TSSC model builds on the work of Dancik and Hannenhalli, who have developed a 16/30 approximation algorithm for the HP model on the hexagonal close packed lattice. Further, the analysis provides a mathematical methodology for transferring performance guarantees on lattices to off-lattice models. These results partially answer the open question of Karplus et al. concerning the complexity of protein folding models that include side chains.

  10. A Human Life-Stage Physiologically Based Pharmacokinetic and Pharmacodynamic Model for Chlorpyrifos: Development and Validation

    SciTech Connect (OSTI)

    Smith, Jordan N.; Hinderliter, Paul M.; Timchalk, Charles; Bartels, M. J.; Poet, Torka S.

    2014-08-01

    Sensitivity to chemicals in animals and humans is known to vary with age. Age-related changes in sensitivity to chlorpyrifos have been reported in animal models. A life-stage physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) model was developed to computationally predict disposition of chlorpyrifos (CPF) and its metabolites, chlorpyrifos-oxon (the ultimate toxicant) and 3,5,6-trichloro-2-pyridinol (TCPy), as well as B-esterase inhibition by chlorpyrifos-oxon in humans. In this model, age-dependent body weight was calculated from a generalized Gompertz function, and compartments (liver, brain, fat, blood, diaphragm, rapid, and slow) were scaled based on body weight from polynomial functions on a fractional body weight basis. Blood flows among compartments were calculated as a constant flow per compartment volume. The life-stage PBPK/PD model was calibrated and tested against controlled adult human exposure studies. Model simulations suggest age-dependent pharmacokinetics and response may exist. At oral doses ≥ 0.55 mg/kg of chlorpyrifos (significantly higher than environmental exposure levels), 6-month-old children are predicted to have higher levels of chlorpyrifos-oxon in blood and higher levels of red blood cell cholinesterase inhibition compared to adults from equivalent oral doses of chlorpyrifos. At lower doses that are more relevant to environmental exposures, the model predicts that adults will have slightly higher levels of chlorpyrifos-oxon in blood and greater cholinesterase inhibition. This model provides a computational framework for age-comparative simulations that can be utilized to predict CPF disposition and biological response over various postnatal life-stages.
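
    A generalized Gompertz body-weight curve and fractional compartment scaling, as described above, can be sketched as follows; the coefficients and tissue fractions are placeholders rather than the published model's values.

```python
import numpy as np

def body_weight_kg(age_years, bw_birth=3.5, bw_adult=73.0, rate=0.25):
    """Generalized Gompertz growth curve for body weight (placeholder coefficients)."""
    return bw_adult * np.exp(np.log(bw_birth / bw_adult) * np.exp(-rate * age_years))

def compartment_volumes(age_years, fractions):
    """Scale compartment volumes (L) as fractions of body weight; the fractions may be
    age-dependent polynomials in the full model, but are constants here for illustration."""
    bw = body_weight_kg(age_years)
    return {tissue: frac * bw for tissue, frac in fractions.items()}

fractions = {"liver": 0.026, "brain": 0.02, "fat": 0.21, "blood": 0.079}  # hypothetical
print(compartment_volumes(0.5, fractions))   # 6-month-old child
print(compartment_volumes(30.0, fractions))  # adult
```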

  11. A validated model to predict microalgae growth in outdoor pond cultures subjected to fluctuating light intensities and water temperatures

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Huesemann, Michael H.; Crowe, Braden J.; Waller, Peter; Chavis, Aaron R.; Hobbs, Samuel J.; Edmundson, Scott J.; Wigmosta, Mark S.

    2015-12-11

    Here, a microalgae biomass growth model was developed for screening novel strains for their potential to exhibit high biomass productivities under nutrient-replete conditions in outdoor ponds subjected to fluctuating light intensities and water temperatures. Growth is modeled by first estimating the light attenuation by biomass according to a scatter-corrected Beer-Lambert Law, and then calculating the specific growth rate in discretized culture volume slices that receive declining light intensities due to attenuation. The model requires the following experimentally determined strain-specific input parameters: specific growth rate as a function of light intensity and temperature, biomass loss rate in the dark as a function of temperature and average light intensity during the preceding light period, and the scatter-corrected biomass light absorption coefficient. The model was successful in predicting the growth performance and biomass productivity of three different microalgae species (Chlorella sorokiniana, Nannochloropsis salina, and Picochlorum sp.) in raceway pond cultures (batch and semi-continuous) subjected to diurnal sunlight intensity and water temperature variations. Model predictions were moderately sensitive to minor deviations in input parameters. To increase the predictive power of this and other microalgae biomass growth models, a better understanding of the effects of mixing-induced rapid light dark cycles on photo-inhibition and short-term biomass losses due to dark respiration in the aphotic zone of the pond is needed.

  12. A validated model to predict microalgae growth in outdoor pond cultures subjected to fluctuating light intensities and water temperatures

    SciTech Connect (OSTI)

    Huesemann, Michael H.; Crowe, Braden J.; Waller, Peter; Chavis, Aaron R.; Hobbs, Samuel J.; Edmundson, Scott J.; Wigmosta, Mark S.

    2015-12-11

    Here, a microalgae biomass growth model was developed for screening novel strains for their potential to exhibit high biomass productivities under nutrient-replete conditions in outdoor ponds subjected to fluctuating light intensities and water temperatures. Growth is modeled by first estimating the light attenuation by biomass according to a scatter-corrected Beer-Lambert Law, and then calculating the specific growth rate in discretized culture volume slices that receive declining light intensities due to attenuation. The model requires the following experimentally determined strain-specific input parameters: specific growth rate as a function of light intensity and temperature, biomass loss rate in the dark as a function of temperature and average light intensity during the preceding light period, and the scatter-corrected biomass light absorption coefficient. The model was successful in predicting the growth performance and biomass productivity of three different microalgae species (Chlorella sorokiniana, Nannochloropsis salina, and Picochlorum sp.) in raceway pond cultures (batch and semi-continuous) subjected to diurnal sunlight intensity and water temperature variations. Model predictions were moderately sensitive to minor deviations in input parameters. To increase the predictive power of this and other microalgae biomass growth models, a better understanding of the effects of mixing-induced rapid light dark cycles on photo-inhibition and short-term biomass losses due to dark respiration in the aphotic zone of the pond is needed.
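
    A minimal sketch of the depth-discretized Beer-Lambert approach outlined in the abstract: attenuate the incident light through culture-volume slices, evaluate a strain-specific growth response mu(I, T) in each slice, and advance biomass with the volume-averaged rate. The form of mu(I, T), the diurnal forcing, and every parameter value are placeholders, and the dark-respiration loss term of the published model is omitted.

```python
import numpy as np

def mu_of_light_temp(I, T, mu_max=0.12, I_k=150.0, T_opt=25.0, T_width=8.0):
    """Placeholder strain-specific growth response (1/h): light saturation times a temperature bell."""
    return mu_max * (I / (I + I_k)) * np.exp(-((T - T_opt) / T_width) ** 2)

def step_biomass(X, I0, T, dt, depth=0.25, ka=0.1, n_slices=50):
    """Advance biomass density X (g/m^3) over dt hours.

    Light in each culture-volume slice follows a Beer-Lambert law, I(z) = I0*exp(-ka*X*z),
    where ka is the (scatter-corrected) biomass light absorption coefficient (m^2/g).
    Dark respiration losses, which the published model includes, are omitted here.
    """
    z = (np.arange(n_slices) + 0.5) * depth / n_slices   # slice mid-depths (m)
    I = I0 * np.exp(-ka * X * z)                         # attenuated light in each slice
    mu_avg = mu_of_light_temp(I, T).mean()               # volume-averaged specific growth rate
    return X * np.exp(mu_avg * dt)

# Hypothetical diurnal light and temperature cycles, hourly steps for five days
X = 50.0                                                 # initial biomass, g/m^3
for hour in range(5 * 24):
    t = hour % 24
    I0 = max(0.0, 2000.0 * np.sin(np.pi * (t - 6) / 12.0))   # daylight between 06:00 and 18:00
    T = 20.0 + 8.0 * np.sin(np.pi * (t - 8) / 12.0)
    X = step_biomass(X, I0, T, dt=1.0)
print(f"biomass after 5 days: {X:.1f} g/m^3")
```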

  13. Modelling Residential-Scale Combustion-Based Cogeneration in Building Simulation

    SciTech Connect (OSTI)

    Ferguson, A.; Kelly, N.; Weber, A.; Griffith, B.

    2009-03-01

    This article describes the development, calibration and validation of a combustion-cogeneration model for whole-building simulation. As part of IEA Annex 42, we proposed a parametric model for studying residential-scale cogeneration systems based on both Stirling and internal combustion engines. The model can predict the fuel use, thermal output and electrical generation of a cogeneration device in response to changing loads, coolant temperatures and flow rates, and control strategies. The model is now implemented in the publicly-available EnergyPlus, ESP-r and TRNSYS building simulation programs. We vetted all three implementations using a comprehensive comparative testing suite, and validated the model's theoretical basis through comparison to measured data. The results demonstrate acceptable-to-excellent agreement, and suggest the model can be used with confidence when studying the energy performance of cogeneration equipment in non-condensing operation.

  14. Reduced Order Modeling for Prediction and Control of Large-Scale Systems.

    SciTech Connect (OSTI)

    Kalashnikova, Irina; Arunajatesan, Srinivasan; Barone, Matthew Franklin; van Bloemen Waanders, Bart Gustaaf; Fike, Jeffrey A.

    2014-05-01

    This report describes work performed from June 2012 through May 2014 as a part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the “symmetry inner product”. Attention is then turned to nonlinear conservation laws. A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier-Stokes equations is derived, and it is demonstrated that if a Galerkin ROM is constructed in this inner product, the ROM system energy will be bounded in a way that is consistent with the behavior of the exact solution to these PDEs, i.e., the ROM will be energy-stable. The viability of the linear as well as nonlinear continuous projection model reduction approaches developed as a part of this project is evaluated on several test cases, including the cavity configuration of interest in the targeted application area. In the second part of this report, some POD/Galerkin approaches for building stable ROMs using discrete projection are explored. It is shown that, for generic linear time-invariant (LTI) systems, a discrete counterpart of the continuous symmetry inner product is a weighted L2 inner product obtained by solving a Lyapunov equation. This inner product was first proposed by Rowley et al., and is termed herein the “Lyapunov inner product”. Comparisons between the symmetry inner product and the Lyapunov inner product are made, and the performance of ROMs constructed using these inner products is evaluated on several benchmark test cases. Also in the second part of this report, a new ROM stabilization approach, termed “ROM stabilization via optimization-based eigenvalue reassignment”, is developed for generic LTI systems. At the heart of this method is a constrained nonlinear least-squares optimization problem that is formulated and solved numerically to ensure accuracy of the stabilized ROM. Numerical studies reveal that the optimization problem is computationally inexpensive to solve, and that the new stabilization approach delivers ROMs that are stable as well as accurate. Summaries of “lessons learned” and perspectives for future work motivated by this LDRD project are provided at the end of each of the two main chapters.
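
    The following sketch illustrates two of the ingredients named above, a POD basis computed from snapshots and a Galerkin projection performed in a Lyapunov-weighted inner product, for a small random stable LTI system. It is a toy reconstruction of the general idea, not the report's implementation; the system, snapshot generation, and basis size are arbitrary.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)

# A stable full-order LTI system x' = A x (a random stable matrix stands in for a discretized PDE).
n = 200
A = -2.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)

# Snapshot matrix from a few trajectories (simple explicit Euler integration).
dt, steps = 1e-3, 400
snaps = []
for _ in range(5):
    x = rng.standard_normal(n)
    for _ in range(steps):
        x = x + dt * (A @ x)
        snaps.append(x.copy())
S = np.array(snaps).T                     # n x n_snapshots

# POD basis: leading left singular vectors of the snapshot matrix.
U, sv, _ = np.linalg.svd(S, full_matrices=False)
k = 10
Phi = U[:, :k]

# Lyapunov inner product: solve A^T P + P A = -I for P (P is SPD because A is stable).
P = solve_continuous_lyapunov(A.T, -np.eye(n))

# Galerkin projection in the P-weighted inner product: q' = (Phi^T P Phi)^{-1} Phi^T P A Phi q.
M = Phi.T @ P @ Phi
Ar = np.linalg.solve(M, Phi.T @ P @ A @ Phi)
print("max Re(eig) of reduced system:", np.linalg.eigvals(Ar).real.max())   # expected <= 0
```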

  15. High-throughput prediction of Acacia and eucalypt lignin syringyl/guaiacyl content using FT-Raman spectroscopy and partial least squares modeling

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Lupoi, Jason S.; Healey, Adam; Singh, Seema; Sykes, Robert; Davis, Mark; Lee, David J.; Shepherd, Merv; Simmons, Blake A.; Henry, Robert J.

    2015-01-16

    High-throughput techniques are necessary to efficiently screen potential lignocellulosic feedstocks for the production of renewable fuels, chemicals, and bio-based materials, thereby reducing experimental time and expense while supplanting tedious, destructive methods. The ratio of lignin syringyl (S) to guaiacyl (G) monomers has been routinely quantified as a way to probe biomass recalcitrance. Mid-infrared and Raman spectroscopy have been demonstrated to produce robust partial least squares models for the prediction of lignin S/G ratios in a diverse group of Acacia and eucalypt trees. The most accurate Raman model has now been used to predict the S/G ratio from 269 unknown Acacia and eucalypt feedstocks. This study demonstrates the application of a partial least squares model composed of Raman spectral data and lignin S/G ratios measured using pyrolysis/molecular beam mass spectrometry (pyMBMS) for the prediction of S/G ratios in an unknown data set. The predicted S/G ratios calculated by the model were averaged according to plant species, and the means were not found to differ from the pyMBMS ratios when evaluating the mean values of each method within the 95 % confidence interval. Pairwise comparisons within each data set were employed to assess statistical differences between each biomass species. While some pairwise appraisals failed to differentiate between species, Acacias, in both data sets, clearly display significant differences in their S/G composition which distinguish them from eucalypts. In conclusion, this research shows the power of using Raman spectroscopy to supplant tedious, destructive methods for the evaluation of the lignin S/G ratio of diverse plant biomass materials.
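
    A partial least squares calibration of the kind described can be sketched with scikit-learn as below; the "spectra" and S/G ratios are synthetic stand-ins, and the number of latent variables would in practice be chosen by cross-validation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 300 "spectra" of 600 Raman shifts with the S/G ratio encoded in a few bands.
n_samples, n_bands = 300, 600
spectra = rng.normal(size=(n_samples, n_bands))
sg_ratio = 2.0 + 0.8 * spectra[:, 120] - 0.5 * spectra[:, 400] + rng.normal(0, 0.1, n_samples)

X_cal, X_val, y_cal, y_val = train_test_split(spectra, sg_ratio, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8)   # latent-variable count would be tuned by cross-validation
pls.fit(X_cal, y_cal)
y_hat = pls.predict(X_val).ravel()

rmsep = np.sqrt(np.mean((y_val - y_hat) ** 2))
print(f"RMSEP on held-out spectra: {rmsep:.3f}")
```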

  16. Solid phase evolution in the Biosphere 2 hillslope experiment as predicted by modeling of hydrologic and geochemical fluxes

    SciTech Connect (OSTI)

    Dontsova, K.; Steefel, C.I.; Desilets, S.; Thompson, A.; Chorover, J.

    2009-07-15

    A reactive transport geochemical modeling study was conducted to help predict the mineral transformations occurring over a ten-year time scale that are expected to impact soil hydraulic properties in the Biosphere 2 (B2) synthetic hillslope experiment. The modeling sought to predict the rate and extent of weathering of a granular basalt (selected for hillslope construction) as a function of climatic drivers, and to assess the feedback effects of such weathering processes on the hydraulic properties of the hillslope. Flow vectors were imported from HYDRUS into a reactive transport code, CrunchFlow2007, which was then used to model mineral weathering coupled to reactive solute transport. Associated particle size evolution was translated into changes in saturated hydraulic conductivity using Rosetta software. We found that flow characteristics, including velocity and saturation, strongly influenced the predicted extent of incongruent mineral weathering and neo-phase precipitation on the hillslope. Results were also highly sensitive to specific surface areas of the soil media, consistent with surface reaction controls on dissolution. Effects of fluid flow on weathering resulted in significant differences in the prediction of soil particle size distributions, which should feed back to alter hillslope hydraulic conductivities.

  17. Development of Modeling Methods and Tools for Predicting Coupled Reactive Transport Processes in Porous Media at Multiple Scales

    SciTech Connect (OSTI)

    Clement, T Prabhakar; Barnett, Mark O; Zheng, Chunmiao; Jones, Norman L

    2010-05-05

    DE-FG02-06ER64213: Development of Modeling Methods and Tools for Predicting Coupled Reactive Transport Processes in Porous Media at Multiple Scales. Investigators: T. Prabhakar Clement (PD/PI) and Mark O. Barnett (Auburn), Chunmiao Zheng (Univ. of Alabama), and Norman L. Jones (BYU). The objective of this project was to develop scalable modeling approaches for predicting the reactive transport of metal contaminants. We studied two contaminants, a radioactive cation [U(VI)] and a metal(loid) oxyanion system [As(III/V)], and investigated their interactions with two types of subsurface materials, iron and manganese oxyhydroxides. We also developed modeling methods for describing the experimental results. Overall, the project supported 25 researchers at three universities and produced 15 journal articles, 3 book chapters, 6 PhD dissertations, and 6 MS theses. Three key journal articles are: 1) Jeppu et al., A scalable surface complexation modeling framework for predicting arsenate adsorption on goethite-coated sands, Environ. Eng. Sci., 27(2): 147-158, 2010. 2) Loganathan et al., Scaling of adsorption reactions: U(VI) experiments and modeling, Applied Geochemistry, 24 (11), 2051-2060, 2009. 3) Phillippi et al., Theoretical solid/solution ratio effects on adsorption and transport: uranium (VI) and carbonate, Soil Sci. Soc. of America, 71:329-335, 2007.

  18. Model-Predictive Cascade Mitigation in Electric Power Systems With Storage and Renewables-Part I: Theory and Implementation

    SciTech Connect (OSTI)

    Almassalkhi, MR; Hiskens, IA

    2015-01-01

    A novel model predictive control (MPC) scheme is developed for mitigating the effects of severe line-overload disturbances in electrical power systems. A piece-wise linear convex approximation of line losses is employed to model the effect of transmission line power flow on conductor temperatures. Control is achieved through a receding-horizon MPC strategy, which alleviates line temperature overloads and thereby prevents the propagation of outages. The MPC strategy adjusts line flows by rescheduling generation, energy storage and controllable load, while taking into account ramp-rate limits and network limitations. In Part II of this paper, the MPC strategy is illustrated through simulation of the IEEE RTS-96 network, augmented to incorporate energy storage and renewable generation.
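
    A toy receding-horizon controller in the spirit of the scheme described above is sketched below: a single overloaded line with linearized temperature dynamics, a redispatch variable with ramp limits, and a small linear program re-solved at every step. The dynamics, limits, and cost are hypothetical simplifications, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy single-line thermal model, linearized so each MPC step is a small LP (all constants hypothetical):
#   T[k+1] = a*T[k] + b - g*u[k],   u[k] = generation redispatch relieving the line (MW)
a, b, g = 0.9, 12.0, 0.15          # cooling factor, heating from uncontrolled flow, relief per MW redispatched
T_max, u_max, ramp, H = 90.0, 60.0, 15.0, 12

def mpc_step(T0, u_prev):
    """One receding-horizon solve: minimize total redispatch s.t. temperature and ramp limits."""
    A_ub, b_ub = [], []
    for k in range(1, H + 1):                       # temperature constraints T[k] <= T_max
        row = np.zeros(H)
        for j in range(k):
            row[j] = -g * a ** (k - 1 - j)
        free_response = a ** k * T0 + b * sum(a ** (k - 1 - j) for j in range(k))
        A_ub.append(row); b_ub.append(T_max - free_response)
    for k in range(H):                              # ramp-rate limits |u[k] - u[k-1]| <= ramp
        for sign in (1.0, -1.0):
            row = np.zeros(H)
            row[k] = sign
            if k > 0:
                row[k - 1] = -sign
            A_ub.append(row); b_ub.append(ramp + (sign * u_prev if k == 0 else 0.0))
    res = linprog(np.ones(H), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, u_max)] * H, method="highs")
    return res.x[0]                                 # apply only the first move, then re-solve

T, u = 70.0, 0.0                                    # uncontrolled, the line would settle near 120 C
for step in range(20):
    u = mpc_step(T, u)
    T = a * T + b - g * u
    print(f"step {step:2d}: redispatch {u:6.2f} MW, line temperature {T:6.2f} C")
```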

  19. Failure Mode Classification for Life Prediction Modeling of Solid-State Lighting

    SciTech Connect (OSTI)

    Sakalaukus, Peter Joseph

    2015-08-01

    Since the passing of the Energy Independence and Security Act of 2007, the U.S. government has mandated greater energy independence which has acted as a catalyst for accelerating and facilitating research efforts toward the development and deployment of market-driven solutions for energy-saving homes, buildings and manufacturing, as well as sustainable transportation and renewable electricity generation. As part of this effort, an emphasis toward advancing solid-state lighting technology through research, development, demonstration, and commercial applications is assisting in the phase out of the common incandescent light bulb, as well as developing a more economical lighting source that is less toxic than compact fluorescent lighting. This has led lighting manufacturers to pursue SSL technologies for a wide range of consumer lighting applications. An SSL luminaire’s lifetime can be characterized in terms of lumen maintenance life. Lumen maintenance or lumen depreciation is the percentage decrease in the relative luminous flux from that of the original, pristine luminous flux value. Lumen maintenance life is the estimated operating time, in hours, when the desired failure threshold is projected to be reached at normal operating conditions. One accepted failure threshold of SSL luminaires is lumen maintenance of 70% -- a 30% reduction in the light output of the luminaire. Currently, the only approved lighting standard that puts forth a recommendation for long-term luminous flux maintenance projections towards a specified failure threshold of an SSL luminaire is the IES TM-28-14 (TM28) standard. TM28 was derived as a means to compare luminaires that have been tested at different facilities, research labs or companies. TM28 recommends the use of the Arrhenius equation to determine SSL device specific reaction rates from thermally driven failure mechanisms used to characterize a single failure mode – the relative change in the luminous flux output or “light power” of the SSL luminaire. The use of the Arrhenius equation necessitates two different temperature conditions (25°C and 45°C are suggested by TM28) to determine the SSL lamp specific activation energy. One principal issue with TM28 is the lack of additional stresses or parameters needed to characterize non-temperature dependent failure mechanisms. Another principal issue with TM28 is the assumption that lumen maintenance or lumen depreciation gives an adequate comparison between SSL luminaires. Additionally, TM28 has no process for the determination of acceleration factors or lifetime estimations. Currently, a literature gap exists for established accelerated test methods for SSL devices to assess quality, reliability and durability before being introduced into the marketplace. Furthermore, there is a need for Physics-of-Failure based approaches to understand the processes and mechanisms that induce failure for the assessment of SSL reliability in order to develop generalized acceleration factors that better represent SSL product lifetime. This and the deficiencies in TM28 validate the need behind the development of acceleration techniques to quantify SSL reliability under a variety of environmental conditions. The ability to assess damage accrual and investigate reliability of SSL components and systems is essential to understanding the lifetime of the SSL device itself. The methodologies developed in this work increase the understanding of SSL devices through the investigation of component and device reliability under a variety of accelerated test conditions. The approaches for suitable lifetime predictions through the development of novel generalized acceleration factors, as well as a prognostics and health management framework, will greatly reduce the time and effort needed to produce SSL acceleration factors for the development of lifetime predictions.
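
    The Arrhenius bookkeeping that TM-28-style projections rely on reduces to a few lines; the sketch below extracts an activation energy from decay-rate constants fitted at two temperatures and converts it into an acceleration factor. The rate constants used here are hypothetical.

```python
import math

R = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(rate1, T1_c, rate2, T2_c):
    """Arrhenius activation energy (eV) from lumen-decay rate constants at two temperatures."""
    T1, T2 = T1_c + 273.15, T2_c + 273.15
    return R * math.log(rate2 / rate1) / (1.0 / T1 - 1.0 / T2)

def acceleration_factor(Ea, T_use_c, T_stress_c):
    """Acceleration factor between a stress temperature and the use temperature."""
    T_use, T_stress = T_use_c + 273.15, T_stress_c + 273.15
    return math.exp(Ea / R * (1.0 / T_use - 1.0 / T_stress))

# Hypothetical decay-rate constants fitted to lumen-maintenance data at 25 C and 45 C.
Ea = activation_energy(rate1=0.004, T1_c=25.0, rate2=0.010, T2_c=45.0)
print(f"Ea = {Ea:.2f} eV")
print(f"AF(55 C stress -> 25 C use) = {acceleration_factor(Ea, 25.0, 55.0):.1f}")
```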

  20. Predictive Simulation | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Predictive Simulation: Empirical to First-Principles Models. Computing tools currently used in nuclear industry and regulatory practice are based primarily on empirical math models to approximate, or fit, existing experimental data. Many have a pedigree reaching back to the 1970s and 1980s and were designed to support decision making and evaluate everything from behavior of individual fuel pellets to severe accident scenarios for an entire power plant. Programs like SAPHIRE,

  1. CPUF - a chemical-structure-based polyurethane foam decomposition and foam response model.

    SciTech Connect (OSTI)

    Fletcher, Thomas H. (Brigham Young University, Provo, UT); Thompson, Kyle Richard; Erickson, Kenneth L.; Dowding, Kevin J.; Clayton, Daniel (Brigham Young University, Provo, UT); Chu, Tze Yao; Hobbs, Michael L.; Borek, Theodore Thaddeus III

    2003-07-01

    A Chemical-structure-based PolyUrethane Foam (CPUF) decomposition model has been developed to predict the fire-induced response of rigid, closed-cell polyurethane foam-filled systems. The model, developed for the B-61 and W-80 fireset foam, is based on a cascade of bond-breaking reactions that produce CO2. Percolation theory is used to dynamically quantify polymer fragment populations of the thermally degrading foam. The partition between condensed-phase polymer fragments and gas-phase polymer fragments (i.e. vapor-liquid split) was determined using a vapor-liquid equilibrium model. The CPUF decomposition model was implemented into the finite element (FE) heat conduction codes COYOTE and CALORE, which support chemical kinetics and enclosure radiation. Elements were removed from the computational domain when the calculated solid mass fractions within the individual finite element decreased below a set criterion. Element removal, referred to as “element death,” creates a radiation enclosure (assumed to be non-participating) as well as a decomposition front, which separates the condensed-phase encapsulant from the gas-filled enclosure. All of the chemistry parameters as well as thermophysical properties for the CPUF model were obtained from small-scale laboratory experiments. The CPUF model was evaluated by comparing predictions to measurements. The validation experiments included several thermogravimetric experiments at pressures ranging from ambient pressure to 30 bars. Larger, component-scale experiments were also used to validate the foam response model. The effects of heat flux, bulk density, orientation, embedded components, confinement and pressure were measured and compared to model predictions. Uncertainties in the model results were evaluated using a mean value approach. The measured mass loss in the TGA experiments and the measured location of the decomposition front were within the 95% prediction limit determined using the CPUF model for all of the experiments where the decomposition gases were vented sufficiently. The CPUF model results were not as good for the partially confined radiant heat experiments where the vent area was regulated to maintain pressure. Liquefaction and flow effects, which are not considered in the CPUF model, become important when the decomposition gases are confined.

  2. Final predictions of ambient conditions along the east-west crossdrift using the 3-D UZ site-scale model. Level 4 milestone SP33ABM4.

    SciTech Connect (OSTI)

    Ritcey, A.C.; Sonnenthal, E.L.; Wu, Y.S.; Haukwa, C.; Bodvarsson,G.S.

    1998-03-01

    In 1998, the Yucca Mountain Site Characterization Project (YMP) is expected to continue construction of an East-West Cross Drift. The 5-meter diameter drift will extend from the North Ramp of the Exploratory Studies Facility (ESF), near Station 19+92, southwest through the repository block, and over to and through the Solitario Canyon Fault. This drift is part of a program designed to enhance characterization of Yucca Mountain and to complement existing surface-based and ESF testing studies. The objective of this milestone is to use the three-dimensional (3-D) unsaturated zone (UZ) site-scale model to predict ambient conditions along the East-West Cross Drift. These predictions provide scientists and engineers with a priori information that can support design and construction of the East-West Cross Drift and associated testing program. The predictions also provide, when compared with data collected after drift construction, an opportunity to test and verify the calibration of the 3-D UZ site-scale model.

  3. A Geothermal Field Model Based On Geophysical And Thermal Prospectings...

    Open Energy Info (EERE)

    Journal Article (OpenEI Reference Library): A Geothermal Field Model Based On Geophysical And Thermal Prospectings In Nea Kessani (NE Greece).

  4. A nonlocal, ordinary, state-based plasticity model for peridynamics...

    Office of Scientific and Technical Information (OSTI)

    An implicit time integration algorithm for a non-local, state-based peridynamics plasticity model is developed. The flow rule was proposed in [3] without an integration strategy ...

  5. Physiologically-based pharmacokinetic model for Fentanyl in support of the development of Provisional Advisory Levels

    SciTech Connect (OSTI)

    Shankaran, Harish; Adeshina, Femi; Teeguarden, Justin G.

    2013-12-15

    Provisional Advisory Levels (PALs) are tiered exposure limits for toxic chemicals in air and drinking water that are developed to assist in emergency responses. Physiologically-based pharmacokinetic (PBPK) modeling can support this process by enabling extrapolations across doses, and exposure routes, thereby addressing gaps in the available toxicity data. Here, we describe the development of a PBPK model for Fentanyl – a synthetic opioid used clinically for pain management – to support the establishment of PALs. Starting from an existing model for intravenous Fentanyl, we first optimized distribution and clearance parameters using several additional IV datasets. We then calibrated the model using pharmacokinetic data for various formulations, and determined the absorbed fraction, F, and time taken for the absorbed amount to reach 90% of its final value, t90. For aerosolized pulmonary Fentanyl, F = 1 and t90 < 1 min indicating complete and rapid absorption. The F value ranged from 0.35 to 0.74 for oral and various transmucosal routes. Oral Fentanyl was absorbed the slowest (t90 ≈ 300 min); the absorption of intranasal Fentanyl was relatively rapid (t90 ≈ 20–40 min); and the various oral transmucosal routes had intermediate absorption rates (t90 ≈ 160–300 min). Based on these results, for inhalation exposures, we assumed that all of the Fentanyl inhaled from the air during each breath directly and instantaneously enters the arterial circulation. We present model predictions of Fentanyl blood concentrations in oral and inhalation scenarios relevant for PAL development, and provide an analytical expression that can be used to extrapolate between oral and inhalation routes for the derivation of PALs. - Highlights: • We develop a Fentanyl PBPK model for relating external dose to internal levels. • We calibrate the model to oral and inhalation exposures using > 50 human datasets. • Model predictions are in good agreement with the available pharmacokinetic data. • The model can be used for extrapolating across routes, doses and exposure durations. • We illustrate how the model can be used for developing Provisional Advisory Levels.
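
    Under a simple first-order absorption assumption, the reported t90 values map directly onto absorption rate constants via t90 = ln(10)/ka. The sketch below illustrates that relationship with hypothetical ka and F values; it is not the calibrated PBPK model.

```python
import numpy as np

def absorbed_amount(t_min, dose, F, ka_per_min):
    """Cumulative amount absorbed under first-order absorption: A(t) = F * dose * (1 - exp(-ka*t))."""
    return F * dose * (1.0 - np.exp(-ka_per_min * t_min))

def t90(ka_per_min):
    """Time (min) for the absorbed amount to reach 90% of its final value: t90 = ln(10)/ka."""
    return np.log(10.0) / ka_per_min

# Hypothetical first-order absorption rate constants for two routes
for route, ka in [("oral", 0.0077), ("intranasal", 0.077)]:
    print(f"{route:10s}: t90 = {t90(ka):6.1f} min, absorbed at 60 min = "
          f"{absorbed_amount(60.0, dose=100.0, F=0.5, ka_per_min=ka):5.1f} ug")
```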

  6. A Model-Based Approach to Scintillator/Photomultiplier System

    Office of Scientific and Technical Information (OSTI)

    Technical Report: A Model-Based Approach to Scintillator/Photomultiplier System Characterization. Abstract not provided. Author: Candy, J. V., Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Publication Date: 2014-12-15. OSTI Identifier: 1179437.

  7. Physics-Based Constraints in the Forward Modeling Analysis of

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Physics-Based Constraints in the Forward Modeling Analysis of Time-Correlated Image Data (Long Version). Authors: Carroll, James L.; Tomkins, Christopher D., Los Alamos National Laboratory.

  8. Physics-based constraints in the forward modeling analysis of

    Office of Scientific and Technical Information (OSTI)

    Conference: Physics-based constraints in the forward modeling analysis of time-correlated image data. Authors: Carroll, James; Tomkins, Chris, Los Alamos National Laboratory. Publication Date: 2012-03-15. OSTI Identifier: 1209307. Report Number: LA-UR-12-01365.

  9. Physics-based statistical learning approach to mesoscopic model selection

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Physics-based statistical learning approach to mesoscopic model selection. Authors: Taverniers, Søren; Haut, Terry S.; Barros, Kipton; Alexander, Francis J.; Lookman, Turab. Publication Date: 2015-11-09. OSTI Identifier: 1225546. Grant/Contract Number: AC52-06NA25396.

  10. Transient PVT measurements and model predictions for vessel heat transfer. Part II.

    SciTech Connect (OSTI)

    Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.; Evans, Gregory Herbert; Rice, Steven F.

    2010-07-01

    Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models in which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.

  11. Modeling the Number of Ignitions Following an Earthquake: Developing Prediction Limits for Overdispersed Count Data

    Office of Environmental Management (EM)

    Department of Energy: Model Documents for an Energy Savings Performance Contract Project. This page contains a model contract template and companion documents to help you launch energy efficiency projects through Energy Savings Performance Contracting (ESPC). Read about how these documents were developed. The ESPC Model Documents were prepared as resources that can be used when developing or updating procurement and contracting

  12. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    SciTech Connect (OSTI)

    Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E

    2011-03-31

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment is assimilated to generate a posteriori uncertainties on the reactor concept’s core attributes from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found to maximize the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, are being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desired to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself because with improved data the reactor concept itself can be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment to the design concept is quantitatively determined. A technique is then established to assimilate this data and produce a posteriori uncertainties on key attributes and responses of the design concept. Several experiment perturbations based on engineering judgment are used to demonstrate these methods and also serve as an initial generation of the optimization problem. Finally, an optimization technique is developed which will simultaneously arrive at an optimized experiment to produce an optimized reactor design. Solution of this problem is made possible by the use of the simulated annealing algorithm for solution of optimization problems. The optimization examined in this work is based on maximizing the reactor cost savings associated with the modified design made possible by using the design margin gained through reduced basic nuclear data uncertainties. Cost values for experiment design specifications and reactor design specifications are established and used to compute a total savings by comparing the a posteriori reactor cost to the a priori cost plus the cost of the experiment. The optimized solution arrives at a maximized cost savings.
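
    The simulated annealing search mentioned above can be sketched generically as below; the objective is a toy stand-in for the posterior cost savings minus experiment cost, and the neighborhood move is arbitrary.

```python
import math
import random

random.seed(0)

def net_savings(spec):
    """Placeholder objective: posterior reactor savings minus experiment cost for a
    candidate experiment specification (here just a tuple of three design dials)."""
    x, y, z = spec
    return -((x - 3) ** 2 + (y - 7) ** 2 + 0.5 * (z - 2) ** 2) + 40.0   # toy surface, maximum 40

def neighbor(spec):
    """Perturb one design dial by +/- 1 within bounds [0, 10]."""
    spec = list(spec)
    i = random.randrange(3)
    spec[i] = min(10, max(0, spec[i] + random.choice((-1, 1))))
    return tuple(spec)

def simulated_annealing(spec, t0=5.0, cooling=0.98, iters=2000):
    best, best_val = spec, net_savings(spec)
    current, current_val, temp = spec, best_val, t0
    for _ in range(iters):
        cand = neighbor(current)
        cand_val = net_savings(cand)
        # Accept improvements always; accept worse candidates with Boltzmann probability.
        if cand_val > current_val or random.random() < math.exp((cand_val - current_val) / temp):
            current, current_val = cand, cand_val
        if current_val > best_val:
            best, best_val = current, current_val
        temp *= cooling
    return best, best_val

print(simulated_annealing((0, 0, 0)))   # should approach (3, 7, 2) with savings near 40
```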

  13. Predicting Overall Survival After Stereotactic Ablative Radiation Therapy in Early-Stage Lung Cancer: Development and External Validation of the Amsterdam Prognostic Model

    SciTech Connect (OSTI)

    Louie, Alexander V.; Haasbeek, Cornelis J.A.; Mokhles, Sahar; Rodrigues, George B.; Stephans, Kevin L.; Lagerwaard, Frank J.; Palma, David A.; Videtic, Gregory M.M.; Warner, Andrew; Takkenberg, Johanna J.M.; Reddy, Chandana A.; Maat, Alex P.W.M.; Woody, Neil M.; Slotman, Ben J.; Senan, Suresh

    2015-09-01

    Purpose: A prognostic model for 5-year overall survival (OS), consisting of recursive partitioning analysis (RPA) and a nomogram, was developed for patients with early-stage non-small cell lung cancer (ES-NSCLC) treated with stereotactic ablative radiation therapy (SABR). Methods and Materials: A primary dataset of 703 ES-NSCLC SABR patients was randomly divided into a training (67%) and an internal validation (33%) dataset. In the former group, 21 unique parameters consisting of patient, treatment, and tumor factors were entered into an RPA model to predict OS. Univariate and multivariate models were constructed for RPA-selected factors to evaluate their relationship with OS. A nomogram for OS was constructed based on factors significant in multivariate modeling and validated with calibration plots. Both the RPA and the nomogram were externally validated in independent surgical (n=193) and SABR (n=543) datasets. Results: RPA identified 2 distinct risk classes based on tumor diameter, age, World Health Organization performance status (PS) and Charlson comorbidity index. This RPA had moderate discrimination in SABR datasets (c-index range: 0.52-0.60) but was of limited value in the surgical validation cohort. The nomogram predicting OS included smoking history in addition to RPA-identified factors. In contrast to RPA, validation of the nomogram performed well in internal validation (r²=0.97) and external SABR (r²=0.79) and surgical cohorts (r²=0.91). Conclusions: The Amsterdam prognostic model is the first externally validated prognostication tool for OS in ES-NSCLC treated with SABR available to individualize patient decision making. The nomogram retained strong performance across surgical and SABR external validation datasets. RPA performance was poor in surgical patients, suggesting that 2 distinct patient populations are being treated with these 2 effective modalities.

  14. ALE3D Model Predictions and Materials Characterization for the Cookoff Response of PBXN-109

    SciTech Connect (OSTI)

    McClelland, M A; Maienschein, J L; Nichols, A L; Wardell, J F; Atwood, A I; Curran, P O

    2002-03-19

    ALE3D simulations are presented for the thermal explosion of PBXN-109 (RDX, Al, HTPB, DOA) in support of an effort by the U. S. Navy and Department of Energy (DOE) to validate computational models. The U.S. Navy is performing benchmark tests for the slow cookoff of PBXN-109 in a sealed tube. Candidate models are being tested using the ALE3D code, which can simulate the coupled thermal, mechanical, and chemical behavior during heating, ignition, and explosion. The strength behavior of the solid constituents is represented by a Steinberg-Guinan model while polynomial and gamma-law expressions are used for the Equation Of State (EOS) for the solid and gas species, respectively. A void model is employed to represent the air in gaps. ALE3D model parameters are specified using measurements of thermal and mechanical properties including thermal expansion, heat capacity, shear modulus, and bulk modulus. A standard three-step chemical kinetics model is used during the thermal ramp, and a pressure-dependent burn front model is employed during the rapid expansion. Parameters for the three-step kinetics model are specified using measurements of the One-Dimensional-Time-to-Explosion (ODTX), while measurements for burn rate of pristine and thermally damaged material are employed to determine parameters in the burn front model. Results are given for calculations in which heating, ignition, and explosion are modeled in a single simulation. We compare model results to measurements for the cookoff temperature and tube wall strain.

  15. Progress toward bridging from atomistic to continuum modeling to predict nuclear waste glass dissolution.

    SciTech Connect (OSTI)

    Zapol, Peter; Bourg, Ian; Criscenti, Louise Jacqueline; Steefel, Carl I.; Schultz, Peter Andrew

    2011-10-01

    This report summarizes research performed for the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Subcontinuum and Upscaling Task. The work conducted focused on developing a roadmap to include molecular scale, mechanistic information in continuum-scale models of nuclear waste glass dissolution. This information is derived from molecular-scale modeling efforts that are validated through comparison with experimental data. In addition to developing a master plan to incorporate a subcontinuum mechanistic understanding of glass dissolution into continuum models, methods were developed to generate constitutive dissolution rate expressions from quantum calculations, force field models were selected to generate multicomponent glass structures and gel layers, classical molecular modeling was used to study diffusion through nanopores analogous to those in the interfacial gel layer, and a micro-continuum model (KμC) was developed to study coupled diffusion and reaction at the glass-gel-solution interface.
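
    The report does not reproduce its constitutive rate expressions here; a transition-state-theory style dissolution rate law of the general form used in many continuum reactive-transport codes is sketched below, with every parameter value a placeholder rather than a fitted glass property.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def dissolution_rate(T_c, pH, Q, K, k25=1.0e-9, Ea=80e3, n_h=-0.4):
    """Transition-state-theory style dissolution rate (mol / m^2 / s), placeholder parameters:

        rate = k25 * exp(-Ea/R * (1/T - 1/298.15)) * a_H+^n_h * (1 - Q/K)
    """
    T = T_c + 273.15
    arrhenius = math.exp(-Ea / R * (1.0 / T - 1.0 / 298.15))
    a_h = 10.0 ** (-pH)               # hydrogen ion activity
    affinity = 1.0 - Q / K            # saturation term; rate -> 0 as the solution approaches equilibrium
    return k25 * arrhenius * a_h ** n_h * affinity

# Rate far from equilibrium vs near saturation at 90 C, pH 9 (hypothetical Q and K values)
print(dissolution_rate(90.0, 9.0, Q=1e-12, K=1e-8))
print(dissolution_rate(90.0, 9.0, Q=9e-9, K=1e-8))
```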

  16. Validation of a Fast-Fluid-Dynamics Model for Predicting Distribution of Particles with Low Stokes Number

    SciTech Connect (OSTI)

    Zuo, Wangda; Chen, Qingyan

    2011-06-01

    To design a healthy indoor environment, it is important to study airborne particle distribution indoors. As an intermediate model between multizone models and computational fluid dynamics (CFD), a fast fluid dynamics (FFD) model can be used to provide temporal and spatial information of particle dispersion in real time. This study evaluated the accuracy of the FFD for predicting transportation of particles with low Stokes number in a duct and in a room with mixed convection. The evaluation was to compare the numerical results calculated by the FFD with the corresponding experimental data and the results obtained by the CFD. The comparison showed that the FFD could capture major pattern of particle dispersion, which is missed in models with well-mixed assumptions. Although the FFD was less accurate than the CFD partially due to its simplification in numeric schemes, it was 53 times faster than the CFD.

  17. A Novel Method for Predicting Late Genitourinary Toxicity After Prostate Radiation Therapy and the Need for Age-Based Risk-Adapted Dose Constraints

    SciTech Connect (OSTI)

    Ahmed, Awad A.; Egleston, Brian; Alcantara, Pino; Li, Linna; Pollack, Alan; Horwitz, Eric M.; Buyyounouski, Mark K.

    2013-07-15

    Background: There are no well-established normal tissue sparing dose–volume histogram (DVH) criteria that limit the risk of urinary toxicity from prostate radiation therapy (RT). The aim of this study was to determine which criteria predict late toxicity among various DVH parameters when contouring the entire solid bladder and its contents versus the bladder wall. The area under the histogram curve (AUHC) was also analyzed. Methods and Materials: From 1993 to 2000, 503 men with prostate cancer received 3-dimensional conformal RT (median follow-up time, 71 months). The whole bladder and the bladder wall were contoured in all patients. The primary endpoint was grade ≥2 genitourinary (GU) toxicity occurring ≥3 months after completion of RT. Cox regressions of time to grade ≥2 toxicity were estimated separately for the entire bladder and bladder wall. Concordance probability estimates (CPE) assessed model discriminative ability. Before training the models, an external random test group of 100 men was set aside for testing. Separate analyses were performed based on the mean age (≤68 vs >68 years). Results: Age, pretreatment urinary symptoms, mean dose (entire bladder and bladder wall), and AUHC (entire bladder and bladder wall) were significant (P<.05) in multivariable analysis. Overall, bladder wall CPE values were higher than solid bladder values. The AUHC for bladder wall provided the greatest discrimination for late bladder toxicity when compared with alternative DVH points, with CPE values of 0.68 for age ≤68 years and 0.81 for age >68 years. Conclusion: The AUHC method based on bladder wall volumes was superior for predicting late GU toxicity. Age >68 years was associated with late grade ≥2 GU toxicity, which suggests that risk-adapted dose constraints based on age should be explored.

  18. Development of Chemical Model to Predict the Interactions between Supercritical CO2 and Fluid, and Rocks in EGS Reservoirs

    Broader source: Energy.gov [DOE]

    This project will develop a chemical model, based on existing models and databases, that is capable of simulating chemical reactions between supercritical (SC) CO2 and Enhanced Geothermal System (EGS) reservoir rocks of various compositions in aqueous, non-aqueous and 2-phase environments.

  19. A connectivity-based modeling approach for representing hysteresis in macroscopic two-phase flow properties

    SciTech Connect (OSTI)

    Cihan, Abdullah; Birkholzer, Jens; Trevisan, Luca; Bianchi, Marco; Zhou, Quanlin; Illangasekare, Tissa

    2014-12-31

    During CO2 injection and storage in deep reservoirs, the injected CO2 enters into an initially brine saturated porous medium, and after the injection stops, natural groundwater flow eventually displaces the injected mobile-phase CO2, leaving behind residual non-wetting fluid. Accurate modeling of two-phase flow processes is needed for predicting fate and transport of injected CO2, evaluating environmental risks and designing more effective storage schemes. The entrapped non-wetting fluid saturation is typically a function of the spatially varying maximum saturation at the end of injection. At the pore-scale, distribution of void sizes and connectivity of void space play a major role in the macroscopic hysteresis behavior and capillary entrapment of wetting and non-wetting fluids. This paper presents development of an approach based on the connectivity of void space for modeling hysteretic capillary pressure-saturation-relative permeability relationships. The new approach uses void-size distribution and a measure of void space connectivity to compute the hysteretic constitutive functions and to predict entrapped fluid phase saturations. Two functions, the drainage connectivity function and the wetting connectivity function, are introduced to characterize connectivity of fluids in void space during drainage and wetting processes. These functions can be estimated through pore-scale simulations in computer-generated porous media or from traditional experimental measurements of primary drainage and main wetting curves. The hysteresis model for saturation-capillary pressure is tested successfully by comparing the model-predicted residual saturation and scanning curves with actual data sets obtained from column experiments found in the literature. A numerical two-phase model simulator with the new hysteresis functions is tested against laboratory experiments conducted in a quasi-two-dimensional flow cell (91.4 cm × 5.6 cm × 61 cm), packed with homogeneous and heterogeneous sands. Initial results show that the model can predict spatial and temporal distribution of injected fluid during the experiments reasonably well. However, further analyses are needed for comprehensively testing the ability of the model to predict transient two-phase flow processes and capillary entrapment in geological reservoirs during geological carbon sequestration.

  20. A connectivity-based modeling approach for representing hysteresis in macroscopic two-phase flow properties

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Cihan, Abdullah; Birkholzer, Jens; Trevisan, Luca; Bianchi, Marco; Zhou, Quanlin; Illangasekare, Tissa

    2014-12-31

    During CO2 injection and storage in deep reservoirs, the injected CO2 enters into an initially brine saturated porous medium, and after the injection stops, natural groundwater flow eventually displaces the injected mobile-phase CO2, leaving behind residual non-wetting fluid. Accurate modeling of two-phase flow processes is needed for predicting fate and transport of injected CO2, evaluating environmental risks and designing more effective storage schemes. The entrapped non-wetting fluid saturation is typically a function of the spatially varying maximum saturation at the end of injection. At the pore-scale, distribution of void sizes and connectivity of void space play a major role for the macroscopic hysteresis behavior and capillary entrapment of wetting and non-wetting fluids. This paper presents development of an approach based on the connectivity of void space for modeling hysteretic capillary pressure-saturation-relative permeability relationships. The new approach uses void-size distribution and a measure of void space connectivity to compute the hysteretic constitutive functions and to predict entrapped fluid phase saturations. Two functions, the drainage connectivity function and the wetting connectivity function, are introduced to characterize connectivity of fluids in void space during drainage and wetting processes. These functions can be estimated through pore-scale simulations in computer-generated porous media or from traditional experimental measurements of primary drainage and main wetting curves. The hysteresis model for saturation-capillary pressure is tested successfully by comparing the model-predicted residual saturation and scanning curves with actual data sets obtained from column experiments found in the literature. A numerical two-phase model simulator with the new hysteresis functions is tested against laboratory experiments conducted in a quasi-two-dimensional flow cell (91.4 cm × 5.6 cm × 61 cm), packed with homogeneous and heterogeneous sands. Initial results show that the model can predict spatial and temporal distribution of injected fluid during the experiments reasonably well. However, further analyses are needed for comprehensively testing the ability of the model to predict transient two-phase flow processes and capillary entrapment in geological reservoirs during geological carbon sequestration.

  1. A grillage model for predicting wrinkles in annular graphene under circular shearing

    SciTech Connect (OSTI)

    Zhang, Z.; Duan, W. H.; Wang, C. M.

    2013-01-07

    This paper is concerned with a Timoshenko grillage model for modeling the wrinkling phenomenon in annular graphene under circular shearing applied at its inner edge. By calibrating the grillage model results against the molecular mechanics (MM) results, the grillage model comprising beams of elliptical cross-section orientated along the carbon-carbon bond has section dimensions of 0.06 nm for the major axis length and 0.036 nm for the minor axis length. Moreover, the beams are connected to one another at 0.00212 nm from the geometric center. This eccentric connection of beams allows the proposed grillage model to cater for the cross-couplings among bonds that produce the out-of-plane wrinkling pattern. The out-of-plane to in-plane bending stiffnesses' ratio is 0.36, and the cross bending stiffness provided by the ellipse eccentricity is 0.025 times that of the in-plane bending stiffness. Besides furnishing identical wave numbers as well as amplitudes and wavelengths that are in good agreement with MM results, the grillage model can capture wrinkling patterns with a boundary layer, whereas plate and membrane models could not mimic the boundary layer.
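
    The 0.36 stiffness ratio quoted above follows directly from the second moments of area of an elliptical cross-section; the short check below reproduces it, with the only inputs being the axis lengths quoted in the record.

```python
import math

a = 0.06 / 2    # semi-major axis, nm (half the quoted major-axis length)
b = 0.036 / 2   # semi-minor axis, nm

# Second moments of area of an ellipse about its two principal axes:
#   I_major = pi * a * b**3 / 4  (bending about the major axis)
#   I_minor = pi * b * a**3 / 4  (bending about the minor axis)
I_major = math.pi * a * b**3 / 4
I_minor = math.pi * b * a**3 / 4

# Ratio of out-of-plane to in-plane bending stiffness reduces to (b/a)**2 = 0.36
print(f"Bending stiffness ratio: {I_major / I_minor:.2f}")
```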

  2. Fish Individual-based Numerical Simulator (FINS): A particle-based model of juvenile salmonid movement and dissolved gas exposure history in the Columbia River Basin

    SciTech Connect (OSTI)

    Scheibe, Timothy D.; Richmond, Marshall C.

    2002-01-30

    This paper describes a numerical model of juvenile salmonid migration in the Columbia and Snake Rivers. The model, called the Fish Individual-based Numerical Simulator or FINS, employs a discrete, particle-based approach to simulate the migration and history of exposure to dissolved gases of individual fish. FINS is linked to a two-dimensional (vertically-averaged) hydrodynamic simulator that quantifies local water velocity, temperature, and dissolved gas levels as a function of river flow rates and dam operations. Simulated gas exposure histories can be input to biological mortality models to predict the effects of various river configurations on fish injury and mortality due to dissolved gas supersaturation. Therefore, FINS serves as a critical linkage between hydrodynamic models of the river system and models of biological impacts. FINS was parameterized and validated based on observations of individual fish movements collected using radiotelemetry methods during 1997 and 1998. A quasi-inverse approach was used to decouple fish swimming movements from advection with the local water velocity, allowing inference of time series of non-advective displacements of individual fish from the radiotelemetry data. Statistical analyses of these displacements are presented, and confirm that strong temporal correlation of fish swimming behavior persists in some cases over several hours. A correlated random-walk model was employed to simulate the observed migration behavior, and parameters of the model were estimated that lead to close correspondence between predictions and observations.
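
    A correlated random walk of the general kind described above can be sketched as a first-order autoregressive displacement process; the persistence parameter and noise scale below are hypothetical, not the values fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_random_walk(n_steps, phi=0.8, sigma=1.0):
    """First-order autoregressive (AR(1)) displacement model: each
    non-advective fish displacement is correlated with the previous one."""
    d = np.zeros(n_steps)
    for t in range(1, n_steps):
        d[t] = phi * d[t - 1] + rng.normal(scale=sigma)
    return np.cumsum(d)          # position = accumulated displacements

positions = correlated_random_walk(n_steps=500)
print(positions[-5:])
```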

  3. Development of a Mechanistic-Based Healing Model for Self-Healing Glass Seals

    SciTech Connect (OSTI)

    Xu, Wei; Stephens, Elizabeth V.; Sun, Xin; Khaleel, Mohammad A.; Zbib, Hussein M.

    2012-10-01

    Self-healing glass, a recent development of hermetic sealant materials, has the ability to effectively repair damage when heated to elevated temperatures; thus, able to extend its service life. Since crack healing morphological changes in the glass material are usually temperature and stress dependent, quantitative studies to determine the effects of thermo-mechanical conditions on the healing behavior of the self-healing glass sealants are extremely useful to accommodate the design and optimization of the sealing systems within SOFCs. The goal of this task is to develop a mechanistic-based healing model to quantify the stress and temperature dependent healing behavior. A two-step healing mechanism was developed and implemented into finite element (FE) models through user-subroutines. Integrated experimental/kinetic Monte Carlo (kMC) simulation methodology was taken to calibrate the model parameters. The crack healing model is able to investigate the effects of various thermo-mechanical factors; therefore, able to determine the critical conditions under which the healing mechanism will be activated. Furthermore, the predicted results can be used to formulate the continuum damage-healing model and to assist the SOFC stack level simulations in predicting and evaluating the effectiveness and the performance of various engineering seal designs.

  4. Cloud-Based Model Calibration Using OpenStudio: Preprint

    SciTech Connect (OSTI)

    Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.

    2014-03-01

    OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Cloud service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.
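
    The calibration loop described above reduces the gap between simulated and metered monthly consumption; one common way to score that gap (an assumption here, since the record does not name its metric) is the normalized mean bias error and the coefficient of variation of the root-mean-square error, sketched below with hypothetical billing data.

```python
import numpy as np

# Hypothetical monthly electricity use, kWh: metered bills vs. model output
metered   = np.array([1200, 1100, 1000,  900,  950, 1100,
                      1300, 1350, 1150,  980,  990, 1180], dtype=float)
simulated = np.array([1150, 1120, 1040,  880,  970, 1060,
                      1280, 1400, 1120, 1010,  960, 1150], dtype=float)

resid = metered - simulated
nmbe    = 100.0 * resid.sum() / (resid.size * metered.mean())       # bias, %
cv_rmse = 100.0 * np.sqrt((resid**2).mean()) / metered.mean()       # scatter, %

print(f"NMBE = {nmbe:.1f} %, CV(RMSE) = {cv_rmse:.1f} %")
```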

  5. Statistical circuit simulation with measurement-based active device models: Implications for process control and IC manufacturability

    SciTech Connect (OSTI)

    Root, D.E.; McGinty, D.; Hughes, B.

    1995-12-31

    This paper presents a new approach to statistical active circuit design which unifies device parametric-based process control and non-parametric circuit simulation. Predictions of circuit sensitivity to process variation and yield-loss of circuits fabricated in two different GaAs IC processes are described. The simulations make use of measurement-based active device models which are not formulated in terms of conventional parametric statistical variables. The technique is implemented in commercially available simulation software (HP MDS).

  6. Early prediction of tumor recurrence based on CT texture changes after stereotactic ablative radiotherapy (SABR) for lung cancer

    SciTech Connect (OSTI)

    Mattonen, Sarah A.; Palma, David A.; Department of Oncology, The University of Western Ontario, London, Ontario N6A 4L6; Division of Radiation Oncology, London Regional Cancer Program, London, Ontario N6A 4L6 ; Haasbeek, Cornelis J. A.; Senan, Suresh; Ward, Aaron D.

    2014-03-15

    Purpose: Benign computed tomography (CT) changes due to radiation induced lung injury (RILI) are common following stereotactic ablative radiotherapy (SABR) and can be difficult to differentiate from tumor recurrence. The authors measured the ability of CT image texture analysis, compared to more traditional measures of response, to predict eventual cancer recurrence based on CT images acquired within 5 months of treatment. Methods: A total of 24 lesions from 22 patients treated with SABR were selected for this study: 13 with moderate to severe benign RILI, and 11 with recurrence. Three-dimensional (3D) consolidative and ground-glass opacity (GGO) changes were manually delineated on all follow-up CT scans. Two size measures of the consolidation regions (longest axial diameter and 3D volume) and nine appearance features of the GGO were calculated: 2 first-order features [mean density and standard deviation of density (first-order texture)], and 7 second-order texture features [energy, entropy, correlation, inverse difference moment (IDM), inertia, cluster shade, and cluster prominence]. For comparison, the corresponding response evaluation criteria in solid tumors measures were also taken for the consolidation regions. Prediction accuracy was determined using the area under the receiver operating characteristic curve (AUC) and two-fold cross validation (CV). Results: For this analysis, 46 diagnostic CT scans scheduled for approximately 3 and 6 months post-treatment were binned based on their recorded scan dates into 2–5 month and 5–8 month follow-up time ranges. At 2–5 months post-treatment, first-order texture, energy, and entropy provided AUCs of 0.79–0.81 using a linear classifier. On two-fold CV, first-order texture yielded 73% accuracy versus 76%–77% with the second-order features. The size measures of the consolidative region, longest axial diameter and 3D volume, gave two-fold CV accuracies of 60% and 57%, and AUCs of 0.72 and 0.65, respectively. Conclusions: Texture measures of the GGO appearance following SABR demonstrated the ability to predict recurrence in individual patients within 5 months of SABR treatment. Appearance changes were also shown to be more accurately predictive of recurrence, as compared to size measures within the same time period. With further validation, these results could form the substrate for a clinically useful computer-aided diagnosis tool which could provide earlier salvage of patients with recurrence.
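
    The first-order and second-order texture features named above (mean, standard deviation, energy, entropy) can be computed from a grey-level co-occurrence matrix; the toy image, quantization, and offset below are illustrative only and do not reproduce the study's preprocessing.

```python
import numpy as np

def glcm_features(image, levels=8):
    """First-order stats plus energy and entropy of a simple horizontal
    grey-level co-occurrence matrix (GLCM); illustrative implementation."""
    mean_density, std_density = image.mean(), image.std()   # first-order features

    # Quantize to a small number of grey levels and count horizontal neighbor pairs
    q = np.digitize(image, np.linspace(image.min(), image.max(), levels + 1)[1:-1])
    glcm = np.zeros((levels, levels))
    for row in q:
        for a, b in zip(row[:-1], row[1:]):
            glcm[a, b] += 1
    p = glcm / glcm.sum()

    energy = (p**2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return mean_density, std_density, energy, entropy

rng = np.random.default_rng(1)
toy_ggo = rng.normal(-650, 80, size=(32, 32))   # hypothetical GGO densities, HU
print(glcm_features(toy_ggo))
```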

  7. Improved evidence-based genome-scale metabolic models for maize leaf, embryo, and endosperm

    SciTech Connect (OSTI)

    Seaver, Samuel M.D.; Bradbury, Louis M.T.; Frelin, Océane; Zarecki, Raphy; Ruppin, Eytan; Hanson, Andrew D.; Henry, Christopher S.

    2015-03-10

    There is a growing demand for genome-scale metabolic reconstructions for plants, fueled by the need to understand the metabolic basis of crop yield and by progress in genome and transcriptome sequencing. Methods are also required to enable the interpretation of plant transcriptome data to study how cellular metabolic activity varies under different growth conditions or even within different organs, tissues, and developmental stages. Such methods depend extensively on the accuracy with which genes have been mapped to the biochemical reactions in the plant metabolic pathways. Errors in these mappings lead to metabolic reconstructions with an inflated number of reactions and possible generation of unreliable metabolic phenotype predictions. Here we introduce a new evidence-based genome-scale metabolic reconstruction of maize, with significant improvements in the quality of the gene-reaction associations included within our model. We also present a new approach for applying our model to predict active metabolic genes based on transcriptome data. This method includes a minimal set of reactions associated with low expression genes to enable activity of a maximum number of reactions associated with high expression genes. We apply this method to construct an organ-specific model for the maize leaf, and tissue specific models for maize embryo and endosperm cells. We validate our models using fluxomics data for the endosperm and embryo, demonstrating an improved capacity of our models to fit the available fluxomics data. All models are publicly available via the DOE Systems Biology Knowledgebase and PlantSEED, and our new method is generally applicable for analysis of transcript profiles from any plant, paving the way for further in silico studies with a wide variety of plant genomes.

  8. Improved evidence-based genome-scale metabolic models for maize leaf, embryo, and endosperm

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Seaver, Samuel M.D.; Bradbury, Louis M.T.; Frelin, Océane; Zarecki, Raphy; Ruppin, Eytan; Hanson, Andrew D.; Henry, Christopher S.

    2015-03-10

    There is a growing demand for genome-scale metabolic reconstructions for plants, fueled by the need to understand the metabolic basis of crop yield and by progress in genome and transcriptome sequencing. Methods are also required to enable the interpretation of plant transcriptome data to study how cellular metabolic activity varies under different growth conditions or even within different organs, tissues, and developmental stages. Such methods depend extensively on the accuracy with which genes have been mapped to the biochemical reactions in the plant metabolic pathways. Errors in these mappings lead to metabolic reconstructions with an inflated number of reactions and possible generation of unreliable metabolic phenotype predictions. Here we introduce a new evidence-based genome-scale metabolic reconstruction of maize, with significant improvements in the quality of the gene-reaction associations included within our model. We also present a new approach for applying our model to predict active metabolic genes based on transcriptome data. This method includes a minimal set of reactions associated with low expression genes to enable activity of a maximum number of reactions associated with high expression genes. We apply this method to construct an organ-specific model for the maize leaf, and tissue specific models for maize embryo and endosperm cells. We validate our models using fluxomics data for the endosperm and embryo, demonstrating an improved capacity of our models to fit the available fluxomics data. All models are publicly available via the DOE Systems Biology Knowledgebase and PlantSEED, and our new method is generally applicable for analysis of transcript profiles from any plant, paving the way for further in silico studies with a wide variety of plant genomes.

  9. Short-term, econometrically based coal-supply model

    SciTech Connect (OSTI)

    Soyster, A.L.; Enscore, E.E.

    1984-01-01

    A short-term coal supply model is described. The model is econometric in nature and is based on several statistical regressions in which coal prices are regressed against such explanatory variables as productivity, wages and mine size. The basic objective is to relate coal prices with various economic and engineering variables. A whole set of alternative regressions is provided to account for different geographical regions as well as varying coal quality. 3 references, 1 figure, 3 tables.
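
    As a rough illustration of the kind of regression described above (coal price regressed on productivity, wages, and mine size), the sketch below fits an ordinary least-squares model with numpy; the data are synthetic and the variable choice is only an assumption based on the record.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Synthetic explanatory variables (illustrative units)
productivity = rng.uniform(1.5, 4.0, n)     # tons per miner-hour
wages        = rng.uniform(10.0, 20.0, n)   # $/hour
mine_size    = rng.uniform(0.1, 3.0, n)     # million tons/year

# Synthetic coal price with noise
price = 30 - 4.0 * productivity + 1.2 * wages - 2.0 * mine_size + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), productivity, wages, mine_size])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print("intercept, productivity, wages, mine_size coefficients:", np.round(coef, 2))
```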

  10. Recovery Act. Development and Validation of an Advanced Stimulation Prediction Model for Enhanced Geothermal Systems

    SciTech Connect (OSTI)

    Gutierrez, Marte

    2013-12-31

    This research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: Develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation; Perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator; Perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport; Test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production; and Develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: A true-3D hydro-thermal fracturing computer code that is particularly suited to EGS; Documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock; Documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications; and Database of monitoring data, with a focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.

  11. A voxel-based multiscale model to simulate the radiation response of hypoxic tumors

    SciTech Connect (OSTI)

    Espinoza, I.; Peschke, P.; Karger, C. P.

    2015-01-15

    Purpose: In radiotherapy, it is important to predict the response of tumors to irradiation prior to the treatment. This is especially important for hypoxic tumors, which are known to be highly radioresistant. Mathematical modeling based on the dose distribution, biological parameters, and medical images may help to improve this prediction and to optimize the treatment plan. Methods: A voxel-based multiscale tumor response model for simulating the radiation response of hypoxic tumors was developed. It considers viable and dead tumor cells, capillary and normal cells, as well as the most relevant biological processes such as (i) proliferation of tumor cells, (ii) hypoxia-induced angiogenesis, (iii) spatial exchange of cells leading to tumor growth, (iv) oxygen-dependent cell survival after irradiation, (v) resorption of dead cells, and (vi) spatial exchange of cells leading to tumor shrinkage. Oxygenation is described on a microscopic scale using a previously published tumor oxygenation model, which calculates the oxygen distribution for each voxel using the vascular fraction as the most important input parameter. To demonstrate the capabilities of the model, the dependence of the oxygen distribution on tumor growth and radiation-induced shrinkage is investigated. In addition, the impact of three different reoxygenation processes is compared and tumor control probability (TCP) curves for a squamous cell carcinoma of the head and neck (HNSCC) are simulated under normoxic and hypoxic conditions. Results: The model describes the spatiotemporal behavior of the tumor on three different scales: (i) on the macroscopic scale, it describes tumor growth and shrinkage during radiation treatment, (ii) on a mesoscopic scale, it provides the cell density and vascular fraction for each voxel, and (iii) on the microscopic scale, the oxygen distribution may be obtained in terms of oxygen histograms. With increasing tumor size, the simulated tumors develop a hypoxic core. Within the model, tumor shrinkage was found to be significantly more important for reoxygenation than angiogenesis or decreased oxygen consumption due to an increased fraction of dead cells. In the studied HNSCC case, the TCD50 values (dose at 50% TCP) decreased from 71.0 Gy under hypoxic to 53.6 Gy under oxic conditions. Conclusions: The results obtained with the developed multiscale model are in accordance with expectations based on radiobiological principles and clinical experience. As the model is voxel-based, radiological imaging methods may help to provide the required 3D-characterization of the tumor prior to irradiation. For clinical application, the model has to be further validated with experimental and clinical data. If this is achieved, the model may be used to optimize fractionation schedules and dose distributions for the treatment of hypoxic tumors.
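
    Tumor control probability curves of the kind mentioned above are often expressed in the standard linear-quadratic Poisson formalism; the sketch below uses that generic formalism with an oxygen enhancement ratio to contrast oxic and hypoxic response. The parameter values are entirely hypothetical and are not those of the voxel-based model in the record.

```python
import numpy as np

def tcp_lq_poisson(total_dose, dose_per_fraction=2.0, alpha=0.3, beta=0.03,
                   n_clonogens=1e7, oer=1.0):
    """Poisson TCP with linear-quadratic cell kill; hypoxia is approximated
    crudely by dividing the effective dose by an oxygen enhancement ratio (OER)."""
    n_frac = total_dose / dose_per_fraction
    d_eff = dose_per_fraction / oer
    ln_sf_per_fraction = -(alpha * d_eff + beta * d_eff**2)
    surviving = n_clonogens * np.exp(n_frac * ln_sf_per_fraction)
    return np.exp(-surviving)

doses = np.arange(40.0, 90.0, 10.0)
print("oxic    TCP:", np.round(tcp_lq_poisson(doses), 2))
print("hypoxic TCP:", np.round(tcp_lq_poisson(doses, oer=1.7), 2))
```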

  12. Methods for measurement of a dimensional characteristic and methods of predictive modeling related thereto

    DOE Patents [OSTI]

    Robertson, Eric P; Christiansen, Richard L.

    2007-10-23

    A method of optically determining a change in magnitude of at least one dimensional characteristic of a sample in response to a selected chamber environment. A magnitude of at least one dimension of the at least one sample may be optically determined subsequent to altering the at least one environmental condition within the chamber. A maximum change in dimension of the at least one sample may be predicted. A dimensional measurement apparatus for indicating a change in at least one dimension of at least one sample. The dimensional measurement apparatus may include a housing with a chamber configured for accommodating pressure changes and an optical perception device for measuring a dimension of at least one sample disposed in the chamber. Methods of simulating injection of a gas into a subterranean formation, injecting gas into a subterranean formation, and producing methane from a coal bed are also disclosed.

  13. Methods and apparatus for measurement of a dimensional characteristic and methods of predictive modeling related thereto

    DOE Patents [OSTI]

    Robertson, Eric P; Christiansen, Richard L.

    2007-05-29

    A method of optically determining a change in magnitude of at least one dimensional characteristic of a sample in response to a selected chamber environment. A magnitude of at least one dimension of the at least one sample may be optically determined subsequent to altering the at least one environmental condition within the chamber. A maximum change in dimension of the at least one sample may be predicted. A dimensional measurement apparatus for indicating a change in at least one dimension of at least one sample. The dimensional measurement apparatus may include a housing with a chamber configured for accommodating pressure changes and an optical perception device for measuring a dimension of at least one sample disposed in the chamber. Methods of simulating injection of a gas into a subterranean formation, injecting gas into a subterranean formation, and producing methane from a coal bed are also disclosed.

  14. A new model for predicting the fouling deposit weight of coal

    SciTech Connect (OSTI)

    Yeakel, J.D.; Finkelman, R.B.

    1988-06-01

    One of the major problems associated with coal combustion is the buildup of sintered ash deposits in the convective passes of boilers. These deposits, referred to as fouling deposits, can drastically reduce heat transfer, cause erosion by channelizing gas flow, and contribute to the corrosion of exposed metal surfaces. Downtime for cleaning fouled commercial boilers can be a multi-million-dollar expense. Utility boilers generally are designed to burn coal that falls within a specific fouling behavior range. Therefore, to minimize the deleterious effects of boiler fouling and to maximize boiler efficiency, it is necessary to anticipate or assess the fouling characteristics of a coal prior to combustion. This paper introduces a new method for predicting fouling deposit weights by using commonly available coal quality data. The authors have developed a modified concept of the coal quality characteristics that influence fouling. This concept evolved from a review of the literature and from the statistical analysis of results from 44 combustion tests.

  15. Tritium monitoring in groundwater and evaluation of model predictions for the Hanford Site 200 Area Effluent Treatment Facility

    SciTech Connect (OSTI)

    Barnett, D.B.; Bergeron, M.P.; Cole, C.R.; Freshley, M.D.; Wurstner, S.K.

    1997-08-01

    The Effluent Treatment Facility (ETF) disposal site, also known as the State-Approved Land Disposal Site (SALDS), receives treated effluent containing tritium, which is allowed to infiltrate through the soil column to the water table. Tritium was first detected in groundwater monitoring wells around the facility in July 1996. The SALDS groundwater monitoring plan requires revision of a predictive groundwater model and reevaluation of the monitoring well network one year from the first detection of tritium in groundwater. This document is written primarily to satisfy these requirements and to report on analytical results for tritium in the SALDS groundwater monitoring network through April 1997. The document also recommends an approach to continued groundwater monitoring for tritium at the SALDS. Comparison of numerical groundwater models applied over the last several years indicates that earlier predictions, which show tritium from the SALDS approaching the Columbia River, were too simplified or overly robust in source assumptions. The most recent modeling indicates that concentrations of tritium above 500 pCi/L will extend, at most, no further than ~1.5 km from the facility, using the most reasonable projections of ETF operation. This extent encompasses only the wells in the current SALDS tritium-tracking network.

  16. Midtemperature solar systems test facility predictions for thermal performance based on test data. Toltec two-axis tracking solar collector with 3M acrylic polyester film reflector surface

    SciTech Connect (OSTI)

    Harrison, T.D.

    1981-06-01

    Thermal performance predictions based on test data are presented for the Toltec solar collector, with acrylic film reflector surface, for three output temperatures at five cities in the United States.

  17. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    2016-04-01

    Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200 times faster solution.

  18. Development and Validation of an Advanced Stimulation Prediction Model for Enhanced Geothermal Systems (EGS)

    Broader source: Energy.gov [DOE]

    Project objectives: Develop a true 3D hydro-thermal fracturing and proppant flow/transport simulator that is particularly suited for EGS reservoir creation. Perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator.

  19. Product Lifecycle Management Architecture: A Model Based Systems Engineering Analysis.

    SciTech Connect (OSTI)

    Noonan, Nicholas James

    2015-07-01

    This report is an analysis of the Product Lifecycle Management (PLM) program. The analysis is centered on a need statement generated by a Nuclear Weapons (NW) customer. The need statement captured in this report creates an opportunity for the PLM to provide a robust service as a solution. Lifecycles for both the NW and PLM are analyzed using Model Based System Engineering (MBSE).

  20. REDUCING UNCERTAINTIES IN MODEL PREDICTIONS VIA HISTORY MATCHING OF CO2 MIGRATION AND REACTIVE TRANSPORT MODELING OF CO2 FATE AT THE SLEIPNER PROJECT

    SciTech Connect (OSTI)

    Zhu, Chen

    2015-03-31

    An important question for the Carbon Capture, Storage, and Utility program is “can we adequately predict the CO2 plume migration?” For tracking CO2 plume development, the Sleipner project in the Norwegian North Sea provides more time-lapse seismic monitoring data than any other sites, but significant uncertainties still exist for some of the reservoir parameters. In Part I, we assessed model uncertainties by applying two multi-phase compositional simulators to the Sleipner Benchmark model for the uppermost layer (Layer 9) of the Utsira Sand and calibrated our model against the time-lapsed seismic monitoring data for the site from 1999 to 2010. Approximate match with the observed plume was achieved by introducing lateral permeability anisotropy, adding CH4 into the CO2 stream, and adjusting the reservoir temperatures. Model-predicted gas saturation, CO2 accumulation thickness, and CO2 solubility in brine—none were used as calibration metrics—were all comparable with the interpretations of the seismic data in the literature. In Part II & III, we evaluated the uncertainties of predicted long-term CO2 fate up to 10,000 years, due to uncertain reaction kinetics. Under four scenarios of the kinetic rate laws, the temporal and spatial evolution of CO2 partitioning into the four trapping mechanisms (hydrodynamic/structural, solubility, residual/capillary, and mineral) was simulated with ToughReact, taking into account the CO2-brine-rock reactions and the multi-phase reactive flow and mass transport. Modeling results show that different rate laws for mineral dissolution and precipitation reactions resulted in different predicted amounts of trapped CO2 by carbonate minerals, with scenarios of the conventional linear rate law for feldspar dissolution having twice as much mineral trapping (21% of the injected CO2) as scenarios with a Burch-type or Alekseyev et al.–type rate law for feldspar dissolution (11%). So far, most reactive transport modeling (RTM) studies for CCUS have used the conventional rate law and therefore simulated the upper bound of mineral trapping. However, neglecting the regional flow after injection, as most previous RTM studies have done, artificially limits the extent of geochemical reactions as if it were in a batch system. By replenishing undersaturated groundwater from upstream, the Utsira Sand is reactive over a time scale of 10,000 years. The results from this project have been communicated via five peer-reviewed journal articles, four conference proceeding papers, and 19 invited and contributed presentations at conferences and seminars.

  1. HELIOSPHERIC PROPAGATION OF CORONAL MASS EJECTIONS: COMPARISON OF NUMERICAL WSA-ENLIL+CONE MODEL AND ANALYTICAL DRAG-BASED MODEL

    SciTech Connect (OSTI)

    Vršnak, B.; Žic, T.; Dumbović, M.; Temmer, M.; Möstl, C.; Veronig, A. M.; Taktakishvili, A.; Mays, M. L.; Odstrčil, D.

    2014-08-01

    Real-time forecasting of the arrival of coronal mass ejections (CMEs) at Earth, based on remote solar observations, is one of the central issues of space-weather research. In this paper, we compare arrival-time predictions calculated applying the numerical "WSA-ENLIL+Cone model" and the analytical "drag-based model" (DBM). Both models use coronagraphic observations of CMEs as input data, thus providing an early space-weather forecast two to four days before the arrival of the disturbance at the Earth, depending on the CME speed. It is shown that both methods give very similar results if the drag parameter γ = 0.1 is used in DBM in combination with a background solar-wind speed of w = 400 km s⁻¹. For this combination, the mean value of the difference between arrival times calculated by ENLIL and DBM is 0.09 ± 9.0 hr, with a mean absolute difference of 7.1 hr. Comparing the observed arrivals (O) with the calculated ones (C) for ENLIL gives O – C = –0.3 ± 16.9 hr and, analogously, O – C = +1.1 ± 19.1 hr for DBM. Applying γ = 0.2 with w = 450 km s⁻¹ in DBM, one finds O – C = –1.7 ± 18.3 hr, with a mean absolute difference of 14.8 hr, which is similar to that for ENLIL, 14.1 hr. Finally, we demonstrate that the prediction accuracy significantly degrades with increasing solar activity.
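
    The drag-based model referred to above has a closed-form solution of its equation of motion, dv/dt = −γ(v − w)|v − w|, which is enough for a quick arrival-time estimate. The sketch below uses that published solution; the starting distance, CME launch speed, and the interpretation of γ in the customary units of 10⁻⁷ km⁻¹ are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

AU_KM = 1.496e8
RSUN_KM = 6.96e5

def dbm_distance(t, r0, v0, w, gamma):
    """Heliocentric distance from the analytical drag-based model solution
    (valid for v0 > w): r(t) = r0 + w*t + ln(1 + gamma*(v0 - w)*t) / gamma."""
    return r0 + w * t + np.log(1.0 + gamma * (v0 - w) * t) / gamma

def dbm_arrival_time_hours(r0, v0, w, gamma, target=AU_KM):
    """Bisection for the time at which the CME front reaches the target distance."""
    lo, hi = 0.0, 3.0e6                       # seconds; ~35 days upper bound
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if dbm_distance(mid, r0, v0, w, gamma) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / 3600.0

# Illustrative inputs: CME launched at 20 solar radii with 1000 km/s,
# solar wind w = 400 km/s, drag parameter gamma = 0.1 x 10^-7 km^-1.
print(f"arrival after {dbm_arrival_time_hours(20 * RSUN_KM, 1000.0, 400.0, 1e-8):.1f} hours")
```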

  2. Comparison of high pressure transient PVT measurements and model predictions. Part I.

    SciTech Connect (OSTI)

    Felver, Todd G.; Paradiso, Nicholas Joseph; Evans, Gregory Herbert; Rice, Steven F.; Winters, William Stanley, Jr.

    2010-07-01

    A series of experiments consisting of vessel-to-vessel transfers of pressurized gas using Transient PVT methodology have been conducted to provide a data set for optimizing heat transfer correlations in high pressure flow systems. In rapid expansions such as these, the heat transfer conditions are neither adiabatic nor isothermal. Compressible flow tools exist, such as NETFLOW that can accurately calculate the pressure and other dynamical mechanical properties of such a system as a function of time. However to properly evaluate the mass that has transferred as a function of time these computational tools rely on heat transfer correlations that must be confirmed experimentally. In this work new data sets using helium gas are used to evaluate the accuracy of these correlations for receiver vessel sizes ranging from 0.090 L to 13 L and initial supply pressures ranging from 2 MPa to 40 MPa. The comparisons show that the correlations developed in the 1980s from sparse data sets perform well for the supply vessels but are not accurate for the receivers, particularly at early time during the transfers. This report focuses on the experiments used to obtain high quality data sets that can be used to validate computational models. Part II of this report discusses how these data were used to gain insight into the physics of gas transfer and to improve vessel heat transfer correlations. Network flow modeling and CFD modeling is also discussed.

  3. A Physically Based Runoff Routing Model for Land Surface and Earth System Models

    SciTech Connect (OSTI)

    Li, Hongyi; Wigmosta, Mark S.; Wu, Huan; Huang, Maoyi; Ke, Yinghai; Coleman, Andre M.; Leung, Lai-Yung R.

    2013-06-13

    A new physically based runoff routing model, called the Model for Scale Adaptive River Transport (MOSART), has been developed to be applicable across local, regional, and global scales. Within each spatial unit, surface runoff is first routed across hillslopes and then discharged along with subsurface runoff into a "tributary subnetwork" before entering the main channel. The spatial units are thus linked via routing through the main channel network, which is constructed in a scale-consistent way across different spatial resolutions. All model parameters are physically based, and only a small subset requires calibration. MOSART has been applied to the Columbia River basin at 1/16°, 1/8°, 1/4°, and 1/2° spatial resolutions and was evaluated using naturalized or observed streamflow at a number of gauge stations. MOSART is compared to two other routing models widely used with land surface models, the River Transport Model (RTM) in the Community Land Model (CLM) and the Lohmann routing model, included as a postprocessor in the Variable Infiltration Capacity (VIC) model package, yielding consistent performance at multiple resolutions. MOSART is further evaluated using the channel velocities derived from field measurements or a hydraulic model at various locations and is shown to be capable of producing the seasonal variation and magnitude of channel velocities reasonably well at different resolutions. Moreover, the impacts of spatial resolution on model simulations are systematically examined at local and regional scales. Finally, the limitations of MOSART and future directions for improvements are discussed.
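
    Channel velocities of the kind evaluated above are commonly estimated from Manning's equation; the short sketch below shows that relation with hypothetical roughness, hydraulic radius, and slope values (the record does not state which formulation MOSART uses internally).

```python
def manning_velocity(n_roughness, hydraulic_radius_m, slope):
    """Manning's equation (SI units): v = (1/n) * R^(2/3) * S^(1/2)."""
    return (1.0 / n_roughness) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical main-channel properties for a large river reach
print(f"{manning_velocity(n_roughness=0.035, hydraulic_radius_m=4.0, slope=0.0004):.2f} m/s")
```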

  4. In-situ modal analysis of STARS missile flight data and comparison to pre-flight predictions from test-reconciled models

    SciTech Connect (OSTI)

    James, G.H.; Carne, T.G.; Marek, E.L.

    1994-08-01

    The Natural Excitation Technique (NExT) was used to analyze STARS launch data during first and second stage flight using telemetered acceleration data. A continuous track of modal frequencies and modal damping was acquired for the first and second elastic modes of the system during first stage flight and for the first mode during second stage flight. Generally, the first mode was predicted to be lower than seen in actual flight. The second mode predictions were very close to those seen in flight. Damping values were found to be within the range estimated by ground testing or slightly less. The results from this modal analysis of launch data allowed a final quantification of the inherent bias errors which resulted from the STARS ground-based modal tests as well as pointing out structures which were in need of further test/analysis correlation.

  5. Predicting Individual Fuel Economy

    SciTech Connect (OSTI)

    Lin, Zhenhong; Greene, David L

    2011-01-01

    To make informed decisions about travel and vehicle purchase, consumers need unbiased and accurate information of the fuel economy they will actually obtain. In the past, the EPA fuel economy estimates based on its 1984 rules have been widely criticized for overestimating on-road fuel economy. In 2008, EPA adopted a new estimation rule. This study compares the usefulness of the EPA's 1984 and 2008 estimates based on their prediction bias and accuracy and attempts to improve the prediction of on-road fuel economies based on consumer and vehicle attributes. We examine the usefulness of the EPA fuel economy estimates using a large sample of self-reported on-road fuel economy data and develop an Individualized Model for more accurately predicting an individual driver's on-road fuel economy based on easily determined vehicle and driver attributes. Accuracy rather than bias appears to have limited the usefulness of the EPA 1984 estimates in predicting on-road MPG. The EPA 2008 estimates appear to be equally inaccurate and substantially more biased relative to the self-reported data. Furthermore, the 2008 estimates exhibit an underestimation bias that increases with increasing fuel economy, suggesting that the new numbers will tend to underestimate the real-world benefits of fuel economy and emissions standards. By including several simple driver and vehicle attributes, the Individualized Model reduces the unexplained variance by over 55% and the standard error by 33% based on an independent test sample. The additional explanatory variables can be easily provided by the individuals.

  6. Predictive models of circulating fluidized bed combustors. 12th technical progress report

    SciTech Connect (OSTI)

    Gidaspow, D.

    1992-07-01

    Steady flows influenced by walls cannot be described by inviscid models. Flows in circulating fluidized beds have significant wall effects. Particles in the form of clusters or layers can be seen to run down the walls. Hence modeling of circulating fluidized beds (CFB) without a viscosity is not possible. However, in interpreting Equations (8-1) and (8-2) it must be kept in mind that CFB or most other two phase flows are never in a true steady state. Then the viscosity in Equations (8-1) and (8-2) may not be the true fluid viscosity to be discussed next, but an eddy-type viscosity caused by two phase flow oscillations usually referred to as turbulence. In view of the transient nature of two-phase flow, the drag and the boundary layer thickness may not be proportional to the square root of the intrinsic viscosity but depend upon it to a much smaller extent. As another example, in liquid-solid flow and settling of colloidal particles in a lamella electrosettler, the settling process is only moderately affected by viscosity. Inviscid flow with settling is a good first approximation to this electric field driven process. The physical meaning of the particulate phase viscosity is described in detail in the chapter on kinetic theory. Here the conventional derivation presented in single-phase fluid mechanics is generalized to multiphase flow.

  7. Evaluation Of The Integrated Solubility Model, A Graded Approach For Predicting Phase Distribution In Hanford Tank Waste

    SciTech Connect (OSTI)

    Pierson, Kayla L.; Belsher, Jeremy D.; Seniow, Kendra R.

    2012-10-19

    The mission of the DOE River Protection Project (RPP) is to store, retrieve, treat and dispose of Hanford's tank waste. Waste is retrieved from the underground tanks and delivered to the Waste Treatment and Immobilization Plant (WTP). Waste is processed through a pretreatment facility where it is separated into low activity waste (LAW), which is primarily liquid, and high level waste (HLW), which is primarily solid. The LAW and HLW are sent to two different vitrification facilities and glass canisters are then disposed of onsite (for LAW) or shipped off-site (for HLW). The RPP mission is modeled by the Hanford Tank Waste Operations Simulator (HTWOS), a dynamic flowsheet simulator and mass balance model that is used for mission analysis and strategic planning. The integrated solubility model (ISM) was developed to improve the chemistry basis in HTWOS and better predict the outcome of the RPP mission. The ISM uses a graded approach to focus on the components that have the greatest impact to the mission while building the infrastructure for continued future improvement and expansion. Components in the ISM are grouped depending upon their relative solubility and impact to the RPP mission. The solubility of each group of components is characterized by sub-models of varying levels of complexity, ranging from simplified correlations to a set of Pitzer equations used for the minimization of Gibbs Energy.

  8. Feasibility of High-Power Diode Laser Array Surrogate to Support Development of Predictive Laser Lethality Model

    SciTech Connect (OSTI)

    Lowdermilk, W H; Rubenchik, A M; Springer, H K

    2011-01-13

    Predictive modeling and simulation of high power laser-target interactions is sufficiently undeveloped that full-scale, field testing is required to assess lethality of military directed-energy (DE) systems. The cost and complexity of such testing programs severely limit the ability to vary and optimize parameters of the interaction. Thus development of advanced simulation tools, validated by experiments under well-controlled and diagnosed laboratory conditions that are able to provide detailed physics insight into the laser-target interaction and reduce requirements for full-scale testing, will accelerate development of DE weapon systems. The ultimate goal is a comprehensive end-to-end simulation capability, from targeting and firing the laser system through laser-target interaction and dispersal of target debris; a 'Stockpile Science'-like capability for DE weapon systems. Supporting development of advanced modeling and simulation tools requires laboratory experiments to generate laser-target interaction data. Until now, making relevant measurements required construction and operation of very high power and complex lasers, which are themselves costly and often unique devices, operating in dedicated facilities that don't permit experiments on targets containing energetic materials. High power diode laser arrays, pioneered by LLNL, provide a way to circumvent this limitation, as such arrays, capable of delivering irradiances characteristic of DE weapon requirements, are self-contained, compact, lightweight, and thus easily transportable to facilities such as the High Explosives Applications Facility (HEAF) at Lawrence Livermore National Laboratory (LLNL), where testing with energetic materials can be performed. The purpose of this study was to establish the feasibility of using such arrays to support future development of advanced laser lethality and vulnerability simulation codes through providing data for materials characterization and laser-material interaction models and to validate the accuracy of code predictions. This project was a Feasibility Study under the LLNL Laboratory Directed Research and Development (LDRD) Program.

  9. Life Prediction and Classification of Failure Modes in Solid State Luminaires Using Bayesian Probabilistic Models

    SciTech Connect (OSTI)

    Lall, Pradeep; Wei, Junchao; Sakalaukus, Peter

    2014-05-27

    A new method has been developed for assessment of the onset of degradation in solid state luminaires to classify failure mechanisms by using metrics beyond lumen degradation that are currently used for identification of failure. Luminous Flux output, Correlated Color Temperature Data on Philips LED Lamps has been gathered under 85°C/85%RH till lamp failure. The acquired data has been used in conjunction with Bayesian Probabilistic Models to identify luminaires with onset of degradation much prior to failure through identification of decision boundaries between lamps with accrued damage and lamps beyond the failure threshold in the feature space. In addition luminaires with different failure modes have been classified separately from healthy pristine luminaires. It is expected that, the new test technique will allow the development of failure distributions without testing till L70 life for the manifestation of failure.
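
    A minimal sketch of the kind of probabilistic decision boundary described above: two Gaussian class models (healthy vs. degrading) over a lumen-maintenance and colour-shift feature space, with the class assigned by posterior probability. The features, parameters, and priors are hypothetical and far simpler than the Bayesian models in the record.

```python
import numpy as np

def log_gaussian(x, mean, cov):
    """Log density of a multivariate normal distribution."""
    d = x - mean
    return -0.5 * (d @ np.linalg.inv(cov) @ d
                   + np.log(np.linalg.det(cov))
                   + len(x) * np.log(2 * np.pi))

# Hypothetical class models over [lumen maintenance (fraction), CCT shift (K)]
healthy_mean, healthy_cov = np.array([0.98, 20.0]),  np.diag([0.0004, 100.0])
degrade_mean, degrade_cov = np.array([0.85, 150.0]), np.diag([0.0025, 900.0])

def classify(sample, prior_degrade=0.5):
    ll_h = log_gaussian(sample, healthy_mean, healthy_cov) + np.log(1 - prior_degrade)
    ll_d = log_gaussian(sample, degrade_mean, degrade_cov) + np.log(prior_degrade)
    return "degrading" if ll_d > ll_h else "healthy"

print(classify(np.array([0.97, 30.0])))   # near pristine
print(classify(np.array([0.88, 120.0])))  # accrued damage
```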

  10. Bayesian Models for Life Prediction and Fault-Mode Classification in Solid State Lamps

    SciTech Connect (OSTI)

    Lall, Pradeep; Wei, Junchao; Sakalaukus, Peter

    2015-04-19

    A new method has been developed for assessment of the onset of degradation in solid state luminaires to classify failure mechanisms by using metrics beyond lumen degradation that are currently used for identification of failure. Luminous Flux output, Correlated Color Temperature Data on Philips LED Lamps has been gathered under 85°C/85%RH till lamp failure. The acquired data has been used in conjunction with Bayesian Probabilistic Models to identify luminaires with onset of degradation much prior to failure through identification of decision boundaries between lamps with accrued damage and lamps beyond the failure threshold in the feature space. In addition luminaires with different failure modes have been classified separately from healthy pristine luminaires. It is expected that the new test technique will allow the development of failure distributions without testing till L70 life for the manifestation of failure.

  11. Nuclear Shell Model Analyses and Predictions of Double-Beta Decay Observables

    SciTech Connect (OSTI)

    Horoi, Mihai [Department of Physics, Central Michigan University, Mount Pleasant, Michigan, 48859 (United States)]

    2010-11-24

    Recent results from neutrino oscillation experiments have convincingly demonstrated that neutrinos have mass and they can mix. The neutrinoless double beta decay is the most sensitive process to determine the absolute scale of the neutrino masses, and the only one that can distinguish whether neutrino is a Dirac or a Majorana particle. A key ingredient for extracting the absolute neutrino masses from neutrinoless double beta decay experiments is a precise knowledge of the nuclear matrix elements (NME) for this process. Newly developed shell model approaches for computing the NME and half-lives for the two-neutrino and neutrinoless double beta decay modes using modern effective interactions are presented. The implications of the new results on the experimental limits of the effective neutrino mass are discussed by comparing the decays of ⁴⁸Ca and ⁷⁶Ge.

  12. An integrated model supporting histological and biometric responses as predictive biomarkers of fish health status

    SciTech Connect (OSTI)

    Torres Junior, Audalio Rebelo; Sousa, Débora Batista Pinheiro; Neta, Raimunda Nonata Fortes Carvalho

    2014-10-06

    In this work, an experimental system of histological (branchial lesions) biomarkers and biometric data in catfish (Sciades herzbergii) was modeled. The fish were sampled from an environmentally protected reference area (S1) and from a known pollution area (S2) in São Marcos' Bay, Brazil. Gills were fixed in 10% formalin and usual histological techniques were applied to the right first gill arch. The lesions were observed by light microscopy. There were no histopathological changes in animals captured at the reference site (S1). However, in the catfish collected in the potentially contaminated area (S2), several branchial lesions were observed, such as lifting of the lamellar epithelium, fusion of some secondary lamellae, hypertrophy of epithelial cells and lamellar aneurysm. The analysis using the biometric data showed significant differences, being highest in fish analyzed in the reference area. This approach revealed spatial differences related to biometric patterns and morphological modifications of catfish.

  13. Ammonia concentration modeling based on retained gas sampler data

    SciTech Connect (OSTI)

    Terrones, G.; Palmer, B.J.; Cuta, J.M.

    1997-09-01

    The vertical ammonia concentration distributions determined by the retained gas sampler (RGS) apparatus were modeled for double-shell tanks (DSTs) AW-101, AN-103, AN-104, and AN-105 and single-shell tanks (SSTs) A-101, S-106, and U-103. One-dimensional models of the vertical transport of ammonia in the tanks were used for the modeling. Transport in the non-convective settled solids and floating solids layers is assumed to occur primarily via some type of diffusion process, while transport in the convective liquid layers is incorporated into the model via mass transfer coefficients based on empirical correlations. Mass transfer between the top of the waste and the tank headspace and the effects of ventilation of the headspace are also included in the models. The resulting models contain a large number of parameters, but many of them can be determined from known properties of the waste configuration or can be estimated within reasonable bounds from data on the waste samples themselves. The models are used to extract effective diffusion coefficients for transport in the nonconvective layers based on the measured values of ammonia from the RGS apparatus. The modeling indicates that the higher concentrations of ammonia seen in bubbles trapped inside the waste relative to the ammonia concentrations in the tank headspace can be explained by a combination of slow transport of ammonia via diffusion in the nonconvective layers and ventilation of the tank headspace by either passive or active means. Slow transport by diffusion causes a higher concentration of ammonia to build up deep within the waste until the concentration gradients between the interior and top of the waste are sufficient to allow ammonia to escape at the same rate at which it is being generated in the waste.
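
    The slow diffusive transport described above can be illustrated with a one-dimensional explicit finite-difference sketch: a non-convective waste layer with an initially uniform ammonia concentration, a fixed low concentration at the top surface (ventilated headspace), and a no-flux bottom. The layer depth, diffusion coefficient, and concentrations are hypothetical.

```python
import numpy as np

# Hypothetical parameters for a non-convective waste layer
depth_m   = 4.0
n_nodes   = 41
diff_coef = 1.0e-9            # effective diffusion coefficient, m^2/s
c_init    = 1.0               # initial dissolved ammonia (arbitrary units)
c_top     = 0.01              # concentration held low by headspace ventilation

dx = depth_m / (n_nodes - 1)
dt = 0.4 * dx**2 / diff_coef  # satisfies the explicit stability limit dt <= dx^2/(2D)

c = np.full(n_nodes, c_init)
years = 5.0
for _ in range(int(years * 3.15e7 / dt)):
    c_new = c.copy()
    c_new[1:-1] = c[1:-1] + diff_coef * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c_new[0]  = c_top        # top surface: ventilated headspace
    c_new[-1] = c_new[-2]    # bottom: no-flux boundary
    c = c_new

print(np.round(c[::10], 3))  # concentration profile from top to bottom after 5 years
```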

  14. Predictive modeling of CO{sub 2} sequestration in deep saline sandstone reservoirs: Impacts of geochemical kinetics

    SciTech Connect (OSTI)

    Balashov, Victor N.; Guthrie, George D.; Hakala, J. Alexandra; Lopano, Christina L. J.; Rimstidt, Donald; Brantley, Susan L.

    2013-03-01

    One idea for mitigating the increase in fossil-fuel generated CO2 in the atmosphere is to inject CO2 into subsurface saline sandstone reservoirs. To decide whether to try such sequestration at a globally significant scale will require the ability to predict the fate of injected CO2. Thus, models are needed to predict the rates and extents of subsurface rock-water-gas interactions. Several reactive transport models for CO2 sequestration created in the last decade predicted sequestration in sandstone reservoirs of ~17 to ~90 kg CO2 per m³. To build confidence in such models, a baseline problem including rock + water chemistry is proposed as the basis for future modeling so that both the models and the parameterizations can be compared systematically. In addition, a reactive diffusion model is used to investigate the fate of injected supercritical CO2 fluid in the proposed baseline reservoir + brine system. In the baseline problem, injected CO2 is redistributed from the supercritical (SC) free phase by dissolution into pore brine and by formation of carbonates in the sandstone. The numerical transport model incorporates a full kinetic description of mineral-water reactions under the assumption that transport is by diffusion only. Sensitivity tests were also run to understand which mineral kinetics reactions are important for CO2 trapping. The diffusion transport model shows that for the first ~20 years after CO2 diffusion initiates, CO2 is mostly consumed by dissolution into the brine to form CO2(aq) (solubility trapping). From 20-200 years, both solubility and mineral trapping are important as calcite precipitation is driven by dissolution of oligoclase. From 200 to 1000 years, mineral trapping is the most important sequestration mechanism, as smectite dissolves and calcite precipitates. Beyond 2000 years, most trapping is due to formation of aqueous HCO3⁻. Ninety-seven percent of the maximum CO2 sequestration, 34.5 kg CO2 per m³ of sandstone, is attained by 4000 years even though the system does not achieve chemical equilibrium until ~25,000 years. This maximum represents about 20% CO2 dissolved as CO2(aq), 50% dissolved as HCO3⁻(aq), and 30% precipitated as calcite. The extent of sequestration as HCO3⁻ at equilibrium can be calculated from equilibrium thermodynamics and is roughly equivalent to the amount of Na⁺ in the initial sandstone in a soluble mineral (here, oligoclase). Similarly, the extent of trapping in calcite is determined by the amount of Ca²⁺ in the initial oligoclase and smectite. Sensitivity analyses show that the rate of CO2 sequestration is sensitive to the mineral-water reaction kinetic constants between approximately 10 and 4000 years. The sensitivity of CO2 sequestration to the rate constants decreases in magnitude respectively from oligoclase to albite to smectite.
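
    The "conventional linear rate law" contrasted above is the transition-state-theory form in which the rate scales linearly with the departure from equilibrium; a minimal sketch follows, with hypothetical rate constant, surface area, and saturation states (the Burch-type alternatives mentioned in the record are nonlinear in the Gibbs energy and are not reproduced here).

```python
import numpy as np

def tst_dissolution_rate(k, surface_area, q_over_k):
    """Conventional linear TST rate law: r = k * A * (1 - Q/K).
    Positive values mean net dissolution, negative net precipitation."""
    return k * surface_area * (1.0 - q_over_k)

# Hypothetical values for a feldspar (e.g., oligoclase-like) phase
k_mol_m2_s  = 1.0e-12          # rate constant, mol m^-2 s^-1
area_m2_kgw = 100.0            # reactive surface area per kg of water
saturation  = np.array([0.01, 0.5, 0.9, 1.0, 1.2])   # Q/K values

print(tst_dissolution_rate(k_mol_m2_s, area_m2_kgw, saturation))
```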

  15. Predicting for thermodynamic instabilities in water/oil/surfactant microemulsions: A mesoscopic modelling approach

    SciTech Connect (OSTI)

    Duvail, Magali; Zemb, Thomas; Dufrêche, Jean-François; Arleth, Lise

    2014-04-28

    The thermodynamics and structural properties of flexible and rigid nonionic water/oil/surfactant microemulsions have been investigated using a two level-cut Gaussian random field method based on the Helfrich formalism. Ternary stability diagrams and scattering spectra have been calculated for different surfactant rigidities and spontaneous curvatures. A larger contribution of the Gaussian elastic constants compared to the bending one is observed in the ternary stability diagrams. Furthermore, the influence of the spontaneous curvature of the surfactant reveals a displacement of the instability domains, which corresponds to the difference between the spontaneous and effective curvatures. We show that a continuous transition from a connected water-in-oil droplet microstructure to a frustrated, locally lamellar (oil-in-water-in-oil droplet) microstructure occurs when increasing the temperature for an oil-rich microemulsion. This continuous transition, reflected in a shift of the scattering functions, indicates that the phase inversion phenomenon occurs by coalescence of the water droplets.

  16. Predicting tropospheric ozone and hydroxyl radical in a global, three-dimensional, chemistry, transport, and deposition model

    SciTech Connect (OSTI)

    Atherton, C.S.

    1995-01-05

    Two of the most important chemically reactive tropospheric gases are ozone (O{sub 3}) and the hydroxyl radical (OH). Although ozone in the stratosphere is a necessary protector against the sun's radiation, tropospheric ozone is actually a pollutant which damages materials and vegetation, acts as a respiratory irritant, and is a greenhouse gas. One of the two main sources of ozone in the troposphere is photochemical production. The photochemistry is initiated when hydrocarbons and carbon monoxide (CO) react with nitrogen oxides (NO{sub x} = NO + NO{sub 2}) in the presence of sunlight. Reaction with the hydroxyl radical, OH, is the main sink for many tropospheric gases. The hydroxyl radical is highly reactive and has a lifetime on the order of seconds. Its formation is initiated by the photolysis of tropospheric ozone. Tropospheric chemistry involves a complex, non-linear set of chemical reactions between atmospheric species that vary substantially in time and space. To model these and other species on a global scale requires the use of a global, three-dimensional chemistry, transport, and deposition (CTD) model. In this work, I developed two such three-dimensional CTD models. The first model incorporated the chemistry necessary to model tropospheric ozone production from the reactions of nitrogen oxides with carbon monoxide (CO) and methane (CH{sub 4}). The second also included longer-lived alkane species and the biogenic hydrocarbon isoprene, which is emitted by growing plants and trees. The models' ability to predict a number of key variables (including the concentrations of O{sub 3}, OH, and other species) was evaluated. Then, several scenarios were simulated to understand the change in the chemistry of the troposphere since preindustrial times and the role of anthropogenic NO{sub x} on present day conditions.
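
    The photochemical production of ozone and of OH described above follows the standard schematic reaction sequence (the full mechanisms in both models contain many additional hydrocarbon and NO{sub x} reactions):

        NO_2 + h\nu \rightarrow NO + O(^3P), \qquad O(^3P) + O_2 + M \rightarrow O_3 + M
        O_3 + h\nu \rightarrow O(^1D) + O_2, \qquad O(^1D) + H_2O \rightarrow 2\,OH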

  17. Experiment-Based Model for the Chemical Interactions between Geothermal Rocks, Supercritical Carbon Dioxide and Water

    Broader source: Energy.gov (indexed) [DOE]

    Presentation from the April 2013 peer review meeting held in Denver, Colorado (palto_alto_research_center_peer2013.pdf).

  18. Energy-efficient housing alternatives: a predictive model of factors affecting household perceptions

    SciTech Connect (OSTI)

    Schreckengost, R.L.

    1985-01-01

    The major purpose of this investigation was to assess the impact of household socio-economic factors, dwelling characteristics, energy conservation behavior, and energy attitudes on the perceptions of energy-efficient housing alternatives. Perceptions of passive solar, active solar, earth sheltered, and retrofitted housing were examined. Data used were from the Southern Regional Research Project, S-141, Housing for Low and Moderate Income Families. Responses from 1804 households living in seven southern states were analyzed. A conceptual model was proposed to test the hypothesized relationships which were examined by path analysis. Perceptions of energy efficient housing alternatives were found to be a function of selected household and dwelling characteristics, energy attitude, household economic factors, and household conservation behavior. Age and education of the respondent, family size, housing-income ratio, utility income ratio, energy attitude, and size of the dwelling unit were found to have direct and indirect effects on perceptions of energy-efficient housing alternatives. Energy conservation behavior made a significant direct impact with behavioral energy conservation changes having the most profound influence. Conservation behavior was influenced by selected household and dwelling characteristics, energy attitude, and household economic factors.

  19. Likelihood-based gene annotations for gap filling and quality assessment in genome-scale metabolic models

    SciTech Connect (OSTI)

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.; Maranas, Costas D.

    2014-10-16

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface.
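
    A minimal sketch of the kind of bookkeeping described above is given below. It is not the KBase implementation; the data structures, the bit-score normalization, and the choice of taking the best supporting gene per reaction are all assumptions made for illustration.

        # Sketch only (not the KBase implementation): homology-derived annotation
        # likelihoods for each gene are aggregated into reaction likelihoods that
        # can then weight candidate reactions during gap filling.
        def annotation_likelihoods(hits):
            """Normalize homology bit scores for one gene into per-function likelihoods."""
            total = sum(hits.values())
            return {func: score / total for func, score in hits.items()} if total else {}

        def reaction_likelihood(supporting_genes, gene_likelihoods):
            """Score a reaction by its best (gene, function) support -- one possible rule."""
            support = [gene_likelihoods.get(gene, {}).get(func, 0.0)
                       for gene, func in supporting_genes]
            return max(support, default=0.0)

        gene_likelihoods = {"geneA": annotation_likelihoods({"EC 2.7.1.1": 480.0,
                                                             "EC 2.7.1.2": 120.0})}
        print(reaction_likelihood([("geneA", "EC 2.7.1.1")], gene_likelihoods))  # 0.8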

  20. Likelihood-based gene annotations for gap filling and quality assessment in genome-scale metabolic models

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.; Maranas, Costas D.

    2014-10-16

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface.

  1. Validation of the thermal transport model used for ITER startup scenario predictions with DIII-D experimental data

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Casper, T. A.; Meyer, W. H.; Jackson, G. L.; Luce, T. C.; Hyatt, A. W.; Humphreys, D. A.; Turco, F.

    2010-12-08

    We are exploring characteristics of ITER startup scenarios in similarity experiments conducted on the DIII-D Tokamak. In these experiments, we have validated scenarios for the ITER current ramp up to full current and developed methods to control the plasma parameters to achieve stability. Predictive simulations of ITER startup using 2D free-boundary equilibrium and 1D transport codes rely on accurate estimates of the electron and ion temperature profiles that determine the electrical conductivity and pressure profiles during the current rise. Here we present results of validation studies that apply the transport model used by the ITER team to DIII-D discharge evolution and comparisons with data from our similarity experiments.

  2. THERMODYNAMIC MODEL FOR URANIUM DIOXIDE BASED NUCLEAR FUEL

    SciTech Connect (OSTI)

    Thompson, Dr. William T.; Lewis, Dr. Brian J; Corcoran, E. C.; Kaye, Dr. Matthew H.; White, S. J.; Akbari, F.; Higgs, Jamie D.; Thompson, D. M.; Besmann, Theodore M; Vogel, S. C.

    2007-01-01

    Many projects involving nuclear fuel rest on a quantitative understanding of the co-existing phases at various stages of burnup. Since the many fission products have considerably different abilities to chemically associate with oxygen, and the oxygen-to-metal molar ratio is slowly changing, the chemical potential of oxygen is a function of burnup. Concurrently, well-recognized small fractions of new phases such as inert gas, noble metals, zirconates, etc. also develop. To further complicate matters, the dominant UO2 fuel phase may be non-stoichiometric and most of the minor phases themselves have a variable composition dependent on temperature and possible contact with the coolant in the event of a sheathing breach. A thermodynamic fuel model to predict the phases in partially burned CANDU (CANada Deuterium Uranium) nuclear fuel containing many major fission products has been under development. The building blocks of the model are the standard Gibbs energies of formation of the many possible compounds expressed as a function of temperature. To these data are added mixing terms associated with the appearance of the component species in particular phases. In operational terms, the treatment rests on the ability to minimize the Gibbs energy in a multicomponent system, in our case using the algorithms developed by Eriksson. The model is capable of handling non-stoichiometry in the UO2 fluorite phase, dilute solution behaviour of significant solute oxides, noble metal inclusions, a second metal solid solution U(Pd-Rh-Ru)3, zirconate, molybdate, and uranate solutions as well as other minor solid phases, and volatile gaseous species.
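
    In schematic terms, the equilibrium calculation described above minimizes the total Gibbs energy of the multicomponent, multiphase system subject to element balance; the notation below is generic rather than specific to the treatment in this record:

        \min_{n_i \ge 0}\; G = \sum_i n_i \left[ \mu_i^{\circ}(T) + RT \ln a_i \right]
        \quad \text{subject to} \quad \sum_i a_{ji}\, n_i = b_j \;\; \text{for every element } j

    where n_i are the species amounts, a_i their activities (ideal or from the mixing terms noted above), a_{ji} the number of atoms of element j in species i, and b_j the total inventory of element j.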

  3. A Habitat-based Wind-Wildlife Collision Model with Application to the Upper Great Plains Region

    SciTech Connect (OSTI)

    Forcey, Greg, M.

    2012-08-28

    Most previous studies on collision impacts at wind facilities have taken place at the site-specific level and have only examined small-scale influences on mortality. In this study, we examine landscape-level influences using a hierarchical spatial model combined with existing datasets and life history knowledge for: Horned Lark, Red-eyed Vireo, Mallard, American Avocet, Golden Eagle, Whooping Crane, red bat, silver-haired bat, and hoary bat. These species were modeled in the central United States within Bird Conservation Regions 11, 17, 18, and 19. For the bird species, we modeled bird abundance from existing datasets as a function of habitat variables known to be preferred by each species to develop a relative abundance prediction for each species. For bats, there are no existing abundance datasets, so we identified preferred habitat in the landscape for each species and assumed that greater amounts of preferred habitat would equate to greater abundance of bats. The abundance predictions for birds and bats were modeled with additional exposure factors known to influence collisions, such as visibility, wind, temperature, precipitation, topography, and behavior, to form a final mapped output of predicted collision risk within the study region. We reviewed published mortality studies from wind farms in our study region and collected data on reported mortality of our focal species to compare to our modeled predictions. We performed a sensitivity analysis evaluating model performance for 6 different scenarios where habitat and exposure factors were weighted differently. We compared the model performance in each scenario by evaluating observed data vs. our model predictions using Spearman's rank correlations. Horned Lark collision risk was predicted to be highest in the northwestern and west-central portions of the study region with lower risk predicted elsewhere. Red-eyed Vireo collision risk was predicted to be the highest in the eastern portions of the study region and in the forested areas of the western portion; the lowest risk was predicted in the treeless portions of the northwest portion of the study area. Mallard collision risk was predicted to be highest in the eastern central portion of the prairie potholes and in Iowa, which has a high density of pothole wetlands; lower risk was predicted in the more arid portions of the study area. Predicted collision risk for American Avocet was similar to Mallard and was highest in the prairie pothole region and lower elsewhere. Golden Eagle collision risk was predicted to be highest in the mountainous areas of the western portion of the study area and lowest in the eastern portion of the prairie potholes. Whooping Crane predicted collision risk was highest within the migration corridor that the birds follow through the central portion of the study region; predicted collision risk was much lower elsewhere. Red bat collision risk was highly driven by large tracts of forest and river corridors, which made up most of the areas of higher collision risk. Silver-haired bat and hoary bat predicted collision risk were nearly identical and driven largely by forest and river corridors as well as locations with warmer temperatures and lower average wind speeds. Horned Lark collisions were mostly influenced by abundance, and predictions showed a moderate correlation between observed and predicted mortality (r = 0.55). Red bat, silver-haired bat, and hoary bat predictions showed much stronger correlations with observed mortality, with correlations of 0.85, 0.90, and 0.91, respectively. Red bat collisions were influenced primarily by habitat, while hoary bat and silver-haired bat collisions were influenced mainly by exposure variables. Stronger correlations between observed and predicted collisions for bats than for Horned Larks can likely be attributed to stronger habitat associations and greater influences of weather on behavior for bats. Although the collision predictions cannot be compared among species, our model outputs provide a convenient landscape-level tool to quickly screen for siting issues at a high level. The model resolution is suitable for state or multi-county siting, but users are cautioned against using these models for micrositing. The U.S. Fish and Wildlife Service recently released voluntary land-based wind energy guidelines for assessing impacts of a wind facility on wildlife using a tiered approach, which assesses impacts iteratively, in levels of increasing detail from landscape-level screening to site-specific field studies. The models presented in this paper are applicable as screening tools at the tier 1 level and are not appropriate for smaller-scale tier 2 and tier 3 studies. For smaller-scale screening, ancillary field studies should be conducted at the site-specific level to validate collision predictions.
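
    As a toy illustration of the validation step (weighting a habitat term against an exposure term and comparing the ranked predictions with observed mortality), the snippet below uses entirely hypothetical site values and weights; it is not the study's model or data.

        # Toy illustration of the validation step: hypothetical per-site risk scores
        # and observed mortality counts, compared with a Spearman rank correlation.
        from scipy.stats import spearmanr

        habitat = [0.9, 0.4, 0.7, 0.2, 0.6]    # relative abundance / habitat suitability
        exposure = [0.5, 0.8, 0.6, 0.3, 0.9]   # weather, topography, behavior factors
        w_habitat, w_exposure = 0.6, 0.4       # one of several weighting scenarios

        risk = [w_habitat * h + w_exposure * e for h, e in zip(habitat, exposure)]
        observed_mortality = [12, 5, 9, 1, 11]  # hypothetical counts per site

        rho, p = spearmanr(risk, observed_mortality)
        print(f"Spearman r = {rho:.2f} (p = {p:.3f})")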

  4. Application of a fuzzy neural network model in predicting polycyclic aromatic hydrocarbon-mediated perturbations of the Cyp1b1 transcriptional regulatory network in mouse skin

    SciTech Connect (OSTI)

    Larkin, Andrew; Siddens, Lisbeth K.; Krueger, Sharon K.; Tilton, Susan C.; Waters, Katrina M.; Williams, David E.; Baird, William M.

    2013-03-01

    Polycyclic aromatic hydrocarbons (PAHs) are present in the environment as complex mixtures with components that have diverse carcinogenic potencies and mostly unknown interactive effects. Non-additive PAH interactions have been observed in regulation of cytochrome P450 (CYP) gene expression in the CYP1 family. To better understand and predict biological effects of complex mixtures, such as environmental PAHs, an 11 gene input-1 gene output fuzzy neural network (FNN) was developed for predicting PAH-mediated perturbations of dermal Cyp1b1 transcription in mice. Input values were generalized using fuzzy logic into low, medium, and high fuzzy subsets, and sorted using k-means clustering to create Mamdani logic functions for predicting Cyp1b1 mRNA expression. Model testing was performed with data from microarray analysis of skin samples from FVB/N mice treated with toluene (vehicle control), dibenzo[def,p]chrysene (DBC), benzo[a]pyrene (BaP), or 1 of 3 combinations of diesel particulate extract (DPE), coal tar extract (CTE) and cigarette smoke condensate (CSC) using leave-one-out cross-validation. Predictions were within 1 log{sub 2} fold change unit of microarray data, with the exception of the DBC treatment group, where the unexpected down-regulation of Cyp1b1 expression was predicted but did not reach statistical significance on the microarrays. Adding CTE to DPE was predicted to increase Cyp1b1 expression, whereas adding CSC to CTE and DPE was predicted to have no effect, in agreement with microarray results. The aryl hydrocarbon receptor repressor (Ahrr) was determined to be the most significant input variable for model predictions using back-propagation and normalization of FNN weights. - Highlights: ‱ Tested a model to predict PAH mixture-mediated changes in Cyp1b1 expression ‱ Quantitative predictions in agreement with microarrays for Cyp1b1 induction ‱ Unexpected difference in expression between DBC and other treatments predicted ‱ Model predictions for combining PAH mixtures in agreement with microarrays ‱ Predictions highly dependent on aryl hydrocarbon receptor repressor expression.
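
    The fuzzification step described above can be illustrated with generic triangular membership functions (a sketch under assumed membership shapes and normalization, not the authors' implementation):

        # Generic fuzzification sketch: map a normalized expression value onto
        # overlapping triangular low/medium/high memberships before the rule layer.
        def triangular(x, a, b, c):
            """Triangular membership function peaking at b, zero outside [a, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def fuzzify(x):
            """x assumed normalized to [0, 1]."""
            return {"low": triangular(x, -0.5, 0.0, 0.5),
                    "medium": triangular(x, 0.0, 0.5, 1.0),
                    "high": triangular(x, 0.5, 1.0, 1.5)}

        print(fuzzify(0.7))  # -> low 0.0, medium ~0.6, high ~0.4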

  5. Midtemperature solar systems test facility predictions for thermal performance based on test data: Sun-Heet nontracking solar collector

    SciTech Connect (OSTI)

    Harrison, T.D.

    1981-03-01

    Sandia National Laboratories, Albuquerque (SNLA), is currently conducting a program to predict the performance and measure the characteristics of commercially available solar collectors that have the potential for use in industrial process heat and enhanced oil recovery applications. The thermal performance predictions for the Sun-Heet nontracking, line-focusing parabolic trough collector at five cities in the US are presented. (WHK)

  6. Formulation of an experimental substructure model using a Craig-Bampton based transmission simulator

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Kammer, Daniel C.; Allen, Matthew S.; Mayes, Randall L.

    2015-09-26

    An experimental–analytical substructuring approach is attractive when there is motivation to replace one or more system subcomponents with an experimental model. This experimentally derived substructure can then be coupled to finite element models of the rest of the structure to predict the system response. The transmission simulator method couples a fixture to the component of interest during a vibration test in order to improve the experimental model for the component. The transmission simulator is then subtracted from the tested system to produce the experimental component. This method reduces ill-conditioning by imposing a least squares fit of constraints between substructure modal coordinates to connect substructures, instead of directly connecting physical interface degrees of freedom. This paper presents an alternative means of deriving the experimental substructure model, in which a Craig–Bampton representation of the transmission simulator is created and subtracted from the experimental measurements. The corresponding modal basis of the transmission simulator is described by the fixed-interface modes, rather than the free modes that were used in the original approach. Moreover, these modes do a better job of representing the shape of the transmission simulator as it responds within the experimental system, leading to more accurate results using fewer modes. The new approach is demonstrated using a simple finite element model-based example with a redundant interface.

  7. REVIEW OF EXPERIMENTAL CAPABILITIES AND HYDRODYNAMIC DATA FOR VALIDATION OF CFD BASED PREDICTIONS FOR SLURRY BUBBLE COLUMN REACTORS

    SciTech Connect (OSTI)

    Donna Post Guillen; Daniel S. Wendt

    2007-11-01

    The purpose of this paper is to document the review of several open-literature sources of both experimental capabilities and published hydrodynamic data to aid in the validation of a Computational Fluid Dynamics (CFD) based model of a slurry bubble column (SBC). The review included searching the Web of Science, ISI Proceedings, and Inspec databases, internet searches as well as other open literature sources. The goal of this study was to identify available experimental facilities and relevant data. Integral (i.e., pertaining to the SBC system), as well as fundamental (i.e., separate effects are considered), data are included in the scope of this effort. The fundamental data is needed to validate the individual mechanistic models or closure laws used in a Computational Multiphase Fluid Dynamics (CMFD) simulation of a SBC. The fundamental data is generally focused on simple geometries (i.e., flow between parallel plates or cylindrical pipes) or custom-designed tests to focus on selected interfacial phenomena. Integral data covers the operation of a SBC as a system with coupled effects. This work highlights selected experimental capabilities and data for the purpose of SBC model validation, and is not meant to be an exhaustive summary.

  8. REVIEW OF EXPERIMENTAL CAPABILITIES AND HYDRODYNAMIC DATA FOR VALIDATION OF CFD-BASED PREDICTIONS FOR SLURRY BUBBLE COLUMN REACTORS

    SciTech Connect (OSTI)

    Donna Post Guillen; Daniel S. Wendt; Steven P. Antal; Michael Z. Podowski

    2007-11-01

    The purpose of this paper is to document the review of several open-literature sources of both experimental capabilities and published hydrodynamic data to aid in the validation of a Computational Fluid Dynamics (CFD) based model of a slurry bubble column (SBC). The review included searching the Web of Science, ISI Proceedings, and Inspec databases, internet searches as well as other open literature sources. The goal of this study was to identify available experimental facilities and relevant data. Integral (i.e., pertaining to the SBC system), as well as fundamental (i.e., separate effects are considered), data are included in the scope of this effort. The fundamental data is needed to validate the individual mechanistic models or closure laws used in a Computational Multiphase Fluid Dynamics (CMFD) simulation of a SBC. The fundamental data is generally focused on simple geometries (i.e., flow between parallel plates or cylindrical pipes) or custom-designed tests to focus on selected interfacial phenomena. Integral data covers the operation of a SBC as a system with coupled effects. This work highlights selected experimental capabilities and data for the purpose of SBC model validation, and is not meant to be an exhaustive summary.

  9. A multiscale MDCT image-based breathing lung model with time-varying regional ventilation

    SciTech Connect (OSTI)

    Yin, Youbing, E-mail: youbing-yin@uiowa.edu (Department of Mechanical and Industrial Engineering; IIHR-Hydroscience and Engineering; Department of Radiology, The University of Iowa, Iowa City, IA 52242, United States); Choi, Jiwoong, E-mail: jiwoong-choi@uiowa.edu (Department of Mechanical and Industrial Engineering; IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242, United States); Hoffman, Eric A., E-mail: eric-hoffman@uiowa.edu (Departments of Radiology, Biomedical Engineering, and Internal Medicine, The University of Iowa, Iowa City, IA 52242, United States); Tawhai, Merryn H., E-mail: m.tawhai@auckland.ac.nz (Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand); Lin, Ching-Long, E-mail: ching-long-lin@uiowa.edu (Department of Mechanical and Industrial Engineering; IIHR-Hydroscience and Engineering, The University of Iowa, Iowa City, IA 52242, United States)

    2013-07-01

    A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C{sub 1} continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung.
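
    The C{sub 1}-continuous interpolation across the three imaged lung volumes can be illustrated with an ordinary periodic cubic spline over a breathing cycle; the times and volumes below are hypothetical, and this is not the in-house registration code described above.

        # Illustrative only: cubic interpolation of lung volume over a breathing
        # cycle from three imaged states, giving a C1-continuous trajectory.
        import numpy as np
        from scipy.interpolate import CubicSpline

        t = np.array([0.0, 2.0, 4.0])            # s: times of the three imaged volumes
        V = np.array([2.5, 3.4, 2.5])            # L: e.g. FRC -> mid-inspiration -> FRC
        spline = CubicSpline(t, V, bc_type="periodic")  # periodic breathing cycle

        t_fine = np.linspace(0.0, 4.0, 9)
        print(spline(t_fine))        # interpolated lung volume
        print(spline(t_fine, 1))     # its first derivative, i.e. flow rate (L/s)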

  10. Ecological Impacts of the Cerro Grande Fire: Predicting Elk Movement and Distribution Patterns in Response to Vegetative Recovery through Simulation Modeling October 2005

    SciTech Connect (OSTI)

    S.P. Rupp

    2005-10-01

    In May 2000, the Cerro Grande Fire burned approximately 17,200 ha in north-central New Mexico as the result of an escaped prescribed burn initiated by Bandelier National Monument. The interaction of large-scale fires, vegetation, and elk is an important management issue, but few studies have addressed the ecological implications of vegetative succession and landscape heterogeneity on ungulate populations following large-scale disturbance events. Primary objectives of this research were to identify elk movement pathways on local and landscape scales, to determine environmental factors that influence elk movement, and to evaluate movement and distribution patterns in relation to spatial and temporal aspects of the Cerro Grande Fire. Data collection and assimilation reflect the collaborative efforts of National Park Service, U.S. Forest Service, and Department of Energy (Los Alamos National Laboratory) personnel. Global Positioning System (GPS) collars were used to track 54 elk over a period of 3+ years and locational data were incorporated into a multi-layered geographic information system (GIS) for analysis. Preliminary tests of GPS collar accuracy indicated a strong effect of 2D fixes on position acquisition rates (PARs) depending on time of day and season of year. Slope, aspect, elevation, and land cover type affected dilution of precision (DOP) values for both 2D and 3D fixes, although significant relationships varied from positive to negative, making it difficult to delineate the mechanism behind significant responses. Two-dimensional fixes accounted for 34% of all successfully acquired locations and may affect results in which those data were used. Overall position acquisition rate was 93.3% and mean DOP values were consistently in the range of 4.0 to 6.0, leading to the conclusion that collar accuracy was acceptable for modeling purposes. SAVANNA, a spatially explicit, process-oriented ecosystem model, was used to simulate successional dynamics. Inputs to SAVANNA included a land cover map, long-term weather data, soil maps, and a digital elevation model. Parameterization and calibration were conducted using field plots. Model predictions of herbaceous biomass production and weather were consistent with available data and spatial interpolations of snow were considered reasonable for this study. Dynamic outputs generated by SAVANNA were integrated with static variables, movement rules, and parameters developed for the individual-based model through the application of a habitat suitability index. Model validation indicated reasonable model fit when compared to an independent test set. The finished model was applied to 2 realistic management scenarios for the Jemez Mountains and management implications were discussed. Ongoing validation of the individual-based model presented in this dissertation provides an adaptive management tool that integrates interdisciplinary experience and scientific information, which allows users to make predictions about the impact of alternative management policies.

  11. Chiller condition monitoring using topological case-based modeling

    SciTech Connect (OSTI)

    Tsutsui, Hiroaki; Kamimura, Kazuyuki

    1996-11-01

    To increase energy efficiency and economy, commercial building projects now often utilize centralized, shared sources of heat such as district heating and cooling (DHC) systems. To maintain efficiency, precise monitoring and scheduling of maintenance for chillers and heat pumps is essential. Low-performance operation results in energy loss, while unnecessary maintenance is expensive and wasteful. Plant supervisors are responsible for scheduling and supervising maintenance. Modeling systems that assist in analyzing system deterioration are of great benefit for these tasks. Topological case-based modeling (TCBM) (Tsutsui et al. 1993; Tsutsui 1995) is an effective tool for chiller performance deterioration monitoring. This paper describes TCBM and its application to this task using recorded historical performance data.

  12. A dislocation-based, strain–gradient–plasticity strengthening model for deformation processed metal–metal composites

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Tian, Liang; Russell, Alan; Anderson, Iver

    2014-01-03

    Deformation processed metal–metal composites (DMMCs) are high-strength, high-electrical conductivity composites developed by severe plastic deformation of two ductile metal phases. The extraordinarily high strength of DMMCs is underestimated using the rule of mixture (or volumetric weighted average) of conventionally work-hardened metals. A dislocation-density-based, strain–gradient–plasticity model is proposed to relate the strain-gradient effect with the geometrically necessary dislocations emanating from the interface to better predict the strength of DMMCs. The model prediction was compared with our experimental findings of Cu–Nb, Cu–Ta, and Al–Ti DMMC systems to verify the applicability of the new model. The results show that this model predicts the strength of DMMCs better than the rule-of-mixture model. The strain-gradient effect, responsible for the exceptionally high strength of heavily cold worked DMMCs, is dominant at large deformation strain since its characteristic microstructure length is comparable with the intrinsic material length.

  13. A dislocation-based, strain–gradient–plasticity strengthening model for deformation processed metal–metal composites

    SciTech Connect (OSTI)

    Tian, Liang; Russell, Alan; Anderson, Iver

    2014-01-03

    Deformation processed metal–metal composites (DMMCs) are high-strength, high-electrical conductivity composites developed by severe plastic deformation of two ductile metal phases. The extraordinarily high strength of DMMCs is underestimated using the rule of mixture (or volumetric weighted average) of conventionally work-hardened metals. A dislocation-density-based, strain–gradient–plasticity model is proposed to relate the strain-gradient effect with the geometrically necessary dislocations emanating from the interface to better predict the strength of DMMCs. The model prediction was compared with our experimental findings of Cu–Nb, Cu–Ta, and Al–Ti DMMC systems to verify the applicability of the new model. The results show that this model predicts the strength of DMMCs better than the rule-of-mixture model. The strain-gradient effect, responsible for the exceptionally high strength of heavily cold worked DMMCs, is dominant at large deformation strain since its characteristic microstructure length is comparable with the intrinsic material length.
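
    Schematically, strain-gradient strengthening of this type is often expressed through a Taylor-type flow stress in which a geometrically necessary dislocation (GND) density, set by the plastic strain gradient, adds to the statistically stored density (generic notation; the model in this record may differ in detail):

        \sigma = \sigma_0 + M \alpha G b \sqrt{\rho_{SSD} + \rho_{GND}},
        \qquad \rho_{GND} \sim \frac{\eta}{b}

    where M is the Taylor factor, α a constant of order 0.3, G the shear modulus, b the Burgers vector, and η the effective plastic strain gradient, which grows as the filament spacing in the heavily drawn composite approaches the intrinsic material length.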

  14. Predictivity of dog co-culture model, primary human hepatocytes and HepG2 cells for the detection of hepatotoxic drugs in humans

    SciTech Connect (OSTI)

    Atienzar, Franck A.; Novik, Eric I.; Gerets, Helga H.; Parekh, Amit; Delatour, Claude; Cardenas, Alvaro; MacDonald, James; Yarmush, Martin L.; Dhalluin, Stéphane

    2014-02-15

    Drug Induced Liver Injury (DILI) is a major cause of attrition during early and late stage drug development. Consequently, there is a need to develop better in vitro primary hepatocyte models from different species for predicting hepatotoxicity in both animals and humans early in drug development. The dog is often chosen as the non-rodent species for toxicology studies. Unfortunately, dog in vitro models allowing long-term culture are not available. The objective of the present manuscript is to describe the development of a co-culture dog model for predicting hepatotoxic drugs in humans and to compare the predictivity of the canine model with that of primary human hepatocytes and HepG2 cells. After rigorous optimization, the dog co-culture model displayed metabolic capacities that were maintained for up to 2 weeks, which indicates that such a model could also be used for long-term metabolism studies. Most of the human hepatotoxic drugs were detected with a sensitivity of approximately 80% (n = 40) for the three cellular models. Nevertheless, the specificity was low (approximately 40%) for the HepG2 cells and hepatocytes, compared to 72.7% for the canine model (n = 11). Furthermore, the dog co-culture model was superior to both human cellular models in classifying 5 pairs of close structural analogs with different DILI concerns. Finally, the reproducibility of the canine system was also satisfactory, with a coefficient of correlation of 75.2% (n = 14). Overall, the present manuscript indicates that the dog co-culture model may represent a relevant tool to perform chronic hepatotoxicity and metabolism studies. - Highlights: ‱ Importance of species differences in drug development. ‱ Relevance of dog co-culture model for metabolism and toxicology studies. ‱ Hepatotoxicity: higher predictivity of dog co-culture vs HepG2 and human hepatocytes.

  15. Determination of High-Frequency Current Distribution Using EMTP-Based Transmission Line Models with Resulting Radiated Electromagnetic Fields

    SciTech Connect (OSTI)

    Mork, B; Nelson, R; Kirkendall, B; Stenvig, N

    2009-11-30

    Application of BPL technologies to existing overhead high-voltage power lines would benefit greatly from improved simulation tools capable of predicting performance - such as the electromagnetic fields radiated from such lines. Existing EMTP-based frequency-dependent line models are attractive since their parameters are derived from physical design dimensions which are easily obtained. However, to calculate the radiated electromagnetic fields, detailed current distributions need to be determined. This paper presents a method of using EMTP line models to determine the current distribution on the lines, as well as a technique for using these current distributions to determine the radiated electromagnetic fields.

  16. FINAL REPORT: Mechanistically-Based Field Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    SciTech Connect (OSTI)

    Wood, Brian D.

    2013-11-04

    Biogeochemical reactive transport processes in the subsurface environment are important to many contemporary environmental issues of significance to DOE. Quantification of risks and impacts associated with environmental management options, and design of remediation systems where needed, require that we have at our disposal reliable predictive tools (usually in the form of numerical simulation models). However, it is well known that even the most sophisticated reactive transport models available today have poor predictive power, particularly when applied at the field scale. Although the lack of predictive ability is associated in part with our inability to characterize the subsurface and limitations in computational power, significant advances have been made in both of these areas in recent decades and can be expected to continue. In this research, we examined upscaling (pore to Darcy and Darcy to field) of the problem of bioremediation via biofilms in porous media. The principal idea was to start with a conceptual description of the bioremediation process at the pore scale, and apply upscaling methods to formally develop the appropriate upscaled model at the so-called Darcy scale. The purpose was to determine (1) what forms the upscaled models would take, and (2) how one might parameterize such upscaled models for applications to bioremediation in the field. We were able to effectively upscale the bioremediation process to explain how the pore-scale phenomena were linked to the field scale. The end product of this research was a set of upscaled models that could be used to help predict field-scale bioremediation. These models were mechanistic, in the sense that they directly incorporated pore-scale information, but upscaled so that only the essential features of the process were needed to predict the effective parameters that appear in the model. In this way, a direct link between the microscale and the field scale was made, and the upscaling process helped inform potential users of the model about what kinds of information would be needed to accurately characterize the system.

  17. Midtemperature Solar Systems Test Facility predictions for thermal performance based on test data: Custom Engineering trough with glass reflector surface and Sandia-designed receivers

    SciTech Connect (OSTI)

    Harrison, T.D.

    1981-05-01

    Thermal performance predictions based on test data are presented for the Custom Engineering trough and Sandia-designed receivers, with glass reflector surface, for three output temperatures at five cities in the United States. Two experimental receivers were tested, one with an antireflective coating on the glass envelope around the receiver tube and one without the antireflective coating.

  18. Status of the phenomena representation, 3D modeling, and cloud-based software architecture development

    SciTech Connect (OSTI)

    Smith, Curtis L.; Prescott, Steven; Kvarfordt, Kellie; Sampath, Ram; Larson, Katie

    2015-09-01

    Early in 2013, researchers at the Idaho National Laboratory outlined a technical framework to support the implementation of state-of-the-art probabilistic risk assessment to predict the safety performance of advanced small modular reactors. From that vision of the advanced framework for risk analysis, specific tasks have been underway in order to implement the framework. This report discusses the current development of several tasks related to the framework implementation, including a discussion of a 3D physics engine that represents the motion of objects (including collision and debris modeling), cloud-based analysis tools such as a Bayesian-inference engine, and scenario simulations. These tasks were performed during 2015 as part of the technical work associated with the Advanced Reactor Technologies Program.

  19. Project Profile: Physics-Based Reliability Models for Supercritical-CO2 Turbomachinery Components

    Broader source: Energy.gov [DOE]

    GE, under the Physics of Reliability: Evaluating Design Insights for Component Technologies in Solar (PREDICTS) program, will leverage internally developed models to predict the reliability of hybrid gas bearing (HGB) and dry gas seal (DGS) components in the turboexpander of a supercritical CO2 turbine. The Bayesian model will include phase changes, low cycle fatigue/high cycle fatigue, dynamic instabilities, and corrosion processes.

  20. Generic vehicle speed models based on traffic simulation: Development and application

    SciTech Connect (OSTI)

    Margiotta, R.; Cohen, H.; Elkins, G.; Rathi, A.; Venigalla, M.

    1994-12-15

    This paper summarizes the findings of a research project to develop new methods of estimating speeds for inclusion in the Highway Performance Monitoring System (HPMS) Analytical Process. The paper focuses on the effects of traffic conditions excluding incidents (recurring congestion) on daily average speed and excess fuel consumption. A review of the literature revealed that many techniques have been used to predict speeds as a function of congestion but most fail to address the effects of queuing. However, the method of Dowling and Skabardonis avoids this limitation and was adapted for this research. The methodology used the FRESIM and NETSIM microscopic traffic simulation models to develop uncongested speed functions and as a calibration base for the congested flow functions. The chief contributions of the new speed models are their simplicity of application and their explicit accounting for the effects of queuing. Specific enhancements include: (1) the inclusion of a queue discharge rate for freeways; (2) use of newly defined uncongested flow speed functions; (3) use of generic temporal distributions that account for peak spreading; and (4) a final model form that allows incorporation of other factors that influence speed, such as grades and curves. The main limitation of the new speed models is the fact that they are based on simulation results and not on field observations. They also do not account for the effect of incidents on speed. While appropriate for estimating average national conditions, the use of fixed temporal distributions may not be suitable for analyzing specific facilities, depending on observed traffic patterns. Finally, it is recommended that these and all future speed models be validated against field data where incidents can be adequately identified in the data.
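
    One conventional way to encode the two regimes described above (speed falling with volume below capacity, then a capped queue-discharge regime) is a BPR-style volume-delay function with a discharge cap. The sketch below uses standard textbook coefficients as placeholders; it is not the set of speed functions calibrated in this work.

        # Illustrative two-regime speed model (not the calibrated HPMS functions):
        # a BPR-style curve below capacity and a fixed queue-discharge speed above it.
        def link_speed(volume, capacity, free_flow_speed,
                       alpha=0.15, beta=4.0, queue_discharge_speed=30.0):
            vc = volume / capacity
            if vc <= 1.0:
                return free_flow_speed / (1.0 + alpha * vc ** beta)  # uncongested regime
            return queue_discharge_speed                              # queued regime (mph)

        for v in (800, 1600, 2200):
            print(v, round(link_speed(v, capacity=2000, free_flow_speed=65.0), 1))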

  1. Experimentally validated long-term energy production prediction model for solar dish/Stirling electric generating systems

    SciTech Connect (OSTI)

    Stine, W.B.

    1995-12-31

    Dish/Stirling solar electric systems are currently being tested for performance and longevity in order to bring them to the electric power generation market. Studies both in Germany and the United States indicate that a significant market exists for these systems if they perform in actual installations according to tested conditions, and if, when produced in large numbers, their cost drops to the goals currently being projected. In the 1980s, considerable experience was gained operating eight dish/Stirling systems of three different designs. One of these set the world record of 29.4% for converting solar energy into electricity. The approach to system performance prediction taken in this presentation results from lessons learned in testing these early systems, and those currently being tested. Recently the IEA, through the SolarPACES working group, has embarked on a program to develop uniform guidelines for measuring and presenting performance data. These guidelines are to help potential buyers who want to evaluate a specific system relative to other dish/Stirling systems, or relative to other technologies such as photovoltaic, parabolic trough or central receiver systems. In this paper, a procedure is described that permits modeling of long-term energy production using only a few experimentally determined parameters. The benefit of using this technique is that relatively simple tests performed over a period of a few months can provide performance parameters that can be used in a computer model requiring only the input of insolation and ambient temperature data to determine long-term energy production information. A portion of this analytical procedure has been tested on the three 9-kW(e) systems in operation in Almeria, Spain. Further evaluation of these concepts is planned on a 7.5-kW(e) system currently undergoing testing at Cal Poly University in Pomona, California and later on the 25 kW(e) USJVP systems currently under development.
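
    A minimal sketch of such a few-parameter performance model is shown below: net electrical output as a linear function of direct normal insolation and ambient temperature, summed over hourly weather records. The coefficients are placeholders, not the experimentally determined parameters described in this record.

        # Sketch of a few-parameter long-term energy model (placeholder coefficients):
        # net output is a linear function of direct normal insolation (DNI) and
        # ambient temperature, summed over hourly weather data.
        def net_power_kw(dni_w_m2, t_amb_c, a=0.024, b=-6.0, c=-0.02):
            """a ~ kW per (W/m2), b ~ kW fixed losses, c ~ kW per deg C."""
            p = a * dni_w_m2 + b + c * t_amb_c
            return max(p, 0.0)  # system is off when the model goes negative

        # Annual energy from hourly insolation/temperature records (hypothetical data).
        hourly = [(950.0, 28.0), (600.0, 22.0), (0.0, 15.0)]  # (DNI W/m2, Tamb C)
        energy_kwh = sum(net_power_kw(dni, t) for dni, t in hourly)  # 1 h per record
        print(round(energy_kwh, 1))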

  2. Multi-scale modeling of microstructure dependent intergranular brittle fracture using a quantitative phase-field based method

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.

    2015-12-07

    The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower length scale atomistic simulations instead of engineering scale experimental data. The workability of this approach is demonstrated by performing porosity dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.

  3. Characterization and Modeling of a Water-based Liquid Scintillator

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Bignell, L. J.; Beznosko, D.; Diwan, M. V.; Hans, S.; Jaffe, D. E.; Kettell, S.; Rosero, R.; Themann, H. W.; Viren, B.; Worcester, E.; et al.

    2015-12-15

    We characterised Water-based Liquid Scintillator (WbLS) using low energy protons, UV-VIS absorbance, and fluorescence spectroscopy. We have also developed and validated a simulation model that describes the behaviour of WbLS in our detector configurations for proton beam energies of 210 MeV, 475 MeV, and 2 GeV and for two WbLS compositions. These results have enabled us to estimate the light yield and ionisation quenching of WbLS, as well as to understand the influence of the wavelength shifting of Cherenkov light on our measurements. These results are relevant to the suitability of WbLS materials for next generation intensity frontier experiments.

  4. On deformation twinning in a 17.5%Mn-TWIP steel: A physically-based phenomenological model

    SciTech Connect (OSTI)

    Soulami, Ayoub; Choi, Kyoo Sil; Shen, Y. F.; Liu, Wenning N.; Sun, Xin; Khaleel, Mohammad A.

    2011-01-25

    TWinning Induced Plasticity (TWIP) steel is a typical representative of the 2nd generation of advanced high strength steel (AHSS) which exhibits a combination of high strength and excellent ductility due to the twinning mechanisms. This paper discusses the principal features of deformation twinning in face-centered cubic austenitic steels and shows how a physically-based macroscopic model can be derived from microscopic considerations. In fact, a dislocation-based phenomenological model, with internal state variables such as dislocation density and micro-twin volume fraction representing the microstructure evolution during the deformation process, is proposed to describe the deformation behavior of TWIP steels. The contribution of this work is the incorporation of a physically-based twin nucleation and volume fraction evolution model in a conventional dislocation-based approach. Microstructural level investigations, using scanning electron microscope (SEM) and transmission electron microscope (TEM) techniques, for the TWIP steel Fe–17.5 wt.% Mn–1.4 wt.% Al–0.56 wt.% C, are used to validate and verify modeling assumptions. The model could be regarded as a semi-phenomenological approach with sufficient links between microstructure and overall properties and therefore offers good predictive capabilities. Its simplicity also allows a modular implementation in finite element-based metal forming simulations.
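
    As a schematic of this class of model, the flow stress follows a Taylor-type relation while the dislocation density evolves with a mean free path that shrinks as the twin volume fraction grows (a common formulation with assumed notation; the specific nucleation law proposed in this record differs):

        \sigma = \sigma_0 + \alpha M G b \sqrt{\rho},
        \qquad \frac{d\rho}{d\gamma} = \frac{1}{b}\left( \frac{1}{\Lambda} + k_1 \sqrt{\rho} \right) - k_2\, \rho,
        \qquad \frac{1}{\Lambda} = \frac{1}{d} + \frac{F_{tw}}{t\,(1 - F_{tw})}

    where ρ is the dislocation density, γ the shear strain, d the grain size, t the twin thickness, and F_tw the micro-twin volume fraction.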

  5. Absorption of ethanol, acetone, benzene and 1,2-dichloroethane through human skin in vitro: a test of diffusion model predictions

    SciTech Connect (OSTI)

    Gajjar, Rachna M.; Kasting, Gerald B.

    2014-11-15

    The overall goal of this research was to further develop and improve an existing skin diffusion model by experimentally confirming the predicted absorption rates of topically-applied volatile organic compounds (VOCs) based on their physicochemical properties, the skin surface temperature, and the wind velocity. In vitro human skin permeation of two hydrophilic solvents (acetone and ethanol) and two lipophilic solvents (benzene and 1,2-dichloroethane) was studied in Franz cells placed in a fume hood. Four doses of each {sup 14}C-radiolabeled compound were tested — 5, 10, 20, and 40 ÎŒL cm{sup −2}, corresponding to specific doses ranging in mass from 5.0 to 63 mg cm{sup −2}. The maximum percentage of radiolabel absorbed into the receptor solutions for all test conditions was 0.3%. Although the absolute absorption of each solvent increased with dose, percentage absorption decreased. This decrease was consistent with the concept of a stratum corneum deposition region, which traps small amounts of solvent in the upper skin layers, decreasing the evaporation rate. The diffusion model satisfactorily described the cumulative absorption of ethanol; however, values for the other VOCs were underpredicted in a manner related to their ability to disrupt or solubilize skin lipids. In order to more closely describe the permeation data, significant increases in the stratum corneum/water partition coefficients, K{sub sc}, and modest changes to the diffusion coefficients, D{sub sc}, were required. The analysis provided strong evidence for both skin swelling and barrier disruption by VOCs, even by the minute amounts absorbed under these in vitro test conditions. - Highlights: ‱ Human skin absorption of small doses of VOCs was measured in vitro in a fume hood. ‱ The VOCs tested were ethanol, acetone, benzene and 1,2-dichloroethane. ‱ Fraction of dose absorbed for all compounds at all doses tested was less than 0.3%. ‱ The more aggressive VOCs were absorbed at higher levels than predicted by the diffusion model. ‱ We conclude that even small exposures to VOCs temporarily alter skin permeability.
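
    For reference, the two parameters that had to be adjusted (K{sub sc} and D{sub sc}) enter the steady-state membrane flux of such diffusion models in the familiar combination below (schematic form with assumed notation), which is why increases in K{sub sc} translate directly into higher predicted absorption:

        k_p = \frac{K_{sc}\, D_{sc}}{h_{sc}}, \qquad J_{ss} = k_p\, C_v

    where h{sub sc} is the stratum corneum path length, C{sub v} the solvent concentration in the applied dose, k{sub p} the permeability coefficient, and J{sub ss} the steady-state flux; for volatile doses the absorbed fraction also depends on the competition between this flux and surface evaporation.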

  6. Agent-Based Knowledge Discovery for Modeling and Simulation

    SciTech Connect (OSTI)

    Haack, Jereme N.; Cowell, Andrew J.; Marshall, Eric J.; Fligg, Alan K.; Gregory, Michelle L.; McGrath, Liam R.

    2009-09-15

    This paper describes an approach to using agent technology to extend the automated discovery mechanism of the Knowledge Encapsulation Framework (KEF). KEF is a suite of tools to enable the linking of knowledge inputs (relevant, domain-specific evidence) to modeling and simulation projects, as well as other domains that require an effective collaborative workspace for knowledge-based tasks. This framework can be used to capture evidence (e.g., trusted material such as journal articles and government reports), discover new evidence (covering both trusted and social media), enable discussions surrounding domain-specific topics and provide automatically generated semantic annotations for improved corpus investigation. The current KEF implementation is presented within a semantic wiki environment, providing a simple but powerful collaborative space for team members to review, annotate, discuss and align evidence with their modeling frameworks. The novelty in this approach lies in the combination of automatically tagged and user-vetted resources, which increases user trust in the environment, leading to ease of adoption for the collaborative environment.

  7. Integration of the predictions of two models with dose measurements in a case study of children exposed to the emissions of a lead smelter

    SciTech Connect (OSTI)

    Bonnard, R.; McKone, T.E.

    2009-03-01

    The predictions of two source-to-dose models are systematically evaluated with observed data collected in a village polluted by a currently operating secondary lead smelter. Both models were built up from several sub-models linked together and run using Monte Carlo simulation to calculate the distribution of children's blood lead levels attributable to the emissions from the facility. The first model system is composed of the CalTOX model linked to a recoded version of the IEUBK model. This system provides the distribution of the media-specific lead concentrations (air, soil, fruit, vegetables and blood) in the whole area investigated. The second model consists of a statistical model to estimate the lead deposition on the ground, a modified version of the model HHRAP and the same recoded version of the IEUBK model. This system provides an estimate of the exposure concentrations for specific individuals living in the study area. The predictions of the first model system were improved in terms of accuracy and precision by performing a sensitivity analysis and using field data to correct the default value provided for the leaf wet density. However, in this case study, the first model system tends to overestimate the exposure due to exposed vegetables. The second model was tested for nine children with contrasting exposure conditions. It managed to capture the blood lead levels for eight of them. In the remaining case, exposure of the child through pathways not considered in the model may explain the discrepancy. The advantage of this integrated model is that it provides outputs with lower variance than the first model system, but further tests are necessary to draw conclusions about its accuracy.
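
    The Monte Carlo chaining of sub-models described above can be sketched as follows; the transfer factors and distributions are hypothetical placeholders, not the CalTOX, HHRAP, or IEUBK parameterizations.

        # Highly simplified sketch of Monte Carlo propagation through chained
        # sub-models (deposition -> soil concentration -> intake -> blood lead).
        # All transfer factors and distributions are hypothetical placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000

        deposition = rng.lognormal(mean=0.0, sigma=0.5, size=n)     # relative units
        soil_pb = deposition * rng.uniform(50.0, 150.0, size=n)     # mg/kg per unit
        intake = soil_pb * rng.uniform(0.5e-4, 2.0e-4, size=n)      # mg/day ingested
        blood_pb = intake * rng.uniform(300.0, 600.0, size=n)       # ug/dL per mg/day

        print(f"median {np.median(blood_pb):.1f} ug/dL, "
              f"95th pct {np.percentile(blood_pb, 95):.1f} ug/dL")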

  8. Advanced product realization through model-based design and virtual prototyping

    SciTech Connect (OSTI)

    Andreas, R.D.

    1995-03-01

    Several government agencies and industrial sectors have recognized the need for, and payoff of, investing in the methodologies and associated technologies for improving the product realization process. Within the defense community as well as commercial industry, there are three major needs. First, they must reduce the cost of military products, of related manufacturing processes, and of the enterprises that have to be maintained. Second, they must reduce the time required to realize products while still applying the latest technologies. Finally, they must improve the predictability of process attributes, product performance, cost, schedule and quality. They must continue to advance technology, quickly incorporate their innovations in new products and in the processes to produce them, and capitalize on the raw computational power and communications bandwidth that continues to become available at decreasing cost. Sandia National Laboratories' initiative is pursuing several interrelated, key concepts and technologies in order to enable such product realization process improvements: model-based design; intelligent manufacturing processes; rapid virtual and physical prototyping; and agile people/enterprises. While progress in each of these areas is necessary, this paper only addresses a portion of the overall initiative. First, a vision of a desired future capability in model-based design and virtual prototyping is presented. This is followed by a discussion of two specific activities -- parametric design analysis of Synthetic Aperture Radars (SARs) and virtual prototyping of miniaturized high-density electronics -- that exemplify the vision as well as provide a status report on relevant work in progress.

  9. Integrated Experimental and Model-based Analysis Reveals the Spatial Aspects of EGFR Activation Dynamics

    SciTech Connect (OSTI)

    Shankaran, Harish; Zhang, Yi; Chrisler, William B.; Ewald, Jonathan A.; Wiley, H. S.; Resat, Haluk

    2012-10-02

    The epidermal growth factor receptor (EGFR) belongs to the ErbB family of receptor tyrosine kinases, and controls a diverse set of cellular responses relevant to development and tumorigenesis. ErbB activation is a complex process involving receptor-ligand binding, receptor dimerization, phosphorylation, and trafficking (internalization, recycling and degradation), which together dictate the spatio-temporal distribution of active receptors within the cell. The ability to predict this distribution, and elucidation of the factors regulating it, would help to establish a mechanistic link between ErbB expression levels and the cellular response. Towards this end, we constructed mathematical models for deconvolving the contributions of receptor dimerization and phosphorylation to EGFR activation, and to examine the dependence of these processes on sub-cellular location. We collected experimental datasets for EGFR activation dynamics in human mammary epithelial cells, with the specific goal of model parameterization, and used the data to estimate parameters for several alternate models. Model-based analysis indicated that: 1) signal termination via receptor dephosphorylation in late endosomes, prior to degradation, is an important component of the response, 2) less than 40% of the receptors in the cell are phosphorylated at any given time, even at saturating ligand doses, and 3) receptor dephosphorylation rates at the cell surface and early endosomes are comparable. We validated the last finding by measuring EGFR dephosphorylation rates at various times following ligand addition both in whole cells, and in endosomes using ELISAs and fluorescent imaging. Overall, our results provide important information on how EGFR phosphorylation levels are regulated within cells. Further, the mathematical model described here can be extended to determine receptor dimer abundances in cells co-expressing various levels of ErbB receptors. This study demonstrates that an iterative cycle of experiments and modeling can be used to gain mechanistic insight regarding complex cell signaling networks.
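
    The analysis rests on ordinary-differential-equation models that couple receptor phosphorylation to trafficking between the cell surface and endosomes. The sketch below is a minimal, generic model of that class (ligand-driven phosphorylation, surface dephosphorylation, internalization, endosomal dephosphorylation, degradation, recycling); the state structure, rate constants, and initial receptor count are arbitrary placeholders, not the fitted parameters of this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

def egfr_toy(t, y, L, k):
    """Toy trafficking-coupled receptor activation model (placeholder rates).
    y = [R_surf, P_surf, P_endo, R_endo]: unphosphorylated/phosphorylated
    receptors at the surface and in endosomes."""
    Rs, Ps, Pe, Re = y
    dRs = k["syn"] - k["act"] * L * Rs + k["dp_s"] * Ps + k["rec"] * Re
    dPs = k["act"] * L * Rs - (k["dp_s"] + k["int"]) * Ps
    dPe = k["int"] * Ps - (k["dp_e"] + k["deg"]) * Pe
    dRe = k["dp_e"] * Pe - (k["rec"] + k["deg"]) * Re
    return [dRs, dPs, dPe, dRe]

# Arbitrary rate constants (1/min) and saturating ligand
k = dict(syn=10.0, act=0.05, dp_s=0.3, dp_e=0.3, int=0.1, rec=0.05, deg=0.02)
sol = solve_ivp(egfr_toy, (0.0, 120.0), [1e5, 0.0, 0.0, 0.0], args=(1.0, k), max_step=1.0)

phos_frac = (sol.y[1] + sol.y[2]) / sol.y.sum(axis=0)
print(f"peak phosphorylated receptor fraction: {phos_frac.max():.2f}")
```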

  10. Integrated Sensing and Controls for Coal Gasification - Development of Model-Based Controls for GE's Gasifier and Syngas Cooler

    SciTech Connect (OSTI)

    Aditya Kumar

    2010-12-30

    This report summarizes the achievements and final results of this program. The objective of this program is to develop a comprehensive systems approach to integrated design of sensing and control systems for an Integrated Gasification Combined Cycle (IGCC) plant, using advanced model-based techniques. In particular, this program is focused on the model-based sensing and control system design for the core gasification section of an IGCC plant. The overall approach consists of (i) developing a first-principles physics-based dynamic model of the gasification section, (ii) performing model-reduction where needed to derive low-order models suitable for controls analysis and design, (iii) developing a sensing system solution combining online sensors with model-based estimation for important process variables not measured directly, and (iv) optimizing the steady-state and transient operation of the plant for normal operation as well as for startup using model predictive controls (MPC). Initially, available process unit models were implemented in a common platform using Matlab/Simulink{reg_sign}, and appropriate model reduction and model updates were performed to obtain the overall gasification section dynamic model. Also, a set of sensor packages was developed through extensive lab testing and implemented in the Tampa Electric Company IGCC plant at Polk power station in 2009, to measure temperature and strain in the radiant syngas cooler (RSC). Plant operation data was also used to validate the overall gasification section model. The overall dynamic model was then used to develop a sensing solution including a set of online sensors coupled with model-based estimation using a nonlinear extended Kalman filter (EKF). Its performance in terms of estimating key unmeasured variables like gasifier temperature, carbon conversion, etc., was studied through extensive simulations in the presence of sensing errors (noise and bias) and modeling errors (e.g. unknown gasifier kinetics, RSC fouling). In parallel, an MPC solution was initially developed using ideal sensing to optimize the plant operation during startup pre-heating as well as steady state and transient operation under normal high-pressure conditions, e.g. part-load, base-load, load transition and fuel changes. The MPC simulation studies showed significant improvements both for startup pre-heating and for normal operation. Finally, the EKF and MPC solutions were coupled to achieve the integrated sensing and control solution and its performance was studied through extensive steady state and transient simulations in the presence of sensor and modeling errors. The results of each task in the program and overall conclusions are summarized in this final report.
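
    The report pairs an extended Kalman filter for estimating unmeasured process variables with model predictive control. The sketch below shows only the scalar predict/update structure of an EKF on an invented toy process; the dynamics f(), measurement h(), noise levels, and all numbers are placeholders and have no connection to the actual gasifier or RSC models.

```python
import numpy as np

# Toy nonlinear process: x is an unmeasured "temperature-like" state, z a noisy
# nonlinear measurement of it. Both models below are invented placeholders.
def f(x, u):          # process model
    return x + 0.1 * (u - 0.02 * x)

def h(x):             # measurement model
    return 5.0 * np.sqrt(x)

def jac_f(x, u):      # df/dx
    return 1.0 - 0.1 * 0.02

def jac_h(x):         # dh/dx
    return 2.5 / np.sqrt(x)

Q, R = 0.05, 1.0      # assumed process / measurement noise variances
rng = np.random.default_rng(0)
x_true, x_est, P = 900.0, 850.0, 100.0

for step in range(200):
    u = 20.0
    x_true = f(x_true, u) + rng.normal(0.0, np.sqrt(Q))   # simulate the plant
    z = h(x_true) + rng.normal(0.0, np.sqrt(R))           # noisy sensor reading

    # EKF predict
    x_pred = f(x_est, u)
    P_pred = jac_f(x_est, u) * P * jac_f(x_est, u) + Q
    # EKF update
    H = jac_h(x_pred)
    K = P_pred * H / (H * P_pred * H + R)
    x_est = x_pred + K * (z - h(x_pred))
    P = (1.0 - K * H) * P_pred

print(f"true state: {x_true:.1f}, EKF estimate: {x_est:.1f}")
```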

  11. MODEL-BASED HYDROACOUSTIC BLOCKAGE ASSESSMENT AND DEVELOPMENT OF AN EXPLOSIVE SOURCE DATABASE

    SciTech Connect (OSTI)

    Matzel, E; Ramirez, A; Harben, P

    2005-07-11

    We are continuing the development of the Hydroacoustic Blockage Assessment Tool (HABAT), which is designed for use by analysts to predict which hydroacoustic monitoring stations can be used in discrimination analysis for any particular event. The research involves two approaches: (1) model-based assessment of blockage, and (2) ground-truth data-based assessment of blockage. The tool presents the analyst with a map of the world and plots raypath blockages from stations to sources. The analyst inputs source locations and blockage criteria, and the tool returns a list of blockage status from all source locations to all hydroacoustic stations. We are currently using the tool in an assessment of blockage criteria for simple direct-path arrivals. Hydroacoustic data, predominantly from earthquake sources, are read in and assessed for blockage at all available stations. Several measures are taken. First, can the event be observed at a station above background noise? Second, can a backazimuth be established from the station to the source? Third, how large is the decibel drop at one station relative to other stations? These observational results are then compared with model estimates to identify the best set of blockage criteria and used to create a set of blockage maps for each station. The model-based estimates are currently limited by the coarse bathymetry of existing databases and by the limitations inherent in the raytrace method. In collaboration with BBN Inc., the Hydroacoustic Coverage Assessment Model (HydroCAM), which generates the blockage files that serve as input to HABAT, is being extended to include high-resolution bathymetry databases in key areas that increase model-based blockage assessment reliability. An important aspect of this capability is to eventually include reflected T-phases where they reliably occur and to identify the associated reflectors. To assess how well any given hydroacoustic discriminant works in separating earthquake and in-water explosion populations, it is necessary to have both a database of reference earthquake events and a database of reference in-water explosive events. Although reference earthquake events are readily available, explosive reference events are not. Consequently, building an in-water explosion reference database requires the compilation of events from many sources spanning a long period of time. We have developed a database of small implosive and explosive reference events from the 2003 Indian Ocean Cruise data. These events were recorded at some or all of the IMS Indian Ocean hydroacoustic stations: Diego Garcia, Cape Leeuwin, and Crozet Island. We have also reviewed many historical large in-water explosions and identified five that have adequate source information and can be positively associated with the hydrophone recordings. The five events are: Cannikin, Longshot, CHASE-3, CHASE-5, and IITRI-1. Of these, the first two are nuclear tests on land but near water. The latter three are in-water conventional explosive events with yields from ten to hundreds of tons TNT equivalent. The objective of this research is to enhance discrimination capabilities for events located in the world's oceans.
Two research and development efforts are needed to achieve this: (1) improvement in discrimination algorithms and their joint statistical application to events, and (2) development of an automated and accurate blockage prediction capability that will identify all stations and phases (direct and reflected) from a given event that will have adequate signal to be used in a discrimination analysis. The strategy for improving blockage prediction in the world's oceans is to improve model-based prediction of blockage and to develop a ground-truth database of reference events to assess blockage. Currently, research is focused on the development of a blockage assessment software tool. The tool is envisioned to develop into a sophisticated and unifying package that optimally and automatically assesses both model and data based blockage predictions in all ocean basins, for all NDC stations, and accounting for reflected phases (Pulli et al., 2000). Currently, we have focused our efforts on the Diego Garcia, Cape Leeuwin and Crozet Island hydroacoustic stations in the Indian Ocean.

  12. A CFD-based wind solver for a fast response transport and dispersion model

    SciTech Connect (OSTI)

    Gowardhan, Akshay A; Brown, Michael J; Pardyjak, Eric R; Senocak, Inanc

    2010-01-01

    In many cities, ambient air quality is deteriorating, leading to concerns about the health of city inhabitants. In urban areas with narrow streets surrounded by clusters of tall buildings, called street canyons, air pollution from traffic emissions and other sources is difficult to disperse and may accumulate, resulting in high pollutant concentrations. For various situations, including the evacuation of populated areas in the event of an accidental or deliberate release of chemical, biological and radiological agents, it is important that models be developed that produce urban flow fields quickly. For these reasons it has become important to predict the flow field in urban street canyons. Various computational techniques have been used to calculate these flow fields, but these techniques are often computationally intensive. Most fast response models currently in use are at a disadvantage in these cases as they are unable to correlate highly heterogeneous urban structures with the diagnostic parameterizations on which they are based. In this paper, a fast and reasonably accurate computational fluid dynamics (CFD) technique that solves the Navier-Stokes equations for complex urban areas has been developed, called QUIC-CFD (Q-CFD). This technique represents an intermediate balance between fast (on the order of minutes for a several-block problem) and reasonably accurate solutions. The paper details the solution procedure and validates this model for various simple and complex urban geometries.
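
    This is not the QUIC-CFD solver itself, but a minimal illustration of the pressure-Poisson solve that sits at the core of projection-type incompressible flow solvers and typically dominates their cost, which is why fast approximate formulations matter for urban applications. Grid size, boundary conditions, and the "divergence" field are placeholders.

```python
import numpy as np

def solve_pressure_poisson(rhs, dx, n_iter=500):
    """Jacobi iterations for  laplacian(p) = rhs  on a uniform 2D grid with
    homogeneous Dirichlet boundaries (illustrative only, not QUIC-CFD)."""
    p = np.zeros_like(rhs)
    for _ in range(n_iter):
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, 2:] + p[1:-1, :-2] +
                                p[2:, 1:-1] + p[:-2, 1:-1] -
                                dx**2 * rhs[1:-1, 1:-1])
    return p

nx = ny = 64
dx = 1.0 / (nx - 1)
divergence = np.zeros((ny, nx))
divergence[ny // 2, nx // 2] = 1.0   # a single "source" cell standing in for div(u*)/dt
p = solve_pressure_poisson(divergence, dx)
print(f"pressure correction range: {p.min():.4f} .. {p.max():.4f}")
```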

  13. Durability-Based Design Guide for an Automotive Structural Composite: Part 2. Background Data and Models

    SciTech Connect (OSTI)

    Corum, J.M.; Battiste, R.L.; Brinkman, C.R.; Ren, W.; Ruggles, M.B.; Weitsman, Y.J.; Yahr, G.T.

    1998-02-01

    This background report is a companion to the document entitled ''Durability-Based Design Criteria for an Automotive Structural Composite: Part 1. Design Rules'' (ORNL-6930). The rules and the supporting material characterization and modeling efforts described here are the result of a U.S. Department of Energy Advanced Automotive Materials project entitled ''Durability of Lightweight Composite Structures.'' The overall goal of the project is to develop experimentally based, durability-driven design guidelines for automotive structural composites. The project is closely coordinated with the Automotive Composites Consortium (ACC). The initial reference material addressed by the rules and this background report was chosen and supplied by ACC. The material is a structural reaction injection-molded isocyanurate (urethane), reinforced with continuous-strand, swirl-mat, E-glass fibers. This report consists of 16 position papers, each summarizing the observations and results of a key area of investigation carried out to provide the basis for the durability-based design guide. The durability issues addressed include the effects of cyclic and sustained loadings, temperature, automotive fluids, vibrations, and low-energy impacts (e.g., tool drops and roadway kickups) on deformation, strength, and stiffness. The position papers cover these durability issues. Topics include (1) tensile, compressive, shear, and flexural properties; (2) creep and creep rupture; (3) cyclic fatigue; (4) the effects of temperature, environment, and prior loadings; (5) a multiaxial strength criterion; (6) impact damage and damage tolerance design; (7) stress concentrations; (8) a damage-based predictive model for time-dependent deformations; (9) confirmatory subscale component tests; and (10) damage development and growth observations.

  14. Prediction of Thermal Conductivity for Irradiated SiC/SiC Composites by Informing Continuum Models with Molecular Dynamics Data

    SciTech Connect (OSTI)

    Nguyen, Ba Nghiep; Gao, Fei; Henager, Charles H.; Kurtz, Richard J.

    2014-05-01

    This article proposes a new method to estimate the thermal conductivity of SiC/SiC composites subjected to neutron irradiation. The modeling method bridges different scales from the atomic scale to the scale of a 2D SiC/SiC composite. First, it studies the irradiation-induced point defects in perfect crystalline SiC using molecular dynamics (MD) simulations to compute the defect thermal resistance as a function of vacancy concentration and irradiation dose. The concept of defect thermal resistance is explored explicitly in the MD data using vacancy concentrations and thermal conductivity decrements due to phonon scattering. Point defect-induced swelling for chemical vapor deposited (CVD) SiC as a function of irradiation dose is approximated by scaling the corresponding MD results for perfect crystal ÎČ-SiC to experimental data for CVD-SiC at various temperatures. The computed thermal defect resistance, thermal conductivity as a function of grain size, and definition of defect thermal resistance are used to compute the thermal conductivities of CVD-SiC, isothermal chemical vapor infiltrated (ICVI) SiC and nearly-stoichiometric SiC fibers. The computed fiber and ICVI-SiC matrix thermal conductivities are then used as input for an Eshelby-Mori-Tanaka approach to compute the thermal conductivities of 2D SiC/SiC composites subjected to neutron irradiation within the same irradiation doses. Predicted thermal conductivities for an irradiated Tyranno-SA/ICVI-SiC composite are found to be comparable to available experimental data for a similar composite ICVI-processed with these fibers.

  15. A non-local, ordinary-state-based viscoelasticity model for peridynamics.

    Office of Scientific and Technical Information (OSTI)

    A non-local, ordinary-state-based, peridynamics viscoelasticity model is developed. In this model, viscous effects are added to deviatoric deformations and the bulk response remains elastic. The model uses internal state variables and is conceptually similar to ...

  16. Three orbital model for the iron-based superconductors

    SciTech Connect (OSTI)

    Daghofer, Maria [ORNL; Nicholson, Andrew D [ORNL; Moreo, Adriana [ORNL; Dagotto, Elbio R [ORNL

    2010-01-01

    The theoretical need to study the properties of the Fe-based high-Tc superconductors using reliable many-body techniques has highlighted the importance of determining the minimum number of orbital degrees of freedom that will capture the physics of these materials. While the shape of the Fermi surface (FS) obtained with the local-density approximation (LDA) can be reproduced by a two-orbital model, it has been argued that the bands that cross the chemical potential result from the strong hybridization of three of the Fe 3d orbitals. For this reason, a three-orbital Hamiltonian for LaOFeAs obtained with the Slater-Koster formalism by considering the hybridization of the As p orbitals with the Fe d{sub xz}, d{sub yz}, and d{sub xy} orbitals is discussed here. This model reproduces qualitatively the FS shape and orbital composition obtained by LDA calculations for undoped LaOFeAs when four electrons per Fe are considered. Within a mean-field approximation, its magnetic and orbital properties in the undoped case are described here for intermediate values of J/U. Increasing the Coulomb repulsion U at zero temperature, four different regimes are obtained: (1) paramagnetic, (2) magnetic (π,0) spin order, (3) the same (π,0) spin order but now including orbital order, and finally (4) a magnetic and orbital ordered insulator. The spin-singlet pairing operators allowed by the lattice and orbital symmetries are also constructed. It is found that for pairs of electrons involving up to diagonal nearest-neighbor sites, the only fully gapped and purely intraband spin-singlet pairing operator is given by Δ{sup †}(k) = f(k) Σ{sub α} d{sup †}{sub k,α,↑} d{sup †}{sub −k,α,↓}, with f(k) = 1 or cos k{sub x} cos k{sub y}, which would arise only if the electrons in all the different orbitals couple with equal strength to the source of pairing.

  17. Dynamic model of Italy`s Progetto Energia cogeneration plants aims to better predict plant performance, cut start-up costs

    SciTech Connect (OSTI)

    1996-12-31

    Over the next four years, the Progetto Energia project will be building several cogeneration plants to help satisfy the increasing demands of Italy's industrial users and the country's demand for electrical power. Located at six different sites within Italy, these combined-cycle cogeneration plants will supply a total of 500 MW of electricity and 100 tons/hr of process steam to Italian industries and residences. To ensure project success, a dynamic model of the 50-MW base unit was developed. The goal established for the model was to predict the dynamic behavior of the complex thermodynamic system in order to assess equipment performance and control system effectiveness for normal operation and, more importantly, abrupt load changes. In addition to fulfilling its goals, the dynamic study guided modifications to controller logic that significantly improved steam drum pressure control and bypassed-steam desuperheating performance. Simulations of normal and abrupt transient events allowed engineers to define optimum controller gain coefficients. The dynamic study will undoubtedly reduce the associated plant start-up costs and contribute to a smooth commercial plant acceptance. As a result of the work, the control system has already been through its check-out and performance evaluation, usually performed during the plant start-up phase. Field engineers will directly benefit from this effort to identify and resolve control system {open_quotes}bugs{close_quotes} before the equipment reaches the field. High thermal efficiency, rapid dispatch and high plant availability were key reasons why the natural gas combined-cycle plant was chosen. Other favorable attributes of the combined-cycle plant contributing to the decision were: minimal environmental impact; a simple and effective process and control philosophy to result in safe and easy plant operation; and a choice of technologies and equipment proven in a large number of applications.

  18. Use of Science-Based Prediction to Characterize Reservoir Behavior as a Function of Injection Characteristics, Geological Variables, and Time (NRAP-TRS-I-005-2014)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Use of Science-Based Prediction to Characterize Reservoir Behavior as a Function of Injection Characteristics, Geological Variables, and Time, 12 November 2014, Office of Fossil Energy, NRAP-TRS-I-005-2014.

  19. Recovery Act: Web-based CO{sub 2} Subsurface Modeling

    SciTech Connect (OSTI)

    Paolini, Christopher; Castillo, Jose

    2012-11-30

    The Web-based CO{sub 2} Subsurface Modeling project focused primarily on extending an existing text-only, command-line driven, isothermal and isobaric, geochemical reaction-transport simulation code, developed and donated by Sienna Geodynamics, into an easier-to-use Web-based application for simulating long-term storage of CO{sub 2} in geologic reservoirs. The Web-based interface developed through this project, publicly accessible via URL http://symc.sdsu.edu/, enables rapid prototyping of CO{sub 2} injection scenarios and allows students without advanced knowledge of geochemistry to set up a typical sequestration scenario, invoke a simulation, analyze results, and then vary one or more problem parameters and quickly re-run a simulation to answer what-if questions. symc.sdsu.edu has 2x12 core AMD Opteronℱ 6174 2.20GHz processors and 16GB RAM. The Web-based application was used to develop a new computational science course at San Diego State University, COMP 670: Numerical Simulation of CO{sub 2} Sequestration, which was taught during the fall semester of 2012. The purpose of the class was to introduce graduate students to Carbon Capture, Use and Storage (CCUS) through numerical modeling and simulation, and to teach students how to interpret simulation results to make predictions about long-term CO{sub 2} storage capacity in deep brine reservoirs. In addition to the training and education component of the project, significant software development efforts took place. Two computational science doctoral students and one geological science master's student, under the direction of the PIs, extended the original code developed by Sienna Geodynamics, named Sym.8. New capabilities were added to Sym.8 to simulate non-isothermal and non-isobaric flows of charged aqueous solutes in porous media, in addition to incorporating HPC support into the code for execution on many-core XSEDE clusters. A successful outcome of this project was the funding and training of three new computational science students and one geological science student in technologies relevant to carbon sequestration and problems involving flow in subsurface media. The three computational science students are currently finishing their doctoral studies on different aspects of modeling CO{sub 2} sequestration, while the geological science student completed his master's thesis in modeling the thermal response of CO{sub 2} injection in brine and, as a direct result of participation in this project, is now employed at ExxonMobil as a full-time staff geologist.

  20. On-line Chemistry within WRF: Description and Evaluation of a State-of-the-Art Multiscale Air Quality and Weather Prediction Model

    SciTech Connect (OSTI)

    Grell, Georg; Fast, Jerome D.; Gustafson, William I.; Peckham, Steven E.; McKeen, Stuart A.; Salzmann, Marc; Freitas, Saulo

    2010-01-01

    This conference proceeding is being compiled into a book; the material forms Chapter 2 of "INTEGRATED SYSTEMS OF MESO-METEOROLOGICAL AND CHEMICAL TRANSPORT MODELS," published by Springer. The chapter title is "On-line Chemistry within WRF: Description and Evaluation of a State-of-the-Art Multiscale Air Quality and Weather Prediction Model." The original conference was the COST-728/NetFAM workshop on Integrated systems of meso-meteorological and chemical transport models, Danish Meteorological Institute, Copenhagen, May 21-23, 2007.

  1. Lipid-Based Nanodiscs as Models for Studying Mesoscale Coalescence A Transport Limited Case

    SciTech Connect (OSTI)

    Hu, Andrew; Fan, Tai-Hsi; Katsaras, John; Xia, Yan; Li, Ming; Nieh, Mu-Ping

    2014-01-01

    Lipid-based nanodiscs (bicelles) are able to form in mixtures of long- and short-chain lipids. Initially, they are of uniform size but grow upon dilution. Previously, nanodisc growth kinetics have been studied using time-resolved small angle neutron scattering (SANS), a technique which is not well suited for probing their change in size immediately after dilution. To address this, we have used dynamic light scattering (DLS), a technique which permits the collection of useful data in a short span of time after dilution of the system. The DLS data indicate that the negatively charged lipids in nanodiscs play a significant role in disc stability and growth. Specifically, the charged lipids are most likely drawn out from the nanodiscs into solution, thereby reducing interparticle repulsion and enabling the discs to grow. We describe a population balance model, which takes into account Coulombic interactions and adequately predicts the initial growth of nanodiscs with a single parameter i.e., surface potential. The results presented here strongly support the notion that the disc coalescence rate strongly depends on nanoparticle charge density. The present system containing low-polydispersity lipid nanodiscs serves as a good model for understanding how charged discoidal micelles coalesce.

  2. Operational forecasting based on a modified Weather Research and Forecasting model

    SciTech Connect (OSTI)

    Lundquist, J; Glascoe, L; Obrecht, J

    2010-03-18

    Accurate short-term forecasts of wind resources are required for efficient wind farm operation and ultimately for the integration of large amounts of wind-generated power into electrical grids. Siemens Energy Inc. and Lawrence Livermore National Laboratory, with the University of Colorado at Boulder, are collaborating on the design of an operational forecasting system for large wind farms. The basis of the system is the numerical weather prediction tool, the Weather Research and Forecasting (WRF) model; large-eddy simulations and data assimilation approaches are used to refine and tailor the forecasting system. Representation of the atmospheric boundary layer is modified, based on high-resolution large-eddy simulations of the atmospheric boundary. These large-eddy simulations incorporate wake effects from upwind turbines on downwind turbines as well as represent complex atmospheric variability due to complex terrain and surface features as well as atmospheric stability. Real-time hub-height wind speed and other meteorological data streams from existing wind farms are incorporated into the modeling system to enable uncertainty quantification through probabilistic forecasts. A companion investigation has identified optimal boundary-layer physics options for low-level forecasts in complex terrain, toward employing decadal WRF simulations to anticipate large-scale changes in wind resource availability due to global climate change.

  3. A neural network model for predicting the silicon content of the hot metal at No. 2 blast furnace of SSAB Luleaa

    SciTech Connect (OSTI)

    Zuo Guangqing; Ma Jitang; Bo, B.

    1996-12-31

    To predict the silicon content of hot metal at No. 2 blast furnace, SSAB, Luleaa Works, a three-layer Back-Propagation network model has been established. The network consists of twenty-eight inputs, six middle nodes and one output and uses a generalized delta rule for training. Different network structures and different training strategies have been tested. A well-functioning network with dynamic updating has been designed. The off-line test and the on-line application results showed that more than 80% of the predictions can match the actual silicon content in hot metal in a normal operation, if the allowable prediction error was set to {+-}0.05% Si, while the actual fluctuation of the silicon content was larger than {+-}0.10% Si.
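
    As a minimal illustration of the architecture the abstract describes, the sketch below trains a 28-6-1 feedforward network with sigmoid units using the generalized delta rule (plain gradient descent with backpropagation). The training data here are random placeholders, not blast furnace process data, and the learning rate and epoch count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder data: 28 process inputs -> hot-metal silicon content scaled to 0..1
X = rng.random((500, 28))
y = rng.random((500, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 0.1, (28, 6)); b1 = np.zeros(6)   # input -> middle layer
W2 = rng.normal(0.0, 0.1, (6, 1));  b2 = np.zeros(1)   # middle layer -> output
lr = 0.1

for epoch in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y                              # dE/dout for squared error

    # generalized delta rule (backpropagation of errors)
    delta2 = err * out * (1.0 - out)
    delta1 = (delta2 @ W2.T) * h * (1.0 - h)
    W2 -= lr * (h.T @ delta2) / len(X); b2 -= lr * delta2.mean(axis=0)
    W1 -= lr * (X.T @ delta1) / len(X); b1 -= lr * delta1.mean(axis=0)

print(f"final MSE on the placeholder data: {np.mean(err**2):.4f}")
```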

  4. COLLABORATIVE RESEARCH: TOWARDS ADVANCED UNDERSTANDING AND PREDICTIVE CAPABILITY OF CLIMATE CHANGE IN THE ARCTIC USING A HIGH-RESOLUTION REGIONAL ARCTIC CLIMATE SYSTEM MODEL

    SciTech Connect (OSTI)

    Gutowski, William J.

    2013-02-07

    The motivation for this project was to advance the science of climate change and prediction in the Arctic region. Its primary goals were to (i) develop a state-of-the-art Regional Arctic Climate system Model (RACM) including high-resolution atmosphere, land, ocean, sea ice and land hydrology components and (ii) to perform extended numerical experiments using high performance computers to minimize uncertainties and fundamentally improve current predictions of climate change in the northern polar regions. These goals were realized first through evaluation studies of climate system components via one-way coupling experiments. Simulations were then used to examine the effects of advancements in climate component systems on their representation of main physics, time-mean fields and to understand variability signals at scales over many years. As such this research directly addressed some of the major science objectives of the BER Climate Change Research Division (CCRD) regarding the advancement of long-term climate prediction.

  5. Correlation of a hypoxia based tumor control model with observed local control rates in nasopharyngeal carcinoma treated with chemoradiotherapy

    SciTech Connect (OSTI)

    Avanzo, Michele; Stancanello, Joseph; Franchin, Giovanni; Sartor, Giovanna; Jena, Rajesh; Drigo, Annalisa; Dassie, Andrea; Gigante, Marco; Capra, Elvira

    2010-04-15

    Purpose: To extend the application of current radiation therapy (RT) based tumor control probability (TCP) models of nasopharyngeal carcinoma (NPC) to include the effects of hypoxia and chemoradiotherapy (CRT). Methods: A TCP model is described based on the linear-quadratic model modified to account for repopulation, chemotherapy, heterogeneity of dose to the tumor, and hypoxia. Sensitivity analysis was performed to determine which parameters exert the greatest influence on the uncertainty of modeled TCP. On the basis of the sensitivity analysis, the values of specific radiobiological parameters were set to nominal values reported in the literature for NPC or head and neck tumors. The remaining radiobiological parameters were determined by fitting TCP to clinical local control data from published randomized studies using both RT and CRT. Validation of the model was performed by comparison of estimated TCP and average overall local control rate (LCR) for 45 patients treated at the institution with conventional linear-accelerator-based or helical tomotherapy based intensity-modulated RT and neoadjuvant chemotherapy. Results: Sensitivity analysis demonstrates that the model is most sensitive to the radiosensitivity term {alpha} and the dose per fraction. The estimated values of {alpha} and OER from data fitting were 0.396 Gy{sup -1} and 1.417. The model estimate of TCP (average 90.9%, range 26.9%-99.2%) showed good correlation with the LCR (86.7%). Conclusions: The model implemented in this work provides clinicians with a useful tool to predict the success rate of treatment, optimize treatment plans, and compare the effects of multimodality therapy.
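
    For orientation, the sketch below evaluates a bare-bones Poisson TCP based on the linear-quadratic survival model with a crude hypoxia (OER) correction. The alpha value of 0.396 Gy{sup -1} and the OER of 1.417 are taken from the abstract; the beta value (via an assumed alpha/beta of 10 Gy), the clonogen number, and the fractionation are invented, and the paper's repopulation, chemotherapy, and dose-heterogeneity terms are omitted.

```python
import numpy as np

def tcp_lq(n_clonogens, dose_per_fx, n_fx, alpha, beta, oer=1.0):
    """Poisson TCP with linear-quadratic cell survival; hypoxia is represented
    crudely by dividing the radiosensitivity parameters by an OER factor."""
    a, b = alpha / oer, beta / oer**2
    surviving_fraction = np.exp(-n_fx * (a * dose_per_fx + b * dose_per_fx**2))
    return np.exp(-n_clonogens * surviving_fraction)

alpha, beta = 0.396, 0.0396        # Gy^-1, Gy^-2 (alpha/beta = 10 Gy assumed)
print("oxic    TCP:", round(tcp_lq(1e9, 2.0, 35, alpha, beta, oer=1.0), 3))
print("hypoxic TCP:", round(tcp_lq(1e9, 2.0, 35, alpha, beta, oer=1.417), 3))
```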

  6. Inter-comparison of Computer Codes for TRISO-based Fuel Micro-Modeling and Performance Assessment

    SciTech Connect (OSTI)

    Brian Boer; Chang Keun Jo; Wen Wu; Abderrafi M. Ougouag; Donald McEachren; Francesco Venneri

    2010-10-01

    The Next Generation Nuclear Plant (NGNP), the Deep Burn Pebble Bed Reactor (DB-PBR) and the Deep Burn Prismatic Block Reactor (DB-PMR) are all based on fuels that use TRISO particles as their fundamental constituent. The TRISO particle properties include very high durability in radiation environments, hence the designs' reliance on the TRISO to form the principal barrier to radioactive materials release. This durability forms the basis for the selection of this fuel type for applications such as Deep Burn (DB), which require exposures up to four times those expected for light water reactors. It follows that the study and prediction of the durability of TRISO particles must be carried out as part of the safety and overall performance characterization of all the designs mentioned above. Such evaluations have been carried out independently by the performers of the DB project using independently developed codes. These codes, PASTA, PISA and COPA, incorporate models for stress analysis on the various layers of the TRISO particle (and of the intervening matrix material for some of them); models for fission product release, migration, and accumulation within the SiC layer of the TRISO particle, just next to the layer; models for free oxygen and CO formation and migration to the same location; models for the temperature field within the various layers of the TRISO particle; and models for the prediction of failure rates. All these models may be either internal to the code or external. This large number of models, together with the possibility of different constitutive data, model formulations, and solution techniques, makes it highly unlikely that the codes would give identical results when modeling identical situations. The purpose of this paper is to present the results of an inter-comparison between the codes and to identify areas of agreement and areas that need reconciliation. The inter-comparison has been carried out by the cooperating institutions using a set of pre-defined TRISO conditions (burnup levels, temperature or power levels, etc.) and the outcome will be tabulated in the full-length paper. The areas of agreement will be pointed out and the areas that require further modeling or reconciliation will be shown. In general the agreement between the codes is good, within less than one order of magnitude in the prediction of TRISO failure rates.

  7. Comparison of Hydrodynamic Load Predictions Between Engineering Models and Computational Fluid Dynamics for the OC4-DeepCwind Semi-Submersible: Preprint

    SciTech Connect (OSTI)

    Benitz, M. A.; Schmidt, D. P.; Lackner, M. A.; Stewart, G. M.; Jonkman, J.; Robertson, A.

    2014-09-01

    Hydrodynamic loads on the platforms of floating offshore wind turbines are often predicted with computer-aided engineering tools that employ Morison's equation and/or potential-flow theory. This work compares results from one such tool, FAST, NREL's wind turbine computer-aided engineering tool, and the computational fluid dynamics package, OpenFOAM, for the OC4-DeepCwind semi-submersible analyzed in the International Energy Agency Wind Task 30 project. Load predictions from HydroDyn, the offshore hydrodynamics module of FAST, are compared with high-fidelity results from OpenFOAM. HydroDyn uses a combination of Morison's equations and potential flow to predict the hydrodynamic forces on the structure. The implications of the assumptions in HydroDyn are evaluated based on this code-to-code comparison.
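
    As a minimal illustration of the strip-theory loading that HydroDyn combines with potential-flow terms, the sketch below evaluates Morison's equation (drag plus inertia) for the inline force per unit length on a vertical cylinder in sinusoidal flow. The cylinder diameter, wave kinematics, and coefficients are arbitrary example values, not the OC4-DeepCwind platform properties.

```python
import numpy as np

def morison_force(u, dudt, D, rho=1025.0, Cd=1.0, Cm=2.0):
    """Inline force per unit length on a cylinder from Morison's equation:
    quadratic drag term plus inertia term."""
    A = D                        # frontal width per unit length
    V = np.pi * D**2 / 4.0       # displaced volume per unit length
    return 0.5 * rho * Cd * A * u * np.abs(u) + rho * Cm * V * dudt

# Example: sinusoidal wave kinematics at one strip of a 6 m diameter column
t = np.linspace(0.0, 20.0, 1000)
omega, u_amp = 2.0 * np.pi / 10.0, 1.5          # 10 s wave period, 1.5 m/s amplitude
u = u_amp * np.sin(omega * t)
dudt = u_amp * omega * np.cos(omega * t)

f = morison_force(u, dudt, D=6.0)
print(f"peak inline force per unit length: {f.max() / 1e3:.1f} kN/m")
```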

  8. Spectral softening in the X-RAY afterglow of GRB 130925A as predicted by the dust scattering model

    SciTech Connect (OSTI)

    Zhao, Yi-Nan; Shao, Lang, E-mail: lshao@hebtu.edu.cn [Department of Space Science and Astronomy, Hebei Normal University, Shijiazhuang 050024 (China)

    2014-07-01

    Gamma-ray bursts (GRBs) usually occur in a dense star-forming region with a massive circumburst medium. The small-angle scattering of intense prompt X-ray emission off the surrounding dust grains will have observable consequences and sometimes can dominate the X-ray afterglow. In most of the previous studies, only the Rayleigh-Gans (RG) approximation is employed for describing the scattering process, which works accurately for the typical size of grains (with radius of a ? 0.1 ?m) in the diffuse interstellar medium. When the size of the grains may significantly increase, as in a more dense region where GRBs would occur, the RG approximation may not be valid enough for modeling detailed observational data. In order to study the temporal and spectral properties of the scattered X-ray emission more accurately with potentially larger dust grains, we provide a practical approach using the series expansions of anomalous diffraction (AD) approximation based on the complicated Mie theory. We apply our calculations to understand the puzzling X-ray afterglow of recently observed GRB 130925A that showed a significant spectral softening. We find that the X-ray scattering scenarios with either AD or RG approximation adopted could well reproduce both the temporal and spectral profile simultaneously. Given the plateau present in the early X-ray light curve, a typical distribution of smaller grains as in the interstellar medium would be suggested for GRB 130925A.

  9. Constraint-Based Modeling of Carbon Fixation and the Energetics of Electron Transfer in Geobacter metallireducens

    SciTech Connect (OSTI)

    Feist, AM; Nagarajan, H; Rotaru, AE; Tremblay, PL; Zhang, T; Nevin, KP; Lovley, DR; Zengler, K

    2014-04-24

    Geobacter species are of great interest for environmental and biotechnology applications as they can carry out direct electron transfer to insoluble metals or other microorganisms and have the ability to assimilate inorganic carbon. Here, we report on the capability and key enabling metabolic machinery of Geobacter metallireducens GS-15 to carry out CO2 fixation and direct electron transfer to iron. An updated metabolic reconstruction was generated, growth screens on targeted conditions of interest were performed, and constraint-based analysis was utilized to characterize and evaluate critical pathways and reactions in G. metallireducens. The novel capability of G. metallireducens to grow autotrophically with formate and Fe(III) was predicted and subsequently validated in vivo. Additionally, the energetic cost of transferring electrons to an external electron acceptor was determined through analysis of growth experiments carried out using three different electron acceptors (Fe(III), nitrate, and fumarate) by systematically isolating and examining different parts of the electron transport chain. The updated reconstruction will serve as a knowledgebase for understanding and engineering Geobacter and similar species. Author Summary The ability of microorganisms to exchange electrons directly with their environment has large implications for our knowledge of industrial and environmental processes. For decades, it has been known that microbes can use electrodes as electron acceptors in microbial fuel cell settings. Geobacter metallireducens has been one of the model organisms for characterizing microbe-electrode interactions as well as environmental processes such as bioremediation. Here, we significantly expand the knowledge of metabolism and energetics of this model organism by employing constraint-based metabolic modeling. Through this analysis, we build the metabolic pathways necessary for carbon fixation, a desirable property for industrial chemical production. We further discover a novel growth condition which enables the characterization of autotrophic (i.e., carbon-fixing) metabolism in Geobacter. Importantly, our systems-level modeling approach helped elucidate the key metabolic pathways and the energetic cost associated with extracellular electron transfer. This model can be applied to characterize and engineer the metabolism and electron transfer capabilities of Geobacter for biotechnological applications.
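
    The constraint-based formulation behind this kind of analysis (steady-state stoichiometry, flux bounds, maximize a growth objective) can be shown on a tiny invented network, as in the sketch below. The three-reaction system and the uptake bound are placeholders and bear no relation to the actual G. metallireducens reconstruction.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network:  uptake -> A ;  A -> biomass ;  A -> byproduct
# Columns = reactions [v_uptake, v_biomass, v_byproduct]; single row = metabolite A
S = np.array([[1.0, -1.0, -1.0]])

# Flux balance analysis: maximize v_biomass  <=>  minimize -v_biomass
c = np.array([0.0, -1.0, 0.0])
bounds = [(0.0, 10.0),   # substrate uptake limited to 10 (assumed units, mmol/gDW/h)
          (0.0, None),   # biomass formation
          (0.0, None)]   # byproduct secretion

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes [uptake, biomass, byproduct]:", res.x)
```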

  10. Macromodel for assessing residential concentrations of combustion-generated pollutants: Model development and preliminary predictions for CO, NO/sub 2/, and respirable suspended particles

    SciTech Connect (OSTI)

    Traynor, G.W.; Aceti, J.C.; Apte, M.G.; Smith, B.V.; Green, L.L.; Smith-Reiser, A.; Novak, K.M.; Moses, D.O.

    1989-01-01

    A simulation model (also called a ''macromodel'') has been developed to predict residential air pollutant concentration distributions for specified populations. The model inputs include the market penetration of pollution sources, pollution source characteristics (e.g., emission rates, source usage rates), building characteristics (e.g., house volume, air exchange rates), and meteorological parameters (e.g., outside temperature). Four geographically distinct regions of the US have been modeled using Monte Carlo and deterministic simulation techniques. Single-source simulations were also conducted. The highest predicted CO and NO/sub 2/ residential concentrations were associated with the winter-time use of unvented gas and kerosene space heaters. The highest predicted respirable suspended particulate concentrations were associated with indoor cigarette smoking and the winter-time use of non-airtight wood stoves, radiant kerosene heaters, convective unvented gas space heaters, and oil forced-air furnaces. Future field studies in this area should (1) fill information gaps identified in this report, and (2) collect information on the macromodel input parameters to properly interpret the results. It is almost more important to measure the parameters that affect indoor concentration than it is to measure the concentrations themselves.
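
    The calculation at the heart of such a macromodel can be illustrated with a steady-state single-compartment mass balance sampled by Monte Carlo over house volume, air-exchange rate, and source strength, as in the sketch below. All distributions and values are placeholders, not the study's inputs, and the penetration factor is taken as one.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Placeholder input distributions
volume = rng.lognormal(np.log(300.0), 0.3, n)     # house volume, m^3
ach = rng.lognormal(np.log(0.7), 0.5, n)          # air exchange rate, 1/h
emission = rng.lognormal(np.log(50.0), 0.6, n)    # indoor source emission rate, mg/h
c_out = 0.5                                       # outdoor concentration, mg/m^3
decay = 0.0                                       # reactive decay, 1/h (inert pollutant)

# Steady-state single-compartment mass balance:
#   C_in = (ach * C_out + E / V) / (ach + k)
c_in = (ach * c_out + emission / volume) / (ach + decay)

print(f"median indoor concentration: {np.median(c_in):.2f} mg/m^3, "
      f"95th percentile: {np.percentile(c_in, 95):.2f} mg/m^3")
```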

  11. Development of a GIS Based Dust Dispersion Modeling System.

    SciTech Connect (OSTI)

    Rutz, Frederick C.; Hoopes, Bonnie L.; Crandall, Duard W.; Allwine, K Jerry

    2004-08-12

    With residential areas moving closer to military training sites, the effects of dust generated by training exercises upon the environment and neighboring civilians have become a growing concern. Under a project supported by the Strategic Environmental Research and Development Program (SERDP) of the Department of Defense, a custom application named DUSTRAN is currently under development that integrates a system of EPA atmospheric dispersion models with the ArcGIS application environment in order to simulate the dust dispersion generated by a planned training maneuver. This integration between the modeling system and the GIS application allows for the use of real-world geospatial data such as terrain, land-use, and domain size as input by the modeling system. Output generated by the modeling system, such as concentration and deposition plumes, can then be displayed upon accurate maps representing the training site. This paper discusses the development of this integration between the modeling system and the ArcGIS application.

  12. Physics-based statistical learning approach to mesoscopic model...

    Office of Scientific and Technical Information (OSTI)

  13. Empirical Evaluation of Four Microwave Radiative Forward Models Based on Ground-Based Radiometer Data Near 20 and 30 GHz

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    C. Cimini (Centre of Excellence on Atmospheric Modeling and Remote Sensing, University of L'Aquila, L'Aquila, Italy, and Science and Technology Corporation, Hampton, Virginia); E. R. Westwater (Cooperative Institute for Research in Environmental Sciences, University of Colorado, and National Oceanic and Atmospheric Administration Environmental Technology Laboratory, Boulder, Colorado); S. J. ...

  14. Prediction and analysis of infra and low-frequency noise of upwind horizontal axis wind turbine using statistical wind speed model

    SciTech Connect (OSTI)

    Lee, Gwang-Se; Cheong, Cheolung

    2014-12-15

    Despite increasing concern about low-frequency noise of modern large horizontal-axis wind turbines (HAWTs), few studies have focused on its origin or its prediction methods. In this paper, infra- and low-frequency (ILF) wind turbine noise is closely examined and an efficient method is developed for its prediction. Although most previous studies have assumed that the ILF noise consists primarily of blade passing frequency (BPF) noise components, these tonal noise components are seldom identified in the measured noise spectrum, except for the case of downwind wind turbines. In reality, since modern HAWTs are very large, during rotation, a single blade of the turbine experiences inflow with variation in wind speed in time as well as in space, breaking periodic perturbations of the BPF. Consequently, this transforms acoustic contributions at the BPF harmonics into broadband noise components. In this study, the ILF noise of wind turbines is predicted by combining Lowson's acoustic analogy with the stochastic wind model, which is employed to reproduce realistic wind speed conditions. In order to predict the effects of these wind conditions on pressure variation on the blade surface, unsteadiness in the incident wind speed is incorporated into the XFOIL code by varying incident flow velocities on each blade section, which depend on the azimuthal locations of the rotating blade. The calculated surface pressure distribution is subsequently used to predict acoustic pressure at an observing location by using Lowson's analogy. These predictions are compared with measured data, which ensures that the present method can reproduce the broadband characteristics of the measured low-frequency noise spectrum. Further investigations are carried out to characterize the ILF noise in terms of pressure loading on the blade surface, narrow-band noise spectrum and noise maps around the turbine.

  15. Collaborative Research: Towards Advanced Understanding and Predictive Capability of Climate Change in the Arctic Using a High-Resolution Regional Arctic Climate Model

    SciTech Connect (OSTI)

    Cassano, John

    2013-06-30

    The primary research task completed for this project was the development of the Regional Arctic Climate Model (RACM). This involved coupling existing atmosphere, ocean, sea ice, and land models using the National Center for Atmospheric Research (NCAR) Community Climate System Model (CCSM) coupler (CPL7). RACM is based on the Weather Research and Forecasting (WRF) atmospheric model, the Parallel Ocean Program (POP) ocean model, the CICE sea ice model, and the Variable Infiltration Capacity (VIC) land model. A secondary research task for this project was testing and evaluation of WRF for climate-scale simulations on the large pan-Arctic model domain used in RACM. This involved identification of a preferred set of model physical parameterizations for use in our coupled RACM simulations and documenting any atmospheric biases present in RACM.

  16. A nonlocal, ordinary, state-based plasticity model for peridynamics...

    Office of Scientific and Technical Information (OSTI)

    It is shown that the resulting constitutive model does not violate the 2nd law of ... Report Number(s): SAND2011-3166; DOE Contract Number: AC04-94AL85000.

  17. Approximate Bisimulation-Based Reduction of Power System Dynamic Models

    SciTech Connect (OSTI)

    Stankovic, AM; Dukic, SD; Saric, AT

    2015-05-01

    In this paper we propose approximate bisimulation relations and functions for reduction of power system dynamic models in differential-algebraic (descriptor) form. The full-size dynamic model is obtained by linearization of the nonlinear transient stability model. We generalize theoretical results on approximate bisimulation relations and bisimulation functions, originally derived for a class of constrained linear systems, to linear systems in descriptor form. An algorithm for transient stability assessment is proposed and used to determine whether the power system is able to maintain synchronism after a large disturbance. Two benchmark power systems are used to illustrate the proposed algorithm and to evaluate the applicability of approximate bisimulation relations and bisimulation functions for reduction of power system dynamic models.

  18. Identification and design of novel polymer-based mechanical transducers: A nano-structural model for thin film indentation

    SciTech Connect (OSTI)

    Villanueva, Joshua; Huang, Qian; Sirbuly, Donald J.

    2014-09-14

    Mechanical characterization is important for understanding small-scale systems and developing devices, particularly at the interface of biology, medicine, and nanotechnology. Yet, monitoring sub-surface forces is challenging with current technologies like atomic force microscopes (AFMs) or optical tweezers due to their probe sizes and sophisticated feedback mechanisms. An alternative transducer design relying on the indentation mechanics of a compressible thin polymer would be an ideal system for more compact and versatile probes, facilitating measurements in situ or in vivo. However, application-specific tuning of a polymer's mechanical properties can be burdensome via experimental optimization. Therefore, efficient transducer design requires a fundamental understanding of how synthetic parameters such as the molecular weight and grafting density influence the bulk material properties that determine the force response. In this work, we apply molecular-level polymer scaling laws to a first order elastic foundation model, relating the conformational state of individual polymer chains to the macroscopic compression of thin film systems. A parameter sweep analysis was conducted to observe predicted model trends under various system conditions and to understand how nano-structural elements influence the material stiffness. We validate the model by comparing predicted force profiles to experimental AFM curves for a real polymer system and show that it has reasonable predictive power for initial estimates of the force response, displaying excellent agreement with experimental force curves. We also present an analysis of the force sensitivity of an example transducer system to demonstrate identification of synthetic protocols based on desired mechanical properties. These results highlight the usefulness of this simple model as an aid for the design of a new class of compact and tunable nanomechanical force transducers.

  19. Model-Based Transient Calibration Optimization for Next Generation Diesel Engines

    Broader source: Energy.gov (indexed) [DOE]

  20. BRANCH-BASED MODEL FOR THE DIAMETERS OF THE PULMONARY AIRWAYS: ACCOUNTING FOR DEPARTURES FROM SELF-CONSISTENCY AND REGISTRATION ERRORS

    Office of Scientific and Technical Information (OSTI)

    We examine a previously published branch-based approach to modeling airway diameters ...

  1. Hour-by-Hour Cost Modeling of Optimized Central Wind-Based Water Electrolysis Production

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Download the presentation slides from the U.S. Department of Energy Fuel Cell Technologies Office webinar, "Wind-to-Hydrogen Cost Modeling and Project Findings," held on January 17, 2013.

  2. Image-based Stokes flow modeling in bulk proppant packs and propped fractures under high loading stresses

    Office of Scientific and Technical Information (OSTI)

  3. Application for managing model-based material properties for simulation-based engineering

    DOE Patents [OSTI]

    Hoffman, Edward L.

    2009-03-03

    An application for generating a property set associated with a constitutive model of a material includes a first program module adapted to receive test data associated with the material and to extract loading conditions from the test data. A material model driver is adapted to receive the loading conditions and a property set and operable in response to the loading conditions and the property set to generate a model response for the material. A numerical optimization module is adapted to receive the test data and the model response and operable in response to the test data and the model response to generate the property set.
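
    The driver-plus-optimizer pattern described in this patent abstract can be sketched compactly: a simple material-model "driver" produces a stress response from a candidate property set, and a numerical optimization step adjusts the properties until the response matches test data. The bilinear hardening model, synthetic data, and parameter values below are illustrative stand-ins, not the patented implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def model_driver(props, strain):
    """Toy material model driver: bilinear elastic-plastic stress response.
    props = [E (modulus), sy (yield stress), H (hardening modulus)], MPa units."""
    E, sy, H = props
    stress_elastic = E * strain
    stress_plastic = sy + H * (strain - sy / E)
    return np.where(stress_elastic <= sy, stress_elastic, stress_plastic)

# Synthetic "test data" generated from known properties plus measurement noise
strain = np.linspace(0.0, 0.05, 100)
true_props = [200e3, 250.0, 2e3]
rng = np.random.default_rng(3)
test_stress = model_driver(true_props, strain) + rng.normal(0.0, 2.0, strain.size)

# Numerical optimization module: fit the property set to the test data
fit = least_squares(lambda p: model_driver(p, strain) - test_stress,
                    x0=[150e3, 200.0, 1e3], bounds=(0.0, np.inf))
print("recovered property set [E, sy, H]:", np.round(fit.x, 1))
```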

  4. Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates

    SciTech Connect (OSTI)

    Sengupta, M.; Gotseff, P.

    2013-12-01

    This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve the general clear sky model performance.
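
    A small sketch of the kind of model-versus-measurement comparison the report describes is shown below: mean bias error and RMSE (absolute and relative) of modeled clear-sky irradiance against measured clear-sky data. The irradiance arrays are placeholders, and the Bird clear sky model itself is not implemented here.

```python
import numpy as np

def evaluation_metrics(modeled, measured):
    """Mean bias error and RMSE, plus their values relative to the mean measurement."""
    resid = modeled - measured
    mbe = resid.mean()
    rmse = np.sqrt((resid**2).mean())
    mean_meas = measured.mean()
    return {"MBE_W_m2": mbe, "RMSE_W_m2": rmse,
            "rMBE_pct": 100.0 * mbe / mean_meas, "rRMSE_pct": 100.0 * rmse / mean_meas}

# Placeholder series standing in for measured and modeled clear-sky GHI (W/m^2)
measured = np.array([310.0, 452.0, 587.0, 695.0, 760.0, 771.0, 728.0, 634.0])
modeled = np.array([318.0, 460.0, 596.0, 690.0, 772.0, 780.0, 735.0, 640.0])

for name, value in evaluation_metrics(modeled, measured).items():
    print(f"{name}: {value:.2f}")
```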

  5. Mathematical modeling of the lithium deposition overcharge reaction in lithium-ion batteries using carbon-based negative electrodes

    SciTech Connect (OSTI)

    Arora, P.; Doyle, M.; White, R.E.

    1999-10-01

    Two major issues facing lithium-ion battery technology are safety and capacity fade during cycling. A significant amount of work has been done to improve the cycle life and to reduce the safety problems associated with these cells. This includes newer and better electrode materials, lower-temperature shutdown separators, nonflammable or self-extinguishing electrolytes, and improved cell designs. The goal of this work is to predict the conditions for the lithium deposition overcharge reaction on the negative electrode (graphite and coke) and to investigate the effect of various operating conditions, cell designs and charging protocols on the lithium deposition side reaction. The processes that lead to capacity fading severely affect the cycle life and rate behavior of lithium-ion cells. One such process is the overcharge of the negative electrode causing lithium deposition, which can lead to capacity losses including a loss of active lithium and electrolyte and represents a potential safety hazard. A mathematical model is presented to predict lithium deposition on the negative electrode under a variety of operating conditions. The Li{sub x}C{sub 6} | 1 M LiPF{sub 6}, 2:1 ethylene carbonate/dimethyl carbonate, poly(vinylidene fluoride-hexafluoropropylene) | LiMn{sub 2}O{sub 4} cell is simulated to investigate the influence of lithium deposition on the charging behavior of intercalation electrodes. The model is used to study the effect of key design parameters (particle size, electrode thickness, and mass ratio) on the lithium deposition overcharge reaction. The model predictions are compared for coke and graphite-based negative electrodes. The cycling behavior of these cells is simulated before and after overcharge to understand how the hazards and capacity fade problems inherent in these cells can be minimized.

  6. Nuclear matrix elements for 0νβ{sup -}β{sup -} decays: Comparative analysis of the QRPA, shell model and IBM predictions

    SciTech Connect (OSTI)

    Civitarese, Osvaldo; Suhonen, Jouni

    2013-12-30

    In this work we report on general properties of the nuclear matrix elements involved in the neutrinoless double β{sup -} decays (0νβ{sup -}β{sup -} decays) of several nuclei. A summary of the values of the NMEs calculated over the years by the Jyväskylä-La Plata collaboration is presented. These NMEs, calculated in the framework of the quasiparticle random phase approximation (QRPA), are compared with those of the other available calculations, such as the interacting shell model (ISM) and the interacting boson model (IBA-2).

  7. Demonstrating and Validating a Next Generation Model-Based Controller for

    Broader source: Energy.gov (indexed) [DOE]

    Fuel Efficient, Low Emissions Diesel Engines | Department of Energy. Fully model-based, practically-mapless engine control concept is viable. PDF icon deer09_allain.pdf

  8. DREAM tool increases space weather predictions

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    DREAM tool increases space weather predictions. Model addresses radiation hazards of the space environment on space systems. April 13, ...

  9. Predictive Technology Development and Crash Energy Management...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Predictive Technology Development and Crash Energy Management Predictive Technology ... Merit Review 2015: Validation of Material Models for Crash Simulation of Automotive Carbon ...

  10. A knowledge based model of electric utility operations. Final report

    SciTech Connect (OSTI)

    1993-08-11

    This report consists of an appendix to provide a documentation and help capability for an analyst using the developed expert system of electric utility operations running in CLIPS. This capability is provided through a separate package running under the WINDOWS Operating System and keyed to provide displays of text, graphics and mixed text and graphics that explain and elaborate on the specific decisions being made within the knowledge based expert system.

  11. Systems Advisor Model | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Systems Advisor Model (SAM) makes performance predictions and cost of energy estimates for grid-connected power projects based on installation and operating costs and system design ...

  12. Predicting long-term carbon sequestration in response to CO2 enrichment: How and why do current ecosystem models differ?

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Walker, Anthony P.; Zaehle, Sönke; Medlyn, Belinda E.; De Kauwe, Martin G.; Asao, Shinichi; Hickler, Thomas; Parton, William; Ricciuto, Daniel M.; Wang, Ying-Ping; Wårlind, David; et al

    2015-01-01

    Large uncertainty exists in model projections of the land carbon (C) sink response to increasing atmospheric CO2. Free-Air CO2 Enrichment (FACE) experiments lasting a decade or more have investigated ecosystem responses to a step change in atmospheric CO2 concentration. To interpret FACE results in the context of gradual increases in atmospheric CO2 over decades to centuries, we used a suite of seven models to simulate the Duke and Oak Ridge FACE experiments extended for 300 years of CO2 enrichment. We also determine key modeling assumptions that drive divergent projections of terrestrial C uptake and evaluate whether these assumptions can be constrained by experimental evidence. All models simulated increased terrestrial C pools resulting from CO2 enrichment, though there was substantial variability in quasi-equilibrium C sequestration and rates of change. In two of two models that assume that plant nitrogen (N) uptake is solely a function of soil N supply, the net primary production response to elevated CO2 became progressively N limited. In four of five models that assume that N uptake is a function of both soil N supply and plant N demand, elevated CO2 led to reduced ecosystem N losses and thus progressively relaxed nitrogen limitation. Many allocation assumptions resulted in increased wood allocation relative to leaves and roots which reduced the vegetation turnover rate and increased C sequestration. Additionally, self-thinning assumptions had a substantial impact on C sequestration in two models. As a result, accurate representation of N process dynamics (in particular N uptake), allocation, and forest self-thinning is key to minimizing uncertainty in projections of future C sequestration in response to elevated atmospheric CO2.

  13. Workshop on Current Issues in Predictive Approaches to Intelligence and Security Analytics: Fostering the Creation of Decision Advantage through Model Integration and Evaluation

    SciTech Connect (OSTI)

    Sanfilippo, Antonio P.

    2010-05-23

    The increasing asymmetric nature of threats to the security, health and sustainable growth of our society requires that anticipatory reasoning become an everyday activity. Currently, the use of anticipatory reasoning is hindered by the lack of systematic methods for combining knowledge- and evidence-based models, integrating modeling algorithms, and assessing model validity, accuracy and utility. The workshop addresses these gaps with the intent of fostering the creation of a community of interest on model integration and evaluation that may serve as an aggregation point for existing efforts and a launch pad for new approaches.

  14. Feature Based Tolerancing Product Modeling V4.1

    Energy Science and Technology Software Center (OSTI)

    2001-11-30

    FBTol is a component technology in the form of a linkable software library. The purpose of FBTol is to augment the shape of a nominal solid model with an explicit representation of a product's tolerances and other non-shape attributes. This representation enforces a complete and unambiguous definition of non-shape information, permits an open architecture to dynamically create, modify, delete, and query tolerance information, and incorporates verification and checking algorithms to assure the quality of the tolerance design.

  15. Normalized Elution Time Prediction Utility

    Energy Science and Technology Software Center (OSTI)

    2011-02-17

    This program is used to compute the predicted normalized elution time (NET) for a list of peptide sequences. It includes the Kangas/Petritis neural network trained model, the Krokhin hydrophobicity model, and the Mant hydrophobicity model. In addition, it can compute the predicted strong cation exchange (SCX) fraction (on a 0 to 1 scale) in which a given peptide will appear.
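
    The utility described above wraps trained models (a neural network and two hydrophobicity models); as a rough stand-in, the sketch below maps a peptide sequence to a 0-1 score from its mean residue hydropathy using Kyte-Doolittle-style weights (values approximate). It only illustrates the input/output shape of such a predictor, not the actual trained models.

        # Crude normalized-elution-time surrogate: mean residue hydropathy
        # rescaled to a 0-1 scale (the real utility uses trained models).
        KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
              "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
              "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
              "Y": -1.3, "V": 4.2}

        def crude_net(peptide, lo=-4.5, hi=4.5):
            """Mean residue hydropathy rescaled to a 0-1 'elution time' scale."""
            mean_h = sum(KD[aa] for aa in peptide) / len(peptide)
            return (mean_h - lo) / (hi - lo)

        for pep in ["SAMPLER", "ELVISLIVESK", "PEPTIDE"]:
            print(pep, round(crude_net(pep), 3))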

  16. An improved multiscale model for dilute turbulent gas particle flows based

    Office of Scientific and Technical Information (OSTI)

    on the equilibration of energy concept (Thesis/Dissertation) | SciTech Connect. Many particle-laden flows in engineering applications involve turbulent gas flows. Modeling multiphase turbulent flows is an

  17. The potential use of Chernobyl fallout data to test and evaluate the predictions of environmental radiological assessment models

    SciTech Connect (OSTI)

    Richmond, C.R.; Hoffman, F.O.; Blaylock, B.G.; Eckerman, K.F.; Lesslie, P.A.; Miller, C.W.; Ng, Y.C.; Till, J.E.

    1988-06-01

    The objectives of the Model Validation Committee were to collaborate with US and foreign scientists to collect, manage, and evaluate data for identifying critical research issues and data needs to support an integrated assessment of the Chernobyl nuclear accident; test environmental transport, human dosimetric, and health effects models against measured data to determine their efficacy in guiding decisions on protective actions and in estimating exposures to populations and individuals following a nuclear accident; and apply Chernobyl data to quantifications of key processes governing the environmental transport, fate and effects of radionuclides and other trace substances. 55 refs.

  18. What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise

    SciTech Connect (OSTI)

    Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.; Fujimori, Shinichiro

    2012-12-01

    A common feature of model inter-comparison efforts is that the base year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.

  19. Reaction-based reactive transport modeling of Fe(III)

    SciTech Connect (OSTI)

    Kemner, K.M.; Kelly, S.D.; Burgos, Bill; Roden, Eric

    2006-06-01

    This research project (started Fall 2004) was funded by a grant to Argonne National Laboratory, The Pennsylvania State University, and The University of Alabama in the Integrative Studies Element of the NABIR Program (DE-FG04-ER63914/63915/63196). Dr. Eric Roden, formerly at The University of Alabama, is now at the University of Wisconsin, Madison. Our project focuses on the development of a mechanistic understanding and quantitative models of coupled Fe(III)/U(VI) reduction in FRC Area 2 sediments. This work builds on our previous studies of microbial Fe(III) and U(VI) reduction, and is directly aligned with the Scheibe et al. NABIR FRC Field Project at Area 2.

  20. SU-E-QI-13: Predictable Models for Radio-Sensitizing Agent Kinetics: Application to Stereotactic Synchrotron Radiation Therapy

    SciTech Connect (OSTI)

    Obeid, L; Schmitt, M; Esteve, F; Adam, J

    2014-06-15

    Purpose: Iodine-enhanced radiotherapy is an innovative treatment combining the selective accumulation of an iodinated contrast agent in brain tumors with irradiations using monochromatic medium energy x-rays. The radiation dose enhancement depends on the time course of iodine in the tumors. Prolonged CT scanning (~30 min) is required to follow up iodine kinetics for recruited patients. This protocol could lead to substantial radiation dose to the patient. A novel method is proposed to reduce the acquisition time. Methods: 12 patients received an intravenous bolus of iodinated contrast agent, followed by a steady-state infusion to ensure stable intra-tumoral amounts of iodine during the treatment. Absolute iodine concentrations (IC) were derived from 40 multi-slice dynamic conventional CT images of the brain. The impulse response function (IRF) to the bolus was estimated using the adiabatic approximation of the Johnson and Wilson model. The arterial input function (AIF) of the steady-state infusion was fitted with several models: Gamma, Gamma with recirculation, and hybrid. Estimated IC were calculated by convolving the IRF with the modeled AIF and were compared to the measured data. Results: The gamma variate function was not adequate for modeling the AIF because of large differences from the measured AIF. The hybrid and the gamma with recirculation models provided differences below 8% during the whole acquisition time. The absolute difference between the measured and the estimated IC was lower than 0.5 mg/ml, which corresponds to a 5% dose enhancement error. Conclusion: The proposed method allows a good estimation of the iodine time course with reduced scanning delays (3 instead of 30 min) and dose to the patient. The results suggest that the dose errors may stay within the radiotherapy standards.
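
    The estimation step described above is, at its core, a convolution of an impulse response function with a modeled arterial input function. The sketch below shows that operation; the gamma-variate AIF, the mono-exponential IRF, and every parameter value are illustrative assumptions, not the clinical models or data.

        # Estimated tissue iodine concentration as IRF convolved with a modeled AIF.
        import numpy as np

        t = np.linspace(0.0, 30 * 60, 1800)          # 30 min sampled about once per second
        dt = t[1] - t[0]

        def gamma_variate(t, t0=20.0, alpha=3.0, beta=12.0, scale=1.0):
            """Simple gamma-variate bolus shape used here to model the AIF."""
            tt = np.clip(t - t0, 0.0, None)
            return scale * tt ** alpha * np.exp(-tt / beta)

        aif = gamma_variate(t, scale=5e-4) + 0.2 * (t > 120.0)  # bolus plus steady infusion
        irf = np.exp(-t / 200.0)                                # assumed mono-exponential IRF

        estimated_concentration = np.convolve(aif, irf)[: t.size] * dt
        print("peak estimated concentration (arbitrary units):",
              round(float(estimated_concentration.max()), 2))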

  1. Relative potency based on hepatic enzyme induction predicts immunosuppressive effects of a mixture of PCDDS/PCDFS and PCBS

    SciTech Connect (OSTI)

    Smialowicz, R.J.; DeVito, M.J.; Williams, W.C.; Birnbaum, L.S.

    2008-03-15

    The toxic equivalency factor (TEF) approach was employed to compare the immunotoxic potency of mixtures containing polychlorinated dibenzo-p-dioxins, polychlorinated dibenzofurans and polychlorinated biphenyls relative to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), using the antibody response to sheep erythrocytes (SRBC). Mixture-1 (MIX-1) contained TCDD, 1,2,3,7,8-pentachlorodibenzo-p-dioxin (PeCDD), 2,3,7,8-tetrachlorodibenzofuran (TCDF), 1,2,3,7,8-pentachlorodibenzofuran (1-PeCDF), 2,3,4,7,8-pentachlorodibenzofuran (4-PeCDF), and 1,2,3,4,6,7,8,9-octachlorodibenzofuran (OCDF). Mixture-2 (MIX-2) contained MIX-1 and the following PCBs: 3,3',4,4'-tetrachlorobiphenyl (IUPAC No. 77), 3,3',4,4',5-pentachlorobiphenyl (126), 3,3',4,4',5,5'-hexachlorobiphenyl (169), 2,3,3',4,4'-pentachlorobiphenyl (105), 2,3',4,4',5-pentachlorobiphenyl (118), and 2,3,3',4,4',5-hexachlorobiphenyl (156). The mixture compositions were based on relative chemical concentrations in food and human tissues. TCDD equivalents (TEQ) of the mixture were estimated using relative potency factors from hepatic enzyme induction in mice [DeVito, M.J., Diliberto, J.J., Ross, D.G., Menache, M.G., Birnbaum, L.S., 1997. Dose-response relationships for polyhalogenated dioxins and dibenzofurans following subchronic treatment in mice. I. CYP1A1 and CYP1A2 enzyme activity in liver, lung and skin. Toxicol. Appl. Pharmacol. 130, 197-208; DeVito, M.J., Menache, G., Diliberto, J.J., Ross, D.G., Birnbaum, L.S., 2000. Dose-response relationships for induction of CYP1A1 and CYP1A2 enzyme activity in liver, lung, and skin in female mice following subchronic exposure to polychlorinated biphenyls. Toxicol. Appl. Pharmacol. 167, 157-172]. Female mice received 0, 1.5, 15, 150 or 450 ng TCDD/kg/day or approximately 0, 1.5, 15, 150 or 450 ng TEQ/kg/day of MIX-1 or MIX-2 by gavage 5 days per week for 13 weeks. Mice were immunized 3 days after the last exposure and 4 days later, body, spleen, thymus, and liver weights were measured, and antibody response to SRBCs was observed. Exposure to TCDD, MIX-1, and MIX-2 suppressed the antibody response in a dose-dependent manner. Two-way ANOVA indicated no differences in the response between TCDD and the mixtures for body weight, spleen/body weight and decreased antibody responses. The results support the use of the TEF methodology and suggest that immune suppression by dioxin-like chemicals may be of concern at or near background human exposures.
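
    The TEF/TEQ bookkeeping used above reduces to a potency-weighted sum, TEQ = sum over congeners of (concentration x relative potency). The sketch below shows only that arithmetic; the congener list, concentrations, and relative potencies are placeholders, not the enzyme-induction-derived values used in the study.

        # Toxic-equivalent (TEQ) arithmetic: potency-weighted sum of congener concentrations.
        mixture = {                # congener: (concentration ng/kg, relative potency)
            "TCDD":    (2.0, 1.0),
            "PeCDD":   (1.5, 1.0),
            "4-PeCDF": (3.0, 0.3),
            "PCB-126": (10.0, 0.1),
        }
        teq = sum(conc * rep for conc, rep in mixture.values())
        print(f"TEQ = {teq:.1f} ng TEQ/kg")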

  2. Magnetic BiMn-α phase synthesis prediction: First-principles calculation, thermodynamic modeling and nonequilibrium chemical partitioning

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zhou, S. H.; Liu, C.; Yao, Y. X.; Du, Y.; Zhang, L. J.; Wang, C. -Z.; Ho, K. -M.; Kramer, M. J.

    2016-04-29

    BiMn-α is a promising permanent magnet. Because of its peritectic formation, producing single-phase BiMn-α is a synthetic challenge. The objective of this study is to assess the driving force for crystalline phase pathways under far-from-equilibrium conditions. First-principles calculations with Hubbard U correction are performed to provide a robust description of the thermodynamic behavior. The energetics associated with various degrees of the chemical partitioning are quantified to predict the temperature, magnetic field, and time dependence of the phase selection. By assessing the phase transformation under the influence of the chemical partitioning, temperatures, and cooling rate from our calculations, we suggest that it is possible to synthesize the magnetic BiMn-α compound in a congruent manner by rapid solidification. The external magnetic field enhances the stability of the BiMn-α phase. In conclusion, the compositions of the initial compounds from these highly driven liquids can be far from equilibrium.

  3. Sensitization and IGSCC susceptibility prediction in stainless steel pipe weldments

    SciTech Connect (OSTI)

    Atteridge, D.G.; Simmons, J.W.; Li, Ming; Bruemmer, S.M.

    1991-11-01

    An analytical model, based on prediction of chromium depletion, has been developed for predicting thermomechanical effects on austenitic stainless steel intergranular stress corrosion cracking (IGSCC) susceptibility. Model development and validation are based on sensitization development analysis of over 30 Type 316 and 304 stainless steel heats. The database included analysis of deformation effects on resultant sensitization development. Continuous-cooling sensitization behavior is examined and modeled with and without strain. Gas tungsten arc (GTA) girth pipe weldments are also characterized by experimental measurements of heat affected zone (HAZ) temperatures, strains and sensitization during/after each pass; pass-by-pass thermal histories are also predicted. The model is then used to assess the effect of pipe chemistry changes on IGSCC resistance.

  4. APT Blanket System Model Based on Initial Conceptual Design - Integrated 1D TRAC System Model

    SciTech Connect (OSTI)

    Hamm, L.L.

    1998-10-07

    This report documents the approaches taken in establishing a 1-dimensional integrated blanket system model using the TRAC code, developed by Los Alamos National Laboratory.

  5. Phase formation sequences in the silicon-phosphorus system: determined by in-situ synchrotron and conventional x-ray diffraction measurements and predicted by a theoretical model.

    SciTech Connect (OSTI)

    Carlsson, J. R. A.; Clevenger, L.; Madsen, L. D.; Hultman, L.; Li, X.-H.; Jordan-Sweet, J.; Lavoie, C.; Roy, R. A.; Cabral, C., Jr.; Morales, G.; Ludwig, K. L.; Stephenson, G. B.; Hentzell, H. T. G.; Materials Science Division; Linkoeping Univ.; IBM T. J. Watson Research Center; Boston Univ.

    1997-01-01

    The phase formation sequences of Si-P alloy thin films with P concentrations between 20 and 44 at.% have been studied. The samples were annealed at progressively higher temperatures and the newly formed phases were identified both after each annealing step by ex-situ conventional X-ray diffraction (XRD) and continuously by in-situ synchrotron XRD. It was found that Si was the only phase to form in a sample with 20 at.% P since the evaporation of P at the crystallization temperature prevented phosphides from forming. For a sample with 30 at.% P, the Si{sub 12}P{sub 5} phase formed prior to the SiP phase. For samples with 35 and 44 at.% P, the formation of SiP preceded the formation of the Si{sub 12}P{sub 5} phase. The experimentally determined phase formation sequences were successfully predicted by a proposed model. According to the model, the first and second crystalline phases to form are those with the lowest and next-lowest crystallization temperatures of the competing compounds predicted by the Gibbs free-energy diagram.

  6. Near-edge band structures and band gaps of Cu-based semiconductors predicted by the modified Becke-Johnson potential plus an on-site Coulomb U

    SciTech Connect (OSTI)

    Zhang, Yubo; Zhang, Jiawei; Wang, Youwei; Gao, Weiwei; Abtew, Tesfaye A.; Zhang, Peihong E-mail: wqzhang@mail.sic.ac.cn; Beijing Computational Science Research Center, Beijing 100084; Zhang, Wenqing E-mail: wqzhang@mail.sic.ac.cn; School of Chemistry and Chemical Engineering and State Key Laboratory of Coordination Chemistry, Nanjing University, Jiangsu 210093

    2013-11-14

    Diamond-like Cu-based multinary semiconductors are a rich family of materials that hold promise in a wide range of applications. Unfortunately, accurate theoretical understanding of the electronic properties of these materials is hindered by the involvement of Cu d electrons. Density functional theory (DFT) based calculations using the local density approximation or generalized gradient approximation often give qualitatively wrong electronic properties of these materials, especially for narrow-gap systems. The modified Becke-Johnson (mBJ) method has been shown to be a promising alternative to more elaborate theories such as the GW approximation for fast materials screening and predictions. However, straightforward applications of the mBJ method to these materials still encounter significant difficulties because of the insufficient treatment of the localized d electrons. We show that combining the promise of the mBJ potential and the spirit of the well-established DFT + U method leads to a much improved description of the electronic structures, including the most challenging narrow-gap systems. A survey of the band gaps of about 20 Cu-based semiconductors calculated using the mBJ + U method shows that the results agree with reliable values to within ±0.2 eV.

  7. A novel multi-model neuro-fuzzy-based MPPT for three-phase grid-connected photovoltaic system

    SciTech Connect (OSTI)

    Chaouachi, Aymen; Kamel, Rashad M.; Nagasaka, Ken

    2010-12-15

    This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three multi-layered feed-forward Artificial Neural Networks (ANNs). Inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate ANN for either training or estimation, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single neural-network-based approach, is its distinct generalization ability with respect to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural-network-based multi-model machine learning approach that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations showed that the proposed MPPT method achieved the highest efficiency compared to a conventional single neural network and the Perturb and Observe (P&O) algorithm. (author)
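
    The multi-model idea described above, classify the operating condition first and then hand it to a condition-specific estimator, can be sketched as follows. A crisp classifier and affine per-class estimators stand in for the fuzzy classifier and the three ANNs; all thresholds and coefficients are invented for illustration.

        # Multi-model routing: classify (irradiance, temperature), then use the
        # matching local estimator to return a reference PV voltage.
        def classify(irradiance, temperature):
            """Crude stand-in for the fuzzy rule-based classifier."""
            if irradiance < 300.0:
                return "low"
            return "high_hot" if temperature > 45.0 else "high_cool"

        LOCAL_MODELS = {   # per-class affine estimators of the reference voltage (V)
            "low":       lambda g, t: 0.05 * g - 0.8 * t + 340.0,
            "high_cool": lambda g, t: 0.02 * g - 1.0 * t + 390.0,
            "high_hot":  lambda g, t: 0.02 * g - 1.2 * t + 395.0,
        }

        def reference_voltage(irradiance, temperature):
            return LOCAL_MODELS[classify(irradiance, temperature)](irradiance, temperature)

        print(reference_voltage(850.0, 25.0))   # bright, cool conditions
        print(reference_voltage(200.0, 35.0))   # overcast, warm conditions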

  8. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    SciTech Connect (OSTI)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-04-19

    Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem. Although the results of such studies will generally not be applicable to other broad classes of problems, we believe that this approach (if applied over time to many types of problems) offers greater potential for long-term progress than attempts to discover a universal solution or theory. We are developing and testing this approach using porous media and model reaction systems that can be both experimentally measured and quantitatively simulated at the pore scale, specifically biofilm development and metal reduction in granular porous media. The general approach we are using in this research follows the following steps: (1) Perform pore-scale characterization of pore geometry and biofilm development in selected porous media systems. (2) Simulate selected reactive transport processes at the pore scale in experimentally measured pore geometries. (3) Validate pore-scale models against laboratory-scale experiments. (4) Perform upscaling to derive continuum-scale (local darcy scale) process descriptions and effective parameters. (5) Use upscaled models and parameters to simulate reactive transport at the continuum scale in a macroscopically heterogeneous medium.

  9. Solution-based thermodynamic modeling of the Ni-Al-Mo system using

    Office of Scientific and Technical Information (OSTI)

    first-principles calculations (Journal Article) | SciTech Connect. A solution-based thermodynamic description of the ternary Ni-Al-Mo system is developed here, incorporating first-principles calculations and reported modeling of the binary Ni-Al, Ni-Mo and

  10. Web-based DOE/ORNL Heat Pump Design Model Released | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Web-based DOE/ORNL Heat Pump Design Model Released. May 2, 2016 - 12:28pm. The new Heat Pump Design Model web interface, developed by Oak Ridge National Laboratory, expands the tool's online accessibility. Credit: Oak Ridge National Laboratory. This article

  11. A Physically Based Framework for Modelling the Organic Fractionation of Sea Spray Aerosol from Bubble Film Langmuir Equilibria

    SciTech Connect (OSTI)

    Burrows, Susannah M.; Ogunro, O.; Frossard, Amanda; Russell, Lynn M.; Rasch, Philip J.; Elliott, S.

    2014-12-19

    The presence of a large fraction of organic matter in primary sea spray aerosol (SSA) can strongly affect its cloud condensation nuclei activity and interactions with marine clouds. Global climate models require new parameterizations of the SSA composition in order to improve the representation of these processes. Existing proposals for such a parameterization use remotely-sensed chlorophyll-a concentrations as a proxy for the biogenic contribution to the aerosol. However, both observations and theoretical considerations suggest that existing relationships with chlorophyll-a, derived from observations at only a few locations, may not be representative for all ocean regions. We introduce a novel framework for parameterizing the fractionation of marine organic matter into SSA based on a competitive Langmuir adsorption equilibrium at bubble surfaces. Marine organic matter is partitioned into classes with differing molecular weights, surface excesses, and Langmuir adsorption parameters. The classes include a lipid-like mixture associated with labile dissolved organic carbon (DOC), a polysaccharide-like mixture associated primarily with semi-labile DOC, a protein-like mixture with concentrations intermediate between lipids and polysaccharides, a processed mixture associated with recalcitrant surface DOC, and a deep abyssal humic-like mixture. Box model calculations have been performed for several cases of organic adsorption to illustrate the underlying concepts. We then apply the framework to output from a global marine biogeochemistry model, by partitioning total dissolved organic carbon into several classes of macromolecule. Each class is represented by model compounds with physical and chemical properties based on existing laboratory data. This allows us to globally map the predicted organic mass fraction of the nascent submicron sea spray aerosol. Predicted relationships between chlorophyll-a and organic fraction are similar to existing empirical parameterizations, but can vary between biologically productive and non-productive regions, and seasonally within a given region. Major uncertainties include the bubble film thickness at bursting and the variability of organic surfactant activity in the ocean, which is poorly constrained. In addition, marine colloids and cooperative adsorption of polysaccharides may make important contributions to the aerosol, but are not included here. This organic fractionation framework is an initial step towards a closer linking of ocean biogeochemistry and aerosol chemical composition in Earth system models. Future work should focus on improving constraints on model parameters through new laboratory experiments or through empirical fitting to observed relationships in the real ocean and atmosphere, as well as on atmospheric implications of the variable composition of organic matter in sea spray.
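
    The core of the framework described above is a competitive Langmuir equilibrium: the fractional coverage of organic class i on the bubble film is theta_i = K_i*C_i / (1 + sum_j K_j*C_j). The sketch below evaluates that expression for the five classes named in the abstract; the K values and concentrations are illustrative, not the published parameters.

        # Competitive Langmuir adsorption: fractional bubble-film coverage per organic class.
        classes = {        # class: (Langmuir coefficient K [m^3/mol], bulk concentration C [mol/m^3])
            "lipid-like":          (50.0, 0.002),
            "polysaccharide-like": (5.0, 0.050),
            "protein-like":        (15.0, 0.010),
            "processed":           (1.0, 0.200),
            "humic-like":          (0.5, 0.100),
        }
        denominator = 1.0 + sum(K * C for K, C in classes.values())
        coverage = {name: K * C / denominator for name, (K, C) in classes.items()}
        for name, theta in coverage.items():
            print(f"{name:22s} theta = {theta:.3f}")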

  12. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    SciTech Connect (OSTI)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-11-15

    Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models without counting the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models counting the beam polychromaticity show great potential for giving accurate fraction images.Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking negative-log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed.Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also necessary to have the accurate spectrum information about the source-detector system. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, less than 5% separation errors are observed on the estimated decomposition fractions.Conclusions: The proposed approach is a statistical reconstruction approach based on a nonlinear forward model counting the full beam polychromaticity and applied directly to the projections without taking negative-log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
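
    Stripped to a single ray, the nonlinear polychromatic forward model underlying the approach above writes each measurement as a spectrum-weighted sum of exponentials in the water and bone path lengths; with two spectra, the two unknowns can be recovered by nonlinear least squares. The sketch below does that as a simplified stand-in for the full MAP reconstruction; the spectra and attenuation coefficients are made-up illustrative values.

        # Two-material decomposition for one ray with a polychromatic forward model.
        import numpy as np
        from scipy.optimize import least_squares

        energies = np.array([40.0, 60.0, 80.0, 100.0])        # keV bins
        mu_water = np.array([0.27, 0.21, 0.18, 0.17])         # 1/cm, illustrative
        mu_bone = np.array([1.30, 0.60, 0.43, 0.36])          # 1/cm, illustrative
        spectra = {
            "low_kVp":  np.array([0.5, 0.4, 0.1, 0.0]),       # normalized source spectra
            "high_kVp": np.array([0.1, 0.3, 0.4, 0.2]),
        }

        def forward(path):
            """Detected signal for each spectrum given (water cm, bone cm) path lengths."""
            a_w, a_b = path
            return np.array([np.sum(s * np.exp(-(mu_water * a_w + mu_bone * a_b)))
                             for s in spectra.values()])

        measured = forward((8.0, 1.5))                        # synthetic measurements
        fit = least_squares(lambda p: forward(p) - measured, x0=[5.0, 0.5],
                            bounds=([0.0, 0.0], [np.inf, np.inf]))
        print("recovered (water, bone) path lengths in cm:", fit.x)

    With only one spectrum the two-material problem is underdetermined, which is the basic reason dual-energy measurements are needed before any beam-hardening handling or Bayesian regularization enters.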

  13. Physics based model for online fault detection in autonomous cryogenic loading system

    SciTech Connect (OSTI)

    Kashani, Ali; Ponizhovskaya, Ekaterina; Luchinsky, Dmitry; Smelyanskiy, Vadim; Patterson-Hine, Anna; Sass, Jared; Brown, Barbara

    2014-01-29

    We report the progress in the development of the chilldown model for a rapid cryogenic loading system developed at NASA-Kennedy Space Center. The nontrivial characteristic feature of the analyzed chilldown regime is its active control by dump valves. The two-phase flow model of the chilldown is approximated as one-dimensional homogeneous fluid flow with no slip condition for the interphase velocity. The model is built using commercial SINDA/FLUINT software. The results of numerical predictions are in good agreement with the experimental time traces. The obtained results pave the way to the application of the SINDA/FLUINT model as a verification tool for the design and algorithm development required for autonomous loading operation.

  14. Steady state and dynamic modeling of a packed bed reactor for the partial oxidation of methanol to formaldehyde: experimental results compared with model predictions

    SciTech Connect (OSTI)

    Schwedock, M.J.; Windes, L.C.; Ray, W.H.

    1985-01-01

    Heterogeneous and pseudohomogeneous models are compared to experimental data from a packed bed reactor for the partial oxidation of methanol to formaldehyde over an iron oxide-molybdenum oxide catalyst. Heat transfer parameters which were successful in matching data from experiments without reaction were not successful in matching temperature data from experiments with reaction. This made it necessary to decrease the fluid radial heat transfer to obtain a good fit. A good fit was obtained for steady state composition profiles by optimizing selected frequency factors and the activation energy for methanol. A redox rate expression for the oxidation of formaldehyde to carbon monoxide was proposed since a simple first-order rate expression did not fit the data. The pseudohomogeneous model gave results similar to the heterogeneous model for both steady state and dynamic experiments and has been recommended for future experimental state estimation and control studies. 21 refs., 31 figs., 6 tabs.

  15. A Subbasin-based framework to represent land surface processes in an Earth System Model

    SciTech Connect (OSTI)

    Tesfa, Teklu K.; Li, Hongyi; Leung, Lai-Yung R.; Huang, Maoyi; Ke, Yinghai; Sun, Yu; Liu, Ying

    2014-05-20

    Realistically representing spatial heterogeneity and lateral land surface processes within and between modeling units in earth system models is important because of their implications for surface energy and water exchange. The traditional approach of using regular grids as computational units in land surface models and earth system models may lead to inadequate representation of lateral movements of water, energy and carbon fluxes, especially when the grid resolution increases. Here a new subbasin-based framework is introduced in the Community Land Model (CLM), which is the land component of the Community Earth System Model (CESM). Local processes are represented by treating each subbasin as a grid cell on a pseudo grid matrix with no significant modifications to the existing CLM modeling structure. Lateral routing of water within and between subbasins is simulated with the subbasin version of a recently-developed physically based routing model, the Model for Scale Adaptive River Routing (MOSART). As an illustration, this new framework is implemented in the topographically diverse region of the U.S. Pacific Northwest. The modeling units (subbasins) are delineated from a high-resolution Digital Elevation Model (DEM) while atmospheric forcing and surface parameters are remapped from the corresponding high resolution datasets. The impacts of this representation on simulating hydrologic processes are explored by comparing it with the default (grid-based) CLM representation. In addition, the effects of DEM resolution on parameterizing topography and the subsequent effects on runoff processes are investigated. Limited model evaluation and comparison showed that small differences between the averaged forcings can lead to more significant differences in the simulated runoff and streamflow because of nonlinear horizontal processes. Topographic indices derived from high resolution DEM may not improve the overall water balance, but affect the partitioning between surface and subsurface runoff. More systematic analyses are needed to determine the relative merits of the subbasin representation compared to the commonly used grid-based representation, especially when land surface models are approaching higher resolutions.

  16. Estimation of net primary productivity using a process-based model in Gansu

    Office of Scientific and Technical Information (OSTI)

    Province, Northwest China (Journal Article) | SciTech Connect. The ecological structure in the arid and semi-arid region of Northwest China with forest, grassland, agriculture, Gobi, and desert, is complex, vulnerable, and unstable.

  17. Atomistic simulation of laser-pulse surface modification: Predictions of models with various length and time scales

    SciTech Connect (OSTI)

    Starikov, Sergey V.; Pisarev, Vasily V.

    2015-04-07

    In this work, the femtosecond laser pulse modification of surface is studied for aluminium (Al) and gold (Au) by use of two-temperature atomistic simulation. The results are obtained for various atomistic models with different scales: from pseudo-one-dimensional to full-scale three-dimensional simulation. The surface modification after laser irradiation can be caused by ablation and melting. For low energy laser pulses, the nanoscale ripples may be induced on a surface by melting without laser ablation. In this case, nanoscale changes of the surface are due to a splash of molten metal under temperature gradient. Laser ablation occurs at a higher pulse energy when a crater is formed on the surface. There are essential differences between Al ablation and Au ablation. In the first step of shock-wave induced ablation, swelling and void formation occur for both metals. However, the simulation of ablation in gold shows an additional athermal type of ablation that is associated with electron pressure relaxation. This type of ablation takes place at the surface layer, at a depth of several nanometers, and does not induce swelling.
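
    The two-temperature treatment referred to above couples an electron temperature and a lattice temperature through an electron-phonon coupling constant. A zero-dimensional sketch of that coupling is given below; all material parameters and the pulse shape are illustrative and are not the Al or Au values used in the study.

        # Zero-dimensional two-temperature model: electrons absorb the pulse,
        # then exchange energy with the lattice through a coupling constant G.
        import numpy as np

        Ce, Cl, G = 2.0e4, 2.5e6, 3.0e17        # J/(m^3 K), J/(m^3 K), W/(m^3 K), illustrative
        dt, steps = 1e-15, 20000                 # 1 fs step, 20 ps total
        Te = Tl = 300.0                          # K

        def pulse(t, peak=5.0e20, t0=1.0e-12, width=100e-15):
            """Gaussian absorbed power density (W/m^3), illustrative values."""
            return peak * np.exp(-((t - t0) / width) ** 2)

        for n in range(steps):
            t = n * dt
            dTe = (pulse(t) - G * (Te - Tl)) / Ce * dt
            dTl = (G * (Te - Tl)) / Cl * dt
            Te, Tl = Te + dTe, Tl + dTl

        print(f"electron T = {Te:.0f} K, lattice T = {Tl:.0f} K after 20 ps")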

  18. Development and Integration of Genome-Enabled Techniques to Track and Predict the Cycling of Carbon in Model Microbial Communities

    SciTech Connect (OSTI)

    Banfield, Jillian

    2014-11-26

    The primary objective of this project was to establish widely applicable, high-throughput “omics” methods for tracking carbon flow in microbial communities at a strain-resolved molecular level. We developed and applied these methods to study a well-established microbial community model system with a long history of “omics” innovation: chemoautotrophic biofilms grown in an acid mine drainage (AMD) environment. The methods are now being transitioned (in a new project) to study soil. Using metagenomics, stable-isotope proteomics, stable-isotope metabolomics, transcriptomics, and microscopy, we tracked carbon flow during initial biofilm growth involving CO2 fixation, through the maturing biofilm community consisting of multiple trophic levels, and during an anaerobic degradative phase after biofilms sink. This work included explicit consideration of the often overlooked roles of archaea and microbial eukaryotes (fungi) in carbon turnover. We also analyzed where the ecosystem begins to fail in response to thermal perturbation, and how perturbation propagates through the carbon cycle. We investigated the form of strain variation in microbial communities, the importance of strain variants, and the rate and form of strain evolution. Overall, the project generated an array of new, integrated ‘omics’ approaches and provided unprecedented insight into the functioning of a natural ecosystem. This project supported graduate training for five Ph.D. students and three postdoctoral fellows and contributed directly to at least 26 publications (two in Science).

  19. Chemical Kinetic Modeling of Non-Petroleum Based Fuels | Department of

    Broader source: Energy.gov (indexed) [DOE]

    Energy 2 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting. PDF icon ft010_pitz_2012_o.pdf

  20. Chemical Kinetic Modeling of Non-Petroleum Based Fuels | Department of

    Broader source: Energy.gov (indexed) [DOE]

    Energy 1 DOE Hydrogen and Fuel Cells Program, and Vehicle Technologies Program Annual Merit Review and Peer Evaluation. PDF icon ft010_pitz_2011_o.pdf

  1. A Smoothed Particle Hydrodynamics-Based Fluid Model With a Spatially

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Dependent Viscosity | Argonne Leadership Computing Facility. Authors: Martys, N.S., George, W.L., Chun, B., Lootens, D. A smoothed particle hydrodynamics approach is utilized to model a non-Newtonian fluid with a spatially varying viscosity. In the limit of constant viscosity, this approach recovers an earlier model for Newtonian fluids of Espa Publication Date: September, 2010

  2. Laser shock peening on Zr-based bulk metallic glass and its effect on plasticity: Experiment and modeling

    SciTech Connect (OSTI)

    Cao, Yunfeng; Xie, Xie; Antonaglia, James; Winiarski, Bartlomiej; Wang, Gongyao; Shin, Yung C.; Withers, Philip J.; Dahmen, Karin A.; Liaw, Peter K.

    2015-05-20

    The Zr-based bulk metallic glasses (BMGs) are a new family of attractive materials with good glass-forming ability and excellent mechanical properties, such as high strength and excellent wear resistance, which make them candidates for structural and biomedical materials. Although the mechanical behavior of BMGs has been widely investigated, their deformation mechanisms are still poorly understood. In particular, their poor ductility significantly impedes their industrial application. In the present work, we show that the ductility of Zr-based BMGs with nearly zero plasticity is improved by a laser shock peening technique. Moreover, we map the distribution of laser-induced residual stresses via the micro-slot cutting method, and then predict them using a three dimensional finite-element method coupled with a confined plasma model. Reasonable agreement is achieved between the experimental and modeling results. The analysis of serrated flow reveals plentiful and useful information of the underlying deformation process. As a result, our work provides an easy and effective way to extend the ductility of intrinsically-brittle BMGs, opening up wider applications of these materials.

  3. Laser shock peening on Zr-based bulk metallic glass and its effect on plasticity: Experiment and modeling

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Cao, Yunfeng; Xie, Xie; Antonaglia, James; Winiarski, Bartlomiej; Wang, Gongyao; Shin, Yung C.; Withers, Philip J.; Dahmen, Karin A.; Liaw, Peter K.

    2015-05-20

    The Zr-based bulk metallic glasses (BMGs) are a new family of attractive materials with good glass-forming ability and excellent mechanical properties, such as high strength and excellent wear resistance, which make them candidates for structural and biomedical materials. Although the mechanical behavior of BMGs has been widely investigated, their deformation mechanisms are still poorly understood. In particular, their poor ductility significantly impedes their industrial application. In the present work, we show that the ductility of Zr-based BMGs with nearly zero plasticity is improved by a laser shock peening technique. Moreover, we map the distribution of laser-induced residual stresses via the micro-slot cutting method, and then predict them using a three dimensional finite-element method coupled with a confined plasma model. Reasonable agreement is achieved between the experimental and modeling results. The analysis of serrated flow reveals plentiful and useful information of the underlying deformation process. As a result, our work provides an easy and effective way to extend the ductility of intrinsically-brittle BMGs, opening up wider applications of these materials.

  4. Depositional sequence analysis and sedimentologic modeling for improved prediction of Pennsylvanian reservoirs (Annex 1). Annual report, February 1, 1991--January 31, 1992

    SciTech Connect (OSTI)

    Watney, W.L.

    1992-08-01

    Interdisciplinary studies of the Upper Pennsylvanian Lansing and Kansas City groups have been undertaken in order to improve the geologic characterization of petroleum reservoirs and to develop a quantitative understanding of the processes responsible for formation of associated depositional sequences. To this end, concepts and methods of sequence stratigraphy are being used to define and interpret the three-dimensional depositional framework of the Kansas City Group. The investigation includes characterization of reservoir rocks in oil fields in western Kansas, description of analog equivalents in near-surface and surface sites in southeastern Kansas, and construction of regional structural and stratigraphic framework to link the site specific studies. Geologic inverse and simulation models are being developed to integrate quantitative estimates of controls on sedimentation to produce reconstructions of reservoir-bearing strata in an attempt to enhance our ability to predict reservoir characteristics.

  5. A method for modeling oxygen diffusion in an agent-based model with application to host-pathogen infection

    SciTech Connect (OSTI)

    Plimpton, Steven J.; Sershen, Cheryl L.; May, Elebeoba E.

    2015-01-01

    This paper describes a method for incorporating a diffusion field modeling oxygen usage and dispersion in a multi-scale model of Mycobacterium tuberculosis (Mtb) infection mediated granuloma formation. We implemented this method over a floating-point field to model oxygen dynamics in host tissue during chronic phase response and Mtb persistence. The method avoids the requirement of satisfying the Courant-Friedrichs-Lewy (CFL) condition, which is necessary in implementing the explicit version of the finite-difference method, but imposes an impractical bound on the time step. Instead, diffusion is modeled by a matrix-based, steady state approximate solution to the diffusion equation. Moreover, presented in figure 1 is the evolution of the diffusion profiles of a containment granuloma over time.
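
    The matrix-based, steady-state treatment described above can be illustrated with a one-dimensional analogue: instead of explicit time stepping (which the CFL condition would constrain), the diffusion-consumption balance is discretized and solved directly as a linear system. The grid, diffusivity, consumption rate, and boundary values below are all illustrative.

        # Steady-state 1-D oxygen diffusion with uniform consumption, solved as a linear system.
        import numpy as np

        n, L = 50, 1.0e-3                  # grid points, domain length (m)
        D, q = 2.0e-9, 1.0e-3              # diffusivity (m^2/s), consumption rate (mol/m^3/s)
        dx = L / (n - 1)

        A = np.zeros((n, n))
        b = np.full(n, q * dx ** 2 / D)    # discrete form of D*u'' = q
        for i in range(1, n - 1):
            A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
        A[0, 0] = A[-1, -1] = 1.0          # Dirichlet boundaries
        b[0] = b[-1] = 0.2                 # boundary oxygen concentration (mol/m^3)

        oxygen = np.linalg.solve(A, b)
        print("minimum oxygen concentration (mol/m^3):", round(float(oxygen.min()), 4))

    Replacing the explicit time loop with a single linear solve is what removes the CFL-type restriction on the time step mentioned in the abstract.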

  6. A method for modeling oxygen diffusion in an agent-based model with application to host-pathogen infection

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Plimpton, Steven J.; Sershen, Cheryl L.; May, Elebeoba E.

    2015-01-01

    This paper describes a method for incorporating a diffusion field modeling oxygen usage and dispersion in a multi-scale model of Mycobacterium tuberculosis (Mtb) infection mediated granuloma formation. We implemented this method over a floating-point field to model oxygen dynamics in host tissue during chronic phase response and Mtb persistence. The method avoids the requirement of satisfying the Courant-Friedrichs-Lewy (CFL) condition, which is necessary in implementing the explicit version of the finite-difference method, but imposes an impractical bound on the time step. Instead, diffusion is modeled by a matrix-based, steady state approximate solution to the diffusion equation. Moreover, presented in figure 1 is the evolution of the diffusion profiles of a containment granuloma over time.

  7. Modeling Long-term Creep Performance for Welded Nickel-base Superalloy Structures for Power Generation Systems

    SciTech Connect (OSTI)

    Shen, Chen

    2015-01-01

    We report here a constitutive model for predicting long-term creep strain evolution in γ′-strengthened Ni-base superalloys. Dislocation climb-bypassing of γ′, typical in intermediate γ′ volume fraction (~20%) alloys, is considered the primary deformation mechanism. Dislocation shearing of γ′ to form anti-phase boundary (APB) faults and diffusional creep are also considered, for high-stress and high-temperature low-stress conditions, respectively. An additional damage mechanism is taken into account for the rapid increase in tertiary creep strain. The model has been applied to Alloy 282 and calibrated over a temperature range of 1375-1450˚F and a stress range of 15-45 ksi. The model parameters and a MATLAB code are provided. This report is prepared by Monica Soare and Chen Shen at GE Global Research. Technical discussions with Dr. Vito Cedro are greatly appreciated. This work was supported by DOE program DE-FE0005859.

  8. TH-E-BRF-03: A Multivariate Interaction Model for Assessment of Hippocampal Vascular Dose-Response and Early Prediction of Radiation-Induced Neurocognitive Dysfunction

    SciTech Connect (OSTI)

    Farjam, R; Pramanik, P; Srinivasan, A; Chapman, C; Tsien, C; Lawrence, T; Cao, Y

    2014-06-15

    Purpose: Vascular injury could be a cause of hippocampal dysfunction leading to late neurocognitive decline in patients receiving brain radiotherapy (RT). Hence, our aim was to develop a multivariate interaction model for characterization of hippocampal vascular dose-response and early prediction of radiation-induced late neurocognitive impairments. Methods: 27 patients (17 males and 10 females, age 31-80 years) were enrolled in an IRB-approved prospective longitudinal study. All patients were diagnosed with a low-grade glioma or benign tumor and treated by 3-D conformal or intensity-modulated RT with a median dose of 54 Gy (50.4-59.4 Gy in 1.8-Gy fractions). Six DCE-MRI scans were performed from pre-RT to 18 months post-RT. DCE data were fitted to the modified Toft model to obtain the transfer constant of gadolinium influx from the intravascular space into the extravascular extracellular space, Ktrans, and the fraction of blood plasma volume, Vp. The hippocampus vascular property alterations after starting RT were characterized by changes in the hippocampal mean values μh(Ktrans)τ and μh(Vp)τ. The dose-response, Δμh(Ktrans/Vp)pre->τ, was modeled using a multivariate linear regression considering interactions of dose with age, sex, hippocampal laterality and presence of tumor/edema near a hippocampus. Finally, the early vascular dose-response in the hippocampus was correlated with neurocognitive decline 6 and 18 months post-RT. Results: The μh(Ktrans) increased significantly from pre-RT to 1 month post-RT (p<0.0004). The multivariate model showed that the dose effect on Δμh(Ktrans)pre->1M post-RT interacted with sex (p<0.0007) and age (p<0.00004), with the dose-response more pronounced in older females. Also, the vascular dose-response in the left hippocampus of females was significantly correlated with memory function decline at 6 (r = −0.95, p<0.0006) and 18 (r = −0.88, p<0.02) months post-RT. Conclusion: The hippocampal vascular response to radiation could be sex and age dependent. The early hippocampal vascular dose-response could predict late neurocognitive dysfunction. (Support: NIH-RO1NS064973)
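
    For context on the quantities Ktrans and Vp used above, the extended Tofts formulation is commonly written as Ct(t) = Vp*Cp(t) + Ktrans * integral of Cp(tau)*exp(-(Ktrans/ve)*(t-tau)) dtau: a plasma-volume term plus a leakage term convolved with exponential washout. The sketch below evaluates it numerically; the arterial input function and parameter values are illustrative, not patient data.

        # Extended Tofts model evaluated by discrete convolution.
        import numpy as np

        t = np.linspace(0.0, 300.0, 301)                     # seconds
        dt = t[1] - t[0]
        Cp = 5.0 * (t / 60.0) * np.exp(-t / 60.0)            # toy plasma input function (mM)

        def tofts(Ktrans, ve, Vp):
            """Tissue concentration from plasma-volume and Ktrans-weighted leakage terms."""
            kernel = np.exp(-(Ktrans / ve) * t)
            leakage = Ktrans * np.convolve(Cp, kernel)[: t.size] * dt
            return Vp * Cp + leakage

        Ct = tofts(Ktrans=0.05 / 60.0, ve=0.2, Vp=0.03)      # Ktrans given here in 1/s
        print("peak tissue concentration (mM):", round(float(Ct.max()), 3))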

  9. The Coastal Ocean Prediction Systems program: Understanding and managing our coastal ocean

    SciTech Connect (OSTI)

    Eden, H.F.; Mooers, C.N.K.

    1990-06-01

    The goal of COPS is to couple a program of regular observations to numerical models, through techniques of data assimilation, in order to provide a predictive capability for the US coastal ocean including the Great Lakes, estuaries, and the entire Exclusive Economic Zone (EEZ). The objectives of the program include: determining the predictability of the coastal ocean and the processes that govern the predictability; developing efficient prediction systems for the coastal ocean based on the assimilation of real-time observations into numerical models; and coupling the predictive systems for the physical behavior of the coastal ocean to predictive systems for biological, chemical, and geological processes to achieve an interdisciplinary capability. COPS will provide the basis for effective monitoring and prediction of coastal ocean conditions by optimizing the use of increased scientific understanding, improved observations, advanced computer models, and computer graphics to make the best possible estimates of sea level, currents, temperatures, salinities, and other properties of entire coastal regions.

  10. The CASL vision is to confidently predict

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The CASL vision is to confidently predict the performance of commercial nuclear power reactors through comprehensive, science-based modeling and simulation technology. To achieve this vision, CASL is assembling, assessing and coupling a variety of physics codes, each with a distinct purpose and functionality. This higher-fidelity coupled physics code capability is intended to have broad, versatile functionality with multiple modules simulating issues such as grid-to-rod-fretting and CRUD

  11. Modeling the Integrated Performance of Dispersion and Monolithic U-Mo Based Fuels

    SciTech Connect (OSTI)

    Daniel M. Wachs; Douglas E. Burkes; Steven L. Hayes; Karen Moore; Greg Miller; Gerard Hofman; Yeon Soo Kim

    2006-10-01

    The evaluation and prediction of integrated fuel performance is a critical component of the Reduced Enrichment for Research and Test Reactors (RERTR) program. The PLATE code is the primary tool being developed and used to perform these functions. The code is being modified to incorporate the most recent fuel/matrix interaction correlations as they become available for both aluminum and aluminum/silicon matrices. The code is also being adapted to treat cylindrical and square pin geometries to enhance the validation database by including the results gathered from various international partners. Additional modeling work has been initiated to evaluate the thermal and mechanical performance requirements unique to monolithic fuels during irradiation.

  12. Modeling

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Engine Combustion/Modeling. Modelers at the CRF are developing high-fidelity simulation tools for engine combustion and detailed micro-kinetic, surface chemistry modeling tools for catalyst-based exhaust aftertreatment systems. The engine combustion modeling is focused on developing Large Eddy Simulation (LES). LES is being used with closely coupled key target experiments to reveal new understanding of the fundamental processes involved in engine combustion

  13. Predictive Technology Development and Crash Energy Management

    Broader source: Energy.gov (indexed) [DOE]

    ... projects titled: * Multiscale Modeling for Crash Prediction of Composite Structures * Modeling of The Manufacturing Process Induced Effects on Matrix Properties of Textile ...

  14. Methane modeling: predicting the inflow of methane gas into coal mines. Quarterly technical progress report, April 1, 1982-June 30, 1982

    SciTech Connect (OSTI)

    Boyer, C.M. II; Hoysan, P.M.; Pavone, A.M.; Richmond, O.; Schwerer, F.C.; Smelser, R.E.

    1982-01-01

    Work on Phase I of the Contract program is essentially complete and was reported in the Phase I Technical Report which has been reviewed and accepted by the Contract Technical Project Officer. Phase I work included a survey of relevant technical literature and development, demonstration and documentation of a computer model, MINE1D, for flow of methane and water in coal strata for geometries corresponding to an advancing mine face and to a mine pillar. The Phase I models are one-dimensional in the space variable but describe time-dependent (nonsteady) phenomena and include gas sorption phenomena. Some revisions have been made to input/output sections of MINE1D and the documentation has been expanded. These modifications will be reported in the next Quarterly Technical Report. Preliminary test scenarios have been formulated and reviewed with the Contract Technical Project Officer for measurement of emissions during room-and-pillar and longwall mining operations. These preliminary scenarios are described in this report. A mathematical model has been developed to describe the increased stresses on the coal seam near mine openings. The model is based on an approximate elastic/plastic treatment of the coal seam and an elastic treatment of surrounding strata. In this model, elastic compaction of the coal seam decreases porosity and permeability, whereas plastic deformation increases the porosity of the natural fracture network and thereby increases the permeability. The model takes into account the effect of changes in pore fluid pressure (in the natural fracture network of the coal seam) on the deformation of the coal seam. This model is described in this report, and will be programmed for inclusion in revised versions of MINE1D and for use in the two-dimensional computer models now under development. 8 figures.

  15. Midtemperature solar systems test facility predictions for thermal performance based on test data: Solar Kinetics T-700 solar collector with glass reflector surface

    SciTech Connect (OSTI)

    Harrison, T.D.

    1981-03-01

    Sandia National Laboratories, Albuquerque (SNLA), is currently conducting a program to predict the performance and measure the characteristics of commercially available solar collectors that have the potential for use in industrial process heat and enhanced oil recovery applications. The thermal performance predictions for the Solar Kinetics solar line-focusing parabolic trough collector for five cities in the US are presented. (WHK)

  16. Midtemperature solar systems test facility predictions for thermal performance based on test data: AAI solar collector with pressure-formed glass reflector surface

    SciTech Connect (OSTI)

    Harrison, T.D.

    1981-03-01

    Sandia National Laboratories, Albuquerque (SNLA), is currently conducting a program to predict the performance and measure the characteristics of commercially available solar collectors that have the potential for use in industrial process heat and enhanced oil recovery applications. The thermal performance predictions for the AAI solar line-focusing slat-type collector for five cities in the US are presented. (WHK)

  17. Process-based modeling of the aeolian environment at the dune scale

    SciTech Connect (OSTI)

    Stam, J.M.T. (IGG-TNO, Delft (Netherlands))

    1993-09-01

    Process-based models are quantitative models that simulate the physical process of sedimentation with the objective of reconstructing the spatial distribution, stratification, and properties of the subsurface. In this study, a two-dimensional, process-based model of the aeolian environment, at the dune-interdune scale, has been developed. Sedimentation is governed by the variation of wind velocity over the topography, which is calculated analytically. Velocity calculations are coupled to a sediment transport equation, to determine where erosion and deposition occur. The resulting change in topography determines a new velocity field, which is then calculated. Features that the model simulates include ripple formation and dune migration, as well as the resulting internal sedimentary structures. Process-based models can be used as a tool to help interpret structures in ancient formations. This model has been applied specifically to reconstruct dune-interdune sequences observed in cores from the Rotliegendes, located in the southern Permian basin (North Sea). The interdune strata are characterized by a low permeability. A flow simulation has been done on the aeolian section generated by the model, showing the effect of these heterogeneities on fluid flow.

  18. Modeling the thermal deformation of TATB-based explosives. Part 1: Thermal expansion of “neat-pressed” polycrystalline TATB

    SciTech Connect (OSTI)

    Luscher, Darby J.

    2014-05-08

    We detail a modeling approach to simulate the anisotropic thermal expansion of polycrystalline (1,3,5-triamino-2,4,6-trinitrobenzene) TATB-based explosives that utilizes microstructural information including porosity, crystal aspect ratio, and processing-induced texture. This report, the first in a series, focuses on nonlinear thermal expansion of “neat-pressed” polycrystalline TATB specimens which do not contain any binder; additional complexities related to polymeric binder and irreversible ratcheting behavior are briefly discussed, however, detailed investigation of these aspects is deferred to subsequent reports. In this work we have, for the first time, developed a mesoscale continuum model relating the thermal expansion of polycrystal TATB specimens to their microstructural characteristics. A self-consistent homogenization procedure is used to relate macroscopic thermoelastic response to the constitutive behavior of single-crystal TATB. The model includes a representation of grain aspect ratio, porosity, and crystallographic texture attributed to the consolidation process. A quantitative model is proposed to describe the evolution of preferred orientation of graphitic planes in TATB during consolidation and an algorithm constructed to develop a discrete representation of the associated orientation distribution function. Analytical and numerical solutions using this model are shown to produce textures consistent with previous measurements and characterization for isostatic and uniaxial “die-pressed” specimens. Predicted thermal strain versus temperature for textured specimens is shown to be in agreement with corresponding experimental measurements. Using the developed modeling approach, several simulations have been run to investigate the influence of microstructure on macroscopic thermal expansion behavior. Results from these simulations are used to identify qualitative trends. Implications of the identified trends are discussed in the context of thermal deformation of engineered components whose consolidation process is generally more complex than isostatic or die-pressed specimens. Finally, an envisioned application of the modeling approach to simulating thermal expansion of weapon systems and components is outlined along with necessary future work to introduce the effects of binder and ratcheting behavior. Key conclusions from this work include the following. Both porosity and grain aspect ratio have an influence on the thermal expansion of polycrystal TATB considering realistic material variability. The preferred orientation of the single crystal TATB [001] poles within a polycrystal gives rise to pronounced anisotropy of the macroscopic thermal expansion. The extent of this preferred orientation depends on the magnitude of deformation, and consequently, is expected to vary spatially throughout manufactured components much like porosity. The modeling approach presented here has utility toward bringing spatially variable microstructural features into macroscale system engineering models.

  19. A NEW CHEMICAL EVOLUTION MODEL FOR DWARF SPHEROIDAL GALAXIES BASED ON OBSERVED LONG STAR FORMATION HISTORIES

    SciTech Connect (OSTI)

    Homma, Hidetomo; Murayama, Takashi; Kobayashi, Masakazu A. R.; Taniguchi, Yoshiaki

    2015-02-01

    We present a new chemical evolution model for dwarf spheroidal galaxies (dSphs) in the local universe. Our main aim is to explain both their observed star formation histories and metallicity distribution functions simultaneously. Applying our new model to the four local dSphs, that is, Fornax, Sculptor, Leo II, and Sextans, we find that our new model reproduces the observed chemical properties of the dSphs consistently. Our results show that the dSphs have evolved with both a low star formation efficiency and a large gas outflow efficiency compared with the Milky Way, as suggested by previous works. Comparing the observed [α/Fe]-[Fe/H] relation of the dSphs with the model predictions, we find that our model favors a longer onset time of Type Ia supernovae (i.e., 0.5 Gyr) than that suggested in previous studies (i.e., 0.1 Gyr). We discuss the origin of this discrepancy in detail.

  20. Simulation-Based Engineering

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simulation-Based Engineering Simulation-Based Engineering is focused on predicting the behavior of complex multiphase flow reactors used in fossil-energy technologies. This effort combines theory, computational modeling, experiments, and industrial input. Physics- and science-based computational models and tools are needed to support the development and deployment of advanced fossil-fuel energy devices such as gasifiers and carbon capture reactors. It is critical to develop a practical framework

  1. Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope

    SciTech Connect (OSTI)

    Quan, Wei; Lv, Lin; Liu, Baiqi

    2014-11-15

    To improve the atomic spin gyroscope's operational accuracy and compensate for the random error caused by the nonlinear and weakly stable character of the random atomic spin gyroscope (ASG) drift, a hybrid random drift error model based on an autoregressive (AR) technique and a genetic programming (GP) plus genetic algorithm (GA) technique is established. The time series of random ASG drift, obtained by analyzing and preprocessing the measured ASG data, is taken as the study object. The linear section of the model is established with the AR technique; the nonlinear section is then built with GP, and GA is used to optimize the coefficients of the mathematical expression produced by GP in order to obtain a more accurate model. The simulation results indicate that this hybrid model can effectively reflect the characteristics of the ASG's random drift. The squared error of the ASG's random drift is reduced by 92.40%; compared with the AR technique and the GP + GA technique alone, the random drift is reduced by a further 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate for the ASG's random drift and improve the stability of the system.
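
    To make the hybrid structure concrete, the sketch below (an illustration on synthetic data, not the authors' code) fits the linear AR section by least squares and then tunes an assumed sinusoidal residual term with a small GA-style random search; the actual work uses genetic programming to discover the nonlinear expression itself.

        import numpy as np

        def fit_ar(x, p):
            # Least-squares fit of x[t] = a1*x[t-1] + ... + ap*x[t-p]
            X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
            a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
            return a

        def ar_predict(x, a):
            p = len(a)
            return np.array([x[t - p:t][::-1] @ a for t in range(p, len(x))])

        rng = np.random.default_rng(0)
        t = np.arange(3000.0)
        drift = 0.02 * np.sin(2 * np.pi * t / 500) + 0.05 * np.cumsum(rng.standard_normal(t.size)) / 100

        a = fit_ar(drift, p=4)                      # linear (AR) section
        resid = drift[4:] - ar_predict(drift, a)    # residual left for the nonlinear section

        # GA-style random search over an assumed sinusoidal residual model
        # (a hypothetical stand-in for the GP-discovered expression in the paper)
        def sse(params):
            A, w, phi = params
            return np.sum((resid - A * np.sin(w * t[4:] + phi)) ** 2)

        pop = rng.uniform([0.0, 0.0, 0.0], [0.1, 0.1, 2 * np.pi], size=(200, 3))
        for _ in range(50):
            best = pop[np.argmin([sse(p) for p in pop])]
            pop = best + rng.normal(scale=[0.01, 0.001, 0.1], size=(200, 3))
        print("AR coefficients:", a, "residual model parameters:", best)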

  2. Heuristic Drift-based Model of the Power Scrape-off width in H-mode Tokamaks

    SciTech Connect (OSTI)

    Robert J. Goldston

    2011-04-29

    A heuristic model for the plasma scrape-off width in H-mode plasmas is introduced. Grad-B and curvature-B drifts into the SOL are balanced against sonic parallel flows out of the SOL, to the divertor plates. The overall particle flow pattern posited is a modification for open field lines of Pfirsch-Schlüter flows to include sinks to the divertors. These assumptions result in an estimated SOL width of ~ 2aρp/R. They also result in a first-principles calculation of the particle confinement time of H-mode plasmas, qualitatively consistent with experimental observations. It is next assumed that anomalous perpendicular electron thermal diffusivity is the dominant source of heat flux across the separatrix, investing the SOL width, defined above, with heat from the main plasma. The separatrix temperature is calculated based on a two-point model balancing power input to the SOL with Spitzer-Härm parallel thermal conduction losses to the divertor. This results in a heuristic closed-form prediction for the power scrape-off width that is in reasonable quantitative agreement both in absolute magnitude and in scaling with recent experimental data from deuterium plasmas. Further work should include full numerical calculations, including all magnetic and electric drifts, as well as more thorough comparison with experimental data.
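
    As a purely numerical illustration (the paper's own derivation and conventions are not reproduced here), the sketch below evaluates the heuristic scrape-off width λ ~ 2aρp/R for assumed deuterium-plasma parameters, taking the poloidal gyroradius as ρp = mi·vth/(e·Bp) with vth = sqrt(2·Ti/mi); the thermal-speed convention and the example numbers are assumptions.

        import math

        E_CHARGE = 1.602e-19        # C
        M_DEUTERON = 3.344e-27      # kg

        def sol_width(a_m, R_m, T_i_eV, B_pol_T):
            """Heuristic drift-based SOL width ~ 2*(a/R)*rho_p (illustrative convention only)."""
            v_th = math.sqrt(2.0 * T_i_eV * E_CHARGE / M_DEUTERON)   # assumed thermal-speed convention
            rho_p = M_DEUTERON * v_th / (E_CHARGE * B_pol_T)         # poloidal gyroradius
            return 2.0 * (a_m / R_m) * rho_p

        # Hypothetical mid-size tokamak numbers, purely for illustration
        print(f"lambda_SOL ~ {sol_width(a_m=0.6, R_m=1.7, T_i_eV=100.0, B_pol_T=0.3) * 1e3:.2f} mm")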

  3. Wear prediction in a fluidized bed

    SciTech Connect (OSTI)

    Boyle, E.J.; Rogers, W.A.

    1993-06-01

    A procedure to model the wear of surfaces exposed to a fluidized bed is formulated. A stochastic methodology adapting the kinetic theory of gases to granular flows is used to develop an impact wear model. This uses a single-particle wear model to account for impact wear from all possible particle collisions. An adaptation of a single-particle abrasion model to describe the effects of many abrading particles is used to account for abrasive wear. Parameters describing granular flow within the fluidized bed, necessary for evaluation of the wear expressions, are determined by numerical solution of the fluidized bed hydrodynamic equations. Additional parameters, describing the contact between fluidized particles and the wearing surface, are determined by optimization based on wear measurements. The modeling procedure was used to analyze several bubbling and turbulent fluidized bed experiments with single-tube and tube bundle configurations. Quantitative agreement between the measured and predicted wear rates was found, with some exceptions for local wear predictions. This work demonstrates a methodology for wear predictions in fluidized beds.

  4. Model-based engineering:a strategy for RRW and future weapons programs.

    SciTech Connect (OSTI)

    Harris, Rick; Martinez, Jacky R.

    2007-05-01

    To meet Sandia's engineering challenges it is crucial that we shorten the product realization process. The challenge of RRW is to produce exceptional high quality designs and respond to changes quickly. Computer aided design models are an important element in realizing these objectives. Advances in the use of three dimensional geometric models on the Reliable Robust Warhead (RRW) activity have resulted in business advantage. This approach is directly applicable to other programs within the Laboratories. This paper describes the RRW approach and rationale. Keys to this approach are defined operational states that indicate a pathway for greater model-based realization and responsive infrastructure.

  5. Model-Based Analysis of the Role of Biological, Hydrological and Geochemical Factors Affecting Uranium Bioremediation

    SciTech Connect (OSTI)

    Zhao, Jiao; Scheibe, Timothy D.; Mahadevan, Radhakrishnan

    2011-01-24

    Uranium contamination is a serious concern at several sites motivating the development of novel treatment strategies such as the Geobacter-mediated reductive immobilization of uranium. However, this bioremediation strategy has not yet been optimized for the sustained uranium removal. While several reactive-transport models have been developed to represent Geobacter-mediated bioremediation of uranium, these models often lack the detailed quantitative description of the microbial process (e.g., biomass build-up in both groundwater and sediments, electron transport system, etc.) and the interaction between biogeochemical and hydrological process. In this study, a novel multi-scale model was developed by integrating our recent model on electron capacitance of Geobacter (Zhao et al., 2010) with a comprehensive simulator of coupled fluid flow, hydrologic transport, heat transfer, and biogeochemical reactions. This mechanistic reactive-transport model accurately reproduces the experimental data for the bioremediation of uranium with acetate amendment. We subsequently performed global sensitivity analysis with the reactive-transport model in order to identify the main sources of prediction uncertainty caused by synergistic effects of biological, geochemical, and hydrological processes. The proposed approach successfully captured significant contributing factors across time and space, thereby improving the structure and parameterization of the comprehensive reactive-transport model. The global sensitivity analysis also provides a potentially useful tool to evaluate uranium bioremediation strategy. The simulations suggest that under difficult environments (e.g., highly contaminated with U(VI) at a high migration rate of solutes), the efficiency of uranium removal can be improved by adding Geobacter species to the contaminated site (bioaugmentation) in conjunction with the addition of electron donor (biostimulation). The simulations also highlight the interactive effect of initial cell concentration and flow rate on U(VI) reduction.

  6. Physics-Based Compact Model for CIGS and CdTe Solar Cells: From Voltage-Dependent Carrier Collection to Light-Enhanced Reverse Breakdown: Preprint

    SciTech Connect (OSTI)

    Sun, Xingshu; Alam, Muhammad Ashraful; Raguse, John; Garris, Rebekah; Deline, Chris; Silverman, Timothy

    2015-10-15

    In this paper, we develop a physics-based compact model for copper indium gallium diselenide (CIGS) and cadmium telluride (CdTe) heterojunction solar cells that attributes the failure of superposition to voltage-dependent carrier collection in the absorber layer, and interprets light-enhanced reverse breakdown as a consequence of tunneling-assisted Poole-Frenkel conduction. The temperature dependence of the model is validated against both simulation and experimental data for the entire range of bias conditions. The model can be used to characterize device parameters, optimize new designs, and most importantly, predict performance and reliability of solar panels including the effects of self-heating and reverse breakdown due to partial-shading degradation.

  7. Midtemperature solar systems test facility predictions for thermal performance based on test data: solar kinetics T-600 solar collector with FEK 244 reflector surface

    SciTech Connect (OSTI)

    Harrison, T.D.

    1981-04-01

    Sandia National Laboratories, Albuquerque (SNLA), is currently conducting a program to predict the performance and measure the characteristics of commercially available solar collectors that have the potential for use in industrial process heat and enhanced oil recovery applications. The thermal performance predictions for the Solar Kinetics T-600 solar line-focusing parabolic trough collector are presented for three output temperatures at five cities in the US. (WHK)

  8. A Conceptual Model for Partially Premixed Low-Temperature Diesel Combustion Based on In-Cylinder Laser Diagnostics and Chemical Kinetics Modeling

    Broader source: Energy.gov [DOE]

    Conceptual models for low temperature combustion diesel engines are offered based on recent research within optically accessible engines and combustion chambers.

  9. WINDOW-WALL INTERFACE CORRECTION FACTORS: THERMAL MODELING OF INTEGRATED FENESTRATION AND OPAQUE ENVELOPE SYSTEMS FOR IMPROVED PREDICTION OF ENERGY USE

    SciTech Connect (OSTI)

    Bhandari, Mahabir S; Ravi, Dr. Srinivasan

    2012-01-01

    The boundary conditions for thermal modeling of fenestration systems assume an adiabatic condition between the fenestration system installed and the opaque envelope system. This theoretical adiabatic boundary condition may not be appropriate owing to heat transfer at the interfaces, particularly for aluminum-framed windows affixed to metal-framed walls. In such scenarios, the heat transfer at the interface may increase the discrepancy between real world thermal indices and laboratory measured or calculated indices based on the NFRC Rating System. This paper discusses the development of window-wall Interface Correction Factors (ICF) to improve energy impacts of building envelope systems

  10. Functional Polymorphisms of Base Excision Repair Genes XRCC1 and APEX1 Predict Risk of Radiation Pneumonitis in Patients With Non-Small Cell Lung Cancer Treated With Definitive Radiation Therapy

    SciTech Connect (OSTI)

    Yin Ming; Liao Zhongxing; Liu Zhensheng; Wang, Li-E; Gomez, Daniel; Komaki, Ritsuko; Wei Qingyi

    2011-11-01

    Purpose: To explore whether functional single nucleotide polymorphisms (SNPs) of base-excision repair genes are predictors of radiation treatment-related pneumonitis (RP), we investigated associations between functional SNPs of ADPRT, APEX1, and XRCC1 and RP development. Methods and Materials: We genotyped SNPs of ADPRT (rs1136410 [V762A]), XRCC1 (rs1799782 [R194W], rs25489 [R280H], and rs25487 [Q399R]), and APEX1 (rs1130409 [D148E]) in 165 patients with non-small cell lung cancer (NSCLC) who received definitive chemoradiation therapy. Results were assessed by both Logistic and Cox regression models for RP risk. Kaplan-Meier curves were generated for the cumulative RP probability by the genotypes. Results: We found that SNPs of XRCC1 Q399R and APEX1 D148E each had a significant effect on the development of Grade {>=}2 RP (XRCC1: AA vs. GG, adjusted hazard ratio [HR] = 0.48, 95% confidence interval [CI], 0.24-0.97; APEX1: GG vs. TT, adjusted HR = 3.61, 95% CI, 1.64-7.93) in an allele-dose response manner (Trend tests: p = 0.040 and 0.001, respectively). The number of the combined protective XRCC1 A and APEX1 T alleles (from 0 to 4) also showed a significant trend of predicting RP risk (p = 0.001). Conclusions: SNPs of the base-excision repair genes may be biomarkers for susceptibility to RP. Larger prospective studies are needed to validate our findings.
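
    For readers unfamiliar with this type of analysis, a generic Cox proportional-hazards fit on genotype covariates coded by allele count might look like the sketch below; the column names and data are hypothetical and purely illustrative, not the study's.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(1)
        n = 165
        df = pd.DataFrame({
            "xrcc1_q399r_a_alleles": rng.integers(0, 3, n),   # 0, 1, or 2 alleles (hypothetical coding)
            "apex1_d148e_g_alleles": rng.integers(0, 3, n),
            "mean_lung_dose_gy": rng.normal(18, 4, n),
            "time_to_rp_months": rng.exponential(12, n),      # follow-up or time to grade >=2 RP
            "rp_event": rng.integers(0, 2, n),                # 1 = grade >=2 RP observed
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="time_to_rp_months", event_col="rp_event")
        cph.print_summary()   # adjusted hazard ratios per additional allele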

  11. Predictive Capability Maturity Model for computational modeling...

    Office of Scientific and Technical Information (OSTI)

    Sponsoring Org: USDOE Country of Publication: United States Language: English Subject: 97 MATHEMATICAL METHODS AND COMPUTING; 99 GENERAL AND MISCELLANEOUSMATHEMATICS, COMPUTING, ...

  12. Predictive Capability Maturity Model for computational modeling...

    Office of Scientific and Technical Information (OSTI)

    ... Sponsoring Org: USDOE Country of Publication: United States Language: English Subject: 97 MATHEMATICAL METHODS AND COMPUTING; 99 GENERAL AND MISCELLANEOUSMATHEMATICS, COMPUTING, ...

  13. Prediction of new particle emission on cold fusion

    SciTech Connect (OSTI)

    Matsumoto, T. . Dept. of Nuclear Engineering)

    1990-12-01

    In this paper the energy distribution of cold fusion products is analyzed based on the Nattoh model. A new hydrogen-catalyzed fusion reaction is proposed to occur in a metal. From the differences in the Q value and other parameters, a new particle, the iton, is predicted to be emitted, with a rest mass 2 to 26 times that of an electron.

  14. Wind Power Plant Prediction by Using Neural Networks: Preprint

    SciTech Connect (OSTI)

    Liu, Z.; Gao, W.; Wan, Y. H.; Muljadi, E.

    2012-08-01

    This paper introduces a method of short-term wind power prediction for a wind power plant by training neural networks based on historical data of wind speed and wind direction. The model proposed is shown to achieve a high accuracy with respect to the measured data.
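
    As a rough illustration of the general approach (not the authors' network or data), a small feed-forward neural network can be trained on wind speed and direction features, with direction encoded as sine and cosine; everything below is synthetic.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 5000
        speed = np.clip(rng.gamma(2.0, 4.0, n), 0, 25)        # m/s, synthetic
        direction = rng.uniform(0, 360, n)                     # degrees
        power = np.minimum(speed, 12.0) ** 3 / 12.0 ** 3       # toy normalized power curve

        X = np.column_stack([speed,
                             np.sin(np.radians(direction)),
                             np.cos(np.radians(direction))])
        X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.2, random_state=0)

        model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
        model.fit(X_tr, y_tr)
        print("R^2 on held-out data:", model.score(X_te, y_te))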

  15. Predicting Hurricanes with Supercomputers

    SciTech Connect (OSTI)

    2010-01-01

    Hurricane Emily, formed in the Atlantic Ocean on July 10, 2005, was the strongest hurricane ever to form before August. By checking computer models against the actual path of the storm, researchers can improve hurricane prediction. In 2010, NOAA researchers were awarded 25 million processor-hours on Argonne's BlueGene/P supercomputer for the project. Read more at http://go.usa.gov/OLh

  16. Estimation of placental and lactational transfer and tissue distribution of atrazine and its main metabolites in rodent dams, fetuses, and neonates with physiologically based pharmacokinetic modeling

    SciTech Connect (OSTI)

    Lin, Zhoumeng; Fisher, Jeffrey W.; Wang, Ran; Ross, Matthew K.; Filipov, Nikolay M.

    2013-11-15

    Atrazine (ATR) is a widely used chlorotriazine herbicide, a ubiquitous environmental contaminant, and a potential developmental toxicant. To quantitatively evaluate placental/lactational transfer and fetal/neonatal tissue dosimetry of ATR and its major metabolites, physiologically based pharmacokinetic models were developed for rat dams, fetuses and neonates. These models were calibrated using pharmacokinetic data from rat dams repeatedly exposed (oral gavage; 5 mg/kg) to ATR followed by model evaluation against other available rat data. Model simulations corresponded well to the majority of available experimental data and suggest that: (1) the fetus is exposed to both ATR and its major metabolite didealkylatrazine (DACT) at levels similar to maternal plasma levels, (2) the neonate is exposed mostly to DACT at levels two-thirds lower than maternal plasma or fetal levels, while lactational exposure to ATR is minimal, and (3) gestational carryover of DACT greatly affects its neonatal dosimetry up until mid-lactation. To test the model's cross-species extrapolation capability, a pharmacokinetic study was conducted with pregnant C57BL/6 mice exposed (oral gavage; 5 mg/kg) to ATR from gestational day 12 to 18. By using mouse-specific parameters, the model predictions fitted well with the measured data, including placental ATR/DACT levels. However, fetal concentrations of DACT were overestimated by the model (10-fold). This overestimation suggests that only around 10% of the DACT that reaches the fetus is tissue-bound. These rodent models could be used in fetal/neonatal tissue dosimetry predictions to help design/interpret early life toxicity/pharmacokinetic studies with ATR and as a foundation for scaling to humans. - Highlights: • We developed PBPK models for atrazine in rat dams, fetuses, and neonates. • We conducted pharmacokinetic (PK) study with atrazine in pregnant mice. • Model predictions were in good agreement with experimental rat and mouse PK data. • The fetus is exposed to atrazine/its main metabolite at levels similar to the dam. • The nursing neonate is exposed primarily to atrazine's main metabolite DACT.

  17. DEMONSTRATION OF EQUIVALENCY OF CANE AND SOFTWOOD BASED CELOTEX FOR MODEL 9975 SHIPPING PACKAGES

    SciTech Connect (OSTI)

    Watkins, R; Jason Varble, J

    2008-05-27

    Cane-based Celotex™ has been used extensively in various Department of Energy (DOE) packages as a thermal insulator and impact absorber. Cane-based Celotex™ fiberboard was only manufactured by Knight-Celotex Fiberboard at their Marrero Plant in Louisiana. However, Knight-Celotex Fiberboard shut down their Marrero Plant in early 2007 due to impacts from Hurricane Katrina and other economic factors. Therefore, cane-based Celotex™ fiberboard is no longer available for use in the manufacture of new shipping packages requiring the material as a component. Current consolidation plans for the DOE Complex require the procurement of several thousand new Model 9975 shipping packages requiring cane-based Celotex™ fiberboard. Therefore, an alternative to cane-based Celotex™ fiberboard is needed. Knight-Celotex currently manufactures Celotex™ fiberboard from other cellulosic materials, such as hardwood and softwood. A review of the relevant literature has shown that softwood-based Celotex™ meets all parameters important to the Model 9975 shipping package.

  18. Performance Modeling

    Office of Scientific and Technical Information (OSTI)

    The prediction methodology will form the foundation of a more robust resource management ... Accurate performance prediction requires accurate performance models of the components ...

  19. Properties of the multiorbital Hubbard models for the iron-based superconductors

    SciTech Connect (OSTI)

    Dagotto, Elbio R; Moreo, Adriana; Nicholson, Andrew D; Luo, Qinlong; Liang, Shuhua; Zhang, Xiaotian

    2011-01-01

    A brief review of the main properties of multiorbital Hubbard models for the Fe-based superconductors is presented. The emphasis is on the results obtained by our group at the University of Tennessee and Oak Ridge National Laboratory, Tennessee, USA, but results by several other groups are also discussed. The models studied here have two, three, and five orbitals, and they are analyzed using a variety of computational and mean-field approximations. A physical region where the properties of the models are in qualitative agreement with neutron scattering, photoemission, and transport results is revealed. A variety of interesting open questions are briefly discussed such as: what are the dominant pairing tendencies in Hubbard models? Can pairing occur in an interorbital channel? Are nesting effects of fundamental relevance in the pnictides, or are approaches based on local moments more important? What kind of magnetic states are found in the presence of iron vacancies? Can charge stripes exist in iron-based superconductors? Why is transport in the pnictides anisotropic? The discussion of results includes the description of these and other open problems in this fascinating area of research.

  20. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    SciTech Connect (OSTI)

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; Rose, Kristie L.; Tabb, David L.

    2013-04-08

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
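
    One simple way to cast fragment charge prediction as ordinal regression (a sketch with assumed features and synthetic labels, not the Basophile implementation) is to fit cumulative binary classifiers P(charge > k) and combine them:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 2000
        # Hypothetical features: fragment length, number of basic residues (K/R/H), precursor charge
        X = np.column_stack([rng.integers(2, 30, n),
                             rng.integers(0, 4, n),
                             rng.integers(2, 5, n)])
        # Synthetic ordinal label: fragment charge 1..3, loosely tied to basic-residue count
        y = np.clip(1 + X[:, 1] + rng.integers(-1, 2, n), 1, 3)

        # Cumulative-link style: one binary model per threshold P(y > k)
        thresholds = [1, 2]
        models = [LogisticRegression(max_iter=1000).fit(X, (y > k).astype(int)) for k in thresholds]

        def predict_charge(x):
            # Expected ordinal class = 1 + sum_k P(y > k)
            p_gt = [m.predict_proba(x.reshape(1, -1))[0, 1] for m in models]
            return 1 + int(round(sum(p_gt)))

        print(predict_charge(np.array([12, 2, 3])))   # e.g., a 12-residue fragment with 2 basic residues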

  1. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; et al

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  2. Modeling CANDU-6 liquid zone controllers for effects of thorium-based fuels

    SciTech Connect (OSTI)

    St-Aubin, E.; Marleau, G.

    2012-07-01

    We use the DRAGON code to model the CANDU-6 liquid zone controllers and evaluate the effects of thorium-based fuels on their incremental cross sections and reactivity worth. We optimize both the numerical quadrature and spatial discretization for 2D cell models in order to provide accurate fuel properties for 3D liquid zone controller supercell models. We propose a low-computational-cost, parameterized, pseudo-exact 3D cluster geometry modeling approach that avoids tracking issues on small external surfaces. This methodology provides consistent incremental cross sections and reactivity worths when the thickness of the buffer region is reduced. When compared with an approximate annular geometry representation of the fuel and coolant region, we observe that the cluster description of fuel bundles in the supercell models does not considerably increase the precision of the results while substantially increasing the CPU time. In addition, this comparison shows that it is imperative to finely describe the liquid zone controller geometry since it has a strong impact on the incremental cross sections. This paper also shows that liquid zone controller reactivity worth is greatly decreased in the presence of thorium-based fuels compared to the reference natural uranium fuel, since the fission and the fast to thermal scattering incremental cross sections are higher for the new fuels. (authors)

  3. A Model-Based Signal Processing Approach to Nuclear Explosion Monitoring

    SciTech Connect (OSTI)

    Rodgers, A; Harris, D; Pasyanos, M

    2007-03-14

    This report describes research performed under Laboratory Research and Development Project 05-ERD-019, entitled ''A New Capability for Regional High-Frequency Seismic Wave Simulation in Realistic Three-Dimensional Earth Models to Improve Nuclear Explosion Monitoring''. A more appropriate title for this project is ''A Model-Based Signal Processing Approach to Nuclear Explosion Monitoring''. This project supported research for a radically new approach to nuclear explosion monitoring and allowed the development of new capabilities in computational seismology that can contribute to NNSA/NA-22 Programs.

  4. Renewable Energy Cost Modeling. A Toolkit for Establishing Cost-Based Incentives in the United States

    SciTech Connect (OSTI)

    Gifford, Jason S.; Grace, Robert C.; Rickerson, Wilson H.

    2011-05-01

    This report serves as a resource for policymakers who wish to learn more about levelized cost of energy (LCOE) calculations, including cost-based incentives. The report identifies key renewable energy cost modeling options, highlights the policy implications of choosing one approach over the other, and presents recommendations on the optimal characteristics of a model to calculate rates for cost-based incentives, FITs, or similar policies. These recommendations shaped the design of NREL's Cost of Renewable Energy Spreadsheet Tool (CREST), which is used by state policymakers, regulators, utilities, developers, and other stakeholders to assist with analyses of policy and renewable energy incentive payment structures. Authored by Jason S. Gifford and Robert C. Grace of Sustainable Energy Advantage LLC and Wilson H. Rickerson of Meister Consultants Group, Inc.
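
    For orientation, the basic levelized-cost calculation underlying cost-based incentive rates can be written as discounted lifetime costs divided by discounted lifetime generation. The sketch below is a bare-bones illustration with hypothetical project numbers; it omits the financing structure, taxes, depreciation, and incentive details that CREST actually models.

        def lcoe(capital_cost, annual_om, annual_energy_kwh, discount_rate, years):
            """Simplified LCOE in $/kWh: discounted costs / discounted energy."""
            disc_costs = capital_cost + sum(annual_om / (1 + discount_rate) ** t
                                            for t in range(1, years + 1))
            disc_energy = sum(annual_energy_kwh / (1 + discount_rate) ** t
                              for t in range(1, years + 1))
            return disc_costs / disc_energy

        # Hypothetical wind project, purely illustrative numbers
        print(f"{lcoe(capital_cost=3.0e6, annual_om=60_000.0, annual_energy_kwh=4.0e6, discount_rate=0.07, years=20):.3f} $/kWh")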

  5. Disease mapping based on stochastic SIR-SI model for Dengue and Chikungunya in Malaysia

    SciTech Connect (OSTI)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2014-12-04

    This paper describes and demonstrates a method for relative risk estimation which is based on the stochastic SIR-SI vector-borne infectious disease transmission model, specifically for Dengue and Chikungunya diseases in Malaysia. Firstly, the common compartmental model for vector-borne infectious disease transmission called the SIR-SI model (susceptible-infective-recovered for human populations; susceptible-infective for vector populations) is presented. This is followed by an explanation of the stochastic SIR-SI model, which involves a Bayesian description. This stochastic model is then used in the relative risk formulation in order to obtain the posterior relative risk estimation. This relative risk estimation model is then demonstrated using Dengue and Chikungunya data of Malaysia. The viruses of these diseases are transmitted by the female vector mosquitoes Aedes aegypti and Aedes albopictus. Finally, the findings of the analysis of relative risk estimation for both Dengue and Chikungunya diseases are presented, compared and displayed in graphs and maps. The risk maps show the high- and low-risk areas of Dengue and Chikungunya occurrence and can be used as a tool for the prevention and control strategies for both diseases.
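
    To show only the compartmental skeleton (the paper's model is stochastic and Bayesian; this deterministic ODE sketch with made-up parameters is not it), an SIR-SI system for a human and a vector population can be integrated as follows:

        import numpy as np
        from scipy.integrate import solve_ivp

        def sir_si(t, y, beta_h, gamma, beta_v, mu_v, N_h, N_v):
            S_h, I_h, R_h, S_v, I_v = y
            new_h = beta_h * S_h * I_v / N_h          # vector-to-human transmission
            new_v = beta_v * S_v * I_h / N_h          # human-to-vector transmission
            return [-new_h,
                    new_h - gamma * I_h,
                    gamma * I_h,
                    mu_v * N_v - new_v - mu_v * S_v,  # vector birth/death keeps N_v roughly constant
                    new_v - mu_v * I_v]

        N_h, N_v = 10_000, 50_000
        y0 = [N_h - 10, 10, 0, N_v, 0]
        sol = solve_ivp(sir_si, (0, 365), y0, t_eval=np.linspace(0, 365, 366),
                        args=(0.3, 1 / 7, 0.2, 1 / 14, N_h, N_v))
        print("peak infectious humans:", int(sol.y[1].max()))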

  6. Analysis of laser remote fusion cutting based on a mathematical model

    SciTech Connect (OSTI)

    Matti, R. S. [Department of Engineering Sciences and Mathematics, Luleå University of Technology, S-971 87 Luleå (Sweden); Department of Mechanical Engineering, College of Engineering, University of Mosul, Mosul (Iraq); Ilar, T.; Kaplan, A. F. H. [Department of Engineering Sciences and Mathematics, Luleå University of Technology, S-971 87 Luleå (Sweden)

    2013-12-21

    Laser remote fusion cutting is analyzed with the aid of a semi-analytical mathematical model of the processing front. By local calculation of the energy balance between the absorbed laser beam and the heat losses, the three-dimensional vaporization front can be calculated. Based on an empirical model for the melt flow field, from a mass balance, the melt film and the melting front can be derived, though only in a simplified manner and for quasi-steady state conditions. Front waviness and multiple reflections are not modelled. The model enables comparison of the similarities, differences, and limits between laser remote fusion cutting, laser remote ablation cutting, and even laser keyhole welding. In contrast to the upper part of the vaporization front, the major part only slightly varies with respect to heat flux, laser power density, absorptivity, and angle of front inclination. Statistical analysis shows that for high cutting speed, the domains of high laser power density contribute much more to the formation of the front than for low speed. The semi-analytical modelling approach offers flexibility to simplify part of the process physics while, for example, sophisticated modelling of the complex focused fibre-guided laser beam is taken into account to enable deeper analysis of the beam interaction. Mechanisms like recast layer generation, absorptivity at a wavy processing front, and melt film formation are studied too.

  7. Improving Well Productivity Based Modeling with the Incorporation of Geologic Dependencies

    U.S. Energy Information Administration (EIA) Indexed Site

    Working paper by Troy Cook and Dana Van Wagener, U.S. Energy Information Administration, Independent Statistics & Analysis (www.eia.gov), Washington, DC, October 14, 2014. This paper is released to encourage discussion and critical comment. The analysis and conclusions expressed here are those of the authors and not necessarily those of the U.S. Energy Information Administration.

  8. An Evaluation of Mesoscale Model Predictions of Down-Valley and Canyon Flows and Their Consequences Using Doppler Lidar Measurements During VTMX 2000

    SciTech Connect (OSTI)

    Fast, Jerome D.; Darby, Lisa S.

    2004-04-01

    A mesoscale model, a Lagrangian particle dispersion model, and extensive Doppler lidar wind measurements during the VTMX 2000 field campaign were used to examine converging flows over the Salt Lake Valley and their effect on vertical mixing of tracers at night and during the morning transition period. The simulated wind components were transformed into radial velocities to make a direct comparison with about 1.3 million Doppler lidar data points and critically evaluate, using correlation coefficients, the spatial variations in the simulated wind fields aloft. The mesoscale model captured reasonably well the general features of the observed circulations including the daytime up-valley flow, the nighttime slope, canyon, and down-valley flows, and the convergence of the flows over the valley. When there were errors in the simulated wind fields, they were usually associated with the timing, structure, or strength of specific flows. Simulated outflows from canyons along the Wasatch Mountains propagated over the valley and converged with the down-valley flow, but the advance and retreat of these simulated flows was often out of phase with the lidar measurements. While the flow reversal during the evening transition period produced rising motions over much of the valley atmosphere in the absence of significant ambient winds, average vertical velocities became close to zero as the down-valley flow developed. Still, vertical velocities between 5 and 15 cm s-1 occurred where down-slope, canyon and down-valley flows converged and vertical velocities greater than 50 cm s-1 were produced by hydraulic jumps at the base of the canyons. The presence of strong ambient winds resulted in smaller average rising motions during the evening transition period and larger average vertical velocities after that. A fraction of the tracer released at the surface was transported up to the height of the surrounding mountains; however, higher concentrations were produced aloft for evenings characterized by well-developed drainage circulations. Simulations with and without vertical motions in the particle model produced large differences in the tracer concentrations at specific locations and times; however, the overall ventilation of the valley atmosphere differed by only 5% or less. Despite the atmospheric stability, turbulence produced by vertical wind shears mixed particles well above the surface stable layer for the particle model simulation without vertical motions.
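
    The transformation from model (u, v, w) wind components to a lidar radial velocity is a projection onto the beam direction. The sketch below uses one common convention (azimuth measured clockwise from north, elevation above the horizon, positive radial velocity away from the lidar); the study's exact geometry is not reproduced here.

        import numpy as np

        def radial_velocity(u, v, w, azimuth_deg, elevation_deg):
            """Project (u=east, v=north, w=up) winds onto the lidar beam direction."""
            az = np.radians(azimuth_deg)
            el = np.radians(elevation_deg)
            return (u * np.sin(az) * np.cos(el)
                    + v * np.cos(az) * np.cos(el)
                    + w * np.sin(el))

        # Example: 8 m/s northerly (down-valley) flow seen by a beam pointing north at 5 deg elevation
        print(radial_velocity(u=0.0, v=-8.0, w=0.0, azimuth_deg=0.0, elevation_deg=5.0))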

  9. Towards the Prediction of Decadal to Centennial Climate Processes...

    Office of Scientific and Technical Information (OSTI)

    Towards the Prediction of Decadal to Centennial Climate Processes in the Coupled Earth System Model Citation Details In-Document Search Title: Towards the Prediction of Decadal to ...

  10. Please join us for a triple-header seminar organized around Modeling...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    existing tools. We also showed that different knowledge-based strategies (mixture models, machine learning or hybrid potential functions) improve the predictions. Due to the...

  11. Science-Based Simulation Model of Human Performance for Human Reliability Analysis

    SciTech Connect (OSTI)

    Dana L. Kelly; Ronald L. Boring; Ali Mosleh; Carol Smidts

    2011-10-01

    Human reliability analysis (HRA), a component of an integrated probabilistic risk assessment (PRA), is the means by which the human contribution to risk is assessed, both qualitatively and quantitatively. However, among the literally dozens of HRA methods that have been developed, most cannot fully model and quantify the types of errors that occurred at Three Mile Island. Furthermore, all of the methods lack a solid empirical basis, relying heavily on expert judgment or empirical results derived in non-reactor domains. Finally, all of the methods are essentially static, and are thus unable to capture the dynamics of an accident in progress. The objective of this work is to begin exploring a dynamic simulation approach to HRA, one whose models have a basis in psychological theories of human performance, and whose quantitative estimates have an empirical basis. This paper highlights a plan to formalize collaboration among the Idaho National Laboratory (INL), the University of Maryland, and The Ohio State University (OSU) to continue development of a simulation model initially formulated at the University of Maryland. Initial work will focus on enhancing the underlying human performance models with the most recent psychological research, and on planning follow-on studies to establish an empirical basis for the model, based on simulator experiments to be carried out at the INL and at the OSU.

  12. Prediction of Part Distortion in Die Casting

    SciTech Connect (OSTI)

    R. Allen Miller

    2005-03-30

    The die casting process is one of the net shape manufacturing techniques and is widely used to produce high production castings with tight tolerances for many industries. An understanding of the stress distribution and the deformation pattern of parts produced by die casting will result in less deviation from the part design specification, a better die design and eventually more productivity and cost savings. This report presents methods that can be used to simulate the die casting process in order to predict the deformation and stresses in the produced part and assesses the degree to which distortion modeling is practical for die casting at the current time. A coupled thermal-mechanical finite element model was used to simulate the die casting process. The simulation models the effect of thermal and mechanical interaction between the casting and the die. It also includes the temperature-dependent material properties of the casting. Based on a designed experiment, a sensitivity analysis was conducted on the model to investigate the effect of key factors. These factors include the casting material model, material properties and thermal interaction between casting and dies. To verify the casting distortion predictions, they were compared against the measured dimensions of produced parts. The comparison included dimensions along and across the parting plane and the flatness of one surface.

  13. Considerations for modeling small-particulate impacts from surface coal-mining operations based on wind-tunnel simulations

    SciTech Connect (OSTI)

    Perry, S.G.; Petersen, W.B.; Thompson, R.S.

    1994-12-31

    The Clean Air Act Amendments of 1990 provide for a reexamination of the current Environmental Protection Agency's (USEPA) methods for modeling fugitive particulate (PM10) from open-pit, surface coal mines. The Industrial Source Complex Model (ISCST2) is specifically named as the method that needs further study. Title II, Part B, Section 234 of the Amendments states that "...the Administrator shall analyze the accuracy of such model and emission factors and make revisions as may be necessary to eliminate any significant over-predictions of air quality effect of fugitive particulate emissions from such sources."

  14. Fracture Toughness Prediction for MWCNT Reinforced Ceramics

    SciTech Connect (OSTI)

    Henager, Charles H.; Nguyen, Ba Nghiep

    2013-09-01

    This report describes the development of a micromechanics model to predict fracture toughness of multiwall carbon nanotube (MWCNT) reinforced ceramic composites to guide future experimental work for this project. The modeling work described in this report includes (i) prediction of elastic properties, (ii) development of a mechanistic damage model accounting for matrix cracking to predict the composite nonlinear stress/strain response to tensile loading to failure, and (iii) application of this damage model in a modified boundary layer (MBL) analysis using ABAQUS to predict fracture toughness and crack resistance behavior (R-curves) for ceramic materials containing MWCNTs at various volume fractions.

  15. Computational Model of Population Dynamics Based on the Cell Cycle and Local Interactions

    SciTech Connect (OSTI)

    Oprisan, Sorinel Adrian; Oprisan, Ana

    2005-03-31

    Our study bridges cellular (mesoscopic) level interactions and global population (macroscopic) dynamics of carcinoma. The morphological differences and transitions between well- and smoothly-defined benign tumors and tentacular malignant tumors suggest a theoretical analysis of tumor invasion based on the development of mathematical models exhibiting bifurcations of spatial patterns in the density of tumor cells. Our computational model views the most representative and clinically relevant features of oncogenesis as a fight between two distinct sub-systems: the immune system of the host and the neoplastic system. We implemented the neoplastic sub-system using a three-stage cell cycle: active, dormant, and necrosis. The second considered sub-system consists of cytotoxic active (effector) cells -- EC, with a very broad phenotype ranging from NK cells to CTL cells, macrophages, etc. Based on extensive numerical simulations, we correlated the fractal dimensions for carcinoma, which could be obtained from tumor imaging, with the malignant stage. Our computational model was also able to simulate the effects of surgical, chemotherapeutical, and radiotherapeutical treatments.

  16. ADA/SCADA RTU protocol based on the 3-layer UCA model

    SciTech Connect (OSTI)

    Adamo, V.P.

    1995-12-31

    This paper describes an implementation of a DA/SCADA RTU communication protocol based on the 3-layer reference model for wide area networks specified in the Utility Communications Architecture (UCA) VL1.0. This protocol is based on the following international standards: EIA-232-D, High-level Data Link Control (HDLC) [ISO/IEC 3309], and Manufacturing Message Specification (MMS) [ISO/IEC 95061]. A description of the HDLC frame structure used in this implementation is provided. This includes a description of the extended transparency option for Start/Stop transmission, commonly referred to as "asynchronous HDLC". This option allows for the transmission of HDLC frames using inexpensive asynchronous communication hardware. The data link topology described in this paper is an unbalanced, point-to-multipoint topology consisting of one primary, or master, station and multiple secondary, or remote, stations. The data link operates in the Normal Response Mode (NRM). In this mode a secondary station may initiate transmissions only as a result of receiving explicit permission to do so from the primary station. The application layer protocol described in this paper is an implementation of the Manufacturing Message Specification (MMS). The MMS device model, or Virtual Manufacturing Device (VMD), for a DA/SCADA Remote Terminal Unit is provided. The current VMD model provides a view of common RTU data types, plus A/C Input (ACI) data including phasor magnitude and mean readings, harmonics, and overcurrent alarm information.

  17. Web-Based Training on Reviewing Dose Modeling Aspects of NRC Decommissioning and License Termination Plans

    SciTech Connect (OSTI)

    LePoire, D.; Cheng, J.J.; Kamboj, S.; Arnish, J.; Richmond, P.; Chen, S.Y.; Barr, C.; McKenney, C.

    2008-01-15

    NRC licensees at decommissioning nuclear facilities submit License Termination Plans (LTP) or Decommissioning Plans (DP) to NRC for review and approval. To facilitate a uniform and consistent review of these plans, the NRC developed training for its staff. A live classroom course was first developed in 2005, which targeted specific aspects of the LTP and DP review process related to dose-based compliance demonstrations or modeling. A web-based training (WBT) course was developed in 2006 and 2007 to replace the classroom-based course. The advantage of the WBT is that it will allow for staff training or refreshers at any time, while the advantage of a classroom-based course is that it provides a forum for lively discussion and the sharing of experience of classroom participants. The objective of this course is to train NRC headquarters and regional office staff on how to review sections of a licensee's DP or LTP that pertain to dose modeling. The DP generally refers to the decommissioning of non-reactor facilities, while the LTP refers specifically to the decommissioning of reactors. This review is part of the NRC's licensing process, in which the NRC determines if a licensee has provided a suitable technical basis to support derived concentration guideline levels (DCGLs)1 or dose modeling analyses performed to demonstrate compliance with dose-based license termination rule criteria. This type of training is one component of an organizational management system. These systems 'use a range of practices to identify, create, represent, and distribute knowledge for reuse, awareness and learning'. This is especially important in an organization undergoing rapid change or staff turnover to retain organizational information and processes. NRC is committed to maintaining a dynamic program of training, development, and knowledge transfer to ensure that the NRC acquires and maintains the competencies needed to accomplish its mission. This paper discusses one specific project related to training, developing, and transferring knowledge to NRC staff on how to review dose-modeling portions of licensee-submitted DPs and LTPs. This project identified specific cases and examples, created easily updateable educational modules, represented material in an engaging format through animations, video, and graphics, and distributed information on how to perform these reviews in an accessible, web-based format. WBT promotes consistency in reviews and has the advantage of being able to be used as a resource to staff at any time. The WBT will provide reviewers with knowledge needed to perform risk-informed analyses (e.g., information related to development of realistic scenarios and use of probabilistic analysis). WBT on review of LTP or DP dose modeling will promote staff development, efficiency, and effectiveness in performing risk-informed, performance-based reviews of decommissioning activities at NRC-licensed facilities. One of the key advantages of this type of web-based training is that it can be loaded on-demand and can be reused indefinitely. In addition to the benefits of on-demand training, the modules can also be used for reference. The presentations are hosted on a web server that can be accessed by registered users at any time. Staff can return to a particular module to review the material long after they have completed the course.

  18. Model based approach to UXO imaging using the time domain electromagnetic method

    SciTech Connect (OSTI)

    Lavely, E.M.

    1999-04-01

    Time domain electromagnetic (TDEM) sensors have emerged as a field-worthy technology for UXO detection in a variety of geological and environmental settings. This success has been achieved with commercial equipment that was not optimized for UXO detection and discrimination. The TDEM response displays rich spatial and temporal behavior that is not currently utilized. Therefore, in this paper the author describes a research program for enhancing the effectiveness of the TDEM method for UXO detection and imaging. Fundamental research is required in at least three major areas: (a) model-based imaging capability, i.e., the forward and inverse problem, (b) detector modeling and instrument design, and (c) target recognition and discrimination algorithms. These research problems are coupled and demand a unified treatment. For example: (1) the inverse solution depends on the solution of the forward problem and knowledge of the instrument response; (2) instrument design with improved diagnostic power requires forward and inverse modeling capability; and (3) improved target recognition algorithms (such as neural nets) must be trained with data collected from the new instrument and with synthetic data computed using the forward model. Further, the design of the appropriate input and output layers of the net will be informed by the results of the forward and inverse modeling. A more fully developed model of the TDEM response would enable the joint inversion of data collected from multiple sensors (e.g., TDEM sensors and magnetometers). Finally, the author suggests that a complementary approach to joint inversions is the statistical recombination of data using principal component analysis. The decomposition into principal components is useful since the first principal component contains those features that are most strongly correlated from image to image.
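
    The statistical recombination step can be sketched as a principal component analysis of co-registered, flattened sensor images; the sketch below is a minimal illustration under that assumption (array names and the synthetic example are not from the report).

      import numpy as np

      def principal_components(images):
          """PCA of co-registered sensor images.

          images : array of shape (n_sensors, n_pixels); each row is one
                   flattened, co-registered image (e.g., TDEM time gates,
                   magnetometer data). Returns the component images and the
                   fraction of variance each component explains.
          """
          X = images - images.mean(axis=0)           # remove the per-pixel mean
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          explained = s**2 / np.sum(s**2)            # variance fraction per component
          return Vt, explained                       # Vt[0] is the first principal component image

      # Example with synthetic data: a shared anomaly plus sensor-specific noise
      rng = np.random.default_rng(0)
      anomaly = rng.normal(size=1024)
      images = np.stack([anomaly + 0.3 * rng.normal(size=1024) for _ in range(5)])
      components, explained = principal_components(images)
      print("variance explained by first component:", round(float(explained[0]), 3))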

  19. Model Based Optimal Sensor Network Design for Condition Monitoring in an IGCC Plant

    SciTech Connect (OSTI)

    Kumar, Rajeeva; Kumar, Aditya; Dai, Dan; Seenumani, Gayathri; Down, John; Lopez, Rodrigo

    2012-12-31

    This report summarizes the achievements and final results of this program. The objective of this program is to develop a general model-based sensor network design methodology and tools to address key issues in the design of an optimal sensor network configuration: the type, location, and number of sensors used in a network for online condition monitoring. In particular, the focus of this work is to develop software tools for optimal sensor placement (OSP) and to use these tools to design optimal sensor network configurations for online condition monitoring of gasifier refractory wear and radiant syngas cooler (RSC) fouling. The methodology developed will be applicable to sensing system design for online condition monitoring for a broad range of applications. The overall approach consists of (i) defining condition monitoring requirements in terms of OSP and mapping these requirements into mathematical terms for the OSP algorithm, (ii) analyzing trade-offs among alternate OSP algorithms, down-selecting the most relevant ones, and developing them for IGCC applications, (iii) enhancing the gasifier and RSC models as required by the OSP algorithms, and (iv) applying the developed OSP algorithms to design the optimal sensor network required for condition monitoring of IGCC gasifier refractory wear and RSC fouling. Two key requirements for OSP for condition monitoring are the desired precision for the monitoring variables (e.g., refractory wear) and the reliability of the proposed sensor network in the presence of expected sensor failures. The OSP problem is naturally posed within a Kalman filtering approach as an integer programming problem in which the key requirements of precision and reliability are imposed as constraints, and the optimization is performed over the overall network cost. Based on an extensive literature survey, two formulations were identified as being relevant to OSP for condition monitoring: one based on a linear matrix inequality (LMI) formulation and the other on a standard integer nonlinear programming (INLP) formulation. Various algorithms to solve these two formulations were developed and validated. For a given OSP problem, the computational efficiency largely depends on the “size” of the problem. Initially, a simplified 1-D gasifier model assuming axial and azimuthal symmetry was used to test the various OSP algorithms. Finally, these algorithms were used to design the optimal sensor network for condition monitoring of IGCC gasifier refractory wear and RSC fouling. The sensor types and locations obtained as the solution to the OSP problem were validated using a model-based sensing approach. The OSP algorithm has been developed in modular form and packaged as a software tool for OSP design in which a designer can explore the various OSP design algorithms in a user-friendly way. The OSP software tool is implemented in-house in Matlab/Simulink©. The tool also uses a few optimization routines that are freely available on the World Wide Web. In addition, a modular Extended Kalman Filter (EKF) block has been developed in Matlab/Simulink© that can be utilized for model-based sensing of important process variables that are not directly measured, by combining the online sensors with model-based estimation once the hardware sensors and their locations have been finalized. The OSP algorithm details and the results of applying these algorithms to obtain optimal sensor locations for condition monitoring of gasifier refractory wear and the RSC fouling profile are summarized in this final report.
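
    The precision-constrained placement idea can be sketched as a greedy selection that repeatedly adds the candidate sensor giving the largest reduction in estimation-error covariance per unit cost; this is only a simplified stand-in for the LMI and INLP formulations described above, and all matrices and names below are illustrative placeholders.

      import numpy as np

      def greedy_osp(P0, candidates, noise_vars, costs, trace_target):
          """Greedy optimal sensor placement sketch.

          P0           : prior state covariance (n x n)
          candidates   : list of measurement rows h_i, each of shape (n,)
          noise_vars   : measurement noise variance r_i for each candidate
          costs        : cost of each candidate sensor
          trace_target : stop once trace(P) falls below this precision target
          """
          P = P0.copy()
          chosen, total_cost = [], 0.0
          remaining = set(range(len(candidates)))
          while remaining and np.trace(P) > trace_target:
              best, best_P, best_score = None, None, -np.inf
              for i in remaining:
                  h = candidates[i].reshape(1, -1)
                  S = h @ P @ h.T + noise_vars[i]                       # innovation variance
                  K = P @ h.T / S                                       # Kalman gain
                  P_new = P - K @ h @ P                                 # posterior covariance
                  score = (np.trace(P) - np.trace(P_new)) / costs[i]    # benefit per unit cost
                  if score > best_score:
                      best, best_P, best_score = i, P_new, score
              chosen.append(best)
              total_cost += costs[best]
              P = best_P
              remaining.remove(best)
          return chosen, P, total_cost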

  20. Forecasting hotspots using predictive visual analytics approach

    DOE Patents [OSTI]

    Maciejewski, Ross; Hafen, Ryan; Rudolph, Stephen; Cleveland, William; Ebert, David

    2014-12-30

    A method for forecasting hotspots is provided. The method may include the steps of receiving input data at an input of the computational device, generating a temporal prediction based on the input data, generating a geospatial prediction based on the input data, and generating output data based on the time series and geospatial predictions. The output data may be configured to display at least one user interface at an output of the computational device.
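
    As a generic illustration of combining a temporal forecast with a geospatial density estimate (not the patented method itself), the sketch below smooths historical event counts and spreads the expected count over space with a kernel density estimate; all names and inputs are assumptions.

      import numpy as np
      from scipy.stats import gaussian_kde

      def forecast_hotspots(event_xy, period_counts, grid_xy, alpha=0.5):
          """Toy hotspot forecast combining temporal and geospatial predictions.

          event_xy      : (n_events, 2) historical event coordinates
          period_counts : historical event counts per time step
          grid_xy       : (n_cells, 2) coordinates at which to evaluate the forecast
          Returns the expected number of events per grid cell for the next step.
          """
          # Temporal part: exponentially smoothed expected count for the next step
          level = period_counts[0]
          for c in period_counts[1:]:
              level = alpha * c + (1 - alpha) * level

          # Geospatial part: kernel density of where events have occurred
          kde = gaussian_kde(event_xy.T)
          density = kde(grid_xy.T)
          density /= density.sum()

          return level * density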

  1. ENTHALPY-BASED THERMAL EVOLUTION OF LOOPS. II. IMPROVEMENTS TO THE MODEL

    SciTech Connect (OSTI)

    Cargill, P. J.; Bradshaw, S. J.; Klimchuk, J. A.

    2012-06-20

    This paper develops the zero-dimensional (0D) hydrodynamic coronal loop model 'Enthalpy-based Thermal Evolution of Loops' (EBTEL) proposed by Klimchuk et al., which studies the plasma response to evolving coronal heating, especially impulsive heating events. The basis of EBTEL is the modeling of mass exchange between the corona and transition region (TR) and chromosphere in response to heating variations, with the key parameter being the ratio of the TR to coronal radiation. We develop new models for this parameter that now include gravitational stratification and a physically motivated approach to radiative cooling. A number of examples are presented, including nanoflares in short and long loops, and a small flare. The new features in EBTEL are important for accurate tracking of, in particular, the density. The 0D results are compared to a 1D hydro code (Hydrad) with generally good agreement. EBTEL is suitable for general use as a tool for (1) quick-look results of loop evolution in response to a given heating function, (2) extensive parameter surveys, and (3) situations where the modeling of hundreds or thousands of elemental loops is needed. A single run takes a few seconds on a contemporary laptop.

  2. Description and evaluation of a mechanistically based conceptual model for spall

    SciTech Connect (OSTI)

    Hansen, F.D.; Knowles, M.K.; Thompson, T.W.

    1997-08-01

    A mechanistically based model for a possible spall event at the WIPP site is developed and evaluated in this report. Release of waste material to the surface during an inadvertent borehole intrusion is possible if future states of the repository include high gas pressure and waste material consisting of fine particulates having low mechanical strength. The conceptual model incorporates the physics of wellbore hydraulics coupled to transient gas flow to the intrusion borehole, and the mechanical response of the waste. Degraded waste properties are assumed in the evaluations of the model. The evaluations include both numerical and analytical implementations of the conceptual model. A tensile failure criterion is assumed appropriate for calculation of the volumes of waste experiencing fragmentation. Calculations show that for repository gas pressures less than 12 MPa, no tensile failure occurs. Minimal volumes of material experience failure below gas pressures of 14 MPa. Repository conditions dictate that the probability of gas pressures exceeding 14 MPa is approximately 1%. For these conditions, a maximum failed volume of 0.25 m{sup 3} is calculated.

  3. SHOCK INITIATION EXPERIMENTS ON THE TATB BASED EXPLOSIVE RX-03-GO WITH IGNITION AND GROWTH MODELING

    SciTech Connect (OSTI)

    Vandersall, K S; Garcia, F; Tarver, C M

    2009-06-23

    Shock initiation experiments on the TATB based explosive RX-03-GO (92.5% TATB, 7.5% Cytop A by weight) were performed to obtain in-situ pressure gauge data, characterize the run-distance-to-detonation behavior, and calculate Ignition and Growth modeling parameters. A 101 mm diameter propellant driven gas gun was utilized to initiate the explosive sample with manganin piezoresistive pressure gauge packages placed between sample slices. The RX-03-GO formulation utilized is similar to that of LX-17 (92.5% TATB, 7.5% Kel-f by weight) with the notable differences of a new binder material and TATB that has been dissolved and recrystallized in order to improve the purity and morphology. The shock sensitivity will be compared with that of prior data on LX-17 and other TATB formulations. Ignition and Growth modeling parameters were obtained with a reasonable fit to the experimental data.

  4. A double-layer based model of ion confinement in electron cyclotron resonance ion source

    SciTech Connect (OSTI)

    Mascali, D. Neri, L.; Celona, L.; Castro, G.; Gammino, S.; Ciavola, G.; Torrisi, G.; Università Mediterranea di Reggio Calabria, Dipartimento di Ingegneria dell’Informazione, delle Infrastrutture e dell’Energia Sostenibile, Via Graziella, I-89100 Reggio Calabria ; Sorbello, G.; Università degli Studi di Catania, Dipartimento di Ingegneria Elettrica Elettronica ed Informatica, Viale Andrea Doria 6, 95125 Catania

    2014-02-15

    The paper proposes a new model of ion confinement in ECRIS, which can be easily generalized to any magnetic configuration characterized by closed magnetic surfaces. Traditionally, ion confinement in B-min configurations is ascribed to a negative potential dip due to superhot electrons, adiabatically confined by the magneto-static field. However, kinetic simulations including RF heating affected by cavity mode structures indicate that high-energy electrons populate just a thin slab overlapping the ECR layer, while their density drops by more than one order of magnitude outside it. Ions, instead, diffuse across the electron layer due to their high collisionality. This is the proper physical condition to establish a double-layer (DL) configuration, which self-consistently generates a potential barrier; this “barrier” confines the ions inside the plasma core surrounded by the ECR surface. The paper will describe a simplified ion confinement model based on plasma density non-homogeneity and DL formation.

  5. GX-Means: A model-based divide and merge algorithm for geospatial image clustering

    SciTech Connect (OSTI)

    Vatsavai, Raju; Symons, Christopher T; Chandola, Varun; Jun, Goo

    2011-01-01

    One of the practical issues in clustering is the specification of the appropriate number of clusters, which is not obvious when analyzing geospatial datasets, partly because they are huge (both in size and spatial extent) and high dimensional. In this paper we present a computationally efficient model-based split and merge clustering algorithm that incrementally finds model parameters and the number of clusters. Additionally, we attempt to provide insights into this problem and other data mining challenges that are encountered when clustering geospatial data. The basic algorithm we present is similar to the G-means and X-means algorithms; however, our proposed approach avoids certain limitations of these well-known clustering algorithms that are pertinent when dealing with geospatial data. We compare the performance of our approach with the G-means and X-means algorithms. Experimental evaluation on simulated data and on multispectral and hyperspectral remotely sensed image data demonstrates the effectiveness of our algorithm.
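
    The split decision in X-means-style algorithms can be illustrated by comparing an information criterion for a one-component versus a two-component Gaussian model of a cluster's points; the sketch below uses scikit-learn's GaussianMixture and BIC as a stand-in, not the GX-Means implementation.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def should_split(points):
          """Return True if a 2-component Gaussian mixture describes these points
          better than a single Gaussian, judged by BIC (lower BIC is better in
          scikit-learn's convention)."""
          one = GaussianMixture(n_components=1, random_state=0).fit(points)
          two = GaussianMixture(n_components=2, random_state=0).fit(points)
          return two.bic(points) < one.bic(points)

      # Example: a "cluster" that actually contains two separated modes
      rng = np.random.default_rng(1)
      pts = np.vstack([rng.normal(0, 1, size=(200, 2)),
                       rng.normal(6, 1, size=(200, 2))])
      print(should_split(pts))   # expected: True, so the cluster would be divided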

  6. Prediction of Corrosion of Advanced Materials and Fabricated Components

    SciTech Connect (OSTI)

    A. Anderko; G. Engelhardt; M.M. Lencka; M.A. Jakab; G. Tormoen; N. Sridhar

    2007-09-29

    The goal of this project is to provide materials engineers, chemical engineers and plant operators with a software tool that will enable them to predict localized corrosion of process equipment including fabricated components as well as base alloys. For design and revamp purposes, the software predicts the occurrence of localized corrosion as a function of environment chemistry and assists the user in selecting the optimum alloy for a given environment. For the operation of existing plants, the software enables the users to predict the remaining life of equipment and help in scheduling maintenance activities. This project combined fundamental understanding of mechanisms of corrosion with focused experimental results to predict the corrosion of advanced, base or fabricated, alloys in real-world environments encountered in the chemical industry. At the heart of this approach is the development of models that predict the fundamental parameters that control the occurrence of localized corrosion as a function of environmental conditions and alloy composition. The fundamental parameters that dictate the occurrence of localized corrosion are the corrosion and repassivation potentials. The program team, OLI Systems and Southwest Research Institute, has developed theoretical models for these parameters. These theoretical models have been applied to predict the occurrence of localized corrosion of base materials and heat-treated components in a variety of environments containing aggressive and non-aggressive species. As a result of this project, a comprehensive model has been established and extensively verified for predicting the occurrence of localized corrosion as a function of environment chemistry and temperature by calculating the corrosion and repassivation potentials. To support and calibrate the model, an experimental database has been developed to elucidate (1) the effects of various inhibiting species as well as aggressive species on localized corrosion of nickel-base alloys, stainless steels and copper-nickel alloys and (2) the effects of heat treatment on localized corrosion. Excellent agreement with experimental data has been obtained for alloys in various environments, including acids, bases, oxidizing species, inorganic inhibitors, etc. Further, a probabilistic model has been established for predicting the long-term damage due to localized corrosion on the basis of short-term inspection results. This methodology is applicable to pitting, crevice corrosion, stress corrosion cracking and corrosion fatigue. Finally, a comprehensive model has been developed for predicting sensitization of Fe-Ni-Cr-Mo-W-N alloys and its effect on localized corrosion. As a vehicle for the commercialization of this technology, OLI Systems has developed the Corrosion Analyzer, a software tool that is already used by many companies in the chemical process industry. In process design, the Corrosion Analyzer provides the industry with (1) reliable prediction of the tendency of base alloys for localized corrosion as a function of environmental conditions and (2) understanding of how to select alloys for corrosive environments. In process operations, the software will help to predict the remaining useful life of equipment based on limited input data. Thus, users will also be able to identify process changes, corrosion inhibition strategies, and other control options before costly shutdowns, energy waste, and environmental releases occur. With the Corrosion Analyzer, various corrosion mitigation measures can be realistically tested in a virtual laboratory.

  7. Nuclear fuel cycle system simulation tool based on high-fidelity component modeling

    SciTech Connect (OSTI)

    Ames, David E.

    2014-02-01

    The DOE is currently directing extensive research into developing fuel cycle technologies that will enable the safe, secure, economic, and sustainable expansion of nuclear energy. The task is formidable considering the numerous fuel cycle options, the large dynamic systems that each represents, and the necessity to accurately predict their behavior. The path to successfully develop and implement an advanced fuel cycle is highly dependent on the modeling capabilities and simulation tools available for performing useful, relevant analysis to assist stakeholders in decision making. Therefore, a high-fidelity fuel cycle simulation tool that performs system analysis, including uncertainty quantification and optimization, was developed. The resulting simulator also includes the capability to calculate environmental impact measures for individual components and the system. An integrated system method and analysis approach that provides consistent and comprehensive evaluations of advanced fuel cycles was developed. A general approach was utilized, allowing for the system to be modified in order to provide analysis for other systems with similar attributes. By utilizing this approach, the framework for simulating many different fuel cycle options is provided. Two example fuel cycle configurations were developed to take advantage of used fuel recycling and transmutation capabilities in waste management scenarios leading to minimized waste inventories.

  8. Natural Abundance 17O Nuclear Magnetic Resonance and Computational Modeling Studies of Lithium Based Liquid Electrolytes

    SciTech Connect (OSTI)

    Deng, Xuchu; Hu, Mary Y.; Wei, Xiaoliang; Wang, Wei; Chen, Zhong; Liu, Jun; Hu, Jian Z.

    2015-07-01

    Natural abundance 17O NMR measurements were conducted on electrolyte solutions consisting of Li[CF3SO2NSO2CF3] (LiTFSI) dissolved in the solvents of ethylene carbonate (EC), propylene carbonate (PC), ethyl methyl carbonate (EMC), and their mixtures at various concentrations. It was observed that 17O chemical shifts of solvent molecules change with the concentration of LiTFSI. The chemical shift displacements of carbonyl oxygen are evidently greater than those of ethereal oxygen, strongly indicating that Li+ ion is coordinated with carbonyl oxygen rather than ethereal oxygen. To understand the detailed molecular interaction, computational modeling of 17O chemical shifts was carried out on proposed solvation structures. By comparing the predicted chemical shifts with the experimental values, it is found that a Li+ ion is coordinated with four double bond oxygen atoms from EC, PC, EMC and TFSI- anion. In the case of excessive amount of solvents of EC, PC and EMC the Li+ coordinated solvent molecules are undergoing quick exchange with bulk solvent molecules, resulting in average 17O chemical shifts. Several kinds of solvation structures are identified, where the proportion of each structure in the liquid electrolytes investigated depends on the concentration of LiTFSI.

  9. Model-based planning for laser cutting operations under unsteady-state conditions

    SciTech Connect (OSTI)

    Di Pietro, P.; Yao, Y.L.

    1996-12-31

    Boundary encroachment or cutting right up to pre-cut sections are examples of unsteady-state operations of the laser cutting process. Cornering and generating small diameter holes also fall into this category. Heat transfer is often frustrated here, resulting in bulk heating of the workpiece. This in turn leads to a degradation of the cut quality. Currently, trial-and-error based experimentation is needed in order to assure quality in these regions. Thus model-based process planning has the benefit of reducing this step whilst leading to an optimal solution. Numerical investigation of the laser-workpiece interaction zone quantifies significant effects of such transiency on cutting front mobility and beam coupling behavior. Non-linear power adaptation profiles are generated via the optimization strategy in order to stabilize cutting front temperatures. Experimental results demonstrate such process planning can produce quality improvements.

  10. DualTrust: A Trust Management Model for Swarm-Based Autonomic Computing Systems

    SciTech Connect (OSTI)

    Maiden, Wendy M.

    2010-05-01

    Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm -- their lightweight, ephemeral nature and indirect communication -- make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics for trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.

  11. Integrated Agent-Based and Production Cost Modeling Framework for Renewable Energy Studies: Preprint

    SciTech Connect (OSTI)

    Gallo, Giulia

    2015-10-07

    The agent-based framework for renewable energy studies (ARES) is an integrated approach that adds an agent-based model of industry actors to PLEXOS and combines the strengths of the two to overcome their individual shortcomings. It can examine existing and novel wholesale electricity markets under high penetrations of renewables. ARES is demonstrated by studying how increasing levels of wind will impact the operations and the exercise of market power of generation companies that exploit an economic withholding strategy. The analysis is carried out on a test system that represents the Electric Reliability Council of Texas energy-only market in the year 2020. The results more realistically reproduce the operations of an energy market under different and increasing penetrations of wind, and ARES can be extended to address pressing issues in current and future wholesale electricity markets.

  12. Studies of Ocean Predictability at Decade to Century Time Scales Using a Global Ocean General Circulation Model in a Parallel Computing Environment

    SciTech Connect (OSTI)

    Barnett, T.P.

    1998-11-30

    The objectives of this report are to determine the structure of oceanic natural variability at time scales of decades to centuries; characterize the physical mechanisms responsible for the variability; determine the relative importance of heat, fresh water, and momentum fluxes on the variability; and determine the predictability of the variability on these time scales. (B204)

  13. Oblique incidence effects in direct x-ray detectors: A first-order approximation using a physics-based analytical model

    SciTech Connect (OSTI)

    Badano, Aldo; Freed, Melanie; Fang, Yuan

    2011-04-15

    Purpose: The authors describe the modifications to a previously developed analytical model of indirect CsI:Tl-based detector response required for studying oblique x-ray incidence effects in direct semiconductor-based detectors. This first-order approximation analysis allows the authors to describe the associated degradation in resolution in direct detectors and compare the predictions to the published data for indirect detectors. Methods: The proposed model is based on a physics-based analytical description developed by Freed et al. [''A fast, angle-dependent, analytical model of CsI detector response for optimization of 3D x-ray breast imaging systems,'' Med. Phys. 37(6), 2593-2605 (2010)] that describes detector response functions for indirect detectors and oblique incident x rays. The model, modified in this work to address direct detector response, describes the dependence of the response with x-ray energy, thickness of the transducer layer, and the depth-dependent blur and collection efficiency. Results: The authors report the detector response functions for indirect and direct detector models for typical thicknesses utilized in clinical systems for full-field digital mammography (150 {mu}m for indirect CsI:Tl and 200 {mu}m for a-Se direct detectors). The results suggest that the oblique incidence effect in a semiconductor detector differs from that in indirect detectors in two ways: The direct detector model produces a sharper overall PRF compared to the response corresponding to the indirect detector model for normal x-ray incidence and a larger relative increase in blur along the x-ray incidence direction compared to that found in indirect detectors with respect to the response at normal incidence angles. Conclusions: Compared to the effect seen in indirect detectors, the direct detector model exhibits a sharper response at normal x-ray incidence and a larger relative increase in blur along the x-ray incidence direction with respect to the blur in the orthogonal direction. The results suggest that the oblique incidence effect in direct detectors can be considered to be caused mostly by the geometry of the path where the x-ray beam and its secondary particles deposit energy in the semiconductor layer.

  14. Modeled tephra ages from lake sediments, base of Redoubt Volcano, Alaska

    SciTech Connect (OSTI)

    Schiff, C J; Kaufman, D S; Wallace, K L; Werner, A; Ku, T L; Brown, T A

    2007-02-25

    A 5.6-m-long lake sediment core from Bear Lake, Alaska, located 22 km southeast of Redoubt Volcano, contains 67 tephra layers deposited over the last 8750 cal yr, comprising 15% of the total thickness of recovered sediment. Using 12 AMS {sup 14}C ages, along with the {sup 137}Cs and {sup 210}Pb activities of recent sediment, we evaluated different models to determine the age-depth relation of sediment, and to determine the age of each tephra deposit. The age model is based on a cubic smooth spline function that was passed through the adjusted tephra-free depth of each dated layer. The estimated age uncertainty of the 67 tephras averages {+-} 105 yr (1{sigma}). Tephra-fall frequency at Bear Lake was among the highest during the past 500 yr, with eight tephras deposited compared to an average of 3.7 per 500 yr over the last 8500 yr. Other periods of increased tephra fall occurred 2500-3500, 4500-5000, and 7000-7500 cal yr. Our record suggests that Bear Lake experienced extended periods (1000-2000 yr) of increased tephra fall separated by shorter periods (500-1000 yr) of apparent quiescence. The Bear Lake sediment core affords the most comprehensive tephrochronology from the base of the Redoubt Volcano to date, with an average tephra-fall frequency of once every 130 yr.
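
    The age-depth construction can be illustrated with a smoothing spline fit of calibrated ages against tephra-free depth; the sketch below uses scipy's UnivariateSpline and made-up depth/age pairs, not the Bear Lake data or the exact spline used in the study.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      # Hypothetical calibrated ages (cal yr BP) at tephra-free depths (cm);
      # the values are illustrative only.
      tephra_free_depth = np.array([10, 80, 160, 250, 330, 410, 480, 540])
      cal_age = np.array([150, 1200, 2600, 3900, 5200, 6500, 7600, 8700])

      # Cubic smoothing spline age-depth model (s controls smoothing; larger = smoother)
      age_model = UnivariateSpline(tephra_free_depth, cal_age, k=3, s=len(cal_age))

      # Age of a tephra layer whose adjusted (tephra-free) depth is 300 cm
      print(float(age_model(300.0)))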

  15. Nonlinear process model based control of a propylene sidestream draw column

    SciTech Connect (OSTI)

    Riggs, J.B.

    1990-11-01

    While sidestream draw columns offer the incentives of reduced capital and operating expenses, they also pose more challenging control problems than ordinary distillation columns. This paper describes the application of nonlinear process model based control (PMBC) for composition control of all product streams for a simulation of a distillation column with a liquid sidestream draw. A tray-to-tray simulator of an industrial propylene/propane column that considers 5-min composition analyzer dead time was used to test the nonlinear PMBC controller for setpoint changes, a feed flow rate change, and feed composition changes. The nonlinear PMBC controller used an approximate model based upon the Smoker equation directly to make control decisions. The nonlinear PMBC controller exhibits excellent control performance for all test cases with a maximum relative deviation of the impurity from setpoint of about 10% for the two product streams. The nonlinear PMBC controller provides significantly improved control performance over a conventional single loop control scheme that is currently in industrial use.

  16. Residential-energy-demand modeling and the NIECS data base: an evaluation

    SciTech Connect (OSTI)

    Cowing, T.G.; Dubin, J.A.; McFadden, D.

    1982-01-01

    The purpose of this report is to evaluate the 1978-1979 National Interim Energy Consumption Survey (NIECS) data base in terms of its usefulness for estimating residential energy demand models based on household appliance choice and utilization decisions. The NIECS contains detailed energy usage information at the household level for 4081 households during the April 1978 to March 1979 period. Among the data included are information on the structural and thermal characteristics of the housing unit, demographic characteristics of the household, fuel usage, appliance characteristics, and actual energy consumption. The survey covers the four primary residential fuels-electricity, natural gas, fuel oil, and liquefied petroleum gas - and includes detailed information on recent household conservation and retrofit activities. Section II contains brief descriptions of the major components of the NIECS data set. Discussions are included on the sample frame and the imputation procedures used in NIECS. There are also two extensive tables, giving detailed statistical and other information on most of the non-vehicle NIECS variables. Section III contains an assessment of the NIECS data, focusing on four areas: measurement error, sample design, imputation problems, and additional data needed to estimate appliance choice/use models. Section IV summarizes and concludes the report.

  17. Total dissolved gas prediction and optimization in RiverWare

    SciTech Connect (OSTI)

    Stewart, Kevin M.; Witt, Adam M.; Hadjerioua, Boualem

    2015-09-01

    Management and operation of dams within the Columbia River Basin (CRB) provides the region with irrigation, hydropower production, flood control, navigation, and fish passage. These various system-wide demands can require unique dam operations that may result in both voluntary and involuntary spill, thereby increasing tailrace levels of total dissolved gas (TDG), which can be fatal to fish. Appropriately managing TDG levels within the context of the systematic demands requires a predictive framework robust enough to capture the operationally related effects on TDG levels. Development of the TDG predictive methodology herein attempts to capture the different modes of hydro operation, thereby making it a viable tool to be used in conjunction with a real-time scheduling model such as RiverWare. The end result of the effort will allow hydro operators to minimize system-wide TDG while meeting hydropower operational targets and constraints. The physical parameters such as spill and hydropower flow proportions, accompanied by the characteristics of the dam such as plant head levels and tailrace depths, are used to develop the empirically-based prediction model. In the broader study, two different models were developed: a simplified model and a comprehensive model. The latter model incorporates more specific bubble physics parameters for the prediction of tailrace TDG levels. The former model is presented herein and utilizes an empirically based approach to predict downstream TDG levels based on local saturation depth, spillway and powerhouse flow proportions, and entrainment effects. Representative data collected from each of the hydro projects is used to calibrate and validate model performance and the accuracy of predicted TDG uptake. ORNL, in conjunction with IIHR - Hydroscience & Engineering, The University of Iowa, carried out model adjustments to adequately capture TDG levels with respect to each plant while maintaining a generalized model configuration. Validation results indicate excellent model performance, with coefficient of determination values exceeding 92% for all sites. This approach enables model extension to an increasingly wider array of hydropower plants, i.e., with the proper data input, TDG uptake can be calculated independent of actual physical component design. The TDG model is used as a module in the systematic optimization framework of RiverWare, a river and reservoir modeling tool used by federal agencies, public utility districts, and other dam owners and operators to forecast, schedule, and manage hydropower assets. The integration and testing of the TDG module within RiverWare, led by the University of Colorado's Center for Advanced Decision Support for Water and Environmental Systems (CADSWES), will allow users to generate optimum system schedules based on the minimization of TDG. Optimization analysis and added value will be quantified as system-wide reductions in TDG achieved while meeting existing hydropower constraints. Future work includes the development of a method to predict downstream reservoir forebay TDG levels as a function of upstream reservoir tailrace TDG values based on river hydrodynamics, hydro operations, and reservoir characteristics. Once implemented, a holistic model that predicts both TDG uptake and transport will give hydropower operators valuable insight into how system-wide environmental effects can be mitigated while simultaneously balancing stakeholder interests.
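
    An empirical tailrace TDG model of the general type described can be sketched as a least-squares regression of observed TDG saturation on operational predictors; the predictor set, names, and data below are placeholders, not the model developed by ORNL and IIHR.

      import numpy as np

      def fit_tdg_model(spill_fraction, tailwater_depth_m, observed_tdg_pct):
          """Fit TDG (%) ~ b0 + b1*spill_fraction + b2*tailwater_depth by least squares.

          Inputs are 1-D arrays of equal length built from plant observations.
          Returns the coefficient vector (b0, b1, b2).
          """
          X = np.column_stack([np.ones_like(spill_fraction),
                               spill_fraction,
                               tailwater_depth_m])
          coeffs, *_ = np.linalg.lstsq(X, observed_tdg_pct, rcond=None)
          return coeffs

      def predict_tdg(coeffs, spill_fraction, tailwater_depth_m):
          # Predicted tailrace TDG saturation for a proposed operating point
          return coeffs[0] + coeffs[1] * spill_fraction + coeffs[2] * tailwater_depth_m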

  19. SU-E-J-73: Generation of Volumetric Images with a Respiratory Motion Model Based On An External Surrogate Signal

    SciTech Connect (OSTI)

    Hurwitz, M; Williams, C; Mishra, P; Dhou, S; Lewis, J

    2014-06-01

    Purpose: Respiratory motion during radiotherapy treatment can differ significantly from motion observed during imaging for treatment planning. Our goal is to use an initial 4DCT scan and the trace of an external surrogate marker to generate 3D images of patient anatomy during treatment. Methods: Deformable image registration is performed on images from an initial 4DCT scan. The deformation vectors are used to develop a patient-specific linear relationship between the motion of each voxel and the trajectory of an external surrogate signal. Correlations in motion are taken into account with principal component analysis, reducing the number of free parameters. This model is tested with digital phantoms reproducing the breathing patterns of ten measured patient tumor trajectories, using five seconds of data to develop the model and the subsequent thirty seconds to test its predictions. The model is also tested with a breathing physical anthropomorphic phantom programmed to reproduce a patient breathing pattern. Results: The error (mean absolute, 95th percentile) over 30 seconds in the predicted tumor centroid position ranged from (0.8, 1.3) mm to (2.2, 4.3) mm for the ten patient breathing patterns. The model reproduced changes in both phase and amplitude of the breathing pattern. Agreement between prediction and truth over the entire image was confirmed by assessing the global voxel intensity RMS error. In the physical phantom, the error in the tumor centroid position was less than 1 mm for all images. Conclusion: We are able to reconstruct 3D images of patient anatomy with a model correlating internal respiratory motion with motion of an external surrogate marker, reproducing the expected tumor centroid position with an average accuracy of 1.4 mm. The images generated by this model could be used to improve dose calculations for treatment planning and delivered dose estimates. This work was partially funded by a research grant from Varian Medical Systems.
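
    The patient-specific linear model can be sketched as a regression from the surrogate signal onto the leading principal components of the deformation fields obtained from the 4DCT registration; array shapes and names below are illustrative assumptions, not details from the abstract.

      import numpy as np

      def fit_motion_model(deformations, surrogate, n_components=2):
          """Fit a linear surrogate-to-deformation model.

          deformations : (n_phases, n_dofs) deformation vectors from 4DCT registration
          surrogate    : (n_phases, n_features) surrogate signal (e.g., position, velocity)
          Returns (mean_deformation, principal_components, regression_weights).
          """
          mean = deformations.mean(axis=0)
          X = deformations - mean
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          comps = Vt[:n_components]                  # principal motion components
          scores = X @ comps.T                       # PCA coefficients per breathing phase
          # Least-squares map from surrogate (with intercept) to PCA coefficients
          A = np.column_stack([surrogate, np.ones(len(surrogate))])
          W, *_ = np.linalg.lstsq(A, scores, rcond=None)
          return mean, comps, W

      def predict_deformation(mean, comps, W, surrogate_sample):
          # Deformation field predicted from a new surrogate measurement
          a = np.append(surrogate_sample, 1.0)
          return mean + (a @ W) @ comps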

  20. Predictions of pure liquid shock Hugoniots

    SciTech Connect (OSTI)

    Hobbs, M.L.; Baer, M.R.

    1998-06-01

    Determination of product species and associated equations-of-state (EOS) for energetic materials such as pyrotechnics with complex elemental compositions remains a major unsolved problem. Although empirical EOS models may be calibrated to replicate detonation conditions within experimental variability (5-10%), different states, e.g., expansion, may produce significant discrepancies with data if the basic form of the EOS model is incorrect. A more physically realistic EOS model based on intermolecular potentials, such as the Jacobs Cowperthwaite Zwisler (JCZ3) EOS, is needed to predict detonation states as well as expanded states. Predictive capability for any EOS requires a large species data base composed of a wide variety of elements. Unfortunately, only 20 species have known exponential 6 (EXP 6) molecular force constants which are used in the JCZ3-EOS. Of these 20 species, only 10 have been adequately compared to experimental data such as molecular scattering or shock Hugoniot data. Since data in the strongly repulsive region of the molecular potential are limited, alternative methods must be found to deduce force constants for a larger number of species. The objective of the present study is to determine JCZ3 product species force constants using corresponding states theory. Intermolecular potential parameters were obtained for a variety of gas species using a simple corresponding states technique with critical volume and critical temperature. A more complex, four-parameter corresponding states method with shape and polarity corrections was also used to obtain intermolecular potential parameters. Both corresponding states methods were used to predict shock Hugoniot data obtained from pure liquids. The simple corresponding states method is shown to give adequate agreement with shock Hugoniot data.
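
    The simple corresponding-states step can be illustrated with a commonly used two-parameter mapping from critical constants to pair-potential parameters; the constants below follow Chung-type correlations and are an assumption for illustration, not the calibration used in the report.

      def pair_potential_from_critical(Tc_K, Vc_cm3_per_mol):
          """Estimate pair-potential parameters from critical constants using a
          simple two-parameter corresponding-states mapping (Chung-type
          correlation constants; treat as approximate).

          Returns (epsilon_over_k in K, sigma in Angstrom).
          """
          eps_over_k = Tc_K / 1.2593
          sigma = 0.809 * Vc_cm3_per_mol ** (1.0 / 3.0)
          return eps_over_k, sigma

      # Example: nitrogen (Tc ~ 126.2 K, Vc ~ 89.2 cm3/mol)
      print(pair_potential_from_critical(126.2, 89.2))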

  1. MicroCT-Based Skeletal Models for Use in Tomographic Voxel Phantoms for Radiological Protection

    SciTech Connect (OSTI)

    Wesley Bolch

    2010-03-30

    The University of Florida (UF) proposes to develop two high-resolution image-based skeletal dosimetry models for direct use by ICRP Committee 2’s Task Group on Dose Calculation in their forthcoming Reference Voxel Male (RVM) and Reference Voxel Female (RVF) whole-body dosimetry phantoms. These two phantoms are CT-based, and thus do not have the image resolution to delineate and perform radiation transport modeling of the individual marrow cavities and bone trabeculae throughout their skeletal structures. Furthermore, new and innovative 3D microimaging techniques will now be required for the skeletal tissues following Committee 2’s revision of the target tissues of relevance for radiogenic bone cancer induction. This target tissue had been defined in ICRP Publication 30 as a 10-”m cell layer on all bone surfaces of trabecular and cortical bone. The revised target tissue is now a 50-”m layer within the marrow cavities of trabecular bone only and is exclusive of the marrow adipocytes. Clearly, this new definition requires the use of 3D microimages of the trabecular architecture not available from past 2D optical studies of the adult skeleton. With our recent acquisition of two relatively young cadavers (males aged 18 and 40 years), we will develop a series of reference skeletal models that can be directly applied to (1) the new ICRP reference voxel male and female phantoms developed for the ICRP, and (2) pediatric phantoms developed to target the ICRP reference children. Dosimetry data to be developed will include absorbed fractions for internal beta and alpha-particle sources, as well as photon and neutron fluence-to-dose response functions for direct use in external dosimetry studies of the ICRP reference workers and members of the general public.

  2. MODEL BASED BIOMASS SYSTEM DESIGN OF FEEDSTOCK SUPPLY SYSTEMS FOR BIOENERGY PRODUCTION

    SciTech Connect (OSTI)

    David J. Muth, Jr.; Jacob J. Jacobson; Kenneth M. Bryden

    2013-08-01

    Engineering feedstock supply systems that deliver affordable, high-quality biomass remains a challenge for the emerging bioenergy industry. Cellulosic biomass is geographically distributed and has diverse physical and chemical properties. Because of this, feedstock supply systems that deliver cellulosic biomass resources to biorefineries require integration of a broad set of engineered unit operations. These unit operations include harvest and collection, storage, preprocessing, and transportation processes. Design decisions for each feedstock supply system unit operation impact the engineering design and performance of the other system elements. These interdependencies are further complicated by spatial and temporal variances such as climate conditions and biomass characteristics. This paper develops an integrated model that couples a SQL-based data management engine and systems dynamics models to design and evaluate biomass feedstock supply systems. The integrated model, called the Biomass Logistics Model (BLM), includes a suite of databases that provide 1) engineering performance data for hundreds of equipment systems, 2) spatially explicit labor cost datasets, and 3) local tax and regulation data. The BLM analytic engine is built in the systems dynamics software package Powersim™. The BLM is designed to work with thermochemical- and biochemical-based biofuel conversion platforms and accommodates a range of cellulosic biomass types (i.e., herbaceous residues, short-rotation woody and herbaceous energy crops, woody residues, algae, etc.). The BLM simulates the flow of biomass through the entire supply chain, tracking changes in feedstock characteristics (i.e., moisture content, dry matter, ash content, and dry bulk density) as influenced by the various operations in the supply chain. By accounting for all of the equipment that comes into contact with biomass from the point of harvest to the throat of the conversion facility and the change in characteristics, the BLM evaluates economic performance of the engineered system, as well as determining energy consumption and greenhouse gas performance of the design. This paper presents a BLM case study delivering corn stover to produce cellulosic ethanol. The case study utilizes the BLM to model the performance of several feedstock supply system designs. The case study also explores the impact of temporal variations in climate conditions to test the sensitivity of the engineering designs. Results from the case study show that under certain conditions corn stover can be delivered to the cellulosic ethanol biorefinery for $35/dry ton.

  3. Strategic Plan for Nuclear Energy -- Knowledge Base for Advanced Modeling and Simulation (NE-KAMS)

    SciTech Connect (OSTI)

    Rich Johnson; Kimberlyn C. Mousseau; Hyung Lee

    2011-09-01

    NE-KAMS knowledge base will assist computational analysts, physics model developers, experimentalists, nuclear reactor designers, and federal regulators by: (1) Establishing accepted standards, requirements and best practices for V&V and UQ of computational models and simulations, (2) Establishing accepted standards and procedures for qualifying and classifying experimental and numerical benchmark data, (3) Providing readily accessible databases for nuclear energy related experimental and numerical benchmark data that can be used in V&V assessments and computational methods development, (4) Providing a searchable knowledge base of information, documents and data on V&V and UQ, and (5) Providing web-enabled applications, tools and utilities for V&V and UQ activities, data assessment and processing, and information and data searches. From its inception, NE-KAMS will directly support nuclear energy research, development and demonstration programs within the U.S. Department of Energy (DOE), including the Consortium for Advanced Simulation of Light Water Reactors (CASL), the Nuclear Energy Advanced Modeling and Simulation (NEAMS), the Light Water Reactor Sustainability (LWRS), the Small Modular Reactors (SMR), and the Next Generation Nuclear Power Plant (NGNP) programs. These programs all involve computational modeling and simulation (M&S) of nuclear reactor systems, components and processes, and it is envisioned that NE-KAMS will help to coordinate and facilitate collaboration and sharing of resources and expertise for V&V and UQ across these programs. In addition, from the outset, NE-KAMS will support the use of computational M&S in the nuclear industry by developing guidelines and recommended practices aimed at quantifying the uncertainty and assessing the applicability of existing analysis models and methods. The NE-KAMS effort will initially focus on supporting the use of computational fluid dynamics (CFD) and thermal hydraulics (T/H) analysis for M&S of nuclear reactor systems, components and processes, and will later expand to include materials, fuel system performance and other areas of M&S as time and funding allow.

  4. Moment-Based Probability Modeling and Extreme Response Estimation, The FITS Routine Version 1.2

    SciTech Connect (OSTI)

    MANUEL,LANCE; KASHEF,TINA; WINTERSTEIN,STEVEN R.

    1999-11-01

    This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, {chi}{sub low}, above which a shifted distribution model--exponential or Weibull--is fit.
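
    The threshold-and-fit step can be sketched as fitting a shifted Weibull to exceedances of the user-supplied threshold and converting the fit to an annual-maximum distribution through the observed exceedance rate; the sketch below assumes independent exceedances and is not the FITS implementation.

      import numpy as np
      from scipy.stats import weibull_min

      def annual_max_cdf(data, duration_years, x_low, x_eval):
          """Fit a Weibull to exceedances of x_low and estimate P(annual max <= x).

          data           : observed response values
          duration_years : record duration T in years
          x_low          : lower-bound threshold above which the tail is fit
          x_eval         : value(s) at which to evaluate the annual-maximum CDF
          """
          exceed = data[data > x_low]
          rate = len(exceed) / duration_years                   # exceedances per year
          c, loc, scale = weibull_min.fit(exceed, floc=x_low)   # shifted Weibull fit
          tail = weibull_min.sf(x_eval, c, loc=loc, scale=scale)
          # Poisson-exceedance approximation for the distribution of the annual maximum
          return np.exp(-rate * tail)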

  5. A Sensitivity Model (SM) approach to analyze urban development in Taiwan based on sustainability indicators

    SciTech Connect (OSTI)

    Huang, Shuli; Yeh, Chiatsung; Budd, William W.; Chen, Liling

    2009-02-15

    Sustainability indicators have been widely developed to monitor and assess sustainable development. They are expected to guide political decision-making based on their capability to represent states and trends of development. However, using indicators to assess the sustainability of urban strategies and policies has limitations - as they neither reflect the systemic interactions among them, nor provide normative indications in what direction they should be developed. This paper uses a semi-quantitative systematic model tool (Sensitivity Model Tools, SM) to analyze the role of urban development in Taiwan's sustainability. The results indicate that the natural environment in urban area is one of the most critical components and the urban economic production plays a highly active role in affecting Taiwan's sustainable development. The semi-quantitative simulation model integrates sustainability indicators and urban development policy to provide decision-makers with information about the impacts of their decisions on urban development. The system approach incorporated by this paper can be seen as a necessary, but not sufficient, condition for a sustainability assessment. The participatory process of expert participants for providing judgments on the relations between indicator variables is also discussed.

  6. Modeling the Behaviour of an Advanced Material Based Smart Landing Gear System for Aerospace Vehicles

    SciTech Connect (OSTI)

    Varughese, Byji; Dayananda, G. N.; Rao, M. Subba

    2008-07-29

    The last two decades have seen a substantial rise in the use of advanced materials such as polymer composites for aerospace structural applications. In more recent years there has been a concerted effort to integrate materials, which mimic biological functions (referred to as smart materials) with polymeric composites. Prominent among smart materials are shape memory alloys, which possess both actuating and sensory functions that can be realized simultaneously. The proper characterization and modeling of advanced and smart materials holds the key to the design and development of efficient smart devices/systems. This paper focuses on the material characterization; modeling and validation of the model in relation to the development of a Shape Memory Alloy (SMA) based smart landing gear (with high energy dissipation features) for a semi rigid radio controlled airship (RC-blimp). The Super Elastic (SE) SMA element is configured in such a way that it is forced into a tensile mode of high elastic deformation. The smart landing gear comprises of a landing beam, an arch and a super elastic Nickel-Titanium (Ni-Ti) SMA element. The landing gear is primarily made of polymer carbon composites, which possess high specific stiffness and high specific strength compared to conventional materials, and are therefore ideally suited for the design and development of an efficient skid landing gear system with good energy dissipation characteristics. The development of the smart landing gear in relation to a conventional metal landing gear design is also dealt with.

  7. Prediction of microalgae hydrothermal liquefaction products from feedstock biochemical composition

    SciTech Connect (OSTI)

    Leow, Shijie; Witter, John R.; Vardon, Derek R.; Sharma, Brajendra K.; Guest, Jeremy S.; Strathmann, Timothy J.

    2015-05-11

    Hydrothermal liquefaction (HTL) uses water under elevated temperatures and pressures (200–350 °C, 5–20 MPa) to convert biomass into liquid “biocrude” oil. Despite extensive reports on factors influencing microalgae cell composition during cultivation and separate reports on HTL products linked to cell composition, the field still lacks a quantitative model to predict HTL conversion product yield and qualities from feedstock biochemical composition; the tailoring of microalgae feedstock for downstream conversion is a unique and critical aspect of microalgae biofuels that must be leveraged upon for optimization of the whole process. This study developed predictive relationships for HTL biocrude yield and other conversion product characteristics based on HTL of Nannochloropsis oculata batches harvested with a wide range of compositions (23–59% dw lipids, 58–17% dw proteins, 12–22% dw carbohydrates) and a defatted batch (0% dw lipids, 75% dw proteins, 19% dw carbohydrates). HTL biocrude yield (33–68% dw) and carbon distribution (49–83%) increased in proportion to the fatty acid (FA) content. A component additivity model (predicting biocrude yield from lipid, protein, and carbohydrates) was more accurate predicting literature yields for diverse microalgae species than previous additivity models derived from model compounds. FA profiling of the biocrude product showed strong links to the initial feedstock FA profile of the lipid component, demonstrating that HTL acts as a water-based extraction process for FAs; the remainder non-FA structural components could be represented using the defatted batch. These findings were used to introduce a new FA-based model that predicts biocrude oil yields along with other critical parameters, and is capable of adjusting for the wide variations in HTL methodology and microalgae species through the defatted batch. Lastly, the FA model was linked to an upstream cultivation model (Phototrophic Process Model), providing for the first time an integrated modeling framework to overcome a critical barrier to microalgae-derived HTL biofuels and enable predictive analysis of the overall microalgal-to-biofuel process.
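
    The component additivity idea can be illustrated as a weighted sum of the biochemical fractions; the weights in the sketch below are hypothetical placeholders for illustration, not the coefficients fit in this study.

      def additivity_biocrude_yield(lipid, protein, carbohydrate,
                                    w_lipid=0.95, w_protein=0.40, w_carb=0.15):
          """Component additivity sketch: predicted biocrude yield (fraction of dry
          weight) as a weighted sum of the feedstock's lipid, protein, and
          carbohydrate fractions. The weights here are hypothetical placeholders;
          the study fits its own coefficients (and ultimately a fatty-acid-based
          model) to HTL data."""
          return w_lipid * lipid + w_protein * protein + w_carb * carbohydrate

      # Example: a batch with 40% lipids, 45% proteins, 15% carbohydrates (dry weight)
      print(round(additivity_biocrude_yield(0.40, 0.45, 0.15), 3))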

  9. Electrochemical state and internal variables estimation using a reduced-order physics-based model of a lithium-ion cell and an extended Kalman filter

    SciTech Connect (OSTI)

    Stetzel, KD; Aldrich, LL; Trimboli, MS; Plett, GL

    2015-03-15

    This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction flux and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables. (C) 2014 Elsevier B.V. All rights reserved.
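
    The estimation loop can be illustrated with a generic discrete-time extended Kalman filter step; the state vector, model functions, and Jacobians below stand in for the reduced-order electrochemical cell model, which is not reproduced here.

      import numpy as np

      def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
          """One predict/update step of a discrete-time extended Kalman filter.

          x, P         : state estimate and covariance from the previous step
          u, z         : applied input (e.g., cell current) and measurement (e.g., voltage)
          f, h         : state-transition and measurement functions of the cell model
          F_jac, H_jac : functions returning the Jacobians of f and h
          Q, R         : process- and measurement-noise covariances
          """
          # Predict the state forward using the cell model
          x_pred = f(x, u)
          F = F_jac(x, u)
          P_pred = F @ P @ F.T + Q

          # Update with the measured cell voltage
          H = H_jac(x_pred, u)
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - h(x_pred, u))
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new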

  10. Initial Value Predictability of Intrinsic Oceanic Modes and Implications for Decadal Prediction over North America

    SciTech Connect (OSTI)

    Branstator, Grant

    2014-12-09

    The overall aim of our project was to quantify and characterize predictability of the climate as it pertains to decadal time scale predictions. By predictability we mean the degree to which a climate forecast can be distinguished from the climate that exists at initial forecast time, taking into consideration the growth of uncertainty that occurs as a result of the climate system being chaotic. In our project we were especially interested in predictability that arises from initializing forecasts from some specific state, though we also contrast this predictability with predictability arising from forecasting the reaction of the system to external forcing – for example changes in greenhouse gas concentration. Also, we put special emphasis on the predictability of prominent intrinsic patterns of the system because they often dominate system behavior. Highlights from this work include:
    ‱ Development of novel methods for estimating the predictability of climate forecast models.
    ‱ Quantification of the initial value predictability limits of ocean heat content and the overturning circulation in the Atlantic as they are represented in various state-of-the-art climate models. These limits varied substantially from model to model but on average were about a decade, with North Atlantic heat content tending to be more predictable than North Pacific heat content.
    ‱ Comparison of predictability resulting from knowledge of the current state of the climate system with predictability resulting from estimates of how the climate system will react to changes in greenhouse gas concentrations. It turned out that knowledge of the initial state produces a larger impact on forecasts for the first 5 to 10 years of projections.
    ‱ Estimation of the predictability of dominant patterns of ocean variability, including well-known patterns of variability in the North Pacific and North Atlantic. For the most part these patterns were predictable for 5 to 10 years.
    ‱ Determination of especially predictable patterns in the North Atlantic. The most predictable of these retain predictability substantially longer than generic patterns, with some being predictable for two decades.

  11. Microcomputer Spectrum Analysis Models (MSAM) with terrain data base (for microcomputers). Software

    SciTech Connect (OSTI)

    Not Available

    1992-08-01

    The package contains a collection of 14 radio frequency communications engineering and spectrum management programs plus a menu program. An associated terrain elevation data base with 30-second data is provided for the U.S. (less Alaska), Hawaii, Puerto Rico, the Caribbean and border areas of Canada and Mexico. The following programs are included: Bearing/Distance Program (BDIST); Satellite Azimuth Program (SATAZ); Intermodulation Program (INTMOD); NLAMBDA-90 smooth-earth propagation program (NL90); Frequency Dependent Rejection program (FDR); ANNEX I program to evaluate frequency proposals per NTIA Manual (ANNEXI); Antenna Field Intensity program (AFI); Personal Computer Plot 2-D graphics program (PCPLT); Profile 4/3 earth terrain elevation plot program (PROFILE); Horizon radio line-of-sight plot program (HORIZON); Single-Emitter Analysis Mode (SEAM); Terrain Integrated Rough-Earth Model (TIREM); Power Density Display Program to produce power contour map (PDDP); Line-of-Sight antenna coverage map program (SHADO).
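
    As an example of the kind of computation these utilities perform, the sketch below computes the great-circle bearing and distance between two points, roughly what a bearing/distance program such as BDIST provides; it is not the NTIA implementation.

    ```python
    # Illustrative great-circle bearing/distance calculation of the kind a
    # bearing/distance utility (e.g., BDIST) performs; not the NTIA implementation.
    import math

    def bearing_distance(lat1, lon1, lat2, lon2, radius_km=6371.0):
        """Return (initial bearing in degrees, great-circle distance in km)."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        # Haversine distance
        a = (math.sin((p2 - p1) / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
        dist = 2 * radius_km * math.asin(math.sqrt(a))
        # Initial bearing (forward azimuth)
        y = math.sin(dlon) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
        brg = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
        return brg, dist

    print(bearing_distance(38.9, -77.0, 40.7, -74.0))  # Washington, DC to New York
    ```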

  12. Collaborative Research: Towards Advanced Understanding and Predictive Capability of Climate Change in the Arctic using a High-Resolution Regional Arctic Climate System Model

    SciTech Connect (OSTI)

    Lettenmaier, Dennis P

    2013-04-08

    Primary activities are reported in these areas: climate system component studies via one-way coupling experiments; development of the Regional Arctic Climate System Model (RACM); and physical feedback studies focusing on changes in Arctic sea ice using the fully coupled model.

  13. Quantifying sources of black carbon in western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zhang, R.; Wang, H.; Hegg, D. A.; Qian, Y.; Doherty, S. J.; Dang, C.; Ma, P.-L.; Rasch, P. J.; Fu, Q.

    2015-11-18

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source–receptor relationships for atmospheric BC and its deposition to snow over western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over northwestern USA and western Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based positive matrix factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.
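
    To illustrate how an emission tagging technique yields source–receptor attribution, the toy sketch below sums tagged BC contributions at a receptor and reports fractional contributions by region and source type. The numbers are invented and not taken from the CAM5 simulation.

    ```python
    # Toy sketch of source-receptor attribution from emission tagging: each tag
    # carries BC from one (region, source type); the receptor burden is the sum
    # of tagged contributions. Numbers are made up for illustration.
    tagged_bc = {                        # simulated BC burden at a receptor [ug m-2]
        ("local",   "fossil fuel"): 42.0,
        ("local",   "biomass"):     11.0,
        ("distant", "fossil fuel"): 15.0,
        ("distant", "biomass"):     22.0,
    }

    total = sum(tagged_bc.values())
    for (region, source), burden in tagged_bc.items():
        print(f"{region:8s} {source:12s} {burden / total:6.1%}")

    # Fractional contribution of each source type, summed over regions
    by_type = {}
    for (_, source), burden in tagged_bc.items():
        by_type[source] = by_type.get(source, 0.0) + burden
    print({s: round(v / total, 3) for s, v in by_type.items()})
    ```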

  14. Quantifying sources of black carbon in Western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    SciTech Connect (OSTI)

    Zhang, Rudong; Wang, Hailong; Hegg, D. A.; Qian, Yun; Doherty, Sarah J.; Dang, Cheng; Ma, Po-Lun; Rasch, Philip J.; Fu, Qiang

    2015-11-18

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source-receptor relationships for atmospheric BC and its deposition to snow over Western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over the Northwest USA and West Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based Positive Matrix Factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.

  15. Quantifying sources of black carbon in Western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zhang, R.; Wang, H.; Hegg, D. A.; Qian, Y.; Doherty, S. J.; Dang, C.; Ma, P.-L.; Rasch, P. J.; Fu, Q.

    2015-05-04

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source-receptor relationships for atmospheric BC and its deposition to snow over Western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over the Northwest USA and West Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based Positive Matrix Factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.

  16. Laser-Arc Hybrid Welding of Thick Section Ni-base Alloys – Advanced Modeling and Experiments

    SciTech Connect (OSTI)

    Debroy, Tarasankar; Palmer, Todd; Zhang, Wei

    2015-05-21

    Hybrid laser-arc welding of nickel-base alloys can increase productivity and decrease costs during construction and repair of critical components in nuclear power plants. However, laser and hybrid welding of nickel-base alloys is not well understood. This project sought to understand the physical processes during hybrid welding necessary to fabricate quality joints in Alloy 690, a Ni-Cr-Fe alloy. This document presents a summary of the data and results collected over the course of the project. The supporting documents are a collection of the research that has been or will be published in peer-reviewed journals, along with a report from the partner at the national lab. Understanding the solidification behavior of Alloy 690 is important for knowing the final properties of the weldment. A study was undertaken to calculate the solidification parameters, such as temperature gradient, solidification rate, and cooling rate, in Alloy 690 welds. With this information and measured cell and dendrite arm spacings, an Alloy 690 solidification map was constructed to guide process parameter development and to interpret fusion zones in later hybrid welds. This research is contained in “Solidification Map of a Nickel Base Alloy.” The keyhole formed under high laser intensity gives the hybrid welding technique its greater penetration depth compared to arc welding. However, keyhole behavior can produce defects in the material, so knowing transient keyhole characteristics is important. With international collaborators, a study was undertaken to validate a new process monitoring tool known as inline coherent imaging (ICI), which is able to measure the keyhole depth with spatial and temporal resolutions on the order of 10 microns and 10 microseconds. ICI was validated for five alloy systems, including Alloy 690. Additionally, the keyhole growth rates at the start of welding were measured with unprecedented accuracy. This research is contained in “Real Time Monitoring of Laser Beam Welding Keyhole Depth by Laser Interferometry.” During full-penetration welding of thick sections, root defects can form, which result in unacceptable weld quality. A study was undertaken to determine the competing forces in root defect formation by independently changing the weight and surface tension forces: the weight force was altered by changing the plate thickness, and the surface tension force was altered by changing the surface condition at the bottom surface. Root defect formation was found to depend on both forces. This research is contained in “Mitigation of Root Defect in Laser and Hybrid Laser-Arc Welding.” Validation of the hybrid laser-arc model is necessary to properly model heat and mass transfer and fluid flow in Alloy 690 hybrid welds. Therefore, the developed model was validated for low-carbon steel. Temperatures calculated by the model were fed into a microstructural model to calculate phase fractions, and process maps were developed for the selection of welding parameters that avoid martensite formation. This research is contained in “Fusion Zone Microstructure in Full Penetration Laser-Arc Hybrid Welding of Low Alloy Steel.” Alloy 690 suffers from ductility dip cracking, a form of hot cracking, which inhibits the use of multipass welding to join Alloy 690. Our partners at ORNL performed hot ductility testing of Alloy 690 samples using digital image correlation. The results of this work are contained in the report “Summary of 690 ductility dip cracking testing using Gleeble and digital image correlation.” Macro-porosity is a limiting factor in the widespread deployment of laser and hybrid laser-arc welding for construction and repair of nuclear power plant components. Keyhole instability and fluctuation result in the formation of large bubbles, which become trapped at the advancing solid-liquid interface as pores. Laser and hybrid laser-arc welds were fabricated for a range of conditions. Porosity levels in the welds were measured using X-ray computed tomography (CT), which provides very detailed data on the size and lo
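
    As a worked illustration of the solidification parameters mentioned above, the sketch below combines the temperature gradient and solidification rate into a cooling rate and relates it to dendrite arm spacing through an assumed power law; the constants are placeholders, not the fitted Alloy 690 values.

    ```python
    # Hedged sketch relating solidification parameters to dendrite arm spacing.
    # Cooling rate is the product of temperature gradient G and solidification
    # rate R; arm spacing follows an assumed power law. A and n are placeholders.

    def cooling_rate(G_K_per_m, R_m_per_s):
        """Cooling rate [K/s] = temperature gradient [K/m] x solidification rate [m/s]."""
        return G_K_per_m * R_m_per_s

    def dendrite_arm_spacing(cool_rate_K_per_s, A=50e-6, n=0.33):
        """Secondary dendrite arm spacing [m] via lambda = A * (cooling rate)**(-n)."""
        return A * cool_rate_K_per_s ** (-n)

    cr = cooling_rate(G_K_per_m=5e5, R_m_per_s=5e-3)   # 2500 K/s, e.g., near the keyhole
    print(f"Cooling rate: {cr:.0f} K/s, arm spacing: {dendrite_arm_spacing(cr)*1e6:.1f} um")
    ```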

  17. Pairwise covariance adds little to secondary structure prediction but improves the prediction of non-canonical local structure

    SciTech Connect (OSTI)

    Bystroff, Christopher; Webb-Robertson, Bobbie-Jo M.

    2009-05-06

    Amino acid sequence probability distributions, or profiles, have been used successfully to predict secondary structure and local structure in proteins. Profile models assume the statistical independence of each position in the sequence, but the energetics of protein folding is better captured in a scoring function that is based on pairwise interactions, like a force field. I-sites motifs are short sequence/structure motifs that populate the protein structure database due to energy-driven convergent evolution. Here we show that a pairwise covariant sequence model does not predict alpha helix or beta strand significantly better overall than a profile-based model, but it does improve the prediction of certain loop motifs. The finding is best explained by considering secondary structure profiles as multivariant, all-or-none models, which subsume covariant models. Pairwise covariance is nonetheless present and energetically rational. Examples of negative design are present, where the covariances disfavor non-native structures. Measured pairwise covariances are shown to be statistically robust in cross-validation tests, as long as the amino acid alphabet is reduced to nine classes. We present an updated I-sites local structure motif library and web server that provide sequence covariance information for all types of local structure in globular proteins.
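
    To make the distinction concrete, the sketch below scores a toy three-residue motif first with a position-independent profile model and then with an added pairwise coupling term between two positions. The reduced alphabet, probabilities, and coupling values are invented for illustration and are not the I-sites parameters.

    ```python
    # Sketch contrasting a position-independent profile score with a pairwise
    # covariant score for a short sequence motif. All numbers are invented.
    import math

    # Profile model: independent per-position probabilities P_i(a)
    # over a toy reduced alphabet: H = hydrophobic, P = polar, N = other
    profile = [
        {"H": 0.6, "P": 0.3, "N": 0.1},
        {"H": 0.2, "P": 0.7, "N": 0.1},
        {"H": 0.5, "P": 0.4, "N": 0.1},
    ]

    # Pairwise model adds a coupling term for one position pair (0, 2)
    pair_02 = {("H", "H"): 0.40, ("H", "P"): 0.10, ("P", "H"): 0.10,
               ("P", "P"): 0.20, ("H", "N"): 0.05, ("N", "H"): 0.05,
               ("P", "N"): 0.03, ("N", "P"): 0.03, ("N", "N"): 0.04}

    def profile_score(seq):
        # Log-likelihood under the independence assumption
        return sum(math.log(profile[i][a]) for i, a in enumerate(seq))

    def pairwise_score(seq):
        # Profile term plus one covariance (coupling) term between positions 0 and 2
        coupling = pair_02[(seq[0], seq[2])] / (profile[0][seq[0]] * profile[2][seq[2]])
        return profile_score(seq) + math.log(coupling)

    for s in ("HPH", "PPH"):
        print(s, round(profile_score(s), 3), round(pairwise_score(s), 3))
    ```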

  18. Adjusting lidar-derived digital terrain models in coastal marshes based on estimated aboveground biomass density

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Medeiros, Stephen; Hagen, Scott; Weishampel, John; Angelo, James

    2015-03-25

    Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
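
    A minimal sketch of the class-based adjustment idea: subtract an assumed per-class offset (for example, the median lidar error within each biomass-density class) from the lidar elevations. The offsets and sample points below are invented, not the values derived for the Apalachicola marsh.

    ```python
    # Minimal sketch of class-based lidar DEM adjustment: subtract a per-class
    # offset from the lidar elevations. Offsets and data are invented.
    import numpy as np

    lidar_z = np.array([1.10, 0.95, 1.40, 0.70, 1.25])         # lidar DEM elevations [m]
    density = np.array(["high", "high", "low", "low", "high"])  # biomass-density class

    # Assumed per-class corrections, e.g., median (lidar - survey) error per class
    offset = {"high": 0.45, "low": 0.20}

    adjusted_z = lidar_z - np.array([offset[c] for c in density])
    print(adjusted_z)   # elevations pulled closer to bare earth
    ```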

  19. Adjusting lidar-derived digital terrain models in coastal marshes based on estimated aboveground biomass density

    SciTech Connect (OSTI)

    Medeiros, Stephen; Hagen, Scott; Weishampel, John; Angelo, James

    2015-03-25

    Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation