OSTI.GOV U.S. Department of Energy
Office of Scientific and Technical Information
  1. Anthropogenic climate change impacts exacerbate summer forest fires in California

    Record-breaking summer forest fires have become a regular occurrence in California. Observations indicate a fivefold increase in summer burned area (BA) in forests in northern and central California during 1996 to 2021 relative to 1971 to 1995. While higher temperatures and increased dryness have been suggested as the leading causes of increased BA, the extent to which BA changes are due to natural variability or anthropogenic climate change remains unresolved. Here, we develop a climate-driven model of summer BA evolution in California and combine it with natural-only and historical climate simulations to assess the importance of anthropogenic climate change for increased BA. Our results indicate that nearly all of the observed increase in BA is due to anthropogenic climate change, as historical model simulations accounting for anthropogenic forcing yield 172% (range 84 to 310%) more area burned than simulations with natural forcing only. We detect the signal of combined historical forcing on the observed BA emerging in 2001, with no detectable influence of natural forcing alone. In addition, even when considering fuel limitations from fire-fuel feedbacks, a 3 to 52% increase in BA relative to recent decades is expected over 2031 to 2050, highlighting the need for proactive adaptations.
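
    The abstract does not specify the functional form of the climate-driven BA model, so the sketch below assumes a simple log-linear (exponential) response of burned area to a fuel-aridity proxy such as vapor pressure deficit (VPD); the predictor, coefficients, and synthetic data are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of a climate-driven burned-area (BA) model, assuming a
# log-linear response of BA to a fuel-aridity proxy (here, vapor pressure
# deficit). The predictor, coefficients, and data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical summer-mean VPD anomalies (hPa) under two forcing scenarios.
years = np.arange(1971, 2022)
vpd_natural = rng.normal(0.0, 1.0, years.size)          # natural variability only
vpd_historical = vpd_natural + 0.04 * (years - 1971)    # added anthropogenic trend

def fit_log_linear(ba, vpd):
    """Fit log(BA) = a + b * VPD by least squares."""
    b, a = np.polyfit(vpd, np.log(ba), deg=1)
    return a, b

def predict_ba(a, b, vpd):
    return np.exp(a + b * vpd)

# Synthetic "observed" BA consistent with the assumed exponential response.
ba_obs = np.exp(0.5 + 0.6 * vpd_historical + rng.normal(0.0, 0.3, years.size))
a, b = fit_log_linear(ba_obs, vpd_historical)

# Attribution-style comparison: drive the same fitted model with historical
# versus natural-only climate and compare burned area over 1996-2021.
ba_hist = predict_ba(a, b, vpd_historical)
ba_nat = predict_ba(a, b, vpd_natural)
print("historical / natural BA ratio, 1996-2021:", ba_hist[25:].sum() / ba_nat[25:].sum())
```

    The comparison at the end mirrors the attribution idea of driving one fitted burned-area model with historical versus natural-only climate forcing.
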
  2. Super‐Droplet Method to Simulate Lagrangian Microphysics of Nuclear Fallout in a Homogeneous Cloud

    Nuclear detonations produce hazardous local and global particles, or fallout. Predicting fallout size, chemical composition, and location is necessary to inform officials and determine immediate guidance for the public. However, existing nuclear detonation fallout models prescribe particle size distributions based on limited observations. In this work, we apply the super-droplet method, a numerical modeling technique developed for cloud microphysics, to simulate size distributions of particles in a mushroom cloud formed after detonation of a nuclear device. We model fallout formation and evolution with homogeneous nucleation and condensation of a single species and a Monte Carlo coagulation algorithm. We verify the numerical methods representing coagulation and condensation against analytical test problems. Additionally, we explore several scenarios for the integral system mass and yield in equivalent kilotons (kt) of TNT (trinitrotoluene). The fallout size distribution median diameter $d_{pg}$ (in nm) follows a scaling law based on the integral system mass $m_{v0}$ in kg and yield $Y$ in kt. We test the effect of cloud turbulence, enhanced nucleation and growth, and vapor volatility with a sensitivity study. The range in median diameter predictions for simulations of historical tests performed over the Pacific encompasses the measurements of particles sampled from the cloud caps. Predicted median particle sizes range up to 217, 123, 86, and 35 nm for historical tests with yields of 0.2, 0.7, 2, and 10 Mt, respectively. This work can be expanded in many directions to build a more predictive model for fallout formation.
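
    As context for the super-droplet method mentioned above, the following is a minimal sketch of a single Monte Carlo coagulation step over randomly paired super-droplets with a constant collision kernel, in the spirit of standard super-droplet schemes; the kernel value, multiplicities, and volumes are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of one Monte Carlo coagulation step over randomly paired
# super-droplets with a constant collision kernel. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)

n_sd = 1024                                              # number of super-droplets
xi = rng.integers(10_000, 100_000, n_sd).astype(float)   # multiplicities (real particles per super-droplet)
vol = rng.lognormal(mean=-2.0, sigma=0.5, size=n_sd)     # particle volumes (arbitrary units)
kernel, dt, dV = 1.0e-9, 1.0, 1.0                        # constant kernel, time step, cell volume

def coagulation_step(xi, vol):
    n = xi.size
    pairs = rng.permutation(n)[: (n // 2) * 2].reshape(-1, 2)   # non-overlapping random pairs
    # Scale so the n//2 sampled pairs statistically represent all n(n-1)/2 pairs.
    scale = (n * (n - 1) / 2) / (n // 2)
    for j, k in pairs:
        p = scale * kernel * max(xi[j], xi[k]) * dt / dV
        if rng.random() < p:
            hi, lo = (j, k) if xi[j] > xi[k] else (k, j)
            xi[hi] -= xi[lo]          # larger-multiplicity droplet donates collision partners
            vol[lo] += vol[hi]        # smaller-multiplicity droplet's particles coalesce
            # (equal- and zero-multiplicity edge cases are ignored in this sketch)
    return xi, vol

total_volume_before = (xi * vol).sum()
xi, vol = coagulation_step(xi, vol)
print("particle volume conserved:", np.isclose((xi * vol).sum(), total_volume_before))
```
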
  3. Machine Learning Emulation of Spatial Deposition from a Multi-Physics Ensemble of Weather and Atmospheric Transport Models

    In the event of an accidental or intentional hazardous material release in the atmosphere, researchers often run physics-based atmospheric transport and dispersion models to predict the extent and variation of the contaminant spread. These predictions are imperfect due to propagated uncertainty from atmospheric model physics (or parameterizations) and weather data initial conditions. Ensembles of simulations can be used to estimate uncertainty, but running large ensembles is often very time consuming and resource intensive, even using large supercomputers. In this paper, we present a machine-learning-based method which can be used to quickly emulate spatial deposition patterns from a multi-physics ensemble of dispersion simulations. We use a hybrid linear and logistic regression method that can predict deposition in more than 100,000 grid cells with as few as fifty training examples. Logistic regression provides probabilistic predictions of the presence or absence of hazardous materials, while linear regression predicts the quantity of hazardous materials. The coefficients of the linear regressions also open avenues of exploration regarding interpretability: the presented model can be used to find which physics schemes are most important over different spatial areas. A single regression prediction is on the order of 10,000 times faster than running a weather and dispersion simulation. However, considering the number of weather and dispersion simulations needed to train the regressions, the speed-up achieved when considering the whole ensemble is about 24 times. Ultimately, this work will allow atmospheric researchers to produce potential contamination scenarios with uncertainty estimates faster than previously possible, aiding public servants and first responders.
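
    A minimal per-grid-cell sketch of the hybrid approach described above, assuming the physics-scheme choices are encoded as binary features; the grid size, feature encoding, and synthetic deposition data are assumptions for illustration.

```python
# Minimal per-grid-cell sketch of the hybrid emulator: logistic regression for
# presence/absence of deposition, linear regression for the (log) amount.
# Feature encoding and problem sizes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)
n_members, n_features, n_cells = 50, 8, 1000   # ensemble members, physics-option features, grid cells

X = rng.integers(0, 2, size=(n_members, n_features)).astype(float)   # binary physics choices
# Synthetic deposition fields: positive amounts in some cells, zeros elsewhere.
amounts = np.exp(X @ rng.normal(0.0, 0.5, (n_features, n_cells)))
present = rng.random((n_members, n_cells)) < 0.7
Y = np.where(present, amounts, 0.0)

classifiers, regressors = [], []
for c in range(n_cells):
    occ = Y[:, c] > 0
    clf = LogisticRegression(max_iter=1000).fit(X, occ) if 0 < occ.sum() < n_members else None
    reg = LinearRegression().fit(X[occ], np.log(Y[occ, c])) if occ.any() else None
    classifiers.append(clf)
    regressors.append(reg)

def emulate(x_new):
    """Expected deposition per cell for a new physics configuration."""
    x_new = np.asarray(x_new).reshape(1, -1)
    out = np.zeros(n_cells)
    for c, (clf, reg) in enumerate(zip(classifiers, regressors)):
        p = clf.predict_proba(x_new)[0, 1] if clf is not None else 1.0
        q = np.exp(reg.predict(x_new)[0]) if reg is not None else 0.0
        out[c] = p * q
    return out

print(emulate(X[0])[:5])
```

    Here the per-cell expected deposition is the product of the logistic presence probability and the exponentiated linear prediction of the log amount, which is one simple way to combine the two regressions.
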
  4. Improving Seasonal Forecast Using Probabilistic Deep Learning

    The path toward realizing the potential of seasonal forecasting and its socioeconomic benefits relies on improving general circulation model (GCM) based dynamical forecast systems. To improve dynamical seasonal forecasts, it is crucial to set up forecast benchmarks and clarify forecast limitations posed by model initialization errors, formulation deficiencies, and internal climate variability. With huge costs in generating large forecast ensembles, and limited observations for forecast verification, the seasonal forecast benchmarking and diagnosing task proves challenging. Here, we develop a probabilistic deep learning-based statistical forecast methodology, drawing on a wealth of climate simulations to enhance seasonal forecast capability and forecast diagnosis. By explicitly modeling the internal climate variability and GCM formulation differences, the proposed Conditional Generative Forecasting (CGF) methodology enables bypassing crucial barriers in dynamical forecasting, and offers a top-down viewpoint to examine how complicated GCMs encode the seasonal predictability information. We apply the CGF methodology for global seasonal forecasting of precipitation and 2 m air temperature, based on a unique data set consisting of 52,201 years of climate simulation. Results show that the CGF methodology can faithfully represent the seasonal predictability information encoded in GCMs. We successfully apply this learned relationship in real-world seasonal forecasting, achieving competitive performance compared to dynamical forecasts. Using the CGF as a benchmark, we reveal the impact of insufficient forecast spread sampling that limits the skill of the considered dynamical forecast system. Finally, we introduce different strategies for composing ensembles using the CGF methodology, highlighting the potential for leveraging the strengths of multiple GCMs to achieve advantageous seasonal forecasts.
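
    The following is a minimal sketch of a conditional generative forecaster: a small network maps predictors plus a noise vector to a forecast field and is trained with a sample-based energy-score loss, so that drawing many noise vectors yields a forecast ensemble. The architecture, loss choice, and data shapes are assumptions for illustration and not the paper's CGF implementation.

```python
# Minimal sketch of a conditional generative forecaster trained with a
# sample-based energy-score loss; sampling noise vectors yields forecast
# ensembles. Architecture, loss, and shapes are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_cond, n_noise, n_out = 16, 8, 64   # predictor dim, latent noise dim, forecast-field dim

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_cond + n_noise, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_out),
        )

    def forward(self, cond, z):
        return self.net(torch.cat([cond, z], dim=-1))

def energy_score(gen, cond, target):
    """Proper scoring rule estimated with two generated samples per condition."""
    z1, z2 = torch.randn(cond.size(0), n_noise), torch.randn(cond.size(0), n_noise)
    s1, s2 = gen(cond, z1), gen(cond, z2)
    return (s1 - target).norm(dim=-1).mean() - 0.5 * (s1 - s2).norm(dim=-1).mean()

gen = Generator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

# Synthetic pairs of (initial-condition predictors, verifying seasonal field).
cond = torch.randn(512, n_cond)
target = cond @ torch.randn(n_cond, n_out) + 0.5 * torch.randn(512, n_out)

for step in range(200):
    opt.zero_grad()
    loss = energy_score(gen, cond, target)
    loss.backward()
    opt.step()

# A 20-member forecast ensemble for one new initial condition.
with torch.no_grad():
    new_cond = torch.randn(1, n_cond).repeat(20, 1)
    ensemble = gen(new_cond, torch.randn(20, n_noise))
print("ensemble mean/std at first grid point:", ensemble[:, 0].mean().item(), ensemble[:, 0].std().item())
```
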
  5. Deep convolutional autoencoders as generic feature extractors in seismological applications

    The idea of using a deep autoencoder to encode seismic waveform features and then use them in different seismological applications is appealing. In this paper, we designed tests to evaluate this idea of using autoencoders as feature extractors for different seismological applications, such as event discrimination (i.e., earthquake vs. noise waveforms, earthquake vs. explosion waveforms) and phase picking. These tests involve training an autoencoder, either undercomplete or overcomplete, on a large number of earthquake waveforms, and then using the trained encoder as a feature extractor with subsequent application layers (either a fully connected layer, or a convolutional layer plus a fully connected layer) to make the decision. By comparing the performance of these newly designed models against baseline models trained from scratch, we conclude that the autoencoder feature extractor approach may only outperform the baseline under certain conditions, such as when the target problems require features that are similar to the autoencoder-encoded features, when a relatively small amount of training data is available, and when certain model structures and training strategies are utilized. The model structure that works best in all these tests is an overcomplete autoencoder with a convolutional layer and a fully connected layer to make the estimation.
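
    A minimal sketch of the general workflow described above: pretrain a 1-D convolutional autoencoder on waveforms, freeze the encoder, and attach a small head (a convolutional layer plus a fully connected layer) for discrimination. Layer sizes and the waveform length are assumptions, not the paper's architecture.

```python
# Minimal sketch: pretrain a 1-D convolutional autoencoder on waveforms, then
# reuse the frozen encoder with a small classification head. Layer sizes are
# illustrative only.
import torch
import torch.nn as nn

wave_len = 1024  # samples per single-channel waveform (assumed)

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, 7, stride=2, padding=3), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 8, stride=2, padding=3), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 8, stride=2, padding=3),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class EncoderClassifier(nn.Module):
    """Frozen pretrained encoder + convolutional layer + fully connected decision layer."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False        # keep the feature extractor fixed
        self.head = nn.Sequential(
            nn.Conv1d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * (wave_len // 4), 2),   # e.g., earthquake vs. explosion
        )

    def forward(self, x):
        return self.head(self.encoder(x))

# Pretraining (reconstruction loss) and head training follow the usual loops;
# a single forward pass to check shapes:
ae = ConvAutoencoder()
x = torch.randn(4, 1, wave_len)
assert ae(x).shape == x.shape
clf = EncoderClassifier(ae.encoder)
print(clf(x).shape)   # torch.Size([4, 2])
```
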
  6. The influence of cooling rate on condensation of iron, aluminum, and uranium oxide nanoparticles

    Fundamental observations of particle size distributions are needed to develop models that predict the fate and transport of radioactive materials in the atmosphere following a nuclear incident. The extent of material transport is influenced by the time scales of particle formation processes (e.g., condensation, coagulation). In this study, we investigated the influence of cooling time scales on size distributions of uranium, aluminum, and iron oxide particles that are synthesized separately under identical run conditions inside the controlled environment of an argon plasma flow reactor. Two distinct temperature distributions are imposed along the flow reactor by varying the argon flow rate downstream of the plasma torch. The vaporized reactants of uranium, aluminum, and iron are cooled from about 5000 K to 1000 K before they are collected on silicon wafers for ex situ scanning electron microscope analysis. The microscope images show that the sizes of the largest aluminum and iron oxide particles depend heavily on the cooling time scales, whereas significant size variation with cooling rate is not observed for uranium oxide particles. In addition, the size distribution of aluminum oxide particles exhibits the broadest range among the three metal oxides studied. We performed simulations of particle size distributions using a kinetic model that couples gas-phase oxidation chemistry with particle formation processes, including nucleation, condensation, and coagulation. The model results demonstrate the strong sensitivity of particle size distribution to different cooling histories (i.e., temperature vs. residence time) along the flow reactor. The kinetic model also helps identify directions for future research to improve the predictions.
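
    A minimal zero-dimensional sketch of coupled nucleation, condensation, and coagulation for a monodisperse particle population under an imposed cooling history; all rate expressions and constants are placeholders, not the paper's chemistry-coupled kinetic model.

```python
# Minimal 0-D sketch of coupled nucleation, condensation, and coagulation for a
# monodisperse particle population during cooling. All rate expressions and
# constants are placeholders, not the paper's chemistry-coupled kinetic model.
import numpy as np
from scipy.integrate import solve_ivp

T0, T1, t_cool = 5000.0, 1000.0, 0.1   # K, K, s: imposed linear cooling history

def temperature(t):
    return max(T1, T0 - (T0 - T1) * t / t_cool)

def rhs(t, y):
    n_vap, n_part, vol_tot = y              # vapor monomers, particle number, total particle volume
    T = temperature(t)
    sat = np.exp(-2.0e4 / T + 20.0)         # placeholder saturation monomer concentration
    S = n_vap / sat                         # saturation ratio
    J = 1.0e12 * np.exp(-50.0 / np.log(S) ** 2) if S > 1.001 else 0.0   # nucleation rate
    cond = 1.0e-6 * n_part * max(n_vap - sat, 0.0)                      # condensation rate
    coag = 1.0e-9 * n_part ** 2                                         # coagulation loss rate
    v_monomer, nuc_size = 1.0e-29, 10.0                                 # placeholders
    return [-J * nuc_size - cond,                    # vapor consumed by nucleation + condensation
            J - coag,                                # particles formed minus coagulation losses
            (J * nuc_size + cond) * v_monomer]       # condensed volume grows; coagulation conserves it

sol = solve_ivp(rhs, [0.0, t_cool], [1.0e18, 1.0, 1.0e-29], method="LSODA", max_step=1e-4)
n_vap, n_part, vol_tot = sol.y[:, -1]
diameter_nm = (6.0 * vol_tot / (np.pi * n_part)) ** (1.0 / 3.0) * 1e9
print(f"final number: {n_part:.3e}, mean diameter: {diameter_nm:.1f} nm")
```

    Changing t_cool in this toy model changes the final number concentration and mean diameter, which is the qualitative sensitivity to cooling history that the study examines with a far more detailed model.
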
  7. Large Eddy Simulations of Turbulent and Buoyant Flows in Urban and Complex Terrain Areas Using the Aeolus Model

    Fast and accurate predictions of the flow and transport of materials in urban and complex terrain areas are challenging because of the heterogeneity of buildings and land features of different shapes and sizes connected by canyons and channels, which results in complex patterns of turbulence that can enhance material concentrations in certain regions. To address this challenge, we have developed an efficient three-dimensional computational fluid dynamics (CFD) code called Aeolus that is based on first principles for predicting transport and dispersion of materials in complex terrain and urban areas. The model can be run in a very efficient Reynolds-averaged Navier–Stokes (RANS) mode or a detailed large eddy simulation (LES) mode. The RANS version of Aeolus was previously validated against field data for tracer gas and radiological dispersal releases. As part of this work, we have validated the Aeolus model in LES mode against two different sets of data: (1) turbulence quantities measured in complex terrain at Askervein Hill; and (2) wind and tracer data from the Joint Urban 2003 field campaign for urban topography. As a third case, we have applied Aeolus to simulate cloud rise dynamics for buoyant plumes from high-temperature explosions. For all three cases, Aeolus LES predictions compare well to observations and other models. These results indicate that Aeolus LES can be used to accurately simulate turbulent flow and transport for a wide range of applications and scales.
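
    The abstract does not state which subgrid closure Aeolus uses; as a generic illustration of an LES ingredient, the sketch below evaluates the standard Smagorinsky eddy viscosity, nu_t = (C_s * Delta)^2 |S|, on a toy velocity field. The constant and grid spacing are assumptions.

```python
# Illustrative computation of the Smagorinsky subgrid-scale eddy viscosity,
#   nu_t = (C_s * Delta)^2 * sqrt(2 * S_ij * S_ij),
# on a toy 3-D velocity field. C_s and the grid spacing are assumed values.
import numpy as np

rng = np.random.default_rng(3)
nx = ny = nz = 32
dx = 10.0                      # grid spacing in metres (assumed)
c_s = 0.17                     # Smagorinsky constant (typical value)

u = rng.normal(0.0, 1.0, (3, nx, ny, nz))   # velocity components u, v, w (m/s)

def smagorinsky_viscosity(u, dx, c_s):
    # Velocity-gradient tensor dU_i/dx_j via central differences.
    grad = np.stack([np.stack(np.gradient(u[i], dx), axis=0) for i in range(3)], axis=0)
    strain = 0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))          # strain-rate tensor S_ij
    s_mag = np.sqrt(2.0 * np.sum(strain * strain, axis=(0, 1)))    # |S|
    return (c_s * dx) ** 2 * s_mag                                 # nu_t field (m^2/s)

nu_t = smagorinsky_viscosity(u, dx, c_s)
print("median subgrid viscosity [m^2/s]:", np.median(nu_t))
```
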
  8. Learning to Correct Climate Projection Biases

    The fidelity of climate projections is often undermined by biases in climate models due to their simplification or misrepresentation of unresolved climate processes. While various bias correction methods have been developed to post-process model outputs to match observations, existing approaches usually focus on limited, low-order statistics, or break either the spatiotemporal consistency of the target variable or its dependence on model-resolved dynamics. We develop a Regularized Adversarial Domain Adaptation (RADA) methodology to overcome these deficiencies and enhance efficient identification and correction of climate model biases. Instead of pre-assuming the spatiotemporal characteristics of model biases, we apply discriminative neural networks to distinguish historical climate simulation samples from observation samples. The evidence on which the discriminative networks base their distinctions is then used to train the domain adaptation neural networks to bias-correct climate simulations. We regularize the domain adaptation neural networks using cycle-consistent statistical and dynamical constraints. An application to daily precipitation projection over the contiguous United States shows that our methodology can correct all the considered moments of daily precipitation at approximately $1^\circ$ resolution, ensures spatiotemporal consistency and inter-field correlations, and can discriminate between different dynamical conditions. Our methodology offers a powerful tool for disentangling model parameterization biases from their interactions with the chaotic evolution of climate dynamics, opening a novel avenue toward big-data-enhanced climate predictions.
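
    A minimal sketch of the adversarial idea described above: a discriminator learns to separate simulation from observation samples, and a correction network is trained so that its corrected simulations fool the discriminator. The cycle-consistency and dynamical-constraint regularizers of RADA are omitted, and all shapes and data are illustrative.

```python
# Minimal sketch of adversarial bias correction: a discriminator separates
# simulation from observation samples, and a correction network is trained to
# make corrected simulations indistinguishable from observations. The RADA
# regularizers (cycle consistency, dynamical constraints) are omitted.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_grid = 64   # flattened daily precipitation field size (assumed)

corrector = nn.Sequential(nn.Linear(n_grid, 128), nn.ReLU(), nn.Linear(128, n_grid))
discriminator = nn.Sequential(nn.Linear(n_grid, 128), nn.ReLU(), nn.Linear(128, 1))

opt_c = torch.optim.Adam(corrector.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Synthetic stand-ins for simulated (biased) and observed daily fields.
sim = 1.5 * torch.randn(2048, n_grid) + 0.8
obs = torch.randn(2048, n_grid)

for step in range(500):
    i = torch.randint(0, 2048, (64,))
    x_sim, x_obs = sim[i], obs[i]

    # Discriminator update: observations labeled 1, corrected simulations 0.
    opt_d.zero_grad()
    d_loss = bce(discriminator(x_obs), torch.ones(64, 1)) + \
             bce(discriminator(corrector(x_sim).detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Corrector update: make corrected simulations look like observations.
    opt_c.zero_grad()
    c_loss = bce(discriminator(corrector(x_sim)), torch.ones(64, 1))
    c_loss.backward()
    opt_c.step()

with torch.no_grad():
    corrected = corrector(sim)
print("mean bias before/after:", sim.mean().item(), corrected.mean().item())
```
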
  9. Uncertainty Analysis of Simulations of the Turn-of-the-Century Drought in the Western United States

    We perform the first uncertainty quantification analysis of the turn-of-the-century drought in the western United States using a large perturbed-parameter ensemble of the Community Atmosphere Model version 4.0 (CAM4). We develop several metrics to characterize the aridity bias, spatial extent, and tropical forcing of the drought and use statistical models to ascertain that the modeled drought was mainly sensitive to CAM4 parameters related to deep convection and clouds. Deep convection parameters account for over half the variance across the drought metrics. We employ observed estimates of these drought metrics to infer probability distributions of the model parameters that lead to better agreement with observations, thereby providing guidance on how to improve the simulation of drought in CAM4. We find that in some cases, the suggested parameter values that improve the simulation of one drought characteristic would degrade the simulation of another, suggesting that there is a complex relationship between the model parameters and drought in the western United States. We also demonstrate reductions in the uncertainty of the drought metrics of up to 30% by constraining with observations of all metrics. Furthermore, we demonstrate for the first time that improvements to the simulation of the spatial extent of the turn-of-the-century drought also lead to improvements in the spatial extent of the Australian Millennium drought, suggesting that the physics of drought as encoded by the model parameters may be generalizable.
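
    A minimal sketch of the sensitivity step: fit a statistical model of a drought metric as a function of the perturbed parameters and attribute variance with permutation importance. The parameter names and the synthetic metric are hypothetical, not the CAM4 ensemble.

```python
# Minimal sketch of parameter-sensitivity analysis for a perturbed-parameter
# ensemble: fit a statistical model of a drought metric versus model parameters
# and attribute variance with permutation importance. Parameter names and the
# synthetic metric are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
params = ["deep_conv_timescale", "deep_conv_entrainment", "cloud_fraction_rh", "boundary_layer_k"]
X = rng.uniform(0.0, 1.0, (300, len(params)))            # 300 ensemble members

# Synthetic aridity-bias metric dominated by the deep-convection parameters.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] ** 2 + 0.3 * X[:, 2] + rng.normal(0.0, 0.1, 300)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, mean_imp in sorted(zip(params, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:24s} {mean_imp:.3f}")
```
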
  10. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. Finally, we also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
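
    A minimal sketch of the multiresolution idea: train a random forest on many cheap low-resolution members and only a few high-resolution members, with resolution included as a predictor, then query the forest at the high-resolution setting. The parameter space and stand-in model are synthetic.

```python
# Minimal sketch of a multiresolution emulator: a random forest is trained on
# many low-resolution and a few high-resolution simulations, with resolution as
# a predictor, then queried at the high-resolution setting. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

def run_model(params, high_res):
    """Stand-in for a climate simulation returning global-mean TOA flux (W/m^2)."""
    base = 1.2 * params[:, 0] - 0.8 * params[:, 1] + 0.4 * params[:, 0] * params[:, 1]
    res_effect = 0.3 * high_res * params[:, 0]          # resolution-dependent response
    return base + res_effect + rng.normal(0.0, 0.05, len(params))

p_low, p_high = rng.uniform(0, 1, (200, 2)), rng.uniform(0, 1, (15, 2))
X = np.vstack([np.column_stack([p_low, np.zeros(200)]),      # many low-res members (flag 0)
               np.column_stack([p_high, np.ones(15)])])      # few high-res members (flag 1)
y = np.concatenate([run_model(p_low, 0.0), run_model(p_high, 1.0)])

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Emulate high-resolution output for new parameter settings.
p_new = rng.uniform(0, 1, (5, 2))
pred = rf.predict(np.column_stack([p_new, np.ones(5)]))
print("emulated:", pred)
print("truth:   ", run_model(p_new, 1.0))
```
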
...

Search for: All Records
Author / Contributor: 0000000246496967
