DPM: A deep learning PDE augmentation method with application to large-eddy simulation
- Univ. of Oxford (United Kingdom). Mathematics; Univ. of Illinois at Urbana-Champaign, IL (United States). Dept. of Industrial & Systems Engineering; Univ. of Illinois at Urbana-Champaign, IL (United States)
- Univ. of Illinois at Urbana-Champaign, IL (United States). The Center for Exascale Simulation of Plasma-coupled Combustion. Coordinated Science Lab.
- Univ. of Illinois at Urbana-Champaign, IL (United States). Aerospace Engineering
A framework is introduced that leverages known physics to reduce overfitting in machine learning for scientific applications. The partial differential equation (PDE) that expresses the physics is augmented with a neural network that uses available data to learn a description of the corresponding unknown or unrepresented physics. Training within this combined system corrects for missing, unknown, or erroneously represented physics, including discretization errors associated with the PDE's numerical solution. To optimize the network within the PDE, an adjoint PDE is solved to provide high-dimensional gradients, and a stochastic adjoint method (SAM) further accelerates training. The approach is demonstrated for large-eddy simulation (LES) of turbulence. High-fidelity direct numerical simulations (DNS) of decaying isotropic turbulence provide the training data used to learn sub-filter-scale closures for the filtered Navier–Stokes equations. Out-of-sample comparisons show that the deep learning PDE method (DPM) outperforms widely used models, even for filter sizes so large that those models become qualitatively incorrect. It also significantly outperforms the same neural network when trained a priori on simple data mismatch, without accounting for the full PDE. Measures of discretization error, well known to be consequential in LES, point to the importance of the unified training formulation, which corrects for these errors without modification. For comparable accuracy, simulation runtime is significantly reduced. Relaxing the solver's typical discrete enforcement of the divergence-free constraint is also successful, with the DPM instead approximately enforcing incompressibility. Because the training loss function need not correspond directly to the closure being learned, training can incorporate diverse data, including experimental data.
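The central idea, training the embedded network through the PDE solution so that parameter gradients come from an adjoint (here mimicked by reverse-mode automatic differentiation through the discretized solver), can be illustrated with a toy sketch. The JAX code below embeds a small closure network in a 1-D viscous Burgers discretization and differentiates a rollout-mismatch loss through the solver steps. It is a minimal sketch under assumed names (`closure_net`, `rollout_loss`) and a deliberately simplified PDE with surrogate target data; it is not the paper's filtered Navier–Stokes implementation or its stochastic adjoint method.

```python
# Minimal sketch: a neural-network closure trained *inside* a PDE time integrator.
# Reverse-mode autodiff through the rollout plays the role of the adjoint PDE.
# 1-D viscous Burgers analog only; all names and data here are illustrative.

import jax
import jax.numpy as jnp

N, dx, dt, nu = 128, 1.0 / 128, 1e-3, 5e-4

def init_params(key, width=32):
    k1, k2 = jax.random.split(key)
    return {
        "W1": 0.1 * jax.random.normal(k1, (3, width)),
        "b1": jnp.zeros(width),
        "W2": 0.1 * jax.random.normal(k2, (width, 1)),
        "b2": jnp.zeros(1),
    }

def closure_net(params, u):
    # Pointwise closure from local features (u, du/dx, u^2), standing in for the
    # unknown sub-filter-scale term h(u; theta) that augments the PDE.
    dudx = (jnp.roll(u, -1) - jnp.roll(u, 1)) / (2 * dx)
    feats = jnp.stack([u, dudx, u * u], axis=-1)
    h = jnp.tanh(feats @ params["W1"] + params["b1"]) @ params["W2"] + params["b2"]
    return h[:, 0]

def step(params, u):
    # One explicit step of du/dt = -u du/dx + nu d2u/dx2 + NN closure (periodic grid).
    dudx = (jnp.roll(u, -1) - jnp.roll(u, 1)) / (2 * dx)
    d2udx2 = (jnp.roll(u, -1) - 2 * u + jnp.roll(u, 1)) / dx**2
    return u + dt * (-u * dudx + nu * d2udx2 + closure_net(params, u))

def rollout_loss(params, u0, u_ref):
    # Data mismatch accumulated over an embedded rollout; gradients w.r.t. params
    # flow back through every solver step (a discrete analog of the adjoint solve).
    def body(u, u_target):
        u_next = step(params, u)
        return u_next, jnp.mean((u_next - u_target) ** 2)
    _, errs = jax.lax.scan(body, u0, u_ref)
    return jnp.mean(errs)

grad_fn = jax.jit(jax.grad(rollout_loss))

key = jax.random.PRNGKey(0)
params = init_params(key)
x = jnp.linspace(0.0, 1.0, N, endpoint=False)
u0 = jnp.sin(2 * jnp.pi * x)
# Surrogate reference trajectory (placeholder for filtered DNS data).
u_ref = jnp.stack([jnp.sin(2 * jnp.pi * (x - k * dt)) for k in range(1, 11)])

lr = 1e-2
for it in range(100):
    g = grad_fn(params, u0, u_ref)
    params = jax.tree_util.tree_map(lambda p, gp: p - lr * gp, params, g)
```

In this sketch the loss is a plain data mismatch on the solved field, not on the closure term itself, which mirrors the abstract's point that the training loss need not correspond directly to the closure being learned.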
- Research Organization:
- Univ. of Illinois at Urbana-Champaign, IL (United States)
- Sponsoring Organization:
- USDOE National Nuclear Security Administration (NNSA); National Science Foundation (NSF)
- Grant/Contract Number:
- NA0002374
- OSTI ID:
- 1850305
- Alternate ID(s):
- OSTI ID: 1763780
- Journal Information:
- Journal of Computational Physics, Vol. 423, Issue C; ISSN 0021-9991
- Publisher:
- Elsevier
- Country of Publication:
- United States
- Language:
- English