OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations

Journal Article · Journal of Computational Physics

In the last 50 years there has been tremendous progress in numerically solving the Navier-Stokes equations using finite-difference, finite-element, spectral, and even meshless methods. Yet, in many real cases, we still cannot seamlessly incorporate (multi-fidelity) data into existing algorithms, and for industrial-complexity applications mesh generation is time-consuming and still an art. Moreover, solving ill-posed problems (e.g., lacking boundary conditions) or inverse problems is often prohibitively expensive and requires different formulations and new computer codes. Here, we employ physics-informed neural networks (PINNs), encoding the governing equations directly into the deep neural network via automatic differentiation, to overcome some of the aforementioned limitations for simulating incompressible laminar and turbulent flows. We develop the Navier-Stokes flow nets (NSFnets) by considering two different mathematical formulations of the Navier-Stokes equations: the velocity-pressure (VP) formulation and the vorticity-velocity (VV) formulation. Since this is a new approach, we first select some standard benchmark problems to assess the accuracy, convergence rate, computational cost, and flexibility of NSFnets; analytical solutions and direct numerical simulation (DNS) databases provide proper initial and boundary conditions for the NSFnet simulations. The spatial and temporal coordinates are the inputs of the NSFnets, while the instantaneous velocity and pressure fields are the outputs of the VP-NSFnet, and the instantaneous velocity and vorticity fields are the outputs of the VV-NSFnet. This is unsupervised learning and, hence, no labeled data are required beyond the boundary and initial conditions and the fluid properties. The residuals of the VP or VV governing equations, together with the initial and boundary conditions, are embedded into the loss function of the NSFnets.
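The loss construction described above can be sketched in a few lines. The following is a hypothetical minimal example (not the authors' code), assuming a 2D VP formulation, a network mapping (x, y, t) to (u, v, p), and a kinematic viscosity nu; PDE residuals are obtained by automatic differentiation of the network outputs with respect to the input coordinates.

```python
import torch

# A small fully connected network: (x, y, t) -> (u, v, p).
net = torch.nn.Sequential(
    torch.nn.Linear(3, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 3),
)

def grad(f, x):
    # Derivative of the field f with respect to the coordinate tensor x,
    # keeping the graph so higher derivatives can be taken.
    return torch.autograd.grad(f, x, torch.ones_like(f), create_graph=True)[0]

def pde_residuals(x, y, t, nu=0.01):
    out = net(torch.stack([x, y, t], dim=-1))
    u, v, p = out[..., 0], out[..., 1], out[..., 2]
    u_x, u_y, u_t = grad(u, x), grad(u, y), grad(u, t)
    v_x, v_y, v_t = grad(v, x), grad(v, y), grad(v, t)
    p_x, p_y = grad(p, x), grad(p, y)
    # Momentum residuals and the incompressibility constraint.
    r_u = u_t + u * u_x + v * u_y + p_x - nu * (grad(u_x, x) + grad(u_y, y))
    r_v = v_t + u * v_x + v * v_y + p_y - nu * (grad(v_x, x) + grad(v_y, y))
    r_c = u_x + v_y
    return r_u, r_v, r_c

# Residual ("collocation") points: coordinates need gradients for autodiff.
x = torch.rand(128, requires_grad=True)
y = torch.rand(128, requires_grad=True)
t = torch.rand(128, requires_grad=True)
r_u, r_v, r_c = pde_residuals(x, y, t)
loss_pde = (r_u**2 + r_v**2 + r_c**2).mean()  # plus weighted BC/IC data terms
```

In practice this PDE term is summed with weighted mean-square mismatch terms on the boundary and initial conditions, and the total is minimized with a standard optimizer such as Adam.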
No pressure data are provided to the VP-NSFnet; the pressure is a hidden state that is obtained via the incompressibility constraint without extra computational cost. Unlike traditional numerical methods, NSFnets inherit the properties of neural networks (NNs), hence the total error is composed of the approximation, optimization, and generalization errors. Here, we empirically attempt to quantify these errors by varying the sampling (“residual”) points, the iterative solvers, and the size of the NN architecture. For the laminar flow solutions, we show that the VP and VV formulations are comparable in accuracy, but their best performance corresponds to different NN architectures. The initial convergence rate is fast, but the error eventually saturates at a plateau due to the dominance of the optimization error. For the turbulent channel flow, we show that NSFnets can sustain turbulence, but due to the expensive training we only consider part of the channel domain and enforce velocity boundary conditions on the subdomain boundaries provided by the DNS database. We also perform a systematic study of the weights used in the loss function for balancing the data and physics components, and investigate a new way of computing the weights dynamically to accelerate training and enhance accuracy. In the last part, we demonstrate how NSFnets should be used in practice, namely for ill-posed problems with incomplete or noisy boundary conditions as well as for inverse problems. We obtain reasonably accurate solutions for such cases without the need to change the NSFnets and at the same computational cost as in the forward well-posed problems. Finally, we present a simple example of transfer learning that can accelerate the training of NSFnets for different parameter settings.
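One common way to compute loss weights dynamically, sketched below under stated assumptions (this is an illustrative gradient-balancing heuristic, not necessarily the paper's exact scheme), is to rescale the data-term weight so that its gradient magnitude stays comparable to that of the PDE-residual term, smoothed by a moving average.

```python
import torch

def update_weight(loss_pde, loss_data, params, lam, alpha=0.9):
    # Hypothetical dynamic weight update: balance the gradient scale of the
    # boundary/initial-data term against that of the PDE-residual term.
    g_pde = torch.autograd.grad(loss_pde, params, retain_graph=True, allow_unused=True)
    g_dat = torch.autograd.grad(loss_data, params, retain_graph=True, allow_unused=True)
    max_pde = max(g.abs().max() for g in g_pde if g is not None)
    mean_dat = torch.stack([g.abs().mean() for g in g_dat if g is not None]).mean()
    lam_hat = (max_pde / (mean_dat + 1e-12)).detach()
    # Exponential moving average keeps the weight from oscillating.
    return alpha * lam + (1 - alpha) * lam_hat
```

The updated weight would then multiply the data loss in the total objective, `loss = loss_pde + lam * loss_data`, with the update applied every few training iterations.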

Research Organization:
Brown Univ., Providence, RI (United States)
Sponsoring Organization:
USDOE; National Natural Science Foundation of China (NSFC)
Grant/Contract Number:
SC0019453; U1711265
OSTI ID:
2282017
Alternate ID(s):
OSTI ID: 1775896
Journal Information:
Journal of Computational Physics, Vol. 426; ISSN 0021-9991
Publisher:
Elsevier
Country of Publication:
United States
Language:
English
