DOE PAGES — U.S. Department of Energy, Office of Scientific and Technical Information

Title: NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations

Abstract

In the last 50 years there has been tremendous progress in numerically solving the Navier-Stokes equations using finite differences, finite elements, spectral, and even meshless methods. Yet, in many real cases, we still cannot seamlessly incorporate (multi-fidelity) data into existing algorithms, and for industrial-complexity applications mesh generation is time-consuming and still an art. Moreover, solving ill-posed problems (e.g., lacking boundary conditions) or inverse problems is often prohibitively expensive and requires different formulations and new computer codes. Here, we employ physics-informed neural networks (PINNs), encoding the governing equations directly into the deep neural network via automatic differentiation, to overcome some of the aforementioned limitations for simulating incompressible laminar and turbulent flows. We develop the Navier-Stokes flow nets (NSFnets) by considering two different mathematical formulations of the Navier-Stokes equations: the velocity-pressure (VP) formulation and the vorticity-velocity (VV) formulation.

Since this is a new approach, we first select some standard benchmark problems to assess the accuracy, convergence rate, computational cost and flexibility of NSFnets; analytical solutions and direct numerical simulation (DNS) databases provide proper initial and boundary conditions for the NSFnet simulations. The spatial and temporal coordinates are the inputs of the NSFnets, while the instantaneous velocity and pressure fields are the outputs for the VP-NSFnet, and the instantaneous velocity and vorticity fields are the outputs for the VV-NSFnet. This is unsupervised learning and, hence, no labeled data are required beyond the boundary and initial conditions and the fluid properties. The residuals of the VP or VV governing equations, together with the initial and boundary conditions, are embedded into the loss function of the NSFnets. No data are provided for the pressure in the VP-NSFnet; it is a hidden state that is obtained via the incompressibility constraint without extra computational cost. Unlike traditional numerical methods, NSFnets inherit the properties of neural networks (NNs), hence the total error is composed of the approximation, optimization, and generalization errors. Here, we empirically attempt to quantify these errors by varying the sampling (“residual”) points, the iterative solvers, and the size of the NN architecture.

For the laminar flow solutions, we show that the VP and VV formulations are comparable in accuracy, but their best performance corresponds to different NN architectures. The initial convergence rate is fast, but the error eventually saturates to a plateau due to the dominance of the optimization error. For the turbulent channel flow, we show that NSFnets can sustain turbulence at Re_τ ≈ 1,000, but because of the expensive training we only consider part of the channel domain and enforce velocity boundary conditions on the subdomain boundaries provided by the DNS database. We also perform a systematic study on the weights used in the loss function for balancing the data and physics components, and investigate a new way of computing the weights dynamically to accelerate training and enhance accuracy. In the last part, we demonstrate how NSFnets should be used in practice, namely for ill-posed problems with incomplete or noisy boundary conditions as well as for inverse problems. We obtain reasonably accurate solutions for such cases as well, without the need to change the NSFnets and at the same computational cost as in the forward, well-posed problems. Finally, we also present a simple example of transfer learning that will aid in accelerating the training of NSFnets for different parameter settings.
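The abstract describes the core mechanism of the VP-NSFnet: spatio-temporal coordinates as inputs, velocity and pressure as outputs, and a loss built from the PDE residuals (computed via automatic differentiation) plus weighted boundary/initial-condition terms. The following is a minimal, hedged sketch of that idea in PyTorch, not the authors' implementation: it assumes a 2D unsteady flow with inputs (x, y, t) and outputs (u, v, p), and the names VPNSFnet, pde_residuals, loss_fn, and the fixed weight lambda_data are illustrative placeholders.

```python
import torch
import torch.nn as nn

class VPNSFnet(nn.Module):
    """Fully connected network mapping (x, y, t) -> (u, v, p)."""
    def __init__(self, width=50, depth=4):
        super().__init__()
        layers, dim = [], 3
        for _ in range(depth):
            layers += [nn.Linear(dim, width), nn.Tanh()]
            dim = width
        layers.append(nn.Linear(dim, 3))
        self.net = nn.Sequential(*layers)

    def forward(self, xyt):
        return self.net(xyt)


def grad(out, inp):
    # First derivatives of `out` w.r.t. the stacked inputs, keeping the graph
    # so higher-order derivatives can be taken for the viscous terms.
    return torch.autograd.grad(out, inp, grad_outputs=torch.ones_like(out),
                               create_graph=True)[0]


def pde_residuals(model, xyt, nu):
    """Momentum and continuity residuals of the 2D incompressible VP formulation."""
    xyt = xyt.clone().detach().requires_grad_(True)
    u, v, p = model(xyt).split(1, dim=1)
    du, dv, dp = grad(u, xyt), grad(v, xyt), grad(p, xyt)
    u_x, u_y, u_t = du[:, 0:1], du[:, 1:2], du[:, 2:3]
    v_x, v_y, v_t = dv[:, 0:1], dv[:, 1:2], dv[:, 2:3]
    u_xx, u_yy = grad(u_x, xyt)[:, 0:1], grad(u_y, xyt)[:, 1:2]
    v_xx, v_yy = grad(v_x, xyt)[:, 0:1], grad(v_y, xyt)[:, 1:2]
    r_u = u_t + u * u_x + v * u_y + dp[:, 0:1] - nu * (u_xx + u_yy)   # x-momentum
    r_v = v_t + u * v_x + v * v_y + dp[:, 1:2] - nu * (v_xx + v_yy)   # y-momentum
    r_c = u_x + v_y                                                   # continuity
    return r_u, r_v, r_c


def loss_fn(model, xyt_res, xyt_bc, uv_bc, nu, lambda_data=1.0):
    # Physics term: mean-squared PDE residuals at the sampled "residual" points.
    r_u, r_v, r_c = pde_residuals(model, xyt_res, nu)
    loss_phys = (r_u ** 2 + r_v ** 2 + r_c ** 2).mean()
    # Data term: velocity boundary/initial conditions only; no pressure data,
    # mirroring the abstract's point that pressure is a hidden state.
    uv_pred = model(xyt_bc)[:, :2]
    loss_data = ((uv_pred - uv_bc) ** 2).mean()
    return loss_phys + lambda_data * loss_data


# Illustrative usage with random placeholder points.
model = VPNSFnet()
xyt_res = torch.rand(1000, 3)                             # residual (collocation) points
xyt_bc, uv_bc = torch.rand(200, 3), torch.rand(200, 2)    # boundary/initial data
loss = loss_fn(model, xyt_res, xyt_bc, uv_bc, nu=0.01)
loss.backward()
```

In this sketch the dynamic weighting studied in the paper would amount to updating lambda_data during training rather than keeping it fixed; the specific update rule is described in the paper and is not reproduced here.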

Authors:
 Jin, Xiaowei [1]; Cai, Shengze [2]; Li, Hui [1]; Karniadakis, George Em [2]
  1. Harbin Institute of Technology (China)
  2. Brown University, Providence, RI (United States)
Publication Date:
November 1, 2020
Research Org.:
Brown Univ., Providence, RI (United States)
Sponsoring Org.:
USDOE; National Natural Science Foundation of China (NSFC)
OSTI Identifier:
2282017
Alternate Identifier(s):
OSTI ID: 1775896
Grant/Contract Number:  
SC0019453; U1711265
Resource Type:
Accepted Manuscript
Journal Name:
Journal of Computational Physics
Additional Journal Information:
Journal Volume: 426; Journal ID: ISSN 0021-9991
Publisher:
Elsevier
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; PINNs; Turbulence; Velocity-pressure formulation; Vorticity-velocity formulation; Ill-posed problems; Transfer learning

Citation Formats

Jin, Xiaowei, Cai, Shengze, Li, Hui, and Karniadakis, George Em. NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations. United States: N. p., 2020. Web. doi:10.1016/j.jcp.2020.109951.
Jin, Xiaowei, Cai, Shengze, Li, Hui, & Karniadakis, George Em. NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations. United States. https://doi.org/10.1016/j.jcp.2020.109951
Jin, Xiaowei, Cai, Shengze, Li, Hui, and Karniadakis, George Em. 2020. "NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations". United States. https://doi.org/10.1016/j.jcp.2020.109951. https://www.osti.gov/servlets/purl/2282017.
@article{osti_2282017,
title = {NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations},
author = {Jin, Xiaowei and Cai, Shengze and Li, Hui and Karniadakis, George Em},
abstractNote = {In the last 50 years there has been tremendous progress in numerically solving the Navier-Stokes equations using finite differences, finite elements, spectral, and even meshless methods. Yet, in many real cases, we still cannot seamlessly incorporate (multi-fidelity) data into existing algorithms, and for industrial-complexity applications mesh generation is time-consuming and still an art. Moreover, solving ill-posed problems (e.g., lacking boundary conditions) or inverse problems is often prohibitively expensive and requires different formulations and new computer codes. Here, we employ physics-informed neural networks (PINNs), encoding the governing equations directly into the deep neural network via automatic differentiation, to overcome some of the aforementioned limitations for simulating incompressible laminar and turbulent flows. We develop the Navier-Stokes flow nets (NSFnets) by considering two different mathematical formulations of the Navier-Stokes equations: the velocity-pressure (VP) formulation and the vorticity-velocity (VV) formulation. Since this is a new approach, we first select some standard benchmark problems to assess the accuracy, convergence rate, computational cost and flexibility of NSFnets; analytical solutions and direct numerical simulation (DNS) databases provide proper initial and boundary conditions for the NSFnet simulations. The spatial and temporal coordinates are the inputs of the NSFnets, while the instantaneous velocity and pressure fields are the outputs for the VP-NSFnet, and the instantaneous velocity and vorticity fields are the outputs for the VV-NSFnet. This is unsupervised learning and, hence, no labeled data are required beyond the boundary and initial conditions and the fluid properties. The residuals of the VP or VV governing equations, together with the initial and boundary conditions, are embedded into the loss function of the NSFnets. No data are provided for the pressure in the VP-NSFnet; it is a hidden state that is obtained via the incompressibility constraint without extra computational cost. Unlike traditional numerical methods, NSFnets inherit the properties of neural networks (NNs), hence the total error is composed of the approximation, optimization, and generalization errors. Here, we empirically attempt to quantify these errors by varying the sampling (“residual”) points, the iterative solvers, and the size of the NN architecture. For the laminar flow solutions, we show that the VP and VV formulations are comparable in accuracy, but their best performance corresponds to different NN architectures. The initial convergence rate is fast, but the error eventually saturates to a plateau due to the dominance of the optimization error. For the turbulent channel flow, we show that NSFnets can sustain turbulence at Re_τ ≈ 1,000, but because of the expensive training we only consider part of the channel domain and enforce velocity boundary conditions on the subdomain boundaries provided by the DNS database. We also perform a systematic study on the weights used in the loss function for balancing the data and physics components, and investigate a new way of computing the weights dynamically to accelerate training and enhance accuracy. In the last part, we demonstrate how NSFnets should be used in practice, namely for ill-posed problems with incomplete or noisy boundary conditions as well as for inverse problems. We obtain reasonably accurate solutions for such cases as well, without the need to change the NSFnets and at the same computational cost as in the forward, well-posed problems. Finally, we also present a simple example of transfer learning that will aid in accelerating the training of NSFnets for different parameter settings.},
doi = {10.1016/j.jcp.2020.109951},
journal = {Journal of Computational Physics},
volume = 426,
place = {United States},
year = {2020},
month = {nov}
}

Works referenced in this record:

A Compact-Difference Scheme for the Navier–Stokes Equations in Vorticity–Velocity Formulation
journal, January 2000

  • Meitz, Hubert L.; Fasel, Hermann F.
  • Journal of Computational Physics, Vol. 157, Issue 1
  • DOI: 10.1006/jcph.1999.6387

A Web services accessible database of turbulent channel flow and its use for testing a new integral wall model for LES
journal, December 2015


Fully-coupled pressure-based finite-volume framework for the simulation of fluid flows at all speeds in complex geometries
journal, October 2017

  • Xiao, Cheng-Nian; Denner, Fabian; van Wachem, Berend G. M.
  • Journal of Computational Physics, Vol. 346
  • DOI: 10.1016/j.jcp.2017.06.009

Subgrid-scale model for large-eddy simulation of isotropic turbulent flows using an artificial neural network
journal, December 2019


Data-driven reduced order model with temporal convolutional neural network
journal, March 2020

  • Wu, Pin; Sun, Junwu; Chang, Xuting
  • Computer Methods in Applied Mechanics and Engineering, Vol. 360
  • DOI: 10.1016/j.cma.2019.112766

Deep learning of vortex-induced vibrations
journal, December 2018

  • Raissi, Maziar; Wang, Zhicheng; Triantafyllou, Michael S.
  • Journal of Fluid Mechanics, Vol. 861
  • DOI: 10.1017/jfm.2018.872

A Novel Algebraic Stress Model with Machine-Learning-Assisted Parameterization
journal, January 2020

  • Jiang, Chao; Mi, Junyi; Laima, Shujin
  • Energies, Vol. 13, Issue 1
  • DOI: 10.3390/en13010258

A Penalty Method for the Vorticity–Velocity Formulation
journal, February 1999

  • Trujillo, James; Em Karniadakis, George
  • Journal of Computational Physics, Vol. 149, Issue 1
  • DOI: 10.1006/jcph.1998.6135

Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
journal, February 2019


Turbulence Modeling in the Age of Data
journal, January 2019


Reynolds averaged turbulence modelling using deep neural networks with embedded invariance
journal, October 2016

  • Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy
  • Journal of Fluid Mechanics, Vol. 807
  • DOI: 10.1017/jfm.2016.615

Machine Learning for Fluid Mechanics
journal, September 2019


Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data
journal, March 2017


Exact fully 3D Navier-Stokes solutions for benchmarking
journal, September 1994

  • Ethier, C. Ross; Steinman, D. A.
  • International Journal for Numerical Methods in Fluids, Vol. 19, Issue 5
  • DOI: 10.1002/fld.1650190502

Dense motion estimation of particle images via a convolutional neural network
journal, March 2019


Estimation of time-resolved turbulent fields through correlation of non-time-resolved field measurements and time-resolved point measurements
journal, May 2018


Adaptive activation functions accelerate convergence in deep and physics-informed neural networks
journal, March 2020

  • Jagtap, Ameya D.; Kawaguchi, Kenji; Karniadakis, George Em
  • Journal of Computational Physics, Vol. 404
  • DOI: 10.1016/j.jcp.2019.109136

Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations
journal, January 2020

  • Raissi, Maziar; Yazdani, Alireza; Karniadakis, George Em
  • Science, Vol. 367, Issue 6481
  • DOI: 10.1126/science.aaw4741

A public turbulence database cluster and applications to study Lagrangian evolution of velocity increments in turbulence
journal, January 2008


Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks
journal, July 2020

  • Jagtap, Ameya D.; Kawaguchi, Kenji; Em Karniadakis, George
  • Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol. 476, Issue 2239
  • DOI: 10.1098/rspa.2020.0334

Turbulence statistics in fully developed channel flow at low Reynolds number
journal, April 1987


High-order splitting methods for the incompressible Navier-Stokes equations
journal, December 1991

  • Karniadakis, George Em; Israeli, Moshe; Orszag, Steven A.
  • Journal of Computational Physics, Vol. 97, Issue 2
  • DOI: 10.1016/0021-9991(91)90007-8

A coupled pressure-based computational method for incompressible/compressible flows
journal, December 2010


Sensor-based estimation of the velocity in the wake of a low-aspect-ratio pyramid
journal, January 2015

  • Hosseini, Zahra; Martinuzzi, Robert J.; Noack, Bernd R.
  • Experiments in Fluids, Vol. 56, Issue 1
  • DOI: 10.1007/s00348-014-1880-8

A coupled finite volume solver for the solution of incompressible flows on unstructured grids
journal, January 2009


Data exploration of turbulence simulations using a database cluster
conference, January 2007

  • Perlman, Eric; Burns, Randal; Li, Yi
  • Proceedings of the 2007 ACM/IEEE conference on Supercomputing - SC '07
  • DOI: 10.1145/1362622.1362654

Automatic differentiation in machine learning: a survey
text, January 2015