OSTI.GOV — U.S. Department of Energy
Office of Scientific and Technical Information

Title: A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems

Abstract

Currently, the training of neural networks relies on data of comparable accuracy, but in real applications only a very small set of high-fidelity data is available, while inexpensive lower-fidelity data may be plentiful. We propose a new composite neural network (NN) that can be trained based on multi-fidelity data. It is composed of three NNs, with the first NN trained using the low-fidelity data and coupled to two high-fidelity NNs, one with activation functions and another one without, in order to discover and exploit nonlinear and linear correlations, respectively, between the low-fidelity and the high-fidelity data. We first demonstrate the accuracy of the new multi-fidelity NN for approximating some standard benchmark functions but also a 20-dimensional function that is not easy to approximate with other methods, e.g., Gaussian process regression. Subsequently, we extend the recently developed physics-informed neural networks (PINNs) to be trained with multi-fidelity data sets (MPINNs). MPINNs contain four fully-connected neural networks, where the first one approximates the low-fidelity data, while the second and third construct the correlation between the low- and high-fidelity data and produce the multi-fidelity approximation, which is then used in the last NN that encodes the partial differential equations (PDEs). Specifically, by decomposing the correlation into a linear and nonlinear part, the present model is capable of learning both the linear and complex nonlinear correlations between the low- and high-fidelity data adaptively. By training the MPINNs, we can: (1) obtain the correlation between the low- and high-fidelity data, (2) infer the quantities of interest based on a few scattered data, and (3) identify the unknown parameters in the PDEs. In particular, we employ the MPINNs to learn the hydraulic conductivity field for unsaturated flows as well as the reactive models for reactive transport.
The results demonstrate that MPINNs can achieve relatively high accuracy based on a very small set of high-fidelity data. Despite the relatively low dimension and limited number of fidelities (two fidelity levels) for the benchmark problems in the present study, the proposed model can be readily extended to very high-dimensional regression and classification problems involving multi-fidelity data.
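The composite architecture described in the abstract — a low-fidelity network whose output feeds two high-fidelity networks, one linear (no activation functions) and one nonlinear, whose outputs are combined into the high-fidelity prediction — can be sketched as follows. This is a minimal forward-pass sketch only: the layer sizes, the simple additive combination of the linear and nonlinear parts, and all function names are illustrative assumptions, not the paper's exact configuration or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    """Random small-weight parameters for a fully-connected network."""
    return [(rng.standard_normal((m, n)) * np.sqrt(1.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(x, params, activation=np.tanh):
    """Forward pass; activation=None yields a purely linear network,
    mirroring the 'high-fidelity NN without activation functions'."""
    h = x
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if activation is not None and i < len(params) - 1:
            h = activation(h)
    return h

# Three sub-networks, as in the abstract (sizes are illustrative):
nn_low = init_mlp([1, 20, 20, 1], rng)  # trained on low-fidelity data
nn_nl  = init_mlp([2, 10, 10, 1], rng)  # nonlinear correlation (tanh)
nn_lin = init_mlp([2, 1], rng)          # linear correlation (no activation)

def multi_fidelity_forward(x):
    y_low = mlp_forward(x, nn_low)                   # low-fidelity surrogate
    z = np.concatenate([x, y_low], axis=1)           # feed (x, y_L) onward
    f_nl  = mlp_forward(z, nn_nl)                    # nonlinear part
    f_lin = mlp_forward(z, nn_lin, activation=None)  # linear part
    return f_nl + f_lin                              # combined y_H

x = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
y = multi_fidelity_forward(x)
print(y.shape)  # (5, 1)
```

In the MPINN extension described above, a fourth network would encode the PDE residual and all four would be trained jointly; that step is omitted here for brevity.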

Authors:
 Meng, Xuhui [1];  Karniadakis, George E. [2]
  1. Brown University
  2. Brown University
Publication Date:
January 2020
Research Org.:
Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1617454
Report Number(s):
PNNL-SA-152903
DOE Contract Number:  
AC05-76RL01830
Resource Type:
Journal Article
Journal Name:
Journal of Computational Physics
Additional Journal Information:
Journal Volume: 401
Country of Publication:
United States
Language:
English
Subject:
Multi-fidelity, Physics-informed neural networks, Adversarial data, Porous media, Reactive transport

Citation Formats

Meng, Xuhui, and Karniadakis, George E. A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems. United States: N. p., 2020. Web. doi:10.1016/j.jcp.2019.109020.
Meng, Xuhui, & Karniadakis, George E. A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems. United States. doi:10.1016/j.jcp.2019.109020.
Meng, Xuhui, and Karniadakis, George E. 2020. "A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems". United States. doi:10.1016/j.jcp.2019.109020.
@article{osti_1617454,
title = {A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems},
author = {Meng, Xuhui and Karniadakis, George E.},
abstractNote = {Currently the training of neural networks relies on data of comparable accuracy but in real applications only a very small set of high-fidelity data is available while inexpensive lower fidelity data may be plentiful. We propose a new composite neural network (NN) that can be trained based on multi-fidelity data. It is comprised of three NNs, with the first NN trained using the low-fidelity data and coupled to two high-fidelity NNs, one with activation functions and another one without, in order to discover and exploit nonlinear and linear correlations, respectively, between the low-fidelity and the high-fidelity data. We first demonstrate the accuracy of the new multi-fidelity NN for approximating some standard benchmark functions but also a 20-dimensional function that is not easy to approximate with other methods, e.g. Gaussian process regression. Subsequently, we extend the recently developed physics-informed neural networks (PINNs) to be trained with multi-fidelity data sets (MPINNs). MPINNs contain four fully-connected neural networks, where the first one approximates the low-fidelity data, while the second and third construct the correlation between the low- and high-fidelity data and produce the multi-fidelity approximation, which is then used in the last NN that encodes the partial differential equations (PDEs). Specifically, by decomposing the correlation into a linear and nonlinear part, the present model is capable of learning both the linear and complex nonlinear correlations between the low- and high-fidelity data adaptively. By training the MPINNs, we can: (1) obtain the correlation between the low- and high-fidelity data, (2) infer the quantities of interest based on a few scattered data, and (3) identify the unknown parameters in the PDEs. In particular, we employ the MPINNs to learn the hydraulic conductivity field for unsaturated flows as well as the reactive models for reactive transport. 
The results demonstrate that MPINNs can achieve relatively high accuracy based on a very small set of high-fidelity data. Despite the relatively low dimension and limited number of fidelities (two-fidelity levels) for the benchmark problems in the present study, the proposed model can be readily extended to very high-dimensional regression and classification problems involving multi-fidelity data.},
doi = {10.1016/j.jcp.2019.109020},
journal = {Journal of Computational Physics},
volume = {401},
place = {United States},
year = {2020},
month = {1}
}