Constrained or unconstrained? Neural-network-based equation discovery from data
- University of Colorado, Boulder, CO (United States)
- Sandia National Lab. (SNL-CA), Livermore, CA (United States)
Throughout many fields, practitioners rely on differential equations to model systems. Yet, for many applications, deriving such equations theoretically, or resolving their solutions accurately, may be intractable. Instead, recently developed methods, including those based on parameter estimation, operator subset selection, and neural networks, enable the data-driven discovery of both ordinary and partial differential equations (PDEs), with varying degrees of interpretability. The success of these strategies often hinges on correctly identifying representative equations from noisy observations of the state variables and, just as importantly, on the mathematical strategies used to enforce those equations. The latter has commonly been addressed via unconstrained optimization. Representing the PDE as a neural network, we propose to discover the PDE (or the associated operator) by solving a constrained optimization problem, using an intermediate state representation similar to a physics-informed neural network (PINN). The objective function of this constrained optimization problem promotes matching the data, while the constraints require that the discovered PDE be satisfied at a number of spatial collocation points. We present a penalty method and a widely used trust-region barrier method for solving this constrained optimization problem, and we compare these methods on numerical examples. Our results on several example problems demonstrate that the constrained (barrier) method outperforms the penalty method, particularly for higher noise levels or fewer collocation points. This work motivates further exploration of sophisticated constrained optimization methods in scientific machine learning, as opposed to their commonly used penalty-method or unconstrained counterparts. For both methods, we solve the discovered neural-network PDEs with classical techniques, such as finite differences, rather than with PINN-type methods relying on automatic differentiation. We also briefly highlight how simultaneously fitting the data while discovering the PDE improves robustness to noise, along with other small yet crucial implementation details.
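As a rough formalization of the two formulations contrasted in the abstract (the notation below is ours, not the paper's; the exact parameterization of the intermediate state and the PDE operator may differ), a minimal sketch reads:

```latex
% Minimal sketch, in our own notation:
%   u_theta : intermediate (PINN-like) state surrogate fitted to the data
%   N_phi   : neural-network representation of the unknown PDE operator
%
% Constrained formulation: match the noisy data d_i subject to the discovered
% PDE holding at the collocation points (x_j, t_j):
\min_{\theta,\,\phi}\ \sum_{i=1}^{N_d} \bigl(u_\theta(x_i, t_i) - d_i\bigr)^2
\quad \text{s.t.} \quad
\partial_t u_\theta(x_j, t_j)
  - N_\phi\bigl(u_\theta, \partial_x u_\theta, \partial_{xx} u_\theta\bigr)(x_j, t_j) = 0,
\qquad j = 1, \dots, N_c.
%
% Penalty relaxation: fold the constraints into the objective with a fixed
% penalty weight mu, recovering a familiar unconstrained (PINN-style) loss:
\min_{\theta,\,\phi}\ \sum_{i=1}^{N_d} \bigl(u_\theta(x_i, t_i) - d_i\bigr)^2
+ \mu \sum_{j=1}^{N_c}
  \bigl(\partial_t u_\theta(x_j, t_j) - N_\phi(\cdot)(x_j, t_j)\bigr)^2.
```

The trust-region barrier (interior-point) method mentioned in the abstract treats the equality constraints directly rather than through a fixed penalty weight, which is the distinction the numerical comparisons probe.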
- Research Organization:
- University of Colorado, Boulder, CO (United States)
- Sponsoring Organization:
- USDOE National Nuclear Security Administration (NNSA); National Science Foundation (NSF)
- Grant/Contract Number:
- NA0003962; NA0003525
- OSTI ID:
- 2589832
- Journal Information:
- Computer Methods in Applied Mechanics and Engineering, Vol. 436; ISSN 0045-7825
- Publisher:
- Elsevier BV
- Country of Publication:
- United States
- Language:
- English