Gradient-based optimization for regression in the functional tensor-train format
- Univ. of Michigan, Ann Arbor, MI (United States)
- Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Optimization and Uncertainty Quantification
Predictive analysis of complex computational models, such as uncertainty quantification (UQ), must often rely on an existing database of simulation runs. In this paper we consider the task of performing low-multilinear-rank regression on such a database. Specifically, we develop and analyze an efficient gradient computation that enables gradient-based optimization procedures, including stochastic gradient descent and quasi-Newton methods, for learning the parameters of a functional tensor-train (FT). We compare our algorithms with 22 other nonparametric and parametric regression methods on 10 real-world data sets and show that, for many physical systems, exploiting low-rank structure facilitates the efficient construction of surrogate models. We then use a number of synthetic functions to build insight into the behavior of our algorithms, including the rank adaptation and group-sparsity regularization procedures that we developed to reduce overfitting. Finally, we conclude the paper by building a surrogate of a physical model of a propulsion plant on a naval vessel.
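The regression setup in the abstract can be illustrated with a minimal sketch: a two-dimensional functional tensor-train of fixed rank, with each univariate factor expanded in a Legendre basis and all coefficients fit by plain gradient descent on a mean-squared-error loss. This is an assumption-laden toy, not the paper's implementation — it omits the rank adaptation, group-sparsity regularization, and quasi-Newton options the paper develops, and all names below are illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(0)

# Synthetic data: f(x1, x2) = sin(pi*x1) * cos(pi*x2) has multilinear rank 1.
N = 500
X = rng.uniform(-1.0, 1.0, size=(N, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

# FT parameters: one coefficient matrix ("core") per input dimension.
# Each core maps a degree-(P-1) Legendre basis to r univariate factors,
# so f(x1, x2) = sum_k g1_k(x1) * g2_k(x2).  (Fixed rank r: no adaptation.)
r, P = 2, 8
C1 = 0.1 * rng.standard_normal((P, r))
C2 = 0.1 * rng.standard_normal((P, r))

B1 = legvander(X[:, 0], P - 1)   # (N, P) Legendre features per dimension
B2 = legvander(X[:, 1], P - 1)

def predict(C1, C2):
    # Sum over the r rank terms of the product of univariate factors.
    return np.sum((B1 @ C1) * (B2 @ C2), axis=1)

lr = 0.05
for _ in range(5000):
    G1, G2 = B1 @ C1, B2 @ C2
    resid = np.sum(G1 * G2, axis=1) - y          # prediction error
    # Gradients of 0.5 * mean squared error w.r.t. each core.
    C1 -= lr * (B1.T @ (resid[:, None] * G2)) / N
    C2 -= lr * (B2.T @ (resid[:, None] * G1)) / N

rmse = np.sqrt(np.mean((predict(C1, C2) - y) ** 2))
```

Because the model is multilinear in the cores, the loss gradient with respect to one core is a simple weighted least-squares expression with the other dimension's factors held fixed, which is what makes efficient gradient-based optimization (SGD, quasi-Newton) attractive here.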
- Research Organization:
- Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
- Sponsoring Organization:
- USDOE National Nuclear Security Administration (NNSA); USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
- Grant/Contract Number:
- AC04-94AL85000; NA-0003525
- OSTI ID:
- 1485822
- Alternate ID(s):
- OSTI ID: 1702087
- Report Number(s):
- SAND-2018-13399J; 670535
- Journal Information:
- Journal of Computational Physics, Vol. 374, Issue C; ISSN 0021-9991
- Publisher:
- Elsevier
- Country of Publication:
- United States
- Language:
- English