OSTI.GOV — U.S. Department of Energy
Office of Scientific and Technical Information

Title: Gradient-based optimization for regression in the functional tensor-train format

Abstract

Predictive analysis of complex computational models, such as uncertainty quantification (UQ), must often rely on an existing database of simulation runs. In this paper we consider the task of performing low-multilinear-rank regression on such a database. Specifically, we develop and analyze an efficient gradient computation that enables gradient-based optimization procedures, including stochastic gradient descent and quasi-Newton methods, for learning the parameters of a functional tensor-train (FT). We compare our algorithms with 22 other nonparametric and parametric regression methods on 10 real-world data sets and show that for many physical systems, exploiting low-rank structure facilitates efficient construction of surrogate models. We also use a number of synthetic functions to build insight into the behavior of our algorithms, including the rank adaptation and group-sparsity regularization procedures that we developed to reduce overfitting. Finally, we conclude the paper by building a surrogate of a physical model of a propulsion plant on a naval vessel.
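To illustrate the idea behind the abstract, here is a minimal sketch of low-multilinear-rank regression fit by gradient descent. It is not the authors' algorithm: it uses a two-dimensional model f(x) = φ(x1)ᵀ A B ψ(x2) (a rank-R product of univariate polynomial expansions, the d = 2 special case of a functional tensor-train), with hand-derived gradients of the squared loss. All names (`features`, `predict`, the rank `R`, learning rate) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: a separable (rank-1) function, so a
# low-rank surrogate should fit it well.
N = 500
X = rng.uniform(-1.0, 1.0, size=(N, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])

def features(x, p=5):
    """Univariate polynomial basis 1, x, ..., x^(p-1) per sample."""
    return np.vander(x, p, increasing=True)

Phi = features(X[:, 0])   # (N, p) basis evaluations for dimension 1
Psi = features(X[:, 1])   # (N, p) basis evaluations for dimension 2
p, R = Phi.shape[1], 2    # R is the multilinear rank of the model

# FT-style model for d = 2: f(x) = phi(x1)^T A B psi(x2)
A = 0.1 * rng.standard_normal((p, R))
B = 0.1 * rng.standard_normal((R, p))

def predict(A, B):
    # phi_n^T A B psi_n for every sample n
    return np.einsum("ni,ir,rj,nj->n", Phi, A, B, Psi)

def loss(A, B):
    return np.mean((predict(A, B) - y) ** 2)

lr = 0.05
loss0 = loss(A, B)
for _ in range(5000):
    resid = predict(A, B) - y                        # (N,)
    # Gradients of the mean squared error w.r.t. each core
    dA = (2 / N) * Phi.T @ (resid[:, None] * (Psi @ B.T))
    dB = (2 / N) * (Phi @ A).T @ (resid[:, None] * Psi)
    A -= lr * dA
    B -= lr * dB

print(f"initial MSE {loss0:.4f} -> final MSE {loss(A, B):.2e}")
```

The paper's contribution is an efficient gradient of this kind of objective for general d-dimensional FTs, which then plugs into stochastic gradient descent or quasi-Newton solvers; the sketch above only shows the d = 2 mechanics.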

Authors:
Gorodetsky, Alex A. [1]; Jakeman, John D. [2]
  1. Univ. of Michigan, Ann Arbor, MI (United States)
  2. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Optimization and Uncertainty Quantification
Publication Date:
2018-08-14
Research Org.:
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
Sponsoring Org.:
USDOE National Nuclear Security Administration (NNSA); USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
OSTI Identifier:
1485822
Alternate Identifier(s):
OSTI ID: 1702087
Report Number(s):
SAND-2018-13399J
Journal ID: ISSN 0021-9991; 670535
Grant/Contract Number:  
AC04-94AL85000; NA-0003525
Resource Type:
Journal Article: Accepted Manuscript
Journal Name:
Journal of Computational Physics
Additional Journal Information:
Journal Volume: 374; Journal Issue: C; Journal ID: ISSN 0021-9991
Publisher:
Elsevier
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; 71 CLASSICAL AND QUANTUM MECHANICS, GENERAL PHYSICS; Tensors; Regression; Function approximation; Uncertainty quantification; Alternating least squares; Stochastic gradient descent

Citation Formats

Gorodetsky, Alex A., and Jakeman, John D. Gradient-based optimization for regression in the functional tensor-train format. United States: N. p., 2018. Web. doi:10.1016/j.jcp.2018.08.010.
Gorodetsky, Alex A., & Jakeman, John D. Gradient-based optimization for regression in the functional tensor-train format. United States. https://doi.org/10.1016/j.jcp.2018.08.010
Gorodetsky, Alex A., and Jakeman, John D. 2018. "Gradient-based optimization for regression in the functional tensor-train format". United States. https://doi.org/10.1016/j.jcp.2018.08.010. https://www.osti.gov/servlets/purl/1485822.
@article{osti_1485822,
title = {Gradient-based optimization for regression in the functional tensor-train format},
author = {Gorodetsky, Alex A. and Jakeman, John D.},
abstractNote = {Predictive analysis of complex computational models, such as uncertainty quantification (UQ), must often rely on using an existing database of simulation runs. In this paper we consider the task of performing low-multilinear-rank regression on such a database. Specifically we develop and analyze an efficient gradient computation that enables gradient-based optimization procedures, including stochastic gradient descent and quasi-Newton methods, for learning the parameters of a functional tensor-train (FT). We compare our algorithms with 22 other nonparametric and parametric regression methods on 10 real-world data sets and show that for many physical systems, exploiting low-rank structure facilitates efficient construction of surrogate models. Here, we use a number of synthetic functions to build insight into behavior of our algorithms, including the rank adaptation and group-sparsity regularization procedures that we developed to reduce overfitting. Finally we conclude the paper by building a surrogate of a physical model of a propulsion plant on a naval vessel.},
doi = {10.1016/j.jcp.2018.08.010},
url = {https://www.osti.gov/biblio/1485822}, journal = {Journal of Computational Physics},
issn = {0021-9991},
number = {C},
volume = {374},
place = {United States},
year = {2018},
month = {8}
}

Citation Metrics:
Cited by: 17 works
Citation information provided by
Web of Science

Works referenced in this record:

A scalable optimization approach for fitting canonical tensor decompositions
journal, January 2011


Spectral Tensor-Train Decomposition
journal, January 2016


Adaptive sparse polynomial chaos expansion based on least angle regression
journal, March 2011


A Least-Squares Method for Sparse Low Rank Approximation of Multivariate Functions
journal, January 2015


Adaptive Smolyak Pseudospectral Approximations
journal, January 2013


Sparse pseudospectral approximation method
journal, July 2012


Nonlinear approximation
journal, January 1998


A non-adapted sparse approximation of PDEs with stochastic inputs
journal, April 2011


Non-intrusive low-rank separated approximation of high-dimensional stochastic models
journal, August 2013


Multi-element probabilistic collocation method in high dimensions
journal, March 2010


Sparse grid collocation schemes for stochastic natural convection problems
journal, July 2007


Mercer Kernels and Integrated Variance Experimental Design: Connections Between Gaussian Process Regression and Polynomial Approximation
journal, January 2016


Hierarchical Singular Value Decomposition of Tensors
journal, January 2010


Variants of Alternating Least Squares Tensor Completion in the Tensor Train Format
journal, January 2015


A New Scheme for the Tensor Representation
journal, October 2009


Enhancing ℓ1-minimization estimates of polynomial chaos expansions using basis selection
journal, May 2015


Tensor Decompositions and Applications
journal, August 2009


Quantification of Uncertainty from High-Dimensional Scattered data via Polynomial Approximation
journal, January 2014


A Compressed Sensing Approach for Partial Differential Equations with Random Input Data
journal, October 2012


High-dimensional additive modeling
journal, December 2009


An Anisotropic Sparse Grid Stochastic Collocation Method for Partial Differential Equations with Random Input Data
journal, January 2008


Tensor-Train Decomposition
journal, January 2011


Constructive Representation of Functions in Low-Rank Tensor Formats
journal, December 2012


TT-cross approximation for multidimensional arrays
journal, January 2010


Low rank tensor recovery via iterative hard thresholding
journal, June 2017


Polynomial-Chaos-Based Kriging
journal, January 2015


Simultaneous Variable Selection
journal, August 2005


Local Convergence of the Alternating Least Squares Algorithm for Canonical Tensor Approximation
journal, January 2012


The Wiener–Askey Polynomial Chaos for Stochastic Differential Equations
journal, January 2002


Model selection and estimation in regression with grouped variables
journal, February 2006


Fast adaptive interpolation of multi-dimensional arrays in tensor train format
conference, September 2011


Optimization Methods for Large-Scale Machine Learning
journal, January 2018


Alternating Least Squares Tensor Completion in The TT-Format
preprint, January 2015


Works referencing / citing this record:

Sparse low-rank separated representation models for learning from data
journal, January 2019


Adaptive multi‐index collocation for uncertainty quantification and sensitivity analysis
journal, November 2019

  • Jakeman, John D.; Eldred, Michael S.; Geraci, Gianluca
  • International Journal for Numerical Methods in Engineering, Vol. 121, Issue 6
  • https://doi.org/10.1002/nme.6268

Adaptive multi‐index collocation for uncertainty quantification and sensitivity analysis
journal, August 2020

  • Jakeman, John D.; Eldred, Michael S.; Geraci, Gianluca
  • International Journal for Numerical Methods in Engineering, Vol. 121, Issue 19
  • https://doi.org/10.1002/nme.6450

Introductory overview of identifiability analysis: A guide to evaluating whether you have the right type of data for your modeling purpose
journal, September 2019


MGA: Momentum Gradient Attack on Network
journal, February 2021


Cholesky-based experimental design for Gaussian process and kernel-based emulation and calibration
text, January 2021