OSTI.GOV · U.S. Department of Energy, Office of Scientific and Technical Information

Title: Bayesian sparse learning with preconditioned stochastic gradient MCMC and its applications

Journal Article · Journal of Computational Physics
Yating Wang [1]; Wei Deng [1]; Guang Lin [2]
  1. Purdue Univ., West Lafayette, IN (United States). Dept. of Mathematics
  2. Purdue Univ., West Lafayette, IN (United States). Dept. of Mathematics; School of Mechanical Engineering; Dept. of Statistics; Dept. of Earth, Atmospheric, and Planetary Sciences

Deep neural networks (DNNs) have been successfully employed in a wide variety of research areas, including the solution of partial differential equations. Despite this success, effectively training DNNs remains challenging; in particular, one must avoid overfitting in over-parameterized networks and accelerate optimization in networks with pathological curvature. Here, we propose a Bayesian sparse deep learning algorithm. The algorithm places a set of spike-and-slab priors on the parameters of the deep neural network. This hierarchical Bayesian mixture is trained with an adaptive empirical method: one alternately samples from the posterior using preconditioned stochastic gradient Langevin dynamics (PSGLD) and optimizes the latent variables via stochastic approximation. Sparsity of the network is achieved by optimizing the hyperparameters with adaptive searching and penalizing. A popular SG-MCMC approach is stochastic gradient Langevin dynamics (SGLD); however, given the complex geometry of the model parameter space in nonconvex learning, updating every component of the parameters with a universal step size, as in SGLD, may cause slow mixing. To address this issue, we apply a computationally manageable preconditioner in the updating rule, which provides a step size adapted to local geometric properties. Moreover, by smoothly optimizing the hyperparameter in the preconditioning matrix, our proposed algorithm ensures a decreasing bias, which is introduced by ignoring the correction term in the preconditioned SGLD. Building on the existing theoretical framework, we show that the proposed algorithm converges asymptotically to the correct distribution, with a controllable bias, under mild conditions. Numerical tests on both synthetic regression problems and the learning of solutions of elliptic PDEs demonstrate the accuracy and efficiency of the present work.
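The abstract sketches two alternating updates: a PSGLD move on the network weights and a stochastic-approximation move on the spike-and-slab latent variables. Below is a minimal NumPy sketch of that flavor of algorithm, assuming an RMSProp-style diagonal preconditioner as in Li et al.'s pSGLD (2016); the function names, hyperparameter defaults, and the EMVS-style latent update are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def psgld_step(theta, grad_log_post, V, lr=1e-3, alpha=0.99, eps=1e-5):
    """One preconditioned SGLD update with an RMSProp-style diagonal
    preconditioner. grad_log_post returns a stochastic estimate of the
    log-posterior gradient at theta."""
    g = grad_log_post(theta)
    V = alpha * V + (1.0 - alpha) * g * g           # moving average of squared gradients
    G = 1.0 / (eps + np.sqrt(V))                    # diagonal preconditioner
    # Langevin move: preconditioned drift plus preconditioned Gaussian noise.
    # The correction term of the exact preconditioned dynamics is dropped,
    # which is the source of the bias the abstract says is kept decreasing.
    theta = theta + 0.5 * lr * G * g + rng.normal(size=theta.shape) * np.sqrt(lr * G)
    return theta, V

def update_latent(theta, rho, pi=0.5, v0=1e-4, v1=1.0, step=0.1):
    """Stochastic-approximation update of spike-and-slab inclusion
    probabilities: an EMVS-style E-step smoothed in Robbins-Monro fashion.
    All names and defaults here are hypothetical placeholders."""
    def normal_pdf(x, var):
        return np.exp(-0.5 * x * x / var) / np.sqrt(2.0 * np.pi * var)
    p_slab = pi * normal_pdf(theta, v1)             # weight under the wide slab
    p_spike = (1.0 - pi) * normal_pdf(theta, v0)    # weight under the narrow spike
    target = p_slab / (p_slab + p_spike)            # posterior inclusion probability
    return (1.0 - step) * rho + step * target       # smoothed latent update

# Illustrative alternating loop on a toy standard-normal log-posterior.
theta = rng.normal(size=10)
V = np.zeros_like(theta)
rho = np.full_like(theta, 0.5)
grad = lambda th: -th                               # gradient of log N(0, I)
for _ in range(1000):
    theta, V = psgld_step(theta, grad, V)
    rho = update_latent(theta, rho)

Weights whose smoothed inclusion probability stays near zero are the candidates the adaptive searching-and-penalizing step would prune, which is how the sparsity described in the abstract emerges from the mixture prior.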

Research Organization:
Purdue Univ., West Lafayette, IN (United States)
Sponsoring Organization:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR); National Science Foundation (NSF); US Army Research Office (ARO)
Grant/Contract Number:
SC0021142; DMS-1555072; DMS-1736364; CMMI-1634832; CMMI-1560834; W911NF-15-1-0562
OSTI ID:
1853726
Alternate ID(s):
OSTI ID: 1809418
Journal Information:
Journal of Computational Physics, Vol. 432, Issue C; ISSN 0021-9991
Publisher:
Elsevier
Country of Publication:
United States
Language:
English
