DOE PAGES
U.S. Department of Energy, Office of Scientific and Technical Information

Title: Bayesian sparse learning with preconditioned stochastic gradient MCMC and its applications

Abstract

Deep neural networks (DNNs) have been successfully employed in an extensive variety of research areas, including solving partial differential equations. Despite this significant success, there are challenges in effectively training DNNs, such as avoiding overfitting in over-parameterized networks and accelerating optimization in networks with pathological curvature. Here, we propose a Bayesian-type sparse deep learning algorithm. The algorithm utilizes a set of spike-and-slab priors for the parameters in the deep neural network. The hierarchical Bayesian mixture is trained using an adaptive empirical method: one alternately samples from the posterior using preconditioned stochastic gradient Langevin dynamics (PSGLD) and optimizes the latent variables via stochastic approximation. Sparsity of the network is achieved while optimizing the hyperparameters with adaptive searching and penalization. A popular SG-MCMC approach is stochastic gradient Langevin dynamics (SGLD). However, considering the complex geometry of the model parameter space in nonconvex learning, updating parameters with a universal step size in each component, as in SGLD, may cause slow mixing. To address this issue, we apply a computationally manageable preconditioner in the updating rule, which provides a step size adapted to local geometric properties. Moreover, by smoothly optimizing the hyperparameter in the preconditioning matrix, our proposed algorithm ensures a decreasing bias, which is introduced by ignoring the correction term in the preconditioned SGLD. Following the existing theoretical framework, we show that the proposed algorithm can asymptotically converge to the correct distribution with a controllable bias under mild conditions. Numerical tests are performed on both synthetic regression problems and learning the solutions of elliptic PDEs, which demonstrate the accuracy and efficiency of the present work.
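The preconditioned Langevin update described in the abstract admits a compact sketch. Below is a minimal, illustrative Python sketch (not the authors' implementation) of one PSGLD step with an RMSprop-style diagonal preconditioner built from a running second moment of the stochastic gradient; the toy log-posterior gradient, variable names, and hyperparameter values are assumptions made only for demonstration.

import numpy as np

rng = np.random.default_rng(0)

def grad_log_post(theta, x, y):
    # Toy stochastic gradient of a log-posterior: Gaussian likelihood for
    # linear regression plus a standard normal prior (illustrative only;
    # in the paper this gradient would come from the sparse DNN's
    # hierarchical spike-and-slab posterior).
    resid = y - x @ theta
    return x.T @ resid - theta

def psgld_step(theta, v, grad, eps=1e-3, alpha=0.99, lam=1e-5):
    # One PSGLD update: adapt a diagonal preconditioner from a running
    # second moment of the gradient, then take a preconditioned Langevin step.
    v = alpha * v + (1.0 - alpha) * grad**2
    G = 1.0 / (lam + np.sqrt(v))                      # diagonal preconditioner
    noise = rng.normal(size=theta.shape) * np.sqrt(eps * G)
    return theta + 0.5 * eps * G * grad + noise, v

# Usage on synthetic data (illustrative assumption)
x = rng.normal(size=(64, 5))
true_theta = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
y = x @ true_theta + 0.1 * rng.normal(size=64)

theta, v = np.zeros(5), np.zeros(5)
for _ in range(2000):
    theta, v = psgld_step(theta, v, grad_log_post(theta, x, y))

In the full algorithm this sampling step alternates with a stochastic-approximation update of the spike-and-slab latent variables, and the bias incurred by dropping the preconditioner's correction term shrinks as the preconditioner hyperparameter is optimized; neither of those steps is shown in this sketch.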

Authors:
 Wang, Yating [1]; Deng, Wei [1]; Lin, Guang [2]
  1. Purdue Univ., West Lafayette, IN (United States). Dept. of Mathematics
  2. Purdue Univ., West Lafayette, IN (United States). Dept. of Mathematics; School of Mechanical Engineering; Dept. of Statistics; Dept. of Earth, Atmospheric, and Planetary Sciences
Publication Date:
February 3, 2021
Research Org.:
Purdue Univ., West Lafayette, IN (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR); National Science Foundation (NSF); US Army Research Office (ARO)
OSTI Identifier:
1853726
Alternate Identifier(s):
OSTI ID: 1809418
Grant/Contract Number:  
SC0021142; DMS-1555072; DMS-1736364; CMMI-1634832; CMMI-1560834; W911NF-15-1-0562
Resource Type:
Accepted Manuscript
Journal Name:
Journal of Computational Physics
Additional Journal Information:
Journal Volume: 432; Journal Issue: C; Journal ID: ISSN 0021-9991
Publisher:
Elsevier
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; 71 CLASSICAL AND QUANTUM MECHANICS, GENERAL PHYSICS; computer science; physics; Bayesian sparse learning; preconditioned stochastic gradient MCMC; deep learning; deep neural network; adaptive hierarchical posterior; stochastic approximation

Citation Formats

Wang, Yating, Deng, Wei, and Lin, Guang. Bayesian sparse learning with preconditioned stochastic gradient MCMC and its applications. United States: N. p., 2021. Web. doi:10.1016/j.jcp.2021.110134.
Wang, Yating, Deng, Wei, & Lin, Guang. Bayesian sparse learning with preconditioned stochastic gradient MCMC and its applications. United States. https://doi.org/10.1016/j.jcp.2021.110134
Wang, Yating, Deng, Wei, and Lin, Guang. Wed, Feb. 03, 2021. "Bayesian sparse learning with preconditioned stochastic gradient MCMC and its applications". United States. https://doi.org/10.1016/j.jcp.2021.110134. https://www.osti.gov/servlets/purl/1853726.
@article{osti_1853726,
title = {Bayesian sparse learning with preconditioned stochastic gradient MCMC and its applications},
author = {Wang, Yating and Deng, Wei and Lin, Guang},
doi = {10.1016/j.jcp.2021.110134},
journal = {Journal of Computational Physics},
number = {C},
volume = {432},
place = {United States},
year = {2021},
month = {feb}
}

Works referenced in this record:

A Stochastic Quasi-Newton Method for Large-Scale Optimization
journal, January 2016

  • Byrd, R. H.; Hansen, S. L.; Nocedal, J.
  • SIAM Journal on Optimization, Vol. 26, Issue 2
  • DOI: 10.1137/140954362

EMVS: The EM Approach to Bayesian Variable Selection
journal, April 2014

  • Ročková, Veronika; George, Edward I.
  • Journal of the American Statistical Association, Vol. 109, Issue 506
  • DOI: 10.1080/01621459.2013.869223

Online adaptive local multiscale model reduction for heterogeneous problems in perforated domains
journal, July 2016


Generalized multiscale finite element methods (GMsFEM)
journal, October 2013


Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data
journal, October 2019

  • Zhu, Yinhao; Zabaras, Nicholas; Koutsourelakis, Phaedon-Stelios
  • Journal of Computational Physics, Vol. 394
  • DOI: 10.1016/j.jcp.2019.05.024

Training‐Image Based Geostatistical Inversion Using a Spatial Generative Adversarial Neural Network
journal, January 2018

  • Laloy, Eric; Hérault, Romain; Jacques, Diederik
  • Water Resources Research, Vol. 54, Issue 1
  • DOI: 10.1002/2017WR022148

Deep multiscale model learning
journal, April 2020


Mixed Generalized Multiscale Finite Element Methods and Applications
journal, January 2015

  • Chung, Eric T.; Efendiev, Yalchin; Lee, Chak Shing
  • Multiscale Modeling & Simulation, Vol. 13, Issue 1
  • DOI: 10.1137/140970574

Reduced-order deep learning for flow dynamics. The interplay between deep learning and model reduction
journal, January 2020


A Multiscale Neural Network Based on Hierarchical Matrices
journal, January 2019

  • Fan, Yuwei; Lin, Lin; Ying, Lexing
  • Multiscale Modeling & Simulation, Vol. 17, Issue 4
  • DOI: 10.1137/18M1203602

Riemann manifold Langevin and Hamiltonian Monte Carlo methods
journal, March 2011

  • Girolami, Mark; Calderhead, Ben
  • Journal of the Royal Statistical Society: Series B (Statistical Methodology), Vol. 73, Issue 2
  • DOI: 10.1111/j.1467-9868.2010.00765.x

Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems
journal, November 2019


MgNet: A unified framework of multigrid and convolutional neural network
journal, May 2019


The Deep Ritz Method: A Deep Learning-Based Numerical Algorithm for Solving Variational Problems
journal, February 2018


Efficient deep learning techniques for multiphase flow simulation in heterogeneous porous media
journal, January 2020


Homogenization-Based Mixed Multiscale Finite Elements for Problems with Anisotropy
journal, April 2011

  • Arbogast, Todd
  • Multiscale Modeling & Simulation, Vol. 9, Issue 2
  • DOI: 10.1137/100788677

Deep global model reduction learning in porous media flow simulation
journal, December 2019


Bayesian deep convolutional encoder–decoder networks for surrogate modeling and uncertainty quantification
journal, August 2018


Mixed Multiscale Finite Element Methods for Stochastic Porous Media Flows
journal, January 2008

  • Aarnes, J. E.; Efendiev, Y.
  • SIAM Journal on Scientific Computing, Vol. 30, Issue 5
  • DOI: 10.1137/07070108x

Pruning Convolutional Neural Networks for Resource Efficient Inference
preprint, January 2016