Bayesian sparse learning with preconditioned stochastic gradient MCMC and its applications
Journal Article · Journal of Computational Physics
- Purdue Univ., West Lafayette, IN (United States). Dept. of Mathematics; Purdue Univ., West Lafayette, IN (United States)
- Purdue Univ., West Lafayette, IN (United States). Dept. of Mathematics
- Purdue Univ., West Lafayette, IN (United States). Dept. of Mathematics; School of Mechanical Engineering; Dept. of Statistics; Dept. of Earth, Atmospheric, and Planetary Sciences
Deep neural networks (DNNs) have been successfully employed in an extensive variety of research areas, including solving partial differential equations. Despite this success, effectively training DNNs poses challenges, such as avoiding overfitting in over-parameterized networks and accelerating optimization in networks with pathological curvature. Here, we propose a Bayesian sparse deep learning algorithm. The algorithm places a set of spike-and-slab priors on the parameters of the deep neural network, and the resulting hierarchical Bayesian mixture is trained with an adaptive empirical method: we alternately sample from the posterior using preconditioned stochastic gradient Langevin dynamics (PSGLD) and optimize the latent variables via stochastic approximation. Sparsity of the network is achieved by adaptive searching and penalizing while the hyperparameters are optimized. A popular SG-MCMC approach is stochastic gradient Langevin dynamics (SGLD); however, given the complex geometry of the model parameter space in nonconvex learning, updating every component with a universal step size, as SGLD does, may cause slow mixing. To address this issue, we apply a computationally manageable preconditioner in the updating rule, which adapts the step size to local geometric properties. Moreover, by smoothly optimizing the hyperparameter in the preconditioning matrix, the proposed algorithm ensures that the bias introduced by ignoring the correction term in preconditioned SGLD is decreasing. Within the existing theoretical framework, we show that the proposed algorithm converges asymptotically to the correct distribution with a controllable bias under mild conditions. Numerical tests on synthetic regression problems and on learning solutions of elliptic PDEs demonstrate the accuracy and efficiency of the present work.
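For intuition, the preconditioned SGLD update that the abstract refers to can be sketched as follows. This is a minimal NumPy sketch assuming an RMSprop-style diagonal preconditioner (as in the pSGLD of Li et al., 2016) with the correction term dropped, which is the source of the controllable bias discussed above; the function and hyperparameter names (`psgld_step`, `alpha`, `lam`) are illustrative and not taken from the paper.

```python
import numpy as np

def psgld_step(theta, grad_log_post, v, step_size=1e-3, alpha=0.99, lam=1e-5):
    """One preconditioned SGLD update with an RMSprop-style diagonal
    preconditioner; the correction (Gamma) term is omitted, which
    introduces the bias the abstract describes as controllable."""
    g = grad_log_post(theta)               # stochastic gradient of the log-posterior
    v = alpha * v + (1.0 - alpha) * g**2   # moving average of squared gradients
    G = 1.0 / (lam + np.sqrt(v))           # diagonal preconditioner: per-component step size
    noise = np.sqrt(step_size * G) * np.random.randn(*theta.shape)
    theta = theta + 0.5 * step_size * G * g + noise
    return theta, v
```

In the adaptive empirical scheme described above, sampling steps of this kind would alternate with stochastic-approximation updates of the spike-and-slab latent variables, which is what drives the network toward sparsity.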
- Research Organization:
- Purdue Univ., West Lafayette, IN (United States)
- Sponsoring Organization:
- National Science Foundation (NSF); US Army Research Office (ARO); USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
- Grant/Contract Number:
- SC0021142
- OSTI ID:
- 1853726
- Alternate ID(s):
- OSTI ID: 1809418
- OSTI ID: 23203379
- Journal Information:
- Journal of Computational Physics, Vol. 432, Issue C; ISSN 0021-9991
- Publisher:
- Elsevier
- Country of Publication:
- United States
- Language:
- English
Similar Records
An adaptive Hessian approximated stochastic gradient MCMC method
Journal Article · February 2021 · Journal of Computational Physics · OSTI ID: 1853727

Laplacian Smoothing Stochastic Gradient Markov Chain Monte Carlo
Journal Article · January 2021 · SIAM Journal on Scientific Computing · OSTI ID: 1866812

Improving Deep Neural Networks’ Training for Image Classification With Nonlinear Conjugate Gradient-Style Adaptive Momentum
Journal Article · March 2023 · IEEE Transactions on Neural Networks and Learning Systems · OSTI ID: 2280651