Laplacian Smoothing Stochastic Gradient Markov Chain Monte Carlo

Journal Article · SIAM Journal on Scientific Computing
DOI: https://doi.org/10.1137/19m1294356 · OSTI ID: 1866812
Authors: [1]; [2]; [2]; [2]
  1. Univ. of California, Los Angeles, CA (United States); Univ. of Utah, Salt Lake City, UT (United States)
  2. Univ. of California, Los Angeles, CA (United States)
As an important Markov chain Monte Carlo (MCMC) method, the stochastic gradient Langevin dynamics (SGLD) algorithm has achieved great success in Bayesian learning and posterior sampling. However, SGLD typically suffers from a slow convergence rate due to the large variance of its stochastic gradient. To alleviate this drawback, we leverage the recently developed Laplacian smoothing technique and propose a Laplacian smoothing stochastic gradient Langevin dynamics (LS-SGLD) algorithm. We prove that for sampling from both log-concave and non-log-concave densities, LS-SGLD achieves a strictly smaller discretization error in 2-Wasserstein distance, although its mixing rate can be slightly slower. Experiments on both synthetic and real datasets verify our theoretical results and demonstrate the superior performance of LS-SGLD on machine learning tasks including posterior sampling, Bayesian logistic regression, and the training of Bayesian convolutional neural networks.
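The abstract describes LS-SGLD as SGLD with a Laplacian smoothing preconditioner applied to the stochastic gradient. As a rough illustration of the idea, and not the authors' implementation, the sketch below assumes the update x_{k+1} = x_k - eta * A_sigma^{-1} g_k + sqrt(2*eta) * A_sigma^{-1/2} xi_k with A_sigma = I + sigma * L, where L is the one-dimensional discrete Laplacian with periodic boundary conditions, so A_sigma is circulant and invertible via the FFT. All names here (laplacian_smoothing_sgld, grad_fn, sigma, and so on) are invented for this sketch.

import numpy as np

def laplacian_smoothing_sgld(grad_fn, x0, step_size, sigma, n_steps, rng=None):
    """Sketch of LS-SGLD: SGLD whose stochastic gradient (and injected
    Gaussian noise) is preconditioned by powers of A_sigma^{-1}, with
    A_sigma = I + sigma * L and L the periodic 1-D discrete Laplacian.
    A_sigma is circulant, so A_sigma^{-p} v costs O(d log d) via the FFT.
    """
    rng = rng or np.random.default_rng()
    d = x0.size
    # Eigenvalues of A_sigma are the FFT of its first column;
    # L's first column is [2, -1, 0, ..., 0, -1] (periodic wrap-around).
    col = np.zeros(d)
    col[0], col[1], col[-1] = 2.0, -1.0, -1.0
    eig = 1.0 + sigma * np.fft.fft(col).real  # real and >= 1 for sigma >= 0

    def apply_inv(v, power=1.0):
        # Computes A_sigma^{-power} v in the Fourier domain.
        return np.fft.ifft(np.fft.fft(v) / eig**power).real

    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_steps):
        g = grad_fn(x)  # stochastic gradient of the negative log density
        xi = rng.standard_normal(d)
        x = (x
             - step_size * apply_inv(g)                        # smoothed drift
             + np.sqrt(2.0 * step_size) * apply_inv(xi, 0.5))  # smoothed noise
        samples.append(x.copy())
    return np.array(samples)

For example, laplacian_smoothing_sgld(lambda x: x, np.zeros(50), 1e-2, 1.0, 1000) draws approximate samples from a 50-dimensional standard Gaussian, since grad_fn there is the gradient of ||x||^2 / 2; setting sigma = 0 makes every eigenvalue one and recovers plain SGLD.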
Research Organization:
Univ. of Utah, Salt Lake City, UT (United States)
Sponsoring Organization:
Air Force Research Laboratory; National Science Foundation; Office of Naval Research; USDOE
Grant/Contract Number:
SC0021142
OSTI ID:
1866812
Alternate ID(s):
OSTI ID: 1853730
Journal Information:
SIAM Journal on Scientific Computing, Vol. 43, Issue 1; ISSN 1064-8275
Publisher:
Society for Industrial and Applied Mathematics (SIAM)
Country of Publication:
United States
Language:
English

Similar Records

Bayesian sparse learning with preconditioned stochastic gradient MCMC and its applications
Journal Article · 2021 · Journal of Computational Physics · OSTI ID: 1853726

An adaptive Hessian approximated stochastic gradient MCMC method
Journal Article · 2021 · Journal of Computational Physics · OSTI ID: 1853727

Challenges in Markov Chain Monte Carlo for Bayesian Neural Networks
Journal Article · 2022 · Statistical Science · OSTI ID: 1976073