A computational framework for empirical Bayes
Yves F. Atchadé
(June 09; revised Jan. 10)
Abstract: In empirical Bayes inference one is typically interested in sampling from the
posterior distribution of a parameter with a hyper-parameter set to its maximum likeli-
hood estimate. This is often problematic, particularly when the likelihood function of the
hyper-parameter is not available in closed form and the posterior distribution is intractable.
Previous works have dealt with this problem using a multi-step approach based on the EM
algorithm and Markov Chain Monte Carlo (MCMC). We propose a framework based on re-
cent developments in adaptive MCMC, where this problem is addressed more efficiently using
a single Monte Carlo run. We discuss the convergence of the algorithm and its connection
with the EM algorithm. We apply our algorithm to the Bayesian Lasso of Park and Casella
(2008) and to the empirical Bayes variable selection of George and Foster (2000).
AMS 2000 subject classifications: Primary 60C05, 60J27, 60J35, 65C40.
Keywords and phrases: Empirical Bayes, Adaptive MCMC, Variable selection, Bayesian
This paper develops an adaptive Monte Carlo strategy for sampling from posterior distributions in
empirical Bayes (EB) analysis. We start here with a general description of the problem. Suppose