Brittleness and Bayesian Inference
In a recent report, “Bayesian Brittleness: Why no Bayesian model is ‘good enough’” (arXiv:1304.6772v1), the authors prove rigorously that, under a seemingly natural class of priors Π, the posterior expectation of a function Φ(θ) can achieve essentially any value that Φ(θ) alone can achieve, so that Bayesian inference appears to have no robustness whatsoever. We explain this puzzling result and show why it does not imply a breakdown of Bayesian inference. The problem is that the priors leading to the extreme results depend on the observed data, which means they are not valid priors. The corresponding posteriors are likewise invalid, so the extreme variation among them does not imply that Bayesian inference is brittle. The report nevertheless provides a new framework, applicable to any assumed class of priors Π, which we explain in detail. This framework may be a useful new tool for deriving quantitative robustness results.
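A schematic rendering of the kind of statement the abstract describes may help fix ideas; the notation (the prior class Π, the quantity of interest Φ, the observed data x₁,…,xₙ) is illustrative and not taken verbatim from the report:

\[
\inf_{\pi \in \Pi} \, \mathbb{E}_{\pi}\!\left[\Phi(\theta) \mid x_{1:n}\right] \;\approx\; \operatorname*{ess\,inf}_{\theta} \Phi(\theta),
\qquad
\sup_{\pi \in \Pi} \, \mathbb{E}_{\pi}\!\left[\Phi(\theta) \mid x_{1:n}\right] \;\approx\; \operatorname*{ess\,sup}_{\theta} \Phi(\theta).
\]

In this reading, “brittleness” means the spread of posterior expectations over Π is essentially as wide as the range of Φ itself. The resolution sketched above is that the priors attaining these extremes are constructed with knowledge of the observed x₁,…,xₙ, and so are not admissible priors.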
- Research Organization: Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)
- Sponsoring Organization: USDOE
- DOE Contract Number: AC52-06NA25396
- OSTI ID: 1090691
- Report Number(s): LA-UR-13-25883
- Country of Publication: United States
- Language: English