
Title: Minimally Informative Prior Distributions for PSA

Conference

A salient feature of Bayesian inference is its ability to incorporate information from a variety of sources into the inference model via the prior distribution (hereafter simply “the prior”). However, over-reliance on old information can lead to priors that dominate new data. Some analysts seek to avoid this by working with a minimally informative prior; another reason for choosing a minimally informative prior is to avoid the often-voiced criticism of subjectivity in the choice of prior. Minimally informative priors fall into two broad classes: 1) so-called noninformative priors, which attempt to be completely objective in that the posterior distribution is determined as completely as possible by the observed data, the best-known example in this class being the Jeffreys prior, and 2) priors that are diffuse over the region where the likelihood function is nonnegligible but that incorporate some information about the parameters being estimated, such as a mean value. In this paper, we compare four approaches in the second class with respect to their practical implications for Bayesian inference in Probabilistic Safety Assessment (PSA). The most commonly used such prior, the so-called constrained noninformative prior, is a special case of the maximum entropy prior. It is formulated as a conjugate distribution for the most commonly encountered aleatory models in PSA and is correspondingly convenient mathematically; however, its relatively light tail can cause the posterior mean to be overly influenced by the prior in updates with sparse data. A more informative prior that is capable, in principle, of dealing more effectively with sparse data is a mixture of conjugate priors. A particular diffuse nonconjugate prior, the logistic-normal, is shown to behave similarly for some purposes. Finally, we review the so-called robust prior. Rather than relying on the mathematical abstraction of entropy, as the constrained noninformative prior does, the robust prior places a heavy-tailed Cauchy distribution on the canonical parameter of the aleatory model.
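To make the tail-behavior point concrete, the sketch below updates a binomial aleatory model with hypothetical sparse data under three priors of the kinds the abstract discusses: a conjugate beta prior, a diffuse logistic-normal prior, and a heavy-tailed Cauchy prior on the logit (the binomial's canonical parameter), in the spirit of the robust prior. This is not the paper's implementation; the data, prior parameters, and grid-based numerical update are all illustrative assumptions.

```python
# Minimal sketch, not from the paper: all numbers are hypothetical.
import numpy as np
from scipy import stats

x, n = 3, 50          # hypothetical sparse data: 3 failures in 50 demands

# Uniform grid on (0, 1) for the non-conjugate priors.
p = np.linspace(1e-6, 1 - 1e-6, 200_000)
logit = np.log(p / (1 - p))
jac = 1.0 / (p * (1 - p))       # |d logit/dp|, change of variables to the p scale
like = stats.binom.pmf(x, n, p)

def posterior_mean(prior_pdf):
    """Posterior mean by grid summation; the normalizing constant cancels."""
    post = prior_pdf * like
    return np.sum(p * post) / np.sum(post)

# 1) Conjugate beta prior with mean a0/(a0 + b0) = 0.01 (illustrative values);
#    the update is closed form.
a0, b0 = 0.5, 49.5
mean_beta = (a0 + x) / (a0 + b0 + n)

# 2) Diffuse logistic-normal prior: logit(p) ~ Normal(mu, sigma).
mu, sigma = np.log(0.01 / 0.99), 2.0   # prior median near 0.01, diffuse on the logit scale
mean_logisticnormal = posterior_mean(stats.norm.pdf(logit, mu, sigma) * jac)

# 3) Heavy-tailed prior in the spirit of the robust prior:
#    logit(p) ~ Cauchy(mu, gamma).
mean_cauchy = posterior_mean(stats.cauchy.pdf(logit, mu, sigma) * jac)

print(f"data alone (x/n):  {x / n:.4f}")
print(f"conjugate beta:    {mean_beta:.4f}")
print(f"logistic-normal:   {mean_logisticnormal:.4f}")
print(f"Cauchy on logit:   {mean_cauchy:.4f}")
```

Under these hypothetical numbers the observed rate (0.06) conflicts with the prior mean (0.01), and the comparison should show the pattern the abstract describes: the light-tailed conjugate prior holds the posterior mean close to the prior, while the heavier-tailed logistic-normal and Cauchy priors let the posterior mean move toward the data.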

Research Organization:
Idaho National Lab. (INL), Idaho Falls, ID (United States)
Sponsoring Organization:
USDOE
DOE Contract Number:
DE-AC07-05ID14517
OSTI ID:
984541
Report Number(s):
INL/CON-10-18287; TRN: US201016%%1384
Resource Relation:
Conference: PSAM-10, Seattle, WA, 06/07/2010–06/11/2010
Country of Publication:
United States
Language:
English

Similar Records

Finding a Minimally Informative Dirichlet Prior Distribution Using Least Squares
Journal Article · March 2011 · Reliability Engineering and System Safety

Finding A Minimally Informative Dirichlet Prior Using Least Squares
Conference · March 2011

MSPI False Indication Probability Simulations
Conference · March 2011