 
Summary: 5 Bayesian inference for extremes
Throughout this short course, the method of maximum likelihood has provided a general and
flexible technique for parameter estimation. Given a (generic) parameter vector θ within a
family Θ, the likelihood function is the probability (density) of the observed data as a function
of θ. Values of θ that have high likelihood correspond to models which give high probability
to the observed data. The principle of maximum likelihood estimation is to adopt the model
with greatest likelihood; of all the models under consideration, this is the one that assigns
the highest probability to the observed data. Other inferential procedures, such as the "method
of moments", provide viable alternatives to maximum likelihood estimation; moments-based
techniques choose θ by equating model-based and empirical moments, and solving
for θ to obtain parameter estimates. These, and other procedures (such as probability weighted
moments, L-moments and ranked set estimation), are discussed in detail in, amongst other
places, Kotz and Nadarajah (2000).
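As a minimal sketch of the maximum likelihood principle described above, the following Python fragment fits a simple exponential model f(x; θ) = θ exp(−θx) to simulated data. The exponential model, the grid search and all function names here are illustrative choices, not part of the course material; for this model the log-likelihood n log θ − θ Σ x_i is maximised analytically at θ̂ = 1/mean(x), so the numerical search can be checked against the closed form.

```python
import math
import random

def log_likelihood(theta, data):
    """Log-likelihood of the exponential model f(x; theta) at rate theta."""
    n = len(data)
    return n * math.log(theta) - theta * sum(data)

def mle_grid(data, grid):
    """Return the theta on the grid with the highest log-likelihood."""
    return max(grid, key=lambda t: log_likelihood(t, data))

# Simulate data from an exponential distribution with a known rate.
random.seed(1)
true_theta = 2.0
data = [random.expovariate(true_theta) for _ in range(5000)]

# Analytic maximum likelihood estimate: theta_hat = 1 / mean(x).
closed_form = len(data) / sum(data)

# Crude numerical maximisation over a grid of candidate rates.
grid = [0.01 * k for k in range(1, 1000)]
numerical = mle_grid(data, grid)

print(closed_form, numerical)
```

Both estimates should lie close to the true rate of 2.0, and the grid maximiser should agree with the closed form up to the grid resolution; in practice one would replace the grid search with a numerical optimiser.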
5.1 General theory
Bayesian techniques offer an alternative way to draw inferences from the likelihood function,
one which many practitioners prefer. As in the non-Bayesian setting, we assume data
x = (x1, . . . , xn) to be realisations of a random variable whose density falls within a parametric
family F = {f(x; θ) : θ ∈ Θ}. However, the parameters of a distribution are now treated as random
variables, for which we specify prior distributions: distributions of the parameters prior
to the inclusion of data. The specification of these prior distributions enables us to supplement
