 
Summary: Bayesian inference with probabilistic population codes
Wei Ji Ma, Jeffrey M. Beck, Peter E. Latham & Alexandre Pouget
Recent psychophysical experiments indicate that humans perform near-optimal Bayesian inference in a wide variety of tasks,
ranging from cue integration to decision making to motor control. This implies that neurons both represent probability
distributions and combine those distributions according to a close approximation to Bayes' rule. At first sight, it would seem that
the high variability in the responses of cortical neurons would make it difficult to implement such optimal statistical inference in
cortical circuits. We argue that, in fact, this variability implies that populations of neurons automatically represent probability
distributions over the stimulus, a type of code we call probabilistic population codes. Moreover, we demonstrate that the Poisson-like variability observed in cortex reduces a broad class of Bayesian inference to simple linear combinations of populations of
neural activity. These results hold for arbitrary probability distributions over the stimulus, for tuning curves of arbitrary shape and
for realistic neuronal variability.
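The abstract's central claim, that Poisson-like variability turns Bayesian cue integration into a simple sum of population activities, can be sketched numerically. The following is a minimal illustration under assumed parameters (Gaussian tuning curves that densely tile the stimulus space, independent Poisson spiking), not the paper's own code: for such a population, the posterior decoded from the summed responses to two cues equals the normalized product of the two single-cue posteriors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: Poisson neurons with Gaussian tuning curves tiling the
# stimulus space widely enough that sum_i f_i(s) is effectively constant.
s_grid = np.linspace(-10, 10, 201)    # candidate stimulus values
pref = np.linspace(-40, 40, 161)      # preferred stimuli of the neurons
gain, sigma_tc = 10.0, 4.0            # illustrative tuning parameters

def tuning(s):
    """Mean firing rates f_i(s) of all neurons for stimulus s."""
    return gain * np.exp(-0.5 * ((s - pref) / sigma_tc) ** 2)

def posterior(r):
    """p(s | r) for independent Poisson neurons:
    log p(s|r) = sum_i [ r_i log f_i(s) - f_i(s) ] + const."""
    F = tuning(s_grid[:, None])               # (n_grid, n_neurons)
    lp = (r * np.log(F)).sum(axis=1) - F.sum(axis=1)
    p = np.exp(lp - lp.max())
    return p / p.sum()

# Two populations respond to the same stimulus through two cues.
s_true = 2.0
r1 = rng.poisson(tuning(s_true))
r2 = rng.poisson(tuning(s_true))

# Linear combination of neural activities: r3 = r1 + r2 ...
p_combined = posterior(r1 + r2)

# ... matches optimal Bayesian integration: the product of posteriors.
p_product = posterior(r1) * posterior(r2)
p_product /= p_product.sum()

print(np.max(np.abs(p_combined - p_product)))  # effectively zero
```

The identity holds because, with tiling tuning curves, the term sum_i f_i(s) is independent of s, so the log posterior is linear in the spike counts; adding spike counts therefore adds log posteriors, i.e. multiplies posteriors, which is exactly Bayes' rule for independent cues.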
Virtually all computations performed by the nervous system are subject
to uncertainty, and taking this into account is critical for making
inferences about the outside world. For instance, imagine hiking in a
forest and having to jump over a stream. To decide whether or not to
jump, you could estimate the width of the stream and compare it to
your internal estimate of your jumping ability. If, for example, you can
jump 2 m and the stream is 1.9 m wide, then you might choose to jump.
The problem with this approach, of course, is that it ignores
the uncertainty in the sensory and motor estimates. If you can jump
