Communicated by Peter Földiák
Reduced Representation by Neural Networks with
Restricted Receptive Fields
L. F. Abbott
Center for Complex Systems, Brandeis University, Waltham, MA 02254 USA
Model neural networks can perform dimensional reductions of input
data sets using correlation-based learning rules to adjust their weights.
Simple Hebbian learning rules lead to an optimal reduction at the sin-
gle unit level but result in highly redundant network representations.
More complex rules designed to reduce or remove this redundancy can
develop optimal principal component representations, but they are not
very compelling from a biological perspective. Neurons in biological
networks have restricted receptive fields limiting their access to the
input data space. We find that, within this restricted receptive field
architecture, simple correlation-based learning rules can produce sur-
prisingly efficient reduced representations. When noise is present, the
size of the receptive fields can be optimally tuned to maximize the ac-
curacy of reconstructions of input data from a reduced representation.
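As a minimal illustration of the single-unit claim above (not the paper's specific model), the sketch below uses Oja's rule, a simple correlation-based Hebbian rule with a normalizing decay term, to show a single linear unit converging to the first principal component of its inputs. The data distribution, learning rate, and dimensionality are assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D inputs with one dominant direction of variance.
n = 5000
principal = np.array([3.0, 1.0]) / np.sqrt(10.0)
data = (rng.normal(size=(n, 1)) * 3.0) * principal \
       + rng.normal(size=(n, 2)) * 0.3

# Oja's rule: Hebbian growth plus a decay that keeps |w| bounded.
w = rng.normal(size=2)
lr = 0.01
for x in data:
    y = w @ x                   # linear unit output
    w += lr * y * (x - y * w)   # Hebbian term minus normalizing decay

# Compare the learned weights with the top eigenvector of the
# input covariance matrix.
cov = data.T @ data / n
eigvals, eigvecs = np.linalg.eigh(cov)
top = eigvecs[:, -1]            # eigh returns eigenvalues in ascending order

alignment = abs(w @ top) / np.linalg.norm(w)
print(alignment)                # close to 1: the unit finds the first PC
```

A single unit trained this way performs the optimal one-dimensional reduction; the redundancy problem mentioned above arises when many such units, all seeing the same inputs, converge to the same component.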