Attractor networks for shape recognition [1]
Yali Amit [2] and Massimo Mascaro [3]

Abstract
We describe a system of thousands of binary perceptrons with coarse oriented edges as input which is able to successfully recognize shapes, even in a context with hundreds of classes. The perceptrons have randomized feedforward connections from the input layer and form a recurrent network among themselves. Each class is represented by a pre-learned attractor (serving as an associative `hook') in the recurrent net, corresponding to a randomly selected subpopulation of the perceptrons. In training, the attractor of the correct class is first activated among the perceptrons; then the visual stimulus is presented at the input layer. The feedforward connections are modified using field-dependent Hebbian learning with positive synapses, which we show to be stable with respect to large variations in feature statistics and coding levels, and which allows the use of the same threshold on all perceptrons. Recognition is based only on the visual stimuli. These activate the recurrent network, which is then driven by the dynamics to a sustained attractor state concentrated in the correct class subset, providing a form of working memory. We believe this architecture is on the one hand more transparent than standard feedforward two-layer networks and on the other has stronger biological analogies.
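The training procedure in the abstract — pre-activate the correct class attractor, present the stimulus, then strengthen feedforward synapses under a field-dependent Hebbian rule with positive weights — can be sketched as follows. This is a minimal illustration under assumed details, not the authors' exact rule: the field ceiling `K_MAX`, the step size `DELTA`, the depression branch, and all sizes and thresholds are placeholders chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_PERC = 64, 200   # toy sizes: input features, perceptrons (assumed)
THETA = 5.0              # shared firing threshold, same for every perceptron
K_MAX = THETA + 2.0      # assumed field ceiling: stop potentiating saturated units
DELTA = 0.5              # assumed potentiation / depression step

# Sparse random feedforward connectivity with positive synapses only.
W = (rng.random((N_PERC, N_IN)) < 0.2) * 0.5

def hebbian_update(W, x, attractor_mask):
    """One field-dependent Hebbian step (simplified sketch).

    x              -- binary input feature vector (coarse oriented edges)
    attractor_mask -- binary vector marking the pre-activated class attractor
    """
    h = W @ x  # post-synaptic fields induced by the stimulus
    for i in range(N_PERC):
        if attractor_mask[i] and h[i] < K_MAX:
            # Potentiate synapses from active inputs, but only while the
            # field is below the ceiling; this field dependence is what
            # keeps fields near the common threshold THETA.
            W[i, x > 0] += DELTA
        elif not attractor_mask[i] and h[i] >= THETA:
            # Depress synapses onto units firing outside their attractor,
            # clipping at zero so all synapses remain positive.
            W[i, x > 0] = np.maximum(W[i, x > 0] - DELTA, 0.0)
    return W
```

Because potentiation halts once a unit's field reaches the ceiling, the rule self-stabilizes regardless of how many input features happen to be active for a given class — the property the abstract credits with tolerating large variations in feature statistics and coding levels.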
[1] Appeared: Neural Computation, vol. 13, pp. 1415-1442, 2001.
[2] Department of Statistics, University of Chicago, Chicago, IL, 60637; Email: amit@galton.uchicago.edu. Supported


Source: Amit, Yali - Departments of Computer Science & Statistics, University of Chicago

 

Collections: Computer Technologies and Information Sciences