 
J. Phys. A: Math. Gen. 22 (1989) L711-L717. Printed in the UK
LETTER TO THE EDITOR
Optimal learning in neural network memories
L F Abbott and Thomas B Kepler
Physics Department, Brandeis University, Waltham, MA 02254, USA
Received 12 May 1989
Abstract. We examine general learning procedures for neural network associative memories
and find algorithms which optimise convergence.
A neural network memory uses fixed points of the map

$$S_i(t+1) = \mathrm{sgn}\left(\sum_{j=1}^{N} J_{ij} S_j(t)\right) \qquad (1)$$

(where $S_i = \pm 1$ and $J_{ii} = 0$) as memory patterns which attract nearby input patterns, providing associative recall. The dynamics (1) takes an initial input $S_i(0)$ and, after a sufficient number of iterations, maps it to an associated memory pattern $\xi_i$, provided that $\xi_i$ is a fixed point of (1) and that $S_i(0)$ lies within the domain of attraction of this fixed point. Learning in such a network is a process by which a matrix $J_{ij}$ is constructed with the appropriate fixed points and required basins of attraction. Suppose we wish
to `learn' a set of memory patterns $\xi_i^\mu$ with $\mu = 1, 2, \ldots, \alpha N$. Important variables for characterising a fixed point are

$$\gamma_i^\mu = \frac{\xi_i^\mu \sum_j J_{ij} \xi_j^\mu}{\|J_i\|} \qquad (2)$$
where the normalisation factor $\|J_i\|$ is

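The retrieval dynamics (1) and the stability parameters $\gamma_i^\mu$ can be sketched numerically. The Hebbian outer-product matrix below is only an illustrative way to construct $J_{ij}$ with the desired fixed points; it is not the optimised learning rule developed in this letter, and all function names are our own.

```python
import numpy as np

def hebb_matrix(patterns):
    # Illustrative choice of J_ij: Hebbian outer-product rule over
    # the memory patterns xi^mu (shape (p, N), entries +/-1),
    # with the diagonal constraint J_ii = 0 from the text.
    p, N = patterns.shape
    J = patterns.T @ patterns / N
    np.fill_diagonal(J, 0.0)
    return J

def retrieve(J, S, max_iter=50):
    # Iterate the map (1): S_i(t+1) = sgn(sum_j J_ij S_j(t)),
    # stopping once a fixed point is reached.
    for _ in range(max_iter):
        S_new = np.where(J @ S >= 0, 1, -1)
        if np.array_equal(S_new, S):
            break
        S = S_new
    return S

def stabilities(J, pattern):
    # gamma_i = xi_i * (sum_j J_ij xi_j) / ||J_i||, with ||J_i||
    # the Euclidean norm of row i of J.
    norms = np.linalg.norm(J, axis=1)
    return pattern * (J @ pattern) / norms
```

Starting from an input within the basin of attraction of a stored pattern, `retrieve` converges to that pattern, and at the resulting fixed point every $\gamma_i^\mu$ is non-negative.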