Smelser, N.J., & Baltes, P.B. (Eds.) (2001).
Encyclopedia of the Social and Behavioral Sciences.
London: Elsevier Science.
Article Title: Linear Algebra for Neural Networks
By: Hervé Abdi
Author Address: Hervé Abdi, School of Human Development, MS: Gr.4.1,
The University of Texas at Dallas, Richardson, TX 75083-0688, USA
Phone: 972 883 2065, fax: 972 883 2491
Date: June 1, 2001
Neural networks are quantitative models that learn to associate input and
output patterns adaptively through learning algorithms. We present four
main concepts from linear algebra that are essential for analyzing these models:
1) the projection of a vector, 2) the eigen and singular value decomposition,
3) the gradient vector and Hessian matrix of a vector function, and 4) the
Taylor expansion of a vector function. We illustrate these concepts by the
analysis of the Hebbian and Widrow-Hoff rules and some basic neural network
architectures (i.e., the linear autoassociator, the linear heteroassociator, and
the error backpropagation network). We also show that neural networks are