Summary from:
N.J. Smelser & P.B. Baltes (Eds.) (2001). Encyclopedia of the Social and Behavioral Sciences. London: Elsevier Science.
Article Title: Linear Algebra for Neural Networks
By: Hervé Abdi
Author Address: Hervé Abdi, School of Human Development, MS: Gr.4.1, The University of Texas at Dallas, Richardson, TX 75083-0688, USA
Phone: 972 883 2065; Fax: 972 883 2491
Date: June 1, 2001
E-mail: herve@utdallas.edu
Abstract
Neural networks are quantitative models that learn to associate input and output patterns adaptively through the use of learning algorithms. We present four main concepts from linear algebra that are essential for analyzing these models: 1) the projection of a vector, 2) the eigendecomposition and singular value decomposition, 3) the gradient vector and Hessian matrix of a vector function, and 4) the Taylor expansion of a vector function. We illustrate these concepts through the analysis of the Hebbian and Widrow-Hoff learning rules and some basic neural network architectures (i.e., the linear autoassociator, the linear heteroassociator, and the error backpropagation network). We also show that neural networks are

  

Source: Abdi, Hervé - School of Behavioral and Brain Sciences, University of Texas at Dallas

 

Collections: Biology and Medicine; Computer Technologies and Information Sciences