Summary: Baltzer Journals
Capacity Bounds for Structured Neural Network Architectures
Peter Rieper 1, Sabine Kröner 2,* and Reinhard Moratz 3
1 FB Mathematik, Universität Hamburg, Bundesstr. 55, D-20146 Hamburg
2 Technische Informatik I, TU Hamburg-Harburg, Harburger Schloßstr. 20,
D-21071 Hamburg, E-mail: Kroener@tu-harburg.d400.de
3 AG Angewandte Informatik, Universität Bielefeld, Postfach 100131, D-33501 Bielefeld
1 Introduction
Structured multi-layer feedforward neural networks are gaining more and more
importance in speech and image processing applications. Their characteristic
feature is that a-priori knowledge about the task to be performed is built into
the architecture through the use of nodes with shared weight vectors. Examples
are time delay neural networks [10] and networks for invariant pattern
recognition [4, 5].
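The parameter economy of such shared weights can be made concrete with a short
sketch. The following NumPy snippet is a minimal illustration, not code from the
paper: the window size, the tanh nonlinearity, and the function name are
arbitrary choices. One weight vector is applied to every window of a
one-dimensional input, as in a time delay network, so the number of free
parameters stays fixed while the number of output nodes grows with the input
length.

    import numpy as np

    def shared_weight_layer(x, w, b):
        # Apply the single shared weight vector w to every length-k window
        # of the input signal x; all output nodes reuse the same parameters.
        k = len(w)
        windows = np.lib.stride_tricks.sliding_window_view(x, k)  # shape (len(x)-k+1, k)
        return np.tanh(windows @ w + b)

    x = np.random.randn(20)   # e.g. 20 time steps of a speech feature
    w = np.random.randn(5)    # one shared weight vector of length 5
    out = shared_weight_layer(x, w, 0.1)
    # 16 output nodes, but only 5 + 1 free parameters shared among them.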
One problem in the training of neural networks is estimating the number of
training samples needed to achieve good generalization. In [1] it is shown that
for feedforward architectures this number is correlated with the capacity, or
Vapnik-Chervonenkis dimension, of the architecture. So far, an upper bound on
the capacity has been derived only for two-layer feedforward architectures with
independent weights: it scales as $O\!\left(\frac{w}{a} \cdot \ln q\right)$ …
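Read as a rule of thumb, a bound of this type can be evaluated numerically. The
sketch below assumes a Baum-Haussler-style reading of the truncated formula
above, with $w$ the number of independent weights, $q$ the number of
computation nodes, and $a$ the permitted error rate; these symbol
interpretations, and the function name, are assumptions rather than definitions
from the paper.

    import math

    def sample_size_estimate(w, q, a):
        # Order-of-magnitude training-set size (w/a) * ln(q), under the symbol
        # reading stated above; constants hidden by the O-notation are dropped.
        return (w / a) * math.log(q)

    # Example: fully connected two-layer net with 100 inputs, 10 hidden
    # nodes, and 1 output node.
    w = 100 * 10 + 10 * 1 + 11   # weights plus one bias per computation node
    q = 10 + 1                   # computation nodes (hidden + output)
    print(round(sample_size_estimate(w, q, 0.1)))  # on the order of 2e4 samples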

  

Source: Albert-Ludwigs-Universität Freiburg, Institut für Informatik, Lehrstuhl für Mustererkennung und Bildverarbeitung

 

Collections: Computer Technologies and Information Sciences