Nearest neighbor rules PAC-approximate feedforward networks
The problem of function estimation using feedforward neural networks, based on an independently and identically distributed sample, is addressed. Feedforward networks with a single hidden layer of 1/(1 + e^(-γz)) units and bounded parameters are considered. It is shown that, given a sufficiently large sample, a nearest neighbor rule approximates the best neural network in the sense that the expected error is arbitrarily small with arbitrarily high probability. The result extends to other neural networks whose hidden units satisfy a suitable Lipschitz condition. A result of practical interest is that computing a neural network that approximates (in the above sense) the best possible one is computationally difficult, whereas a nearest neighbor rule is computable in time linear in the sample size.
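The nearest neighbor rule referred to above can be sketched as follows. This is a minimal illustration, not the paper's construction: the function names and the sigmoidal target used in the demo are assumptions chosen to mirror the abstract's 1/(1 + e^(-γz)) hidden-unit form, and a single query visits each of the n sample points once, which is the linear-time claim.

```python
import numpy as np

def nearest_neighbor_rule(X_train, y_train, x):
    """Predict f(x) as the label of the nearest sample point.

    One query scans all n sample points once, so it runs in time
    linear in the sample size (illustrative sketch, not the paper's
    exact construction).
    """
    # Squared Euclidean distances to all n training points: O(n * d)
    d2 = np.sum((X_train - x) ** 2, axis=1)
    return y_train[np.argmin(d2)]

# Demo target: a sigmoidal function 1/(1 + exp(-gamma * z)),
# matching the hidden-unit form in the abstract (gamma chosen arbitrarily).
rng = np.random.default_rng(0)
gamma = 2.0
X = rng.uniform(-3, 3, size=(500, 1))
y = 1.0 / (1.0 + np.exp(-gamma * X[:, 0]))

x_query = np.array([0.5])
print(nearest_neighbor_rule(X, y, x_query))
```

With 500 points on [-3, 3] the nearest sample is very close to the query, so the prediction lands near the true value 1/(1 + e^(-1)), illustrating how a dense enough sample makes the rule an accurate estimator.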
- Research Organization: Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
- Sponsoring Organization: USDOE, Washington, DC (United States)
- DOE Contract Number: AC05-96OR22464
- OSTI ID: 228504
- Report Number(s): CONF-9606163-1; ON: DE96008807
- Resource Relation: Conference: International conference on neural networks, Washington, DC (United States), 3-6 Jun 1996; Other Information: PBD: [1996]
- Country of Publication: United States
- Language: English
Similar Records
On PAC learning of functions with smoothness properties using feedforward sigmoidal networks
Function estimation by feedforward sigmoidal networks with bounded weights