A robust and efficient training algorithm for feedforward neural networks
One of the attractive features of neural networks is their ability to generalize from sample data. The network learns correct associations through an iterative modification of weight values associated with the arcs that connect the processing elements. The use of neural networks is limited, however, because current training techniques suffer from slow convergence and a lack of robustness. This means that training may not result in a usable network in a reasonable time, if at all. By formulating the neural network training problem as an unconstrained optimization problem, the task of training neural networks can be accelerated by using techniques such as conjugate gradient and quasi-Newton methods. This allows neurocomputing to be applied to problems that are presently impractical for a neural network solution. This study explores the use of the best current optimization techniques applied to a variety of neural network problem domains. An optimizer was developed and tested for use on general feedforward neural network training problems.
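The formulation the abstract describes — treating the network's weights as one flat parameter vector and minimizing the training error with a general-purpose optimizer — can be sketched as follows. This is a minimal illustration, not the optimizer developed in the study: the 2-3-1 network, tanh hidden layer, mean-squared-error loss, and XOR data are all assumptions chosen for brevity, and a quasi-Newton method (BFGS via `scipy.optimize.minimize`) stands in for the methods the study compares.

```python
import numpy as np
from scipy.optimize import minimize

# Toy dataset (XOR), chosen only for illustration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

n_in, n_hid = 2, 3  # assumed 2-3-1 architecture

def unpack(w):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    return W1, b1, W2, b2

def loss(w):
    """Mean squared training error as a function of the flat weights."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)   # hidden layer activations
    out = h @ W2 + b2          # linear output unit
    return np.mean((out - y) ** 2)

rng = np.random.default_rng(0)
w0 = rng.standard_normal(n_in * n_hid + 2 * n_hid + 1) * 0.5

# Unconstrained minimization with a quasi-Newton method; gradients are
# approximated by finite differences here, whereas a practical trainer
# would supply the backpropagated gradient via the `jac` argument.
res = minimize(loss, w0, method="BFGS")
print(f"initial loss {loss(w0):.4f} -> final loss {res.fun:.6f}")
```

Swapping `method="BFGS"` for `method="CG"` uses a conjugate gradient method instead, the other family of techniques the abstract mentions; either way, training reduces to a standard unconstrained optimization call.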
- Research Organization: Kent State Univ., OH (United States)
- OSTI ID: 7033768
- Country of Publication: United States
- Language: English
Similar Records:
- Nonlinear programming with feedforward neural networks
- Statistical and optimization methods to expedite neural network training for transient identification