U.S. Department of Energy
Office of Scientific and Technical Information

A robust and efficient training algorithm for feedforward neural networks

Thesis/Dissertation · OSTI ID: 7033768

One of the attractive features of neural networks is their ability to generalize from sample data. The network learns correct associations through iterative modification of the weight values associated with the arcs that connect its processing elements. The use of neural networks is limited, however, because current training techniques suffer from slow convergence and a lack of robustness, so training may not produce a usable network in a reasonable time, if at all. By formulating neural network training as an unconstrained optimization problem, training can be accelerated with techniques such as conjugate gradient and quasi-Newton methods, allowing neurocomputing to be applied to problems that are presently impractical for a neural network solution. This study explores the application of the best current optimization techniques to a variety of neural network problem domains. An optimizer was developed and tested for use on general feedforward neural network training problems.
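As a rough illustration of this formulation (a minimal sketch, not the thesis author's code), the example below flattens the weights of a small feedforward network into a single vector, defines a sum-of-squared-errors objective with its backpropagation gradient, and hands both to SciPy's conjugate-gradient and quasi-Newton (BFGS) routines. The XOR data set, network dimensions, and tanh/sigmoid activations are illustrative assumptions.

```python
# Sketch: feedforward network training posed as unconstrained optimization,
# solved with conjugate gradient and a quasi-Newton method from SciPy.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # illustrative inputs (XOR)
T = np.array([[0], [1], [1], [0]], dtype=float)              # illustrative targets
n_in, n_hid, n_out = 2, 3, 1                                  # assumed network size

def unpack(w):
    """Split the flat weight vector into the two layer matrices (biases included)."""
    k = (n_in + 1) * n_hid
    return w[:k].reshape(n_in + 1, n_hid), w[k:].reshape(n_hid + 1, n_out)

def forward(w):
    W1, W2 = unpack(w)
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias input
    H = np.tanh(Xb @ W1)                           # hidden activations
    Hb = np.hstack([H, np.ones((len(H), 1))])
    Y = 1.0 / (1.0 + np.exp(-(Hb @ W2)))           # sigmoid outputs
    return Xb, H, Hb, Y

def loss(w):
    """Sum-of-squared-errors objective over the training set."""
    *_, Y = forward(w)
    return 0.5 * np.sum((Y - T) ** 2)

def grad(w):
    """Analytic gradient via backpropagation, flattened to match w."""
    W1, W2 = unpack(w)
    Xb, H, Hb, Y = forward(w)
    dY = (Y - T) * Y * (1.0 - Y)                   # output-layer delta (sigmoid)
    dW2 = Hb.T @ dY
    dH = (dY @ W2[:-1].T) * (1.0 - H ** 2)         # hidden-layer delta (tanh)
    dW1 = Xb.T @ dH
    return np.concatenate([dW1.ravel(), dW2.ravel()])

rng = np.random.default_rng(0)
w0 = rng.normal(scale=0.5, size=(n_in + 1) * n_hid + (n_hid + 1) * n_out)

for method in ("CG", "BFGS"):                      # conjugate gradient, quasi-Newton
    res = minimize(loss, w0, jac=grad, method=method)
    print(method, "final SSE:", res.fun)
```

Because the optimizer sees only a scalar objective and its gradient with respect to the flat weight vector, curvature-exploiting methods can be swapped in without any change to the network code itself.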

Research Organization: Kent State Univ., OH (United States)
OSTI ID: 7033768
Country of Publication: United States
Language: English

Similar Records

Ill-conditioning in neural network training problems
Journal Article · May 1993 · SIAM Journal on Scientific and Statistical Computing · OSTI ID: 6492379

Nonlinear programming with feedforward neural networks.
Conference · June 1999 · OSTI ID: 11194

Statistical and optimization methods to expedite neural network training for transient identification
Conference · 1993 · OSTI ID: 10147434