DOE Patents
U.S. Department of Energy, Office of Scientific and Technical Information

Title: Neural node network and model, and method of teaching same

Abstract

The present invention is a fully connected feed forward network that includes at least one hidden layer 16. The hidden layer 16 includes nodes 20 in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device 24 occurring in the feedback path 22 (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit 36 from all the other nodes within the same layer 16. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2) more preferably a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on- and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing.
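
Illustrative sketch (not part of the patent record): the abstract describes a hidden layer whose nodes combine the feed-forward inputs from the previous layer with their own unit-delayed output (local feedback) and the unit-delayed outputs of the other nodes in the same layer (crosstalk), i.e. x(k) = f(W u(k) + V x(k-1)), where the diagonal of V carries the local feedback and the off-diagonal entries carry the crosstalk. The Python sketch below assumes a tanh transfer function and small random initial weights; the class and variable names are illustrative choices, not taken from the patent.

import numpy as np

class RecurrentHiddenLayer:
    """One hidden layer of the kind described in the abstract: each node sees
    the inputs from the previous layer plus the unit-delayed outputs of every
    node in the same layer (its own delayed output is the local feedback, the
    others are the crosstalk)."""

    def __init__(self, n_inputs, n_nodes, seed=0):
        rng = np.random.default_rng(seed)
        # Feed-forward weights on the inputs from the previous layer.
        self.W = rng.normal(scale=0.1, size=(n_nodes, n_inputs))
        # Weights on the delayed layer outputs: diagonal = local feedback,
        # off-diagonal = crosstalk.
        self.V = rng.normal(scale=0.1, size=(n_nodes, n_nodes))
        # Unit-delay memory holding x(k-1).
        self.prev_out = np.zeros(n_nodes)

    def step(self, u):
        # x(k) = f(W u(k) + V x(k-1)); tanh stands in for the unspecified
        # transfer function.
        x = np.tanh(self.W @ u + self.V @ self.prev_out)
        self.prev_out = x
        return x

# Minimal usage: run a short input sequence through one 5-node hidden layer.
layer = RecurrentHiddenLayer(n_inputs=3, n_nodes=5)
for u in np.random.default_rng(1).normal(size=(10, 3)):
    hidden = layer.step(u)

An output layer and either of the two teaching methods mentioned in the abstract (back propagation, or the feed-forward gradient descent followed by weight normalization) would sit on top of this sketch; they are omitted because the abstract does not give enough detail to reproduce them faithfully.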

Inventors:
 Parlos, Alexander G. [1];  Atiya, Amir F. [1];  Fernandez, Benito [2];  Tsai, Wei K. [3];  Chong, Kil T. [1]
  1. (College Station, TX)
  2. (Austin, TX)
  3. (Irvine, CA)
Issue Date:
1995
Research Org.:
Texas Engineering Experiment Station
Sponsoring Org.:
USDOE
OSTI Identifier:
870229
Patent Number(s):
5479571
Assignee:
Texas A&M University System (College Station, TX)
DOE Contract Number:  
FG07-89ER12893
Resource Type:
Patent
Country of Publication:
United States
Language:
English
Subject:
neural node networks; hidden layer; local feedback; crosstalk; transfer function; delay device; time series; nonlinear systems; back propagation; gradient descent; on-line processing; /706/

Citation Formats

Parlos, Alexander G., Atiya, Amir F., Fernandez, Benito, Tsai, Wei K., and Chong, Kil T. Neural node network and model, and method of teaching same. United States: N. p., 1995. Web.
Parlos, Alexander G., Atiya, Amir F., Fernandez, Benito, Tsai, Wei K., & Chong, Kil T. Neural node network and model, and method of teaching same. United States.
Parlos, Alexander G., Atiya, Amir F., Fernandez, Benito, Tsai, Wei K., and Chong, Kil T. "Neural node network and model, and method of teaching same". United States. https://www.osti.gov/servlets/purl/870229.
@article{osti_870229,
title = {Neural node network and model, and method of teaching same},
author = {Parlos, Alexander G. and Atiya, Amir F. and Fernandez, Benito and Tsai, Wei K. and Chong, Kil T.},
abstractNote = {The present invention is a fully connected feed forward network that includes at least one hidden layer 16. The hidden layer 16 includes nodes 20 in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device 24 occurring in the feedback path 22 (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit 36 from all the other nodes within the same layer 16. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2) more preferably a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on- and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing.},
url = {https://www.osti.gov/servlets/purl/870229},
place = {United States},
year = {1995},
month = {1}
}
