Democratic reinforcement: A principle for brain function
- Brookhaven National Laboratory, Upton, New York 11973 (United States)
We introduce a simple "toy" brain model. The model consists of a set of randomly connected or layered integrate-and-fire neurons. Inputs from and outputs to the environment are connected randomly to subsets of neurons. The connections between firing neurons are strengthened or weakened according to whether the action was successful or not. Unlike previous reinforcement learning algorithms, the feedback from the environment is democratic: it affects all neurons in the same way, irrespective of their position in the network and independent of the output signal. Thus no unrealistic back propagation or other external computation is needed. This is accomplished by a global threshold regulation which allows the system to self-organize into a highly susceptible, possibly "critical" state with low activity and sparse connections between firing neurons. The low activity permits memory in quiescent areas to be conserved, since only firing neurons are modified when new information is taught.
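The update rule sketched in the abstract can be illustrated in a few lines. The following is a minimal, illustrative reconstruction (not the authors' code): function names, parameter values, and the constant-"success" toy run are all assumptions; only the mechanics — integrate-and-fire dynamics, a uniform global reward applied to every synapse between firing neurons, and a global threshold regulated toward low activity — come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 100                       # number of neurons (assumed value)
p_conn = 0.1                  # sparse random connectivity (assumed value)
W = np.where(rng.random((N, N)) < p_conn, rng.random((N, N)), 0.0)
np.fill_diagonal(W, 0.0)      # no self-connections

theta = 2.0                   # global firing threshold (regulated below)
eta = 0.05                    # magnitude of the global feedback signal
target_rate = 0.05            # desired (low) fraction of firing neurons

def step(v, success):
    """One integrate-and-fire update with democratic reinforcement."""
    global W, theta
    fired = v >= theta                        # neurons above threshold fire
    v_new = W @ fired.astype(float)           # integrate input from firing neurons
    # Democratic feedback: every connection between two firing neurons is
    # strengthened on success and weakened on failure, by the same amount,
    # irrespective of the neurons' position in the network.
    W += (eta if success else -eta) * np.outer(fired, fired)
    np.clip(W, 0.0, None, out=W)              # keep weights non-negative
    # Global threshold regulation: raise theta when activity exceeds the
    # target rate, lower it otherwise, steering toward sparse firing.
    theta += 0.1 * (fired.mean() - target_rate)
    return v_new, fired

v = rng.random(N) * 2 * theta                 # random initial potentials
for _ in range(200):
    v, fired = step(v, success=True)          # toy run with constant "success"
```

Because the feedback enters only as the sign of a single global scalar, no per-neuron error signal or back propagation is required; quiescent neurons (those not in `fired`) are left untouched, which is how memory in inactive regions is conserved.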
- OSTI ID: 44842
- Journal Information: Physical Review. E, Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics, Vol. 51, Issue 5; Other Information: PBD: May 1995
- Country of Publication: United States
- Language: English
Similar Records
Self-organization in a simple brain model
Dynamic neuronal ensembles: Issues in representing structure change in object-oriented, biologically-based brain models