Network: Computation in Neural Systems, September 2006; 17(3): 235–252
 

Dimensional reduction for reward-based learning

CHRISTIAN D. SWINEHART¹ & L. F. ABBOTT²

¹Volen Center for Complex Systems, Department of Biology, Brandeis University, Waltham, MA 02454-9110, USA
²Center for Neurobiology and Behavior, Department of Physiology and Cellular Biophysics, Columbia University College of Physicians and Surgeons, New York, NY 10032-2695, USA

(Received 22 December 2005; accepted 26 April 2006)
Abstract
Reward-based learning in neural systems is challenging because a large number of parameters that affect
network function must be optimized solely on the basis of a reward signal that indicates improved
performance. Searching the parameter space for an optimal solution is particularly difficult if the
network is large. We show that Hebbian forms of synaptic plasticity applied to synapses between a
supervisor circuit and the network it is controlling can effectively reduce the dimension of the space
of parameters being searched to support efficient reinforcement-based learning in large networks.
The critical element is that the connections between the supervisor units and the network must be
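The abstract's central point, that reward-based search becomes tractable when the number of parameters actually being searched is reduced, can be illustrated with a toy sketch. Everything below (a linear readout, a fixed random projection `P` standing in for the supervisor-to-network mapping, and simple hill climbing on a scalar reward) is an illustrative assumption, not the paper's actual model or learning rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a linear network with n_params weights must be tuned to
# reproduce a target output, with only a scalar reward as feedback.
n_in, n_out = 50, 10
n_params = n_in * n_out
x = rng.standard_normal(n_in)
W_target = rng.standard_normal((n_out, n_in))
target = W_target @ x

def reward(W):
    # Reward increases (toward 0) as the output approaches the target.
    return -np.sum((W @ x - target) ** 2)

baseline = reward(np.zeros((n_out, n_in)))

def perturbation_search(dim, steps=2000, sigma=0.1):
    """Reward-guided random search over `dim` parameters.

    A fixed random projection P maps the dim searched parameters into
    the full weight space, mimicking a low-dimensional controller;
    dim == n_params corresponds to searching every weight directly.
    A trial perturbation is kept only if it improves the reward.
    """
    P = rng.standard_normal((n_params, dim)) / np.sqrt(dim)
    theta = np.zeros(dim)
    best = reward((P @ theta).reshape(n_out, n_in))
    for _ in range(steps):
        trial = theta + sigma * rng.standard_normal(dim)
        r = reward((P @ trial).reshape(n_out, n_in))
        if r > best:  # the scalar reward is the only feedback used
            theta, best = trial, r
    return best

r_full = perturbation_search(n_params)  # search all 500 weights
r_low = perturbation_search(20)         # search a 20-dim subspace
print(baseline, r_full, r_low)
```

Because only improvements are accepted, each search can never end up below the baseline reward; the point of the sketch is that the low-dimensional search explores a space of 20 parameters rather than 500 while still steering the full network, which is the kind of reduction the abstract attributes to plasticity between the supervisor circuit and the network.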
Source: Abbott, Laurence - Center for Neurobiology and Behavior & Department of Physiology and Cellular Biophysics, Columbia University

 

Collections: Biology and Medicine