Variational Information Maximization for Neural Coding
 

Summary: Variational Information Maximization for Neural Coding
Felix Agakov (1) and David Barber (2)
(1) University of Edinburgh, 5 Forrest Hill, EH1 2QL Edinburgh, UK
felixa@inf.ed.ac.uk, www.anc.ed.ac.uk
(2) IDIAP, Rue du Simplon 4, CH-1920 Martigny, Switzerland,
www.idiap.ch
Abstract. Mutual Information (MI) is a long-studied measure of coding efficiency, and many attempts have been made to apply it to population coding. However, this is a computationally intractable task, and most previous studies redefine the criterion in terms of approximations. Recently we described properties of a simple lower bound on MI [2]. Here we describe the bound optimization procedure for learning population codes in a simple point neural model. We compare our approach with other techniques that maximize approximations of MI, focusing on a comparison with the Fisher Information criterion.
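For context, a minimal sketch of the kind of bound referred to above, assuming the "simple lower bound on MI [2]" is the standard decoder-based variational bound of Barber and Agakov: for a stochastic encoder p(y|x) and any auxiliary decoding distribution q(x|y),

I(X;Y) \;=\; H(X) - H(X|Y) \;\ge\; H(X) + \big\langle \ln q(x|y) \big\rangle_{p(x,y)},

with equality when q(x|y) = p(x|y). Since the right-hand side is tractable for suitably chosen q, jointly maximizing it over the encoder parameters and the decoder q yields a practical surrogate for direct MI maximization.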
1 Introduction

  

Source: Agakov, Felix - Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh

 

Collections: Computer Technologies and Information Sciences