Combining Multimodal Sensory Input for Spatial Learning

Thomas Strösslin¹, Christophe Krebser, Angelo Arleo², and Wulfram Gerstner¹

¹ Laboratory of Computational Neuroscience, EPFL, Lausanne, Switzerland
² Laboratoire de Physiologie de la Perception et de l'Action, Collège de France-CNRS, Paris, France
Abstract. For robust self-localisation in real environments, autonomous agents must rely upon multimodal sensory information. The relative importance of a sensory modality is not constant during the agent-environment interaction. We study the interrelation between visual and tactile information in a spatial learning task. We adopt a biologically inspired approach to detect multimodal correlations based on the properties of neurons in the superior colliculus. Reward-based Hebbian learning is applied to train an active gating network to weigh individual senses depending on the current environmental conditions. The model is implemented and tested on a mobile robot platform.
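The abstract's central mechanism, a gating network that weighs sensory modalities by context and is trained with reward-modulated Hebbian updates, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the context encoding, the softmax gating, the learning rate, and all class and function names here are assumptions.

```python
import math

def softmax(xs):
    """Normalise gate activations into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class GatingNetwork:
    """Hypothetical sketch: maps a context vector (e.g. light level,
    wall proximity) to a weight per sensory modality."""

    def __init__(self, n_context, n_modalities, lr=0.1):
        self.w = [[0.0] * n_context for _ in range(n_modalities)]
        self.lr = lr

    def gate(self, context):
        # One gate unit per modality, driven by the context features.
        acts = [sum(wi * c for wi, c in zip(row, context)) for row in self.w]
        return softmax(acts)

    def update(self, context, gates, reward):
        # Reward-modulated Hebbian rule: dw = lr * reward * pre * post,
        # strengthening context-to-gate links that preceded reward.
        for m, g in enumerate(gates):
            for j, c in enumerate(context):
                self.w[m][j] += self.lr * reward * g * c

def fuse(estimates, gates):
    """Combine per-modality position estimates with the gate weights."""
    return sum(g * e for g, e in zip(gates, estimates))
```

With untrained (zero) weights the gates split evenly across modalities; positive reward in a given context then biases future gating toward whichever modality was relied upon when the reward arrived.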


Source: Arleo, Angelo - Laboratory of Neurobiology of Adaptive Processes, Université Pierre-et-Marie-Curie, Paris 6


Collections: Biology and Medicine