Summary: Approximate Learning in Temporal Hidden Hopfield Models

Felix V. Agakov and David Barber

University of Edinburgh, Division of Informatics, Edinburgh EH1 2QL, UK,
felixa@anc.ed.ac.uk, http://anc.ed.ac.uk
Abstract. Many popular probabilistic models for temporal sequences assume simple hidden dynamics or low-dimensionality of discrete variables. For higher-dimensional discrete hidden variables, recourse is often made to approximate mean field theories, which to date have been applied to models with only simple hidden unit dynamics. We consider a class of models in which the discrete hidden space is defined by the parallel dynamics of densely connected, high-dimensional stochastic Hopfield networks. For these Hidden Hopfield Models (HHMs), mean field methods are derived for learning discrete and continuous temporal sequences. We discuss applications of HHMs to classification and reconstruction of non-stationary time series. We also demonstrate a few problems (e.g. learning of incomplete binary sequences and reconstruction of 3D occupancy graphs) where a distributed discrete hidden-space representation may be useful.
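As a concrete illustration of the hidden-state dynamics described in the abstract, the following sketch simulates parallel (synchronous) sampling in a stochastic Hopfield network: every binary unit is resampled simultaneously from a sigmoid of the field induced by the previous state. This is a minimal NumPy sketch, not code from the paper; the {0,1} unit coding, the names W, b, and H, and the random coupling matrix are illustrative assumptions.

import numpy as np

def parallel_hopfield_step(s, W, b, rng):
    # One step of parallel (synchronous) stochastic Hopfield dynamics:
    # all units are resampled at once, so the transition factorizes
    # over units given the previous state s(t).
    field = W @ s + b                           # total input to each unit
    p_on = 1.0 / (1.0 + np.exp(-field))         # P(s_i(t+1) = 1 | s(t))
    return (rng.random(s.shape) < p_on).astype(float)

# Hypothetical usage: simulate a short hidden trajectory.
rng = np.random.default_rng(0)
H = 50                                          # number of hidden units (illustrative)
W = rng.normal(scale=1.0 / np.sqrt(H), size=(H, H))  # dense coupling matrix (illustrative)
b = np.zeros(H)                                 # unit biases
s = (rng.random(H) < 0.5).astype(float)         # random initial binary state
trajectory = [s]
for _ in range(10):
    s = parallel_hopfield_step(s, W, b, rng)
    trajectory.append(s)

Because the transition probability factorizes over units given the previous state, this is the kind of dense, high-dimensional discrete dynamics for which the paper derives mean field learning methods.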
1 Markovian Dynamics for Temporal Sequences
Dynamic Bayesian networks are popular tools for modeling temporally correlated patterns. Included in this class of models are Hidden Markov Models (HMMs), auto-regressive HMMs (see e.g. Rabiner, 1989), and Factorial HMMs (Ghahramani and Jordan, 1995). These models are spe-

  

Source: Agakov, Felix - Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh

 

Collections: Computer Technologies and Information Sciences