The Hidden Markov Topic Model: A Probabilistic Model of Semantic Representation
 

Mark Andrews, Gabriella Vigliocco
Department of Cognitive, Perceptual, and Brain Sciences, Division of Psychology and Language Sciences,
University College London
Received 2 October 2009; accepted 12 October 2009
Abstract
In this paper, we describe a model that learns semantic representations from the distributional statistics of language. This model, however, goes beyond the common bag-of-words paradigm and infers semantic representations by taking into account the inherent sequential nature of linguistic data. The model we describe, which we refer to as a Hidden Markov Topics model, is a natural extension of the current state of the art in Bayesian bag-of-words models, that is, the Topics model of Griffiths, Steyvers, and Tenenbaum (2007), preserving its strengths while extending its scope to incorporate more fine-grained linguistic information.
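The excerpt stops before the model's formal definition, but the abstract's central contrast can be illustrated with a minimal generative sketch: instead of drawing each word's topic independently from a per-document topic distribution, topic assignments evolve as a first-order Markov chain. The sketch below is an assumption-laden illustration, not the paper's exact parameterization; the sizes (K, V, doc_len) and the parameters pi, A, and phi are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (hypothetical, not taken from the paper)
K = 5        # number of topics
V = 50       # vocabulary size
doc_len = 20 # words per document

# Hypothetical parameters, drawn here from symmetric Dirichlet priors
pi = rng.dirichlet(np.ones(K))                 # initial topic distribution
A = rng.dirichlet(np.ones(K), size=K)          # K x K topic transition matrix
phi = rng.dirichlet(0.1 * np.ones(V), size=K)  # per-topic word distributions

def generate_document(length):
    """Generate one document whose topic assignments follow a
    Markov chain, rather than i.i.d. draws from a document-level
    topic mixture as in a bag-of-words Topics model."""
    words, topics = [], []
    z = rng.choice(K, p=pi)                 # sample initial topic
    for _ in range(length):
        topics.append(z)
        words.append(rng.choice(V, p=phi[z]))  # emit word from topic z
        z = rng.choice(K, p=A[z])              # transition to next topic
    return words, topics

words, topics = generate_document(doc_len)
print("topics:", topics)
print("words: ", words)
```

In the bag-of-words Topics model this contrast collapses: the transition step would be replaced by an independent draw of each z from a per-document distribution theta, discarding word order entirely. Making consecutive topic assignments depend on one another is what lets a sequential model exploit the fine-grained linguistic information the abstract refers to.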
Keywords: Bayesian models; Probabilistic models; Computational models; Semantic representation; Semantic memory
1. Introduction
How word meanings are represented and learned is a foundational problem in the study of human language use. Within cognitive science, a promising recent approach to this problem has been the study of how the meanings of words can be learned from their statistical

  

Source: Andrews, Mark W. - Department of Cognitive, Perceptual and Brain Sciences, University College London

 

Collections: Computer Technologies and Information Sciences