To appear in Proceedings of the IEEE Workshop on Computer Vision and Pattern Recognition for Human Communicative Behavior Analysis (CVPR4HB), June 2008.
The American Sign Language Lexicon Video Dataset
Vassilis Athitsos 1, Carol Neidle 2, Stan Sclaroff 3, Joan Nash 2,
Alexandra Stefan 3, Quan Yuan 3, and Ashwin Thangali 3
1 Computer Science and Engineering Department, University of Texas at Arlington, USA
2 Linguistics Program, Boston University, Boston, Massachusetts, USA
3 Computer Science Department, Boston University, Boston, Massachusetts, USA
The lack of a written representation for American Sign Language (ASL) makes it difficult to do something as commonplace as looking up an unknown word in a dictionary. The majority of printed dictionaries organize ASL signs (represented in drawings or pictures) based on their nearest English translation; so unless one already knows the meaning of a sign, dictionary lookup is not a simple proposition. In this paper we introduce the ASL Lexicon Video Dataset, a large and expanding public dataset containing video sequences of thousands of distinct ASL signs, as well as annotations of those sequences, including start/end frames and