Benchmark Databases for Video-Based Automatic Sign Language Recognition
Summary: Benchmark Databases for Video-Based Automatic Sign Language Recognition

Philippe Dreuw (1), Carol Neidle (2), Vassilis Athitsos (3), Stan Sclaroff (2), and Hermann Ney (1)

(1) RWTH Aachen University, Aachen, Germany, dreuw@cs.rwth-aachen.de
(2) Boston University, Boston, MA, USA, carol@bu.edu
(3) University of Texas, Arlington, TX, USA
Abstract
A new, linguistically annotated video database for automatic sign language recognition is presented. The new RWTH-BOSTON-400 corpus, which consists of 843 sentences, several speakers, and separate subsets for training, development, and testing, is described in detail. For evaluation and benchmarking of automatic sign language recognition, large corpora are needed. Recent research has focused mainly on isolated sign language recognition methods using video sequences recorded under lab conditions with special hardware such as data gloves. Such databases have often consisted of only one speaker and thus have been speaker-dependent, and


Source: Athitsos, Vassilis - Department of Computer Science and Engineering, University of Texas at Arlington

 

Collections: Computer Technologies and Information Sciences