Summary: View-invariant modeling and recognition of human actions using grammars
Abhijit S. Ogale, Alap Karapurkar, Yiannis Aloimonos
Computer Vision Laboratory, Dept. of Computer Science
University of Maryland, College Park, MD 20742 USA
Email: {ogale,karapurk,yiannis}@cs.umd.edu
Abstract
In this paper, we represent human actions as short sequences of atomic body poses. The knowledge of body pose is stored only implicitly as a set of silhouettes seen from multiple viewpoints; no explicit 3D poses or body models are used, and individual body parts are not identified. Actions and their constituent atomic poses are extracted from a set of multiview multiperson video sequences by an automatic keyframe selection process, and are used to automatically construct a probabilistic context-free grammar (PCFG). Given a new single-viewpoint video, we can parse it to recognize actions and changes in viewpoint simultaneously. Experimental results are provided.
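
The pipeline described in the abstract, atomic poses as terminals and actions as nonterminals of a PCFG, can be illustrated with a toy grammar. The sketch below is not the authors' implementation; it uses NLTK's PCFG and Viterbi parser, and the pose symbols (e.g. 'stand_v0' for a standing silhouette seen from viewpoint 0), the action nonterminals, and the probabilities are all hypothetical. It only shows the shape of the idea: terminals are view-indexed keyframe poses, and parsing a keyframe sequence from a single-view video recovers the most probable action decomposition.

# Toy sketch of a pose-sequence PCFG (hypothetical symbols and probabilities).
from nltk import PCFG
from nltk.parse import ViterbiParser

# Terminals are view-indexed atomic poses (e.g. 'sit_v0' = sitting silhouette,
# viewpoint 0); nonterminals are actions. All numbers are made up.
toy_grammar = PCFG.fromstring("""
    VIDEO  -> ACTION VIDEO [0.6] | ACTION [0.4]
    ACTION -> SIT [0.5] | WAVE [0.5]
    SIT    -> 'stand_v0' 'crouch_v0' 'sit_v0' [0.7] | 'stand_v1' 'crouch_v1' 'sit_v1' [0.3]
    WAVE   -> 'stand_v0' 'arm_up_v0' 'stand_v0' [0.6] | 'stand_v1' 'arm_up_v1' 'stand_v1' [0.4]
""")

parser = ViterbiParser(toy_grammar)

# Hypothetical keyframe sequence extracted from a single-view video.
pose_sequence = ['stand_v0', 'crouch_v0', 'sit_v0', 'stand_v0', 'arm_up_v0', 'stand_v0']

for tree in parser.parse(pose_sequence):
    print(tree)          # most probable decomposition: SIT followed by WAVE
    print(tree.prob())   # probability of that parse under the toy grammar

In the paper itself the grammar is constructed automatically from multiview training data and parsing also tracks changes in viewpoint; the sketch above only illustrates the parsing step.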
1. Introduction
The motivation for representing human activity in terms

  

Source: Aloimonos, Yiannis - Center for Automation Research & Department of Computer Science, University of Maryland at College Park

 

Collections: Engineering