Summary: Human-Assisted Motion Annotation

Ce Liu [1], William T. Freeman [1], Edward H. Adelson [1], Yair Weiss [1,2]
[1] CSAIL, MIT   [2] The Hebrew University of Jerusalem
{celiu,billf,adelson}@csail.mit.edu, yweiss@cs.huji.ac.il
Figure 1. (a) A frame of a video sequence; (b) user-aided layer segmentation; (c) user-annotated motion; (d) output of a flow algorithm [8]. We designed a system to allow the user to specify layer configurations and motion hints (b). Our system uses these hints to calculate a dense flow field for each layer. We show that the flow (c) is repeatable and accurate. (d): The output of a representative optical flow algorithm [8], trained on the Yosemite sequence, shows many differences from the labeled ground truth for this and other realistic sequences we have labeled. This indicates the value of our database for training and evaluating optical flow algorithms.
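As a concrete illustration of the kind of evaluation this enables, the sketch below scores an algorithm's flow field against a human-annotated field using two standard metrics, average endpoint error and average angular error. This is a minimal sketch, not code from the paper; the function name, file names, and the H x W x 2 array layout are assumptions.

import numpy as np

def flow_errors(gt_flow, est_flow):
    """Score an estimated flow field against annotated ground truth.

    Both arguments are assumed to be H x W x 2 arrays of (u, v)
    displacements in pixels. Returns the average endpoint error (pixels)
    and the average angular error (degrees).
    """
    # Endpoint error: Euclidean distance between corresponding flow vectors.
    diff = est_flow - gt_flow
    epe = np.sqrt(diff[..., 0] ** 2 + diff[..., 1] ** 2).mean()

    # Angular error: angle between the (u, v, 1) vectors, as in Barron et al.
    num = (est_flow[..., 0] * gt_flow[..., 0]
           + est_flow[..., 1] * gt_flow[..., 1] + 1.0)
    den = (np.sqrt(est_flow[..., 0] ** 2 + est_flow[..., 1] ** 2 + 1.0)
           * np.sqrt(gt_flow[..., 0] ** 2 + gt_flow[..., 1] ** 2 + 1.0))
    aae = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean()
    return epe, aae

# Hypothetical usage: "gt.npy" holds the annotated flow, "est.npy" an algorithm's output.
# gt, est = np.load("gt.npy"), np.load("est.npy")
# epe, aae = flow_errors(gt, est)
# print(f"endpoint error: {epe:.3f} px, angular error: {aae:.2f} deg")

Per-layer errors could be computed the same way by restricting both arrays to the mask of each user-specified layer.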
Abstract

Obtaining ground-truth motion for arbitrary, real-world video sequences is a challenging but important task for both algorithm evaluation and model design. Existing ground-truth databases are either synthetic, such as the Yosemite sequence, or limited to indoor, experimental setups, such …
Source: Adelson, Edward - Computer Science and Artificial Intelligence Laboratory, Department of Brain and Cognitive Science, Massachusetts Institute of Technology (MIT)
Freeman, William T. - Computer Science and Artificial Intelligence Laboratory & Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT)

 

Collections: Biology and Medicine; Computer Technologies and Information Sciences