OSTI.GOV U.S. Department of Energy
Office of Scientific and Technical Information

Title: Learning View-Invariant Features for Person Identification in Temporally Synchronized Videos Taken by Wearable Cameras

Abstract

In this paper, we study the problem of Cross-View Person Identification (CVPI), which aims to identify the same person in temporally synchronized videos taken by different wearable cameras. Our basic idea is to exploit human motion consistency for CVPI, where human motion can be computed by optical flow. However, optical flow is view-variant: the same person's optical flow in different videos can differ greatly due to view-angle changes. In this paper, we use 3D human-skeleton sequences to learn a model that extracts view-invariant motion features from optical flows in different views. For this purpose, we use a 3D Mocap database to build a synthetic optical-flow dataset and train a Triplet Network (TN) consisting of three sub-networks: two for optical-flow sequences from different views and one for the underlying 3D Mocap skeleton sequence. Finally, the sub-networks for optical flows are used to extract view-invariant features for CVPI. Experimental results show that, using only motion information, the proposed method achieves performance comparable to state-of-the-art methods. Further combining the proposed method with an appearance-based method achieves new state-of-the-art performance.
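The abstract does not specify the TN's training objective; a common choice for triplet networks is a margin loss over embedding distances, which pulls matching pairs together and pushes mismatched pairs apart. The NumPy sketch below is illustrative only: the array names, the 128-dimensional embeddings, and the margin value are assumptions, not details from the paper.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: pull the anchor-positive pair together and
    push the anchor-negative pair apart by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings standing in for the three sub-network outputs:
# a skeleton feature (anchor) and optical-flow features from two views.
rng = np.random.default_rng(0)
skeleton = rng.normal(size=128)
flow_same_person = skeleton + 0.1 * rng.normal(size=128)  # near the anchor
flow_other_person = rng.normal(size=128)                  # unrelated person

loss = triplet_margin_loss(skeleton, flow_same_person, flow_other_person)
```

A well-separated triplet like the one above yields zero loss; swapping the positive and negative produces a large loss, which is the gradient signal that drives the sub-networks toward a shared, view-invariant embedding.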

Authors:
Zheng, Kang; Lin, Yuewei
Publication Date:
10/22/2017
Research Org.:
Brookhaven National Lab. (BNL), Upton, NY (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (SC-21)
OSTI Identifier:
1491128
Report Number(s):
BNL-210880-2019-COPA
DOE Contract Number:  
SC0012704
Resource Type:
Conference
Resource Relation:
Conference: 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 10/22/2017 - 10/29/2017
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING

Citation Formats

Zheng, Kang, and Lin, Yuewei. Learning View-Invariant Features for Person Identification in Temporally Synchronized Videos Taken by Wearable Cameras. United States: N. p., 2017. Web. doi:10.1109/ICCV.2017.311.
Zheng, Kang, & Lin, Yuewei. Learning View-Invariant Features for Person Identification in Temporally Synchronized Videos Taken by Wearable Cameras. United States. https://doi.org/10.1109/ICCV.2017.311
Zheng, Kang, and Lin, Yuewei. 2017. "Learning View-Invariant Features for Person Identification in Temporally Synchronized Videos Taken by Wearable Cameras". United States. https://doi.org/10.1109/ICCV.2017.311. https://www.osti.gov/servlets/purl/1491128.
@article{osti_1491128,
title = {Learning View-Invariant Features for Person Identification in Temporally Synchronized Videos Taken by Wearable Cameras},
author = {Zheng, Kang and Lin, Yuewei},
doi = {10.1109/ICCV.2017.311},
url = {https://www.osti.gov/biblio/1491128},
place = {United States},
year = {2017},
month = {oct}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
