DOE Patents, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Online coupled camera pose estimation and dense reconstruction from video

Abstract

A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of those corresponding model feature points. The product may update the 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
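The pose-estimation step described above amounts to a hypothesize-and-verify search: sample a tentative image-to-model correspondence, hypothesize a transformation, and keep the hypothesis that the largest consistent subset of feature points agrees on, resolving ambiguous matches along the way. The sketch below illustrates that selection logic only; it substitutes a toy 2D translation model for the patent's full 3D-to-2D camera projection, and `ransac_select` and all data are illustrative names, not from the patent.

```python
import math
import random

def ransac_select(correspondences, thresh=1.0, iters=200, seed=0):
    """correspondences: list of (image_point, [candidate_model_points]).

    Toy stand-in for the patent's consistent-projection search: find the
    2D translation t such that model_point + t lands near image_point for
    the largest subset of correspondences, picking one candidate for each
    ambiguous feature point.
    """
    rng = random.Random(seed)
    best = (0, None, None)  # (inlier count, translation, chosen pairs)
    for _ in range(iters):
        # Hypothesize from one randomly chosen correspondence/candidate.
        img, cands = rng.choice(correspondences)
        cand = rng.choice(cands)
        t = (img[0] - cand[0], img[1] - cand[1])
        inliers, picks = 0, []
        for ip, cs in correspondences:
            # For each image point, pick the candidate model point most
            # consistent with the hypothesized translation.
            d, c = min((math.hypot(c[0] + t[0] - ip[0],
                                   c[1] + t[1] - ip[1]), c) for c in cs)
            if d <= thresh:
                inliers += 1
                picks.append((ip, c))
        if inliers > best[0]:
            best = (inliers, t, picks)
    return best

# Three points consistent with translation (5, -2), some with an extra
# ambiguous candidate, plus one outlier that no translation explains.
corrs = [((5, -2), [(0, 0), (3, 3)]),
         ((6, -1), [(1, 1)]),
         ((7, 0), [(2, 2), (9, 9)]),
         ((100, 100), [(0, 0)])]
count, t, picks = ransac_select(corrs)
```

In the full method the hypothesized transformation would be a camera pose recovered from a minimal set of 3D-2D matches rather than a translation, but the structure of the search (sample, score all correspondences, keep the best-supported hypothesis) is the same.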

Inventors:
Medioni, Gerard; Kang, Zhuoliang
Issue Date:
November 2016
Research Org.:
Univ. of Southern California, Los Angeles, CA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1330704
Patent Number(s):
9483703
Application Number:
14/120,370
Assignee:
UNIVERSITY OF SOUTHERN CALIFORNIA
Patent Classifications (CPCs):
G - PHYSICS G06 - COMPUTING G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
DOE Contract Number:  
FG52-08NA28775
Resource Type:
Patent
Resource Relation:
Patent File Date: 2014 May 14
Country of Publication:
United States
Language:
English
Subject:
99 GENERAL AND MISCELLANEOUS; 97 MATHEMATICS AND COMPUTING

Citation Formats

Medioni, Gerard, and Kang, Zhuoliang. Online coupled camera pose estimation and dense reconstruction from video. United States: N. p., 2016. Web.
Medioni, Gerard, & Kang, Zhuoliang. Online coupled camera pose estimation and dense reconstruction from video. United States.
Medioni, Gerard, and Kang, Zhuoliang. "Online coupled camera pose estimation and dense reconstruction from video". United States. 2016. https://www.osti.gov/servlets/purl/1330704.
@article{osti_1330704,
title = {Online coupled camera pose estimation and dense reconstruction from video},
author = {Medioni, Gerard and Kang, Zhuoliang},
abstractNote = {A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of those corresponding model feature points. The product may update the 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.},
doi = {},
journal = {},
number = {},
volume = {},
place = {United States},
year = {2016},
month = {11}
}

Works referenced in this record:

Radar system for multiple object tracking and discrimination
patent, August 1988


Live dense reconstruction with a single moving camera
conference, June 2010

  • Newcombe, Richard A.; Davison, Andrew J.
  • 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)
  • https://doi.org/10.1109/CVPR.2010.5539794

DTAM: Dense tracking and mapping in real-time
conference, November 2011


Fast dense 3D reconstruction using an adaptive multiscale discrete-continuous variational method
conference, March 2014


Parallel Tracking and Mapping for Small AR Workspaces
conference, November 2007

  • Klein, Georg; Murray, David
  • 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR)
  • https://doi.org/10.1109/ISMAR.2007.4538852

Parallel Tracking and Mapping on a camera phone
conference, October 2009


Towards Linear-Time Incremental Structure from Motion
conference, June 2013