Robot navigation using image sequences
- Yale Univ., New Haven, CT (United States)
We describe a framework for robot navigation that exploits the continuity of image sequences. Tracked visual features both guide the robot and provide predictive information about subsequent features to track. Our hypothesis is that image-based techniques will allow accurate motion without a precise geometric model of the world, while using predictive information will add speed and robustness. A basic component of our framework is called a scene, which is the set of image features stable over some segment of motion. When the scene changes, it is appended to a stored sequence. As the robot moves, correspondences and dissimilarities between current, remembered, and expected scenes provide cues to join and split scene sequences, forming a map-like directed graph. Visual servoing on features in successive scenes is used to traverse a path between robot and goal map locations. In our framework, a human guide serves as a scene recognition oracle during a map-learning phase; thereafter, assuming a known starting position, the robot can independently determine its location without general scene recognition ability. A prototype implementation of this framework uses color patches, sum-of-squared-differences (SSD) subimages, or image projections of rectangles as features.
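The scene-sequence idea in the abstract can be illustrated with a short sketch. The names (`Scene`, `SceneMap`, `ssd`) and the threshold rule for deciding that the scene has changed are illustrative assumptions, not the authors' implementation; the SSD measure and the directed-graph map are the elements the abstract itself names.

```python
def ssd(patch_a, patch_b):
    """Sum-of-squared differences between two equal-size grayscale patches,
    given as nested lists of pixel intensities."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

class Scene:
    """A set of image features (here, small grayscale patches) assumed
    stable over some segment of motion."""
    def __init__(self, features):
        self.features = features  # list of 2-D patches

    def dissimilarity(self, other):
        """Mean SSD over paired features; a large value signals a scene change."""
        pairs = zip(self.features, other.features)
        return sum(ssd(a, b) for a, b in pairs) / len(self.features)

class SceneMap:
    """Map-like directed graph over remembered scenes; each edge links a
    scene to a successor observed during motion."""
    def __init__(self):
        self.scenes = []
        self.edges = {}  # scene index -> set of successor indices

    def observe(self, scene, threshold=100.0):
        """Append `scene` as a new node when it differs enough from the most
        recently stored scene (threshold is an assumed tuning parameter);
        return the index of the current scene."""
        if self.scenes and self.scenes[-1].dissimilarity(scene) < threshold:
            return len(self.scenes) - 1  # features still stable: same scene
        self.scenes.append(scene)
        idx = len(self.scenes) - 1
        self.edges.setdefault(idx, set())
        if idx > 0:
            self.edges[idx - 1].add(idx)  # join to predecessor in the graph
        return idx
```

A usage example under the same assumptions: an unchanged view maps to the existing node, while a sufficiently different view creates a new node and a directed edge from its predecessor.

```python
m = SceneMap()
flat = [[10, 10], [10, 10]]
bright = [[200, 200], [200, 200]]
m.observe(Scene([flat]))    # -> 0, first stored scene
m.observe(Scene([flat]))    # -> 0, still the same scene
m.observe(Scene([bright]))  # -> 1, new node with edge 0 -> 1
```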
- OSTI ID: 430765
- Report Number(s): CONF-960876--
- Country of Publication: United States
- Language: English
Similar Records
Intelligent robots and computer vision XII: Active vision and 3D methods; Proceedings of the Meeting, Boston, MA, Sept. 8, 9, 1993
Robotic exploration under the controlled active vision framework