OSTI.GOV — U.S. Department of Energy
Office of Scientific and Technical Information

Title: Incremental online object learning in a vehicular radar-vision fusion framework

Journal Article

In this paper, we propose an object learning system that incorporates sensory information from an automotive radar system and a video camera. The radar system provides coarse attention cues that focus visual analysis on relatively small areas within the image plane. The attended visual areas are coded and learned by a three-layer neural network using in-place learning, in which every neuron is responsible for learning its own signal-processing characteristics within its connected network environment through inhibitory and excitatory connections with other neurons. The modeled bottom-up, lateral, and top-down connections enable sensory sparse coding, unsupervised learning, and supervised learning to occur concurrently. The presented work is applied to learn two types of encountered objects in multiple outdoor driving settings. Cross-validation results show an overall recognition accuracy above 95% for the radar-attended window images. Compared with the uncoded representation and with purely unsupervised learning (without top-down connections), the proposed network improves the recognition rate by 15.93% and 6.35%, respectively. The proposed system also compares favorably with other learning algorithms: the results indicate that it is the only one to satisfy all the challenging criteria for the development of an incremental, online object learning system.
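To make the described pipeline concrete, the sketch below illustrates one plausible reading of the architecture: radar-attended image windows feed a feature layer whose neurons update only themselves when they win a top-k lateral competition (a stand-in for the in-place learning rule), and a supervised top layer associates the resulting sparse code with object labels. The specific update rule, window size, neuron count, and synthetic data are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class InPlaceLayer:
    """Simplified feature layer: each neuron keeps its own weight vector and
    firing-age counter and updates itself only when it wins the lateral
    (top-k) competition. The Hebbian-style moving-average update is an
    assumed stand-in for the paper's in-place learning rule."""

    def __init__(self, n_neurons, input_dim, top_k=3):
        self.w = rng.normal(size=(n_neurons, input_dim))
        self.w /= np.linalg.norm(self.w, axis=1, keepdims=True)
        self.age = np.zeros(n_neurons)
        self.top_k = top_k

    def forward(self, x, learn=True):
        x = x / (np.linalg.norm(x) + 1e-9)
        resp = self.w @ x                          # bottom-up responses
        sparse = np.zeros_like(resp)
        winners = np.argsort(resp)[-self.top_k:]   # lateral inhibition: keep top-k
        sparse[winners] = resp[winners]
        if learn:
            for i in winners:                      # each winner updates only itself
                self.age[i] += 1
                lr = 1.0 / self.age[i]             # amnesic-average style learning rate
                self.w[i] = (1 - lr) * self.w[i] + lr * resp[i] * x
                self.w[i] /= np.linalg.norm(self.w[i]) + 1e-9
        return sparse

class Classifier:
    """Top (motor) layer: supervised Hebbian association from the sparse
    feature code to class labels; supplies the top-down signal during training."""

    def __init__(self, n_features, n_classes):
        self.w = np.zeros((n_classes, n_features))

    def update(self, code, label):
        self.w[label] += code

    def predict(self, code):
        return int(np.argmax(self.w @ code))

def make_window(label):
    """Synthetic stand-in for a radar-attended window image (16x16 crop
    around a radar return); two object classes, as in the paper."""
    img = rng.normal(0, 0.3, size=(16, 16))
    if label == 0:
        img[4:12, 4:12] += 1.0   # blob-like object
    else:
        img[7:9, :] += 1.0       # bar-like object
    return img.ravel()

layer = InPlaceLayer(n_neurons=40, input_dim=256, top_k=3)
clf = Classifier(n_features=40, n_classes=2)

# Incremental, online training: one attended window at a time.
for _ in range(500):
    label = rng.integers(0, 2)
    code = layer.forward(make_window(label), learn=True)
    clf.update(code, label)

# Evaluation on fresh synthetic windows.
correct = sum(
    clf.predict(layer.forward(make_window(l), learn=False)) == l
    for l in rng.integers(0, 2, size=200)
)
print(f"accuracy: {correct / 200:.2%}")
```

The single-pass loop mirrors the incremental, online constraint emphasized in the abstract: each sample is seen once, weights are updated immediately, and no batch of past data is stored.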

Research Organization:
Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)
Sponsoring Organization:
USDOE
DOE Contract Number:
AC52-06NA25396
OSTI ID:
1038882
Report Number(s):
LA-UR-10-07067; LA-UR-10-7067; TRN: US201209%%34
Country of Publication:
United States
Language:
English