Summary: Model-based sensor fusion for aviation
Misha Pavel and Ravi K. Sharma
Department of Electrical Engineering
Oregon Graduate Institute of Science and Technology
P.O. Box 91000, Portland, OR 97291-1000, USA
ABSTRACT
We describe a sensor fusion algorithm based on a set of simple assumptions about the relationships among the sensors.
Under these assumptions we estimate the common signal in each sensor, and the optimal fusion is then approximated
by a weighted sum of the common component of each sensor output at each pixel. We then examine a variety of
techniques for mapping the sensor signals onto perceptual dimensions (e.g., color), such that the human operator can
benefit from the enhanced fused image and, at the same time, identify the source of the information. In particular,
we examine several color mapping schemes.
Keywords: sensor fusion, image fusion, color fusion, color vision, color mapping, multisensor fusion, enhanced vision
1. INTRODUCTION
The efficiency, robustness, and safety of many visually guided systems, such as flight control systems, can be improved by
providing the operator with the necessary visual information at all times, even in low-visibility conditions. One way
to achieve this objective is to use a suite of sensors, each specialized for different environmental conditions, e.g., a TV
camera, an infrared (IR) camera, or a millimeter-wave sensor. To generate an appropriate flight control signal, the information
from the different sources must be combined (fused). In general, the fusion can be performed either by the system or by a
human pilot.
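
To make the system-side fusion concrete, here is a minimal sketch of the pixel-wise weighted-sum fusion and a source-identifying color mapping of the kind described in the abstract, assuming two registered single-channel frames. The helper names (box_filter, weighted_sum_fusion, false_color_map) are hypothetical, the local-variance weights are a simple stand-in for the paper's model-based common-signal estimate, and the red/green/blue channel assignment is one illustrative mapping, not necessarily a scheme the authors examined.

    import numpy as np

    def box_filter(img, k=5):
        """Mean over a k x k neighborhood, using reflected borders."""
        pad = k // 2
        p = np.pad(img.astype(float), pad, mode="reflect")
        out = np.zeros(img.shape, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    def weighted_sum_fusion(img_a, img_b, eps=1e-6):
        """Per-pixel weighted sum of two registered single-channel images.
        Local variance stands in (as an assumption) for the model-based
        common-signal estimate described in the abstract."""
        var_a = box_filter(img_a ** 2) - box_filter(img_a) ** 2
        var_b = box_filter(img_b ** 2) - box_filter(img_b) ** 2
        w_a = (var_a + eps) / (var_a + var_b + 2 * eps)  # normalized weights
        return w_a * img_a + (1.0 - w_a) * img_b

    def false_color_map(img_a, img_b, fused):
        """One illustrative color mapping: sensor A drives red, sensor B
        drives green, and the fused signal drives blue, so the operator
        sees the fused image while telling the sources apart."""
        return np.dstack([img_a, img_b, fused])

    # Example: fuse a synthetic "TV" frame with a synthetic "IR" frame.
    rng = np.random.default_rng(0)
    tv = rng.random((64, 64))
    ir = rng.random((64, 64))
    fused = weighted_sum_fusion(tv, ir)
    rgb = false_color_map(tv, ir, fused)  # (64, 64, 3) false-color image

The per-pixel weights sum to one, so where one sensor carries little local signal (e.g., the TV camera in fog), the fused image leans on the other sensor.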

Source: Ahumada Jr., Al - Vision Science and Technology Group, Human Factors Research and Technology Division, NASA Ames Research Center

Collections: Engineering