Summary: Graph-Based Visual Saliency
Jonathan Harel, Christof Koch, Pietro Perona
California Institute of Technology
Pasadena, CA 91125
{harel,koch}@klab.caltech.edu, perona@vision.caltech.edu
Abstract
A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed. It consists of two steps: first forming activation maps on certain feature channels, and then normalizing them in a way which highlights conspicuity and admits combination with other maps. The model is simple, and biologically plausible insofar as it is naturally parallelized. This model powerfully predicts human fixations on 749 variations of 108 natural images, achieving 98% of the ROC area of a human-based control, whereas the classical algorithms of Itti & Koch ([2], [3], [4]) achieve only 84%.
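
The two steps described above (forming graph-based activation maps, then normalizing them) can be illustrated with a short sketch. The following Python fragment is a minimal, illustrative rendering of the first step only, under the assumption of a single scalar feature map: each location is a node in a fully connected graph, edge weights combine feature dissimilarity with spatial proximity, and the equilibrium distribution of the resulting Markov chain is taken as the activation map. The function name, the dissimilarity measure, and the sigma parameter are assumptions for illustration, not the authors' reference implementation.

import numpy as np

def gbvs_activation(feature_map, sigma=0.15, n_iter=100):
    # Treat a (small) feature map as a fully connected graph over its
    # locations; edges are weighted by feature dissimilarity, damped by
    # spatial distance; the stationary distribution of the induced
    # Markov chain concentrates mass on conspicuous locations.
    h, w = feature_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel() / h, xs.ravel() / w], axis=1)  # normalized positions
    values = feature_map.ravel().astype(float)

    # Pairwise edge weights: dissimilarity of feature values times a
    # Gaussian falloff in (normalized) image distance.
    dissim = np.abs(values[:, None] - values[None, :])
    dist2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    weights = dissim * np.exp(-dist2 / (2.0 * sigma ** 2))

    # Row-normalize to obtain Markov transition probabilities.
    trans = weights / (weights.sum(axis=1, keepdims=True) + 1e-12)

    # Power iteration toward the equilibrium distribution.
    p = np.full(h * w, 1.0 / (h * w))
    for _ in range(n_iter):
        p = p @ trans
    return p.reshape(h, w)

Because the transition matrix is dense over all location pairs, such a sketch is only practical on coarse maps (a few tens of pixels per side). The normalization step mentioned in the abstract could be sketched analogously, by running a second chain whose edge weights are proportional to the activation at the target node, which concentrates mass further and makes maps from different feature channels comparable before they are combined.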
1 Introduction
Most vertebrates, including humans, can move their eyes. They use this ability to sample in detail the most relevant features of a scene, while spending only limited processing resources elsewhere. The ability to predict, given an image (or video), where a human might fixate in a fixed-time free-viewing scenario has long been of interest in the vision community. Besides the purely scientific goal of understanding this remarkable behavior of humans, and animals in general, to consistently fixate on the most informative parts of a scene, there is also considerable engineering interest in such predictions.

  

Source: Adolphs, Ralph - Psychology and Neuroscience, California Institute of Technology
Koch, Christof - Division of Biology, California Institute of Technology

 

Collections: Biology and Medicine