IRRELEVANCE OF FEATURE MAPS FOR BOTTOM UP VISUAL SALIENCY IN SEGMENTATION AND SEARCH TASKS

L. Zhaoping*, K. May

Dept. of Psychology, University College London, London, United Kingdom
Traditional models of selection using saliency maps assume that visual inputs are processed by separate feature maps whose outputs are subsequently added to form a master saliency map. A recent hypothesis (Li, TICS 6:9-16, 2002) that V1 implements a saliency map requires no separate feature maps. Rather, saliency at a visual location corresponds to the activity of the most active V1 cell responding to inputs there, regardless of its feature tuning. We test the models using texture segmentation and visual search tasks. Texture borders in Fig. A and B pop out due to the higher saliency of the bars at the borders. Traditional models predict easier texture segmentation in pattern C (created by superposing A and B) than in A and B, while the V1 model does not. Traditional models predict no interference from the component pattern D in segmenting pattern E, which is created by superposing A and D, while the V1 model predicts interference. Using reaction time as a measure of task difficulty, the V1 model's predictions were confirmed. Analogous results were found in search tasks for orientation singletons in stimuli whose target and distractors were made of single or composite bars. The V1 model was also confirmed using stimuli made of color-orientation feature composites.

Support Contributed By: Gatsby Charitable Foundation/EPSRC
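The contrast between the two models reduces to a one-line computation per location: sum the feature-map outputs (traditional) or take the maximum response across feature-tuned cells (V1 hypothesis). The sketch below is not part of the original abstract; the two orientation channels and their response values are purely hypothetical, chosen only to show why superposing component patterns (as in pattern C) strengthens the border under a summation rule but leaves it unchanged under a max rule.

```python
import numpy as np

# Responses of two hypothetical orientation-tuned V1 channels along a row of
# texture locations; the middle location is the texture border, where
# contextual suppression is weaker and responses are higher (values illustrative).
pattern_A = np.array([
    [0.3, 0.8, 0.3],   # channel tuned to the orientations present in pattern A
    [0.0, 0.0, 0.0],   # orthogonal channel: silent, A contains no such bars
])
pattern_C = np.array([
    [0.3, 0.8, 0.3],   # same responses to the A component
    [0.3, 0.8, 0.3],   # the superposed B component now drives the other channel
])

def sum_rule(maps):
    """Traditional model: master saliency map = sum of separate feature maps."""
    return maps.sum(axis=0)

def max_rule(maps):
    """V1 hypothesis: saliency at each location = activity of the most active
    cell responding there, regardless of its feature tuning."""
    return maps.max(axis=0)

for name, rule in [("sum", sum_rule), ("max", max_rule)]:
    print(name, "A:", rule(pattern_A), "C:", rule(pattern_C))
# Sum rule: the border exceeds the background twice as much in C as in A,
# predicting easier segmentation of C. Max rule: A and C yield identical
# saliency maps, predicting no advantage for C.
```

On the same hedged reading, the max rule also accommodates the predicted interference from component D: a task-irrelevant channel that responds strongly away from the border can dominate the maximum there and so dilute the border's relative saliency, whereas a summation over independent feature maps would leave the border signal from A untouched.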
Citation: L. Zhaoping, K. May. IRRELEVANCE OF FEATURE MAPS FOR BOTTOM UP VISUAL SALIENCY IN SEGMENTATION AND SEARCH TASKS. Program No. 20.1. 2004 Abstract Viewer/Itinerary Planner. Washington, DC: Society for Neuroscience, 2004. Online.