Model comparison and challenges II: Compositional bias of salient object detection benchmarking
Xiaodi Hou, K-Lab, Computation and Neural Systems, California Institute of Technology
For the Crash Course on Visual Saliency Modeling: Behavioral Findings and Computational Models, CVPR 2013
Schedule
On detecting salient objects
• Learning to Detect a Salient Object [Liu et al., CVPR 07]
• Frequency-tuned Salient Region Detection [Achanta et al., CVPR 09]
The progress!
• Some top performers:
  – [PCA] What Makes a Patch Distinct [Margolin et al., CVPR 13]
  – [SF] Saliency Filters [Perazzi et al., CVPR 12]
• "Our algorithm works on images with salient objects only!"
The paradox of salient object detection
But hey, what is a “salient object”?
COMPOSITIONAL BIAS
Before we proceed…
• Google Image Search: "science"
  – Rutherford atomic model (9)
  – Test tubes (10)
  – Microscopes (4)
  – Double helix (3)
  – Old guys with crazy hair and glasses (3)
Stereotypes of science are not science!
How to compose a biased salient object detection dataset
Decide to build a new salient object dataset!
So what is saliency?
Searching for unambiguous examples of saliency…
Found one! Add to my dataset!
Job done! Let other people play with my dataset!
The compositional bias
• Compositional bias: biases introduced during the composition of a dataset:
  – Exaggerating stereotypical attributes.
  – Limited variability in positive samples.
  – Lack of negative samples altogether.
"Unlike datasets in machine learning, where the dataset is the world, computer vision datasets are supposed to be a representation of the world."
— [Torralba and Efros: Unbiased Look at Dataset Bias]
Compositional bias: the statistics
• Object number
• Object eccentricity
• Global foreground and background contrast
• Local foreground/background contrast (contour strength)
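These per-image statistics can be measured directly from an annotated dataset. A minimal sketch, assuming "object eccentricity" means the normalized offset of the object centroid from the image center, and using the mean-intensity difference between foreground and background as one crude proxy for global contrast (both definitions are assumptions, not taken from the slides):

```python
import numpy as np

def object_eccentricity(mask):
    """Normalized distance of the object centroid from the image center
    (an assumed definition of 'object eccentricity')."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    return float(np.hypot((ys.mean() - h / 2) / h, (xs.mean() - w / 2) / w))

def global_contrast(gray, mask):
    """Absolute difference between mean foreground and mean background
    intensity -- one simple proxy for global fg/bg contrast."""
    return float(abs(gray[mask].mean() - gray[~mask].mean()))
```

A centered object yields an eccentricity near zero; a corner object yields a larger value, which is exactly the kind of center bias the slides argue existing benchmarks exaggerate.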
TOWARDS A BETTER SALIENT OBJECT DATASET
The new project
• Build a salient object detection dataset from a good object detection dataset (e.g. PASCAL VOC).
Let the eye fixations pick up those salient objects!
Data collection (in progress)
• SR Research EyeLink 1000
• 2-sec viewing time.
• "Free-viewing" instruction (more on this later).
• 3 subjects (more subjects on the way).
We will release the dataset very soon!
What makes an object salient
• Unit conversion:
  – From fixation maps
  – To an object fixation score: the sum of blurred fixation-map intensity within the object mask.
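The conversion can be sketched as follows; a hypothetical implementation assuming fixations are (row, col) pixel coordinates and the blur is an isotropic Gaussian (the kernel width `sigma` is a free parameter, not specified on the slides):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur using only NumPy 1-D convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    out = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, out, kernel, mode="same")

def object_fixation_score(fixations, image_shape, object_mask, sigma=10):
    """Sum of blurred fixation-map intensity inside the object mask."""
    fmap = np.zeros(image_shape)
    for r, c in fixations:          # accumulate a fixation count map
        fmap[r, c] += 1.0
    return float(gaussian_blur(fmap, sigma)[object_mask].sum())
```

An object containing the fixations collects most of the blurred mass; a distant object collects almost none.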
Object size and saliency
• Large objects attract more fixations.
• Small objects receive denser fixations.
Objects, salient objects, and the most salient objects
• Salient objects: fixation score higher than the mean (67.3% of objects).
• Most salient objects: fixation score higher than 2× the mean (27.8% of objects).
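Once every object has a fixation score, the two cutoffs are mechanical to apply. A minimal sketch (the example score values below are made up for illustration):

```python
import numpy as np

def classify_by_saliency(scores):
    """Label objects as 'salient' (score above the mean) and 'most salient'
    (score above twice the mean), following the slide's cutoffs."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    return scores > mean, scores > 2.0 * mean
```

Because both thresholds are relative to the per-dataset mean, the 67.3% and 27.8% fractions quoted above depend on the score distribution, not on any absolute fixation count.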
[Figure panels: image with fixations · object labeling · salient objects · most salient object(s)]
Salient objects and salient object detection
• Guess how the algorithms perform on "salient objects" and "most salient objects"?
On all objects:
• FT: 0.28
• GC: 0.39
• SF: 0.35
• PC: 0.38