Exemplars for Object Detection
Noah Snavely
CS7670: September 5, 2011
Page 1: Exemplars for Object Detection - Cornell University

Exemplars for Object Detection

Noah Snavely

CS7670: September 5, 2011

Page 2:

Announcements

• Office hours: Thursdays 1pm – 2:30pm

• Course schedule is now online

Page 3:

Object detection: where are we?

• Incredible progress in the last ten years

• Better features, better models, better learning methods, better datasets

• Combination of science and hacks

Credit: Flickr user neilalderney123

Page 4:

The 800-lb Gorilla of Vision Contests

• PASCAL VOC Challenge

• 20 categories

• Annual classification, detection, segmentation, … challenges

Page 5:

Object detection performance (2010)

Page 6:

Object detection performance (2010)

The 2011 server opened for submissions today!

Page 7:

Machine learning for object detection

• What features do we use?

– intensity, color, gradient information, …

• Which machine learning methods?

– generative vs. discriminative

– k-nearest neighbors, boosting, SVMs, …

• What hacks do we need to get things working?

Page 8:

Histogram of Oriented Gradients (HoG)

[Dalal and Triggs, CVPR 2005]

[Figure: input image → “HoGify” → HoG computed on 10x10 and 20x20 cell grids]

Page 9:

Histogram of Oriented Gradients (HoG)

Page 10:

Histogram of Oriented Gradients (HoG)

• Like SIFT (Scale Invariant Feature Transform), but…

– Sampled on a dense, regular grid

– Gradients are contrast normalized in overlapping blocks

[Figure: input image → “HoGify” → HoG computed on 10x10 and 20x20 cell grids]

[Dalal and Triggs, CVPR 2005]
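As a rough illustration of the idea (not the Dalal–Triggs implementation, which adds contrast normalization over overlapping blocks and interpolated voting), a per-cell histogram of gradient orientations weighted by gradient magnitude can be sketched in a few lines of NumPy:

```python
import numpy as np

def hog_cell_histograms(image, cell=8, bins=9):
    """Toy HoG sketch: per-cell histograms of gradient orientation,
    weighted by gradient magnitude (no block normalization shown)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, as in Dalal & Triggs
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = image.shape
    hists = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            idx = np.minimum((a / (180.0 / bins)).astype(int), bins - 1)
            np.add.at(hists[i, j], idx, m)  # accumulate magnitude votes
    return hists

feats = hog_cell_histograms(np.random.rand(64, 64))
print(feats.shape)  # (8, 8, 9)
```

The dense, regular grid of cells is what distinguishes this from SIFT's sparse keypoint sampling.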

Page 11:

Histogram of Oriented Gradients (HoG)

• First applied to person detection [Dalal and Triggs, CVPR 2005]

• Cited since in thousands of computer vision papers

Page 12:

Linear classifiers

• Find linear function to separate positive and negative examples

w · x_i + b < 0 : negative

w · x_i + b ≥ 0 : positive

Which line is best?

[slide credit: Kristin Grauman]
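The decision rule amounts to classifying by the sign of w · x + b. A tiny NumPy sketch with made-up weights:

```python
import numpy as np

# Hypothetical 2-D toy data: the linear rule is sign(w . x + b)
w = np.array([1.0, -1.0])
b = 0.0
X = np.array([[2.0, 1.0],   # w . x + b =  1 -> positive
              [1.0, 3.0]])  # w . x + b = -2 -> negative
scores = X @ w + b
labels = np.where(scores >= 0, 1, -1)
print(labels)  # [ 1 -1]
```

Any (w, b) that separates the training data gives zero training error; the next slides ask which separating line to prefer.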

Page 13:

Support Vector Machines (SVMs)

• Discriminative classifier based on optimal separating line (for 2D case)

• Maximize the margin between the positive and negative training examples

[slide credit: Kristin Grauman]

Page 14:

Support vector machines

• Want line that maximizes the margin.

x_i negative (y_i = −1): w · x_i + b ≤ −1

x_i positive (y_i = +1): w · x_i + b ≥ +1

[Figure: margin and support vectors]

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998

For support vectors, w · x_i + b = ±1

[slide credit: Kristin Grauman]
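Numerically, once (w, b) is scaled so the support vectors satisfy w · x_i + b = ±1, the margin width is 2 / ‖w‖. A toy check with hand-picked values:

```python
import numpy as np

# Toy (w, b) scaled so support vectors sit exactly on the +/-1 planes
w = np.array([2.0, 0.0])
b = 0.0
pos = np.array([0.5, 0.0])   # w . pos + b = +1  -> support vector
neg = np.array([-0.5, 0.0])  # w . neg + b = -1  -> support vector
margin = 2.0 / np.linalg.norm(w)
print(margin)  # 1.0 -- equals the distance between pos and neg
```

Maximizing the margin is therefore equivalent to minimizing ‖w‖ subject to the constraints above.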

Page 15:

Person detection, ca. 2005

1. Represent each example with a single, fixed HoG template

2. Learn a single [linear] SVM as a detector

Code available: http://pascal.inrialpes.fr/soft/olt/
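At test time this pipeline slides the learned linear template over a HoG feature map and thresholds the score. A minimal sketch with a toy feature map and template (not the released code, which also scans over scales and applies non-maximum suppression):

```python
import numpy as np

def detect(feature_map, w, b, stride=1, thresh=0.0):
    """Slide a learned linear template w (th x tw x bins) over a HoG
    feature map and return (score, row, col) for windows above thresh."""
    th, tw, _ = w.shape
    H, W, _ = feature_map.shape
    hits = []
    for i in range(0, H - th + 1, stride):
        for j in range(0, W - tw + 1, stride):
            window = feature_map[i:i+th, j:j+tw]
            score = float(np.sum(window * w) + b)  # linear SVM response
            if score > thresh:
                hits.append((score, i, j))
    return sorted(hits, reverse=True)

# Toy 8x8 feature map with a "person" at cells (2:4, 2:4)
fm = np.zeros((8, 8, 9))
fm[2:4, 2:4] = 1.0
template = np.full((2, 2, 9), 0.25)
print(detect(fm, template, b=-5.0))  # [(4.0, 2, 2)]
```

Real detectors repeat this at multiple image scales so one fixed-size template can find people of different sizes.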

Page 16:

Positive and negative examples

+ thousands more…

+ millions more…

Page 17:

HoG templates for person detection

Page 18:

Person detection with HoG & linear SVM

[Dalal and Triggs, CVPR 2005]

Page 19:

Are we done?

Page 20:

Are we done?

• Single, rigid template usually not enough to represent a category

– Many objects (e.g. humans) are articulated, or have parts that can vary in configuration

– Many object categories look very different from different viewpoints, or from instance to instance

Page 21:

Difficulty of representing positive instances

• Discriminative methods have proven very powerful

• But linear SVM on HoG templates not sufficient?

• Alternatives:

– Parts-based models [Felzenszwalb et al. CVPR 2008]

– Latent SVMs [Felzenszwalb et al. CVPR 2008]

– Today’s paper [Exemplar-SVMs, Malisiewicz, et al. ICCV 2011]

Page 22:

Parts-based models

Felzenszwalb, et al., Discriminatively Trained Deformable Part Models, http://people.cs.uchicago.edu/~pff/latent/

Page 23:

Latent SVMs

• Rather than training a single linear SVM separating positive examples…

• … cluster positive examples into “components” and train a classifier for each (using all negative examples)

Page 24:

Two-component bicycle model

“side” component

“frontal” component

Page 25:

Latent SVMs

• Latent because component labels are unknown in advance

Page 26:

Training of Latent SVMs

• Components are initialized by clustering positive instances by bounding box aspect ratio

• Linear SVM is learned for each component

• Each positive instance reassigned to the component that gives the max SVM response

• SVMs are retrained, and the process repeats

• Before training, the training set is doubled by horizontal flipping (mirroring)

[Felzenszwalb et al., PAMI 2010]
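The alternating train/reassign loop above can be sketched as follows. The inner trainer here is a toy Pegasos-style subgradient solver standing in for a real SVM package; for brevity it omits the bias term, the aspect-ratio initialization, and the flipping augmentation (the initialization below is a hypothetical round-robin split):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Toy Pegasos-style subgradient SVM trainer (no bias term)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (w @ X[i]) < 1:       # hinge-loss violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

def latent_svm(pos, neg, n_components=2, rounds=3):
    """Alternate: (1) train one SVM per component on its positives plus
    all negatives, (2) reassign each positive to the component that
    gives the max SVM response."""
    assign = np.arange(len(pos)) % n_components  # crude initialization
    for _ in range(rounds):
        ws = []
        for c in range(n_components):
            Xc = np.vstack([pos[assign == c], neg])
            yc = np.r_[np.ones((assign == c).sum()), -np.ones(len(neg))]
            ws.append(train_linear_svm(Xc, yc))
        scores = pos @ np.stack(ws).T   # response of each component
        assign = scores.argmax(axis=1)  # reassignment step
    return ws, assign

rng = np.random.default_rng(0)
pos = np.r_[rng.normal([4, 0], 0.3, (5, 2)), rng.normal([0, 4], 0.3, (5, 2))]
neg = rng.normal([-3, -3], 0.5, (20, 2))
ws, assign = latent_svm(pos, neg)
print(len(ws))  # 2
```

The labels are "latent" precisely because `assign` is never observed; it is re-estimated inside the training loop.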

Page 27:

Exemplar-SVMs for Object Detection

• Brings us to today…

• Why do discriminative techniques work so well?

– When there are 100s of millions of training instances, kNN is infeasible

– Parametric classifiers very good at generalizing from millions of negative examples

– This paper’s claim: parametric classifiers *aren’t* the right way to represent positive examples

Ensemble of Exemplar-SVMs for Object Detection and Beyond. Malisiewicz, Gupta, Efros, ICCV 2011.

Page 28:

Representing positive examples

Page 29:

Exemplar-SVMs

• This paper goes to the extreme, and learns a separate classifier for every positive example (and millions of negative examples)

• Each positive instance becomes an exemplar with an associated linear SVM; at test time each classifier is applied to a test image

• “Non-parametric when representing the positives, but parametric… when representing the negatives”

• Allows for more accurate correspondence and information transfer
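A minimal sketch of the idea: every positive becomes the sole positive of its own linear SVM, trained against a shared negative set. The toy subgradient trainer below stands in for the paper's solver, which uses millions of mined hard negatives and separate regularization weights for positives and negatives:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Toy Pegasos-style subgradient SVM trainer (no bias term)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (w @ X[i]) < 1:
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

def exemplar_svms(pos, neg):
    """One linear SVM per positive exemplar: that exemplar is the only
    positive; all negatives are shared across classifiers."""
    classifiers = []
    for x in pos:
        X = np.vstack([x[None, :], neg])
        y = np.r_[1.0, -np.ones(len(neg))]
        classifiers.append(train_linear_svm(X, y))
    return classifiers

pos = np.array([[3.0, 0.0], [0.0, 3.0]])
rng = np.random.default_rng(1)
neg = rng.normal([-2, -2], 0.4, (10, 2))
clfs = exemplar_svms(pos, neg)
print(len(clfs))  # 2 -- one classifier per positive
```

Because each detection is produced by a specific exemplar's classifier, any annotation attached to that exemplar (geometry, segmentation) can be transferred to the detection, which is what the later slides exploit.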

Page 30:

vs.

Page 31:

Example

[Figure: exemplar weight vector w and its top 5 detections — raw HoG + nearest neighbor vs. Exemplar-SVM (learned distance function)]

“learns what the exemplar is not”

Page 32:

Multiple instances of a category

• Each exemplar's classifier fires on visually similar instances (here, trains)

Top detections

Page 33:

Successful classifications

Page 34:

Failed classifications

Page 35:

Does it really work?

Page 36:

Geometry transfer

Page 37:

Geometry transfer

Page 38:

Segmentation transfer

Page 39:

Segmentation transfer

Page 40:

Segmentation transfer

Page 41:

“Object priming” transfer

Page 42:

Histogram of Oriented Gradients (HoG)

Page 43:

Conclusions and open issues

• Interesting new idea for object detection

• … but does it really work? Seems to perform well on some categories, but not others

• Maybe this is too extreme -- some grouping of positives seems like a good idea

• How do we come up with better ways of clustering positives?