Object Recognition
Object Classes
Individual Recognition
Object parts
[Figure: example object parts — headlight, window, door knob, back wheel, mirror, front wheel, bumper]
Class vs. non-class examples
Unsupervised Training Data
Features and Classifiers
Same features with different classifiers; same classifier with different features
Generic Features
Simple (wavelets) vs. complex (Geons)
Class-specific Features: Common Building Blocks
Mutual information
I(C;F) = H(C) – H(C|F)
where H(C) = –∑ P(c) log P(c)
[Figure: H(C), and the conditional entropies H(C|F=1) and H(C|F=0)]
Mutual Information I(C;F)
Class: 1 1 0 1 0 1 0 0
Feature: 1 0 0 1 1 1 0 0
I(C;F) = H(C) – H(C|F)
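The mutual information for this small example can be computed directly. A minimal sketch, using the eight class/feature bits listed above:

```python
import math

def entropy(p):
    """Binary entropy H(p) in bits; H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# The eight training samples from the example above.
C = [1, 1, 0, 1, 0, 1, 0, 0]  # class labels
F = [1, 0, 0, 1, 1, 1, 0, 0]  # feature detected / not detected

n = len(C)
H_C = entropy(sum(C) / n)  # H(C) = 1.0: the classes are balanced

# H(C|F): class entropy within each value of F, weighted by P(F).
H_C_given_F = 0.0
for f in (0, 1):
    idx = [i for i in range(n) if F[i] == f]
    H_C_given_F += (len(idx) / n) * entropy(sum(C[i] for i in idx) / len(idx))

I = H_C - H_C_given_F
print(round(I, 4))  # I(C;F) ≈ 0.1887 bits
```

The feature is informative but imperfect: it flips two of the eight labels, so it delivers about 0.19 of the 1 bit needed to decide the class.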
Optimal classification features
• Theoretically: maximizing delivered information minimizes classification error
• In practice: informative object components can be identified in training images
Mutual Info vs. Threshold
[Plot: mutual information vs. detection threshold (0–40) for face fragments: forehead, hairline, mouth, eye, nose, nose bridge, long hairline, chin, two eyes]
Selecting Fragments
Adding a New Fragment (max-min selection)
For a candidate fragment Fi and an already-selected fragment Fk:
∆MI(Fi, Fk) = MI(Fi, Fk; class) – MI(Fk; class)
Select: Maxi Mink ∆MI(Fi, Fk)
(Min. over existing fragments, Max. over the entire pool)
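The max-min rule above can be sketched as a greedy loop. This is an illustrative implementation only: the `mi` helper and the toy fragment pool are assumptions, with each fragment given as a binary detection vector over the training images.

```python
import math

def mi(class_labels, *features):
    """Mutual information I(C; F1,...,Fk) for binary variables, in bits."""
    n = len(class_labels)
    counts = {}
    for i in range(n):
        key = (class_labels[i],) + tuple(f[i] for f in features)
        counts[key] = counts.get(key, 0) + 1
    pc, pf = {}, {}  # marginal counts of the class and of the feature tuple
    for (c, *f), k in counts.items():
        pc[c] = pc.get(c, 0) + k
        pf[tuple(f)] = pf.get(tuple(f), 0) + k
    total = 0.0
    for (c, *f), k in counts.items():
        p = k / n
        total += p * math.log2(p / ((pc[c] / n) * (pf[tuple(f)] / n)))
    return total

def select_fragments(C, pool, m):
    """Greedy max-min selection: repeatedly add the candidate whose
    worst-case gain over the already-selected fragments is largest."""
    selected = [max(pool, key=lambda f: mi(C, pool[f]))]
    while len(selected) < m:
        best = max(
            (f for f in pool if f not in selected),
            key=lambda f: min(mi(C, pool[f], pool[s]) - mi(C, pool[s])
                              for s in selected),
        )
        selected.append(best)
    return selected

# Toy pool of candidate fragments over 8 training images.
C = [1, 1, 0, 1, 0, 1, 0, 0]
pool = {
    "a": [1, 0, 0, 1, 1, 1, 0, 0],
    "b": [1, 1, 0, 1, 0, 1, 0, 0],  # detects the class perfectly
    "c": [0, 0, 1, 0, 1, 0, 1, 1],
}
print(select_fragments(C, pool, 2))
```

The perfectly informative fragment "b" is picked first; after that, every remaining candidate adds zero information, which is exactly the redundancy the min term is designed to expose.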
Horse-class features
Car-class features
Pictorial features, learned from examples
Star model
Detected fragments ‘vote’ for the center location
Find location with maximal vote
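A minimal sketch of the voting scheme, assuming each fragment type carries a learned offset to the object center; the fragment names, offsets, and detections below are made up for illustration.

```python
import numpy as np

# Learned offsets from each fragment type to the object center (made up here).
offsets = {"wheel": (0, -20), "headlight": (-15, 5)}

# Detected fragments: (fragment type, x, y) positions in a 100x100 image.
detections = [("wheel", 50, 70), ("headlight", 65, 45), ("wheel", 52, 71)]

H, W = 100, 100
votes = np.zeros((H, W))
for frag, x, y in detections:
    dx, dy = offsets[frag]
    vx, vy = x + dx, y + dy          # this detection's vote for the center
    if 0 <= vx < W and 0 <= vy < H:
        votes[vy, vx] += 1

# The detected object center is the location with the maximal vote.
cy, cx = np.unravel_index(votes.argmax(), votes.shape)
print(cx, cy)  # 50 50
```

In practice the votes are binned or smoothed so that nearby, slightly inconsistent detections still reinforce the same center hypothesis.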
In its variations, a popular state-of-the-art scheme
Fragment-based Classification
Ullman, Sali 1999; Agarwal, Roth 2002; Fergus, Perona, Zisserman 2003
Variability of Airplanes Detected
Recognition Features in the Brain
Class-fragments and Activation
Malach et al 2008
EEG
ERP
[ERP plots: FACE vs. FEATURES responses, 0–600 ms, posterior-temporal sites, left and right hemispheres; separate traces for fragments ranked by mutual information (MI 1–MI 5)]
Harel, Ullman, Epshtein, Bentin, Vision Research 2007
Bag of words
Object → Bag of ‘words’
Bag of visual words: a large collection of image patches
1. Feature detection and representation
• Regular grid – Vogel & Schiele, 2003; Fei-Fei & Perona, 2005
Generate a dictionary using K-means clustering
Each class has its own word histogram
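The dictionary-plus-histogram pipeline can be sketched as follows. The 8-dimensional toy "descriptors" stand in for real image patches, and the plain k-means here is an illustrative implementation, not a production one.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain k-means; the centers are the visual-word dictionary."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def bow_histogram(patches, centers):
    """Assign each descriptor to its nearest word; return a normalized histogram."""
    d = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    words = d.argmin(1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Toy "patch descriptors": two well-separated clusters of 50 patches each.
X = np.vstack([rng.normal(0, 0.1, (50, 8)), rng.normal(5, 0.1, (50, 8))])
dictionary = kmeans(X, k=2)
h = bow_histogram(X, dictionary)
print(h)  # ~[0.5 0.5]: half the patches map to each word
```

An image (or class) is then represented by such a histogram alone, which is exactly why the representation carries limited or no geometry.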
Limited or no geometry. Simple and popular, but no longer state-of-the-art.
Classifiers
SVM – linear separation in feature space
Optimal Separation
SVM
Find a separating plane such that the closest points are as far as possible
Advantages of SVM:
• Optimal separation
• Extensions to the non-separable case
• Kernel SVM
Separating line: w ∙ x + b = 0
Far line: w ∙ x + b = +1
Their distance: w ∙ ∆x = +1, so |∆x| = 1/|w|
Margin: 2/|w|
[Figure: the margin, between the lines w ∙ x + b = –1, 0, +1]
Max Margin Classification
Maximize the margin 2/|w|; equivalently (the form usually used), minimize ½|w|² subject to yi(w ∙ xi + b) ≥ 1.
How do we solve such a constrained optimization?
The examples are vectors xi
The labels yi are +1 for class, -1 for non-class
Using Lagrange multipliers, minimize
LP = ½|w|² – ∑ αi [yi(w ∙ xi + b) – 1]
with αi ≥ 0 the Lagrange multipliers.
Minimizing the Lagrangian
Minimize LP: set the derivatives to 0:
∂LP/∂w = 0 → w = ∑ αi yi xi
∂LP/∂b = 0 → ∑ αi yi = 0
Stationarity w.r.t. each αi recovers the constraints yi(w ∙ xi + b) = 1 on the support vectors.
Dual formulation: maximize the Lagrangian w.r.t. the αi under the two conditions above.
Solved in the ‘dual’ formulation
Maximize w.r.t. the αi:
LD = ∑ αi – ½ ∑ αi αj yi yj <xi ∙ xj>
with the conditions αi ≥ 0 and ∑ αi yi = 0.
Substituting w = ∑ αi yi xi back into LP, w drops out of the expression.
Dual formulation
Mathematically equivalent formulation: maximize the Lagrangian with respect to the αi.
After manipulation, a concise matrix form:
SVM: in simple matrix form
We first find the α. From this we can find: w, b, and the support vectors.
The matrix H is a simple ‘data matrix’: Hij = yi yj <xi ∙ xj>, so the dual objective is ∑ αi – ½ αᵀHα.
Final classification: w ∙ x + b = ∑ αi yi <xi ∙ x> + b
Because w = ∑ αi yi xi, only the <xi ∙ x> with the support vectors (αi > 0) are needed.
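The relation w = ∑ αi yi xi and the support-vector form of the classifier can be checked on a toy problem whose dual solution is known in closed form: two points, one per class, so both are support vectors with αi = 0.5.

```python
import numpy as np

# Two support vectors, one per class.
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0])

# Dual solution, found by hand here (a QP solver finds it in general):
# it satisfies sum_i alpha_i y_i = 0 and y_i (w.x_i + b) = 1.
alpha = np.array([0.5, 0.5])

w = (alpha * y) @ X               # w = sum_i alpha_i y_i x_i
b = y[0] - w @ X[0]               # from y_1 (w.x_1 + b) = 1 with y_1 = +1

def classify(x):
    """w.x + b computed the support-vector way: sum_i alpha_i y_i <x_i, x> + b."""
    return sum(alpha[i] * y[i] * (X[i] @ x) for i in range(len(X))) + b

print(w, b)                            # [1. 0.] 0.0
print(classify(np.array([2.0, 3.0])))  # 2.0 (positive class)
print(2 / np.linalg.norm(w))           # margin 2/|w| = 2.0
```

Note that `classify` never touches w directly: only inner products with the support vectors appear, which is what makes the kernel extension possible.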
DPM Felzenszwalb
• Felzenszwalb, McAllester, Ramanan CVPR 2008. A Discriminatively Trained, Multiscale, Deformable Part Model
• Many implementation details; we describe only the main points.
HoG Descriptor
Dalal, N. & Triggs, B. Histograms of Oriented Gradients for Human Detection, CVPR 2005
Using patches with HoG descriptors and classification by SVM
Person model: HoG
Object model using HoG
A bicycle and its ‘root filter’. The root filter is a patch of HoG descriptors: the image is partitioned into 8×8-pixel cells, and in each cell we compute a histogram of gradient orientations.
The filter is searched on a pyramid of HoG descriptors, to deal with unknown scale
Dealing with scale: multi-scale analysis
A part is Pi = (Fi, vi, si, ai, bi).
Fi is the filter for the i-th part; vi is the center of a box of possible positions for part i relative to the root position; si is the size of this box.
ai and bi are two-dimensional vectors specifying the coefficients of a quadratic function scoring each possible placement of the i-th part. That is, ai and bi are two numbers each, and the penalty for a deviation (∆x, ∆y) from the expected location is a1∆x + a2∆y + b1∆x² + b2∆y².
Adding Parts
Bicycle model: root, parts, spatial map
Person model
The full score of a potential match is:
∑ Fi ∙ Hi + ∑ (ai1 xi + ai2 yi + bi1 xi² + bi2 yi²)
Fi ∙ Hi is the appearance term.
(xi, yi) is the deviation of part Pi from its expected location in the model; this is the spatial term.
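A direct transcription of this score, with made-up filter responses and deformation coefficients (in a trained model the learned spatial coefficients typically make this term a penalty, i.e. negative):

```python
import numpy as np

# Toy values for a model with a root (i = 0) and two parts.
appearance = np.array([3.0, 1.5, 2.0])        # F_i . H_i at the chosen placements
dev = np.array([[0.0, 0.0],                   # (x_i, y_i): deviation of each part
                [1.0, -2.0],                  # from its expected location
                [0.5, 0.0]])
a = np.array([[0.0, 0.0], [0.1, 0.1], [0.1, 0.1]])  # linear coefficients (a1, a2)
b = np.array([[0.0, 0.0], [0.5, 0.5], [0.5, 0.5]])  # quadratic coefficients (b1, b2)

def match_score(appearance, dev, a, b):
    """sum_i F_i.H_i  +  sum_i (a1 x_i + a2 y_i + b1 x_i^2 + b2 y_i^2)."""
    spatial = (a * dev).sum() + (b * dev ** 2).sum()
    return appearance.sum() + spatial

score = match_score(appearance, dev, a, b)
print(score)  # 9.075
```

Recognition then amounts to maximizing this score over the placements (xi, yi) of all parts.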
Match Score
Search with gradient descent over the placements, including the levels in the hierarchy. Start with the root filter and find locations where its score is high. For each such high-scoring location, find the optimal placement of the parts at the level with twice the resolution of the root filter, using gradient descent.
Final decision: β ∙ ψ > θ implies class.
Recognition
Essentially, maximize ∑ Fi ∙ Hi + ∑ (ai1 xi + ai2 yi + bi1 xi² + bi2 yi²)
over the placements (xi, yi).
‘Pascal Challenge’ Airplanes
Obtaining human-level performance?
All images contain at least 1 bike
Bike Recognition