Lecture 08
27/12/2011, Shai Avidan
Clarification: the binding course material is the material taught in class, not what does or does not appear in these slides.
Today
• Hough Transform
• Generalized Hough Transform
• Implicit Shape Model
• Video Google
Hough Transform & Generalized Hough Transform
K. Grauman, B. Leibe
Hough Transform
• Origin: detection of straight lines in clutter
– Basic idea: each candidate point votes for all lines that it is consistent with.
– Votes are accumulated in a quantized array.
– Local maxima correspond to candidate lines.
• Representation of a line
– Usual form y = ax + b has a singularity around 90º.
– Better parameterization: x cos(θ) + y sin(θ) = ρ
[Figure: a line in the (x, y) plane parameterized by angle θ and distance ρ]
K. Grauman, B. Leibe
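The voting scheme above can be sketched as follows (a minimal illustration; the accumulator sizes, the ρ range, and the toy point set are assumptions, not from the lecture):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=100.0):
    """Accumulate votes in (theta, rho) space.

    Each point votes for every line x*cos(theta) + y*sin(theta) = rho
    that passes through it; local maxima are candidate lines.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        # rho of the line through (x, y) for every quantized theta
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        valid = (bins >= 0) & (bins < n_rho)
        acc[np.nonzero(valid)[0], bins[valid]] += 1
    return acc, thetas

# Points on the horizontal line y = 10 should peak near theta = 90 deg, rho = 10.
pts = [(x, 10.0) for x in range(-20, 21)]
acc, thetas = hough_lines(pts)
t_idx, r_idx = np.unravel_index(np.argmax(acc), acc.shape)
```

All 41 collinear points fall into one accumulator cell, so the peak stands out sharply against the spread-out votes at other angles.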
Examples
– Hough transform for a square (left) and a circle (right)
K. Grauman, B. Leibe
Hough Transform: Noisy Line
• Problem: Finding the true maximum
[Figure: tokens (left) and their votes in (θ, ρ) space (right)]
K. Grauman, B. Leibe
Hough Transform: Noisy Input
• Problem: Lots of spurious maxima
[Figure: tokens (left) and their votes in (θ, ρ) space (right)]
K. Grauman, B. Leibe
Generalized Hough Transform [Ballard81]
• Generalization to an arbitrary contour or shape
– Choose a reference point for the contour (e.g., its center).
– For each point on the contour, remember where it is located w.r.t. the reference point.
– Remember the radius r and the angle relative to the contour tangent.
– Recognition: whenever you find a contour point, compute the tangent angle and 'vote' for all possible reference points.
– Instead of a reference point, one can also vote for a transformation.
The same idea can be used with local features!
Slide credit: Bernt Schiele
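Ballard's R-table construction and the voting step can be sketched roughly like this (an illustrative toy version; the function names, the 36-bin angle quantization, and the assumption that tangent angles lie in [0, 2π) are mine):

```python
import math
from collections import defaultdict

def build_r_table(boundary, ref, n_bins=36):
    """R-table: for each quantized tangent angle, the offsets from a
    boundary point to the reference point.
    `boundary` is a list of ((x, y), tangent_angle) pairs."""
    table = defaultdict(list)
    for (x, y), ang in boundary:
        b = int(ang / (2 * math.pi) * n_bins) % n_bins
        table[b].append((ref[0] - x, ref[1] - y))
    return table

def vote(edges, table, n_bins=36):
    """Each detected edge point votes for every reference point consistent
    with its tangent angle; the peak is the object's hypothesized location."""
    acc = defaultdict(int)
    for (x, y), ang in edges:
        b = int(ang / (2 * math.pi) * n_bins) % n_bins
        for dx, dy in table[b]:
            acc[(x + dx, y + dy)] += 1
    return acc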
Implicit Shape Model
K. Grauman, B. Leibe
Gen. Hough Transform with Local Features
• For every feature, store possible “occurrences”
• For new image, let the matched features vote for possible object positions
K. Grauman, B. Leibe
3D Object Recognition
• Gen. HT for Recognition
– Typically only 3 feature matches are needed for recognition.
– Extra matches provide robustness.
– An affine model can be used for planar objects.
Slide credit: David Lowe
[Lowe99]
K. Grauman, B. Leibe
View Interpolation
• Training
– Training views from similar viewpoints are clustered based on feature matches.
– Matching features between adjacent views are linked.
• Recognition
– Feature matches may be spread over several training viewpoints.
– Use the known links to "transfer votes" to other viewpoints.
[Lowe01]
K. Grauman, B. Leibe
Recognition Using View Interpolation
K. Grauman, B. Leibe
Location Recognition
Training
K. Grauman, B. Leibe
Applications
• Sony Aibo (Evolution Robotics)
• SIFT usage
– Recognize docking station
– Communicate with visual cards
• Other uses
– Place recognition
– Loop closure in SLAM
Slide credit: David Lowe
Video Google
Indexing local features
• Each patch / region has a descriptor, which is a point in some high-dimensional feature space (e.g., SIFT)
K. Grauman, B. Leibe
Indexing local features
• Descriptors that are close together in feature space indicate similar local image content.
Figure credit: A. ZissermanK. Grauman, B. Leibe
Indexing local features
• We saw in the previous section how to use voting and pose clustering to identify objects using local features
K. Grauman, B. Leibe
Figure credit: David Lowe
Indexing local features
• With potentially thousands of features per image, and hundreds to millions of images to search, how can we efficiently find those that are relevant to a new image?
– Low-dimensional descriptors: standard efficient data structures for nearest-neighbor search can be used.
– High-dimensional descriptors: approximate nearest-neighbor search methods are more practical.
– Inverted file indexing schemes
K. Grauman, B. Leibe
• For text documents, an efficient way to find all pages on which a word occurs is to use an index…
• We want to find all images in which a feature occurs.
• To use this idea, we’ll need to map our features to “visual words”.
K. Grauman, B. Leibe
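A minimal inverted file over visual words might look like this (the function names and data layout are illustrative assumptions, not the paper's implementation):

```python
from collections import defaultdict

def build_inverted_index(image_words):
    """Map each visual word id to the set of image ids containing it.
    `image_words` is {image_id: iterable of visual word ids}."""
    index = defaultdict(set)
    for img, words in image_words.items():
        for w in words:
            index[w].add(img)
    return index

def query(index, query_words):
    """Return candidate images ranked by how many query words they share.
    Only images containing at least one query word are ever touched,
    which is the whole point of the inverted file."""
    hits = defaultdict(int)
    for w in set(query_words):
        for img in index.get(w, ()):
            hits[img] += 1
    return sorted(hits, key=hits.get, reverse=True)
```

For example, querying words {5, 7} against three indexed images returns the image sharing both words ahead of the image sharing only one.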
Indexing local features: inverted file index
Visual words
K. Grauman, B. Leibe
• Visual words have more recently been used to describe scenes and objects for the sake of indexing or classification.
Sivic & Zisserman 2003; Csurka, Bray, Dance, & Fan 2004; many others.
Inverted file index for images composed of visual words
Image credit: A. Zisserman; K. Grauman, B. Leibe
[Figure: inverted file mapping each word number to a list of image numbers]
Bags of visual words
• Summarize an entire image by its distribution (histogram) of visual word occurrences.
• Analogous to the bag-of-words representation commonly used for documents.
K. Grauman, B. Leibe. Image credit: Fei-Fei Li
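The bag-of-visual-words summary can be sketched as a normalized word histogram, compared here with histogram intersection (one common similarity choice among several; the helper names are assumptions):

```python
import numpy as np

def bow_histogram(word_ids, vocab_size):
    """Summarize an image by its normalized visual-word histogram."""
    h = np.bincount(word_ids, minlength=vocab_size).astype(float)
    return h / max(h.sum(), 1.0)

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical histograms, 0.0 for disjoint ones."""
    return float(np.minimum(h1, h2).sum())
```

Because the histogram discards feature positions, two images with the same word counts look identical here; spatial layout is checked later in a verification step.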
Video Google System
1. Collect all words within the query region.
2. Use the inverted file index to find relevant frames.
3. Compare word counts.
4. Spatial verification.
Sivic & Zisserman, ICCV 2003
• Demo online at: http://www.robots.ox.ac.uk/~vgg/research/vgoogle/index.html
K. Grauman, B. Leibe
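Step 3, comparing word counts, uses the tf-idf weighting borrowed from text retrieval: words frequent in one frame but rare across the corpus get high weight. A rough sketch (the function signature is an assumption):

```python
import math

def tfidf_vector(word_counts, doc_freq, n_docs):
    """tf-idf weights for one image: term frequency within the image times
    log inverse document frequency across the corpus. Words that appear in
    every image get weight 0 and so never influence ranking."""
    total = sum(word_counts.values())
    return {w: (c / total) * math.log(n_docs / doc_freq[w])
            for w, c in word_counts.items()}
```

Ranking then compares these weighted vectors (e.g., by normalized scalar product) instead of raw counts.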
[Figure: query region and the retrieved frames]
Visual vocabulary formation
Issues:
• Sampling strategy
• Clustering / quantization algorithm
• What corpus provides features (universal vocabulary?)
• Vocabulary size, number of words
K. Grauman, B. Leibe
Sampling strategies
K. Grauman, B. Leibe. Image credits: F-F. Li, E. Nowak, J. Sivic
[Figure: sampling strategies: dense/uniform; sparse, at interest points; random; multiple interest operators]
• To find specific, textured objects, sparse sampling at interest points is often more reliable.
• Multiple complementary interest operators offer more image coverage.
• For object categorization, dense sampling offers better coverage.
[See Nowak, Jurie & Triggs, ECCV 2006]
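Dense, uniform sampling can be sketched as a simple grid of keypoint locations (an illustrative helper, not from the slides; the half-step offset centering the grid is my choice):

```python
def dense_keypoints(width, height, step):
    """Dense, uniform sampling: keypoint locations on a regular grid,
    offset by half a step so points sit away from the image border."""
    return [(x, y) for y in range(step // 2, height, step)
                   for x in range(step // 2, width, step)]
```

A descriptor (e.g., SIFT) is then computed at every grid location regardless of image content, which is what gives dense sampling its coverage advantage for categorization.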
Clustering / quantization methods
• k-means (typical choice), agglomerative clustering, mean-shift,…
K. Grauman, B. Leibe
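k-means, the typical vocabulary-building choice above, can be sketched in a few lines (plain Lloyd iterations with a fixed iteration count; no convergence check, and all sizes here are toy assumptions):

```python
import numpy as np

def kmeans(descriptors, k, n_iter=20, seed=0):
    """Cluster descriptors with k-means; the centers become the visual words."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(n_iter):
        # assign every descriptor to its nearest center, then recompute means
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = descriptors[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def quantize(descriptors, centers):
    """Map each descriptor to the id of its nearest visual word."""
    d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    return d.argmin(axis=1)
```

On well-separated descriptor clusters, `quantize` assigns all members of a cluster the same word id, which is exactly the property the bag-of-words histogram relies on.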