Computational Photography
Si Lu
Spring 2018
http://web.cecs.pdx.edu/~lusi/CS510/CS510_Computational_Photography.htm
04/24/2018
Last Time
o Relighting
n Tone Mapping
n HDR
Today
o Panorama
n Overview
n Feature detection
n Feature matching
o Mid-term project presentation
n Not a real mid-term
n 8-minute presentations
n Schedule: May 1 and 3
With slides by Prof. C. Dyer and K. Grauman
Panorama Building: History
Along the River During Ching Ming Festival, by Z.D. Zhang (1085-1145)
San Francisco from Rincon Hill, 1851, by Martin Behrman
Panorama Building: A Concise History
o The state of the art and practice is good at assembling images into panoramas
n Mid 90s – Commercial players (e.g., QuickTime VR)
n Late 90s – Robust stitchers (in research)
n Early 00s – Consumer stitching common
n Mid 00s – Automation
Stitching Recipe
o Align pairs of images
o Align all to a common frame
o Adjust (Global) & Blend
Stitching Images Together
When do two images “stitch”?
Images can be transformed to match
Images are related by Homographies
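To make this concrete, here is a minimal numpy sketch (function and variable names are mine, not from the slides) of how a 3x3 homography H maps image points via homogeneous coordinates:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography via homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # lift to (x, y, 1)
    mapped = homog @ H.T                                  # apply H
    return mapped[:, :2] / mapped[:, 2:3]                 # divide out w

# a pure translation by (5, -3) written as a homography
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])
```

The division by the third (w) coordinate is what makes a general homography more than an affine map.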
Compute Homographies
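A standard way to compute H from point correspondences is the direct linear transform (DLT). A minimal numpy sketch, assuming noise-free correspondences (real stitchers add coordinate normalization and RANSAC):

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: estimate the 3x3 H mapping src -> dst
    from at least 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two rows of A in A h = 0
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)       # null vector = right singular vector
    H = Vt[-1].reshape(3, 3)          # of the smallest singular value
    return H / H[2, 2]                # fix the arbitrary scale
```

With exact correspondences from a known H, the estimate recovers that H up to numerical precision.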
Automatic Feature Points Matching
o Match local neighborhoods around points
o Use descriptors to efficiently compare: SIFT
n [Lowe 04] most common choice
Stitching Recipe
o Align pairs of images
o Align all to a common frame
o Adjust (Global) & Blend
Wide Baseline Matching
• Images taken by cameras that are far apart make the correspondence problem very difficult
• Feature-based approach: Detect and match feature points in pairs of images
Credit: C. Dyer
• Detect feature points
• Find corresponding pairs
Matching with Features
Credit: C. Dyer
Matching with Features
o Problem 1:
n Detect the same point independently in both images
no chance to match!
We need a repeatable detector
Credit: C. Dyer
Matching with Features
o Problem 2:
n For each point correctly recognize the corresponding point
We need a reliable and distinctive descriptor
Credit: C. Dyer
o Local: features are local, so robust to occlusion and clutter (no prior segmentation)
o Invariant (or covariant) to many kinds of geometric and photometric transformations
o Robust: noise, blur, discretization, compression, etc. do not have a big impact on the feature
o Distinctive: individual features can be matched to a large database of objects
o Quantity: many features can be generated for even small objects
o Accurate: precise localization
o Efficient: close to real-time performance
Properties of an Ideal Feature
Credit: C. Dyer
Problem 1: Detecting Good Feature Points
Credit: C. Dyer
[Image from T. Tuytelaars ECCV 2006 tutorial]
o Hessian
o Harris
o Lowe: SIFT (DoG)
o Mikolajczyk & Schmid: Hessian/Harris-Laplacian/Affine
o Tuytelaars & Van Gool: EBR and IBR
o Matas: MSER
o Kadir & Brady: Salient Regions
o Others
Feature Detectors
Credit: C. Dyer
C. Harris, M. Stephens, “A Combined Corner and Edge Detector,” 1988
Harris Corner Point Detector
Credit: C. Dyer
o We should recognize the point by looking through a small window
o Shifting a window in any direction should give a large change in response
Harris Detector: Basic Idea
Credit: C. Dyer
“flat” region: no change in all directions
“edge”: no change along the edge direction
“corner”: significant change in all directions
Harris Detector: Basic Idea
Credit: C. Dyer
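The window-shift intuition can be checked numerically. A small numpy sketch (the synthetic image and function names are mine) that measures the raw SSD change for unit shifts of a window:

```python
import numpy as np

def shift_ssd(img, cy, cx, dy, dx, r=3):
    """E for one shift: sum of squared differences between the (2r+1)^2 window
    centered at (cy, cx) and the same window shifted by (dy, dx)."""
    w = img[cy - r:cy + r + 1, cx - r:cx + r + 1]
    ws = img[cy - r + dy:cy + r + 1 + dy, cx - r + dx:cx + r + 1 + dx]
    return float(((ws - w) ** 2).sum())

# synthetic image: bright square on a dark background
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0

shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
flat = [shift_ssd(img, 10, 10, dy, dx) for dy, dx in shifts]   # inside the square
edge_along = shift_ssd(img, 5, 10, 0, 1)                       # slide along the top edge
corner = [shift_ssd(img, 5, 5, dy, dx) for dy, dx in shifts]   # top-left corner
```

Here `flat` is zero for every shift, `edge_along` is zero because the window slides along the edge, and every entry of `corner` is positive, matching the three cases described above.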
Change of intensity for a (small) shift [u,v] in image I:

E(u,v) = Σ_{x,y} w(x,y) [ I(x+u, y+v) − I(x,y) ]²

Here I(x+u, y+v) is the shifted intensity, I(x,y) the original intensity, and w(x,y) is the weighting function: either 1 inside the window and 0 outside, or a Gaussian.
Harris Detector: Derivation
Credit: R. Szeliski
Apply 2nd order Taylor series expansion:

E(u,v) ≈ A u² + 2C uv + B v² = [u v] [A C; C B] [u v]ᵀ

where
A = Σ_{x,y} w(x,y) I_x²(x,y)
B = Σ_{x,y} w(x,y) I_y²(x,y)
C = Σ_{x,y} w(x,y) I_x(x,y) I_y(x,y)
with I_x = ∂I(x,y)/∂x and I_y = ∂I(x,y)/∂y.
Harris Detector
Credit: R. Szeliski
Expanding E(u,v) in a 2nd order Taylor series, we have, for small shifts [u,v], a bilinear approximation:

E(u,v) ≈ [u v] M [u v]ᵀ

where M is a 2×2 matrix computed from image derivatives:

M = Σ_{x,y} w(x,y) [ I_x²  I_x I_y ; I_x I_y  I_y² ]

Note: the sum is computed over a small neighborhood around the given pixel, with I_x = ∂I(x,y)/∂x and I_y = ∂I(x,y)/∂y.
Harris Corner Detector
Credit: R. Szeliski
E(u,v) ≈ [u v] M [u v]ᵀ

Intensity change in shifting window: eigenvalue analysis
λ1, λ2 – eigenvalues of M
The ellipse E(u,v) = const has axis lengths (λ_max)^(-1/2) and (λ_min)^(-1/2); the long axis, (λ_min)^(-1/2), points along the direction of slowest change, and the short axis, (λ_max)^(-1/2), along the direction of fastest change.
Harris Corner Detector
Credit: R. Szeliski
λ1 and λ2 both large
(figure: image patch and its SSD surface)
Selecting Good Features
Credit: C. Dyer
large λ1, small λ2
(figure: image patch and its SSD surface)
Selecting Good Features
Credit: C. Dyer
small λ1, small λ2
(figure: image patch and its SSD surface)
Selecting Good Features
Credit: C. Dyer
“Corner”: λ1 and λ2 both large, λ1 ~ λ2; E increases in all directions
“Edge”: λ1 >> λ2, or λ2 >> λ1
“Flat” region: λ1 and λ2 are small; E is almost constant in all directions
Classification of image points using eigenvalues of M:
Harris Corner Detector
Credit: C. Dyer
Harris Corner Detector
Measure of corner response:
R = det M − k (trace M)²
where det M = λ1 λ2 and trace M = λ1 + λ2
k is an empirically-determined constant; e.g., k = 0.05
Credit: C. Dyer
Harris Corner Detector
(figure: classification regions in the (λ1, λ2) plane: “Corner”, two “Edge” regions, “Flat”)
• R depends only on eigenvalues of M
• R is large for a corner
• R is negative with large magnitude for an edge
• |R| is small for a flat region
Credit: C. Dyer
Harris Corner Detector: Algorithm
o Algorithm:
1. Find points with large corner response function R
(i.e., R > threshold)
2. Take the points of local maxima of R (for localization) by non-maximum suppression
Credit: C. Dyer
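The two steps above can be sketched in plain numpy; the 3x3 summation window, k = 0.05, and the threshold are illustrative choices, not prescribed by the slides:

```python
import numpy as np

def box3(a):
    """Sum each pixel's 3x3 neighborhood (zero padding at the borders)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris(img, k=0.05, thresh=1e-4):
    """Return the response map R and corner locations after thresholding
    and 3x3 non-maximum suppression."""
    Iy, Ix = np.gradient(img.astype(float))
    A, B, C = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)  # entries of M
    R = A * B - C * C - k * (A + B) ** 2                   # det(M) - k trace(M)^2
    p = np.pad(R, 1, constant_values=-np.inf)
    h, w = R.shape
    is_max = np.ones_like(R, dtype=bool)
    for i in range(3):
        for j in range(3):
            if (i, j) != (1, 1):
                is_max &= R >= p[i:i + h, j:j + w]         # beats all 8 neighbors
    return R, np.argwhere((R > thresh) & is_max)

# synthetic test image: a bright square whose corners should be detected
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R, corners = harris(img)
```

On this image the detections cluster at the four corners of the square, while edges (negative R) and flat regions (R near zero) are rejected.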
Harris Detector: Example
Credit: C. Dyer
Compute corner response R = λ1 λ2 − k (λ1 + λ2)²
Harris Detector: Example
Credit: C. Dyer
Harris Detector: Example
Find points with large corner response: R > threshold
Credit: C. Dyer
Take only the points of local maxima of R
Harris Detector: Example
Credit: C. Dyer
Harris Detector: Example
Credit: C. Dyer
Interest points extracted with Harris (~ 500 points)
Harris Detector: Example
Credit: C. Dyer
Harris Detector: Example
Credit: C. Dyer
Harris Detector: Summary
o Average intensity change in direction [u,v] can be expressed in bilinear form:
o Describe a point in terms of eigenvalues of M:measure of corner response:
o A good (corner) point should have a large intensity change in all directions, i.e., R should be a large positive value
E(u,v) ≈ [u v] M [u v]ᵀ

R = λ1 λ2 − k (λ1 + λ2)²
Credit: C. Dyer
Harris Detector Properties
o Rotation invariance
Ellipse rotates but its shape (i.e., eigenvalues) remains the same
Corner response R is invariant to image rotation
Credit: C. Dyer
o But not invariant to image scale
Fine scale: All points will be classified as edges
Coarse scale: Corner
Harris Detector Properties
Credit: C. Dyer
Harris Detector Properties
o Quality of Harris detector for different scale changes
Repeatability rate = (# correct correspondences) / (# possible correspondences)
C. Schmid et al., “Evaluation of Interest Point Detectors,” IJCV 2000
Credit: C. Dyer
Invariant Local Features
o Goal: Detect the same interest points regardless of image changes due to translation, rotation, scale, viewpoint
o Geometry
n Rotation
n Similarity (rotation + uniform scale)
n Affine (scale dependent on direction); valid for: orthographic camera, locally planar object
o Photometry
n Affine intensity change (I → a I + b)
Models of Image Change
Credit: C. Dyer
SIFT Detector [Lowe ’04]
o Difference-of-Gaussian (DoG) is an approximation of the Laplacian-of-Gaussian (LoG)
Lowe, D. G., “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, 60(2), pp. 91-110, 2004
Credit: C. Dyer
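A minimal numpy sketch of the DoG; the 1.6 scale ratio follows common SIFT practice, and the helper names are mine:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with a kernel truncated at radius 3*sigma."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(np.convolve, 0, img.astype(float), k, mode='same')
    return np.apply_along_axis(np.convolve, 1, out, k, mode='same')

def difference_of_gaussian(img, sigma=1.0, ratio=1.6):
    """DoG = fine blur minus coarse blur; approximates the LoG
    up to sign and a scale factor."""
    return gaussian_blur(img, sigma) - gaussian_blur(img, ratio * sigma)
```

On a constant region the DoG is zero; on an isolated bright dot it peaks at the dot, which is the blob response whose extrema the SIFT detector searches for.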
SIFT Detector
Credit: C. Dyer
SIFT Detector
Credit: C. Dyer
SIFT Detector Algorithm Summary
o Detect local maxima in position and scale of squared values of difference-of-Gaussian
o Fit a quadratic to surrounding values for sub-pixel and sub-scale interpolation
o Output = list of (x, y, σ) points
(figure: blur, resample, and subtract pyramid)
Credit: C. Dyer
References on Feature Descriptors
o A performance evaluation of local descriptors, K. Mikolajczyk and C. Schmid, IEEE Trans. PAMI 27(10), 2005
o Evaluation of features detectors and descriptors based on 3D objects, P. Moreels and P. Perona, Int. J. Computer Vision 73(3), 2007
Credit: C. Dyer
Today
o Panorama
n Overview
n Feature detection
n Feature matching
With slides by Prof. C. Dyer and K. Grauman
Stitching Recipe
o Align pairs of images
n Feature Detection
n Feature Matching
n Homography Estimation
o Align all to a common frame
o Adjust (Global) & Blend
Invariant Local Features
o Goal: Detect the same interest points regardless of image changes due to translation, rotation, scale, viewpoint
o After detecting points (and patches) in each image
o Next question: How to match them?
Point descriptor should be:
1. Invariant
2. Distinctive
Feature Point Descriptors
All the following slides are adapted from Prof. C. Dyer’s course, except those with explicit acknowledgement.
1. Detection: Identify the interest points
2. Description: Extract feature vector for each interest point
3. Matching: Determine correspondence between descriptors in two views
x1 = [x1^(1), …, xd^(1)]
x2 = [x1^(2), …, xd^(2)]
Local Features: Description
Geometric Transformations
e.g. scale, translation, rotation
Photometric Transformations
Figure from T. Tuytelaars ECCV 2006 tutorial
Raw Patches as Local Descriptors
The simplest way to describe the neighborhood around an interest point is to write down the list of intensities to form a feature vector
But this is very sensitive to even small shifts or rotations
§ Find local orientation
Dominant direction of gradient:
§ Compute description relative to this orientation
1. K. Mikolajczyk, C. Schmid, “Indexing Based on Scale Invariant Interest Points,” ICCV 2001
2. D. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” IJCV 2004
Making Descriptors Invariant to Rotation
o Compute a histogram of local gradient directions, computed at the selected scale in the neighborhood of a feature point, relative to the dominant local orientation
o Compute gradients within sub-patches, and compute histogram of orientations using discrete “bins”
o Descriptor is rotation and scale invariant, and also has some illumination invariance (why?)
(figure: orientation histogram over 0 to 2π)
SIFT Descriptor: Select Major Orientation
o Compute gradient orientation histograms on 4 x 4 neighborhoods, over a 16 x 16 array of locations in scale space around each keypoint position, relative to the keypoint orientation, using thresholded image gradients from the Gaussian pyramid level at the keypoint’s scale
o Quantize orientations to 8 values
o 4 x 4 array of histograms
o SIFT feature vector of length 4 x 4 x 8 = 128 values for each keypoint
o Normalize the descriptor to make it invariant to intensity change
D.Lowe. “Distinctive Image Features from Scale-Invariant Keypoints,” IJCV 2004
SIFT Descriptor
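A toy numpy sketch of the 4 x 4 x 8 histogram layout (it skips scale space, orientation normalization, Gaussian weighting, and gradient thresholding; the raw-patch input and the names are my simplification):

```python
import numpy as np

def sift_like_descriptor(patch):
    """128-D descriptor: 8-bin orientation histograms over a 4x4 grid of
    4x4-pixel cells in a 16x16 patch, weighted by gradient magnitude."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    bins = (ang // (2 * np.pi / 8)).astype(int) % 8   # quantize to 8 orientations
    desc = np.zeros(128)
    for r in range(16):
        for c in range(16):
            cell = (r // 4) * 4 + (c // 4)            # which of the 16 cells
            desc[cell * 8 + bins[r, c]] += mag[r, c]
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc                # normalize for illumination
```

The final normalization is what gives the descriptor its partial invariance to intensity change.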
o Stable (repeatable) feature points can currently be detected that are invariant to
n Rotation, scale, and affine transformations, but not to more general perspective and projective transformations
o Feature point descriptors can be computed, but
n are noisy due to use of differential operators
n are not invariant to projective transformations
Feature Detection and Description Summary
Feature Matching
Wide-Baseline Feature Matching
o Standard approach for pair-wise matching:
n For each feature point in image A, find the feature point with the closest descriptor in image B
From Schaffalitzky and Zisserman ’02
Wide-Baseline Feature Matching
o Compare the distance, d1, to the closest feature, to the distance, d2, to the second closest feature
o Accept if d1/d2 < 0.6
n If the ratio of distances is less than a threshold, keep the match
o Why the ratio test?
n Eliminates hard-to-match repeated features
n Distances in SIFT descriptor space seem to be non-uniform
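The ratio test takes only a few lines of numpy (toy 2-D descriptors here; real SIFT descriptors are 128-D):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.6):
    """Match each descriptor in desc_a to its nearest neighbor in desc_b,
    keeping the match only if d1 < ratio * d2 (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        nn1, nn2 = np.argsort(dists)[:2]
        if dists[nn1] < ratio * dists[nn2]:
            matches.append((i, int(nn1)))
    return matches
```

A query whose two nearest candidates are nearly equidistant is rejected as ambiguous, which is precisely how repeated structures are filtered out.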
Feature Matching
o Exhaustive search
n For each feature in one image, look at all the other features in the other image(s)
o Hashing
n Compute a short descriptor from each feature vector, or hash longer descriptors (randomly)
o Nearest neighbor techniques
n k-d trees and their variants
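The "hash longer descriptors (randomly)" option can be sketched with random-hyperplane hashing, a common locality-sensitive hashing scheme; the class name and bucket layout here are mine:

```python
import numpy as np

class HyperplaneHash:
    """Bucket descriptors by the sign pattern of a few random projections;
    a query then only scans candidates in its own bucket."""
    def __init__(self, dim, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}

    def key(self, v):
        return tuple((self.planes @ np.asarray(v, float) > 0).astype(int))

    def add(self, idx, v):
        self.buckets.setdefault(self.key(v), []).append((idx, np.asarray(v, float)))

    def query(self, v):
        """Return the index of the closest stored descriptor in v's bucket
        (or None if the bucket is empty)."""
        cands = self.buckets.get(self.key(v), [])
        if not cands:
            return None
        return min(cands, key=lambda p: np.linalg.norm(p[1] - v))[0]
```

Nearby descriptors tend to fall on the same side of most random hyperplanes, so they usually share a bucket; the trade-off is that true neighbors can occasionally land in different buckets, which is why such search is approximate.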
Wide-Baseline Feature Matching
o Because of the high dimensionality of features, approximate nearest neighbors are necessary for efficient performance
o See the ANN package by Mount and Arya: http://www.cs.umd.edu/~mount/ANN/
Next Time
o Panorama
n Homography estimation
n Blending
n Multi-perspective panoramas