Representative Previous Work


• ISOMAP: Geodesic Distance Preserving (J. Tenenbaum et al., 2000)

• LLE: Local Neighborhood Relationship Preserving (S. Roweis & L. Saul, 2000)

• LE/LPP: Local Similarity Preserving (M. Belkin, P. Niyogi et al., 2001, 2003)

• PCA, LDA

Dimensionality Reduction Algorithms

• Any common perspective to understand and explain these dimensionality reduction algorithms? Or any unified formulation that is shared by them?

• Any general tool to guide the development of new dimensionality reduction algorithms?

Existing algorithms number in the hundreds and can be grouped as statistics-based (PCA/KPCA, LDA/KDA, ...) or geometry-based (ISOMAP, LLE, LE/LPP, ...), operating on either matrix or tensor representations.

Our Answers

Type                     Formulation                                                 Example
Direct Graph Embedding   $y^* = \arg\min_{y^T B y = 1} y^T L y$                      Original PCA & LDA, ISOMAP, LLE, Laplacian Eigenmap
Linearization            $y = X^T w$                                                 PCA, LDA, LPP
Kernelization            $w = \sum_i \alpha_i \phi(x_i)$                             KPCA, KDA
Tensorization            $y_i = X_i \times_1 w_1 \times_2 w_2 \cdots \times_n w_n$   CSA, DATER

S. Yan, D. Xu, H. Zhang, et al., CVPR 2005; T-PAMI 2007

Direct Graph Embedding

Data in high-dimensional space and low-dimensional space (assumed as 1D space here): $X = [x_1, x_2, \ldots, x_N]$, $y = [y_1, y_2, \ldots, y_N]^T$

Intrinsic graph: $G = \{X, S\}$; Penalty graph: $G^P = \{X, S^P\}$

$S$, $S^P$: similarity matrices (graph edge weights encode similarity in the high-dimensional space)

$L$, $B$: Laplacian matrices from $S$, $S^P$, with $L = D - S$, $D_{ii} = \sum_{j \ne i} S_{ij}\ \forall i$

Direct Graph Embedding -- Continued

Data in high-dimensional space and low-dimensional space (assumed as 1D space here): $X = [x_1, x_2, \ldots, x_N]$, $y = [y_1, y_2, \ldots, y_N]^T$

Intrinsic graph: $G = \{X, S\}$; Penalty graph: $G^P = \{X, S^P\}$; $S$, $S^P$: similarity matrices (similarity measured in the high-dimensional space); $L$, $B$: Laplacian matrices from $S$, $S^P$, with $L = D - S$, $D_{ii} = \sum_{j \ne i} S_{ij}\ \forall i$.

Criterion to preserve graph similarity:

$y^* = \arg\min_{y^T B y = 1} \sum_{i \ne j} \| y_i - y_j \|^2 S_{ij} = \arg\min_{y^T B y = 1} y^T L y$

Special case, $B$ is the identity matrix (scale normalization):

$y^* = \arg\min_{y^T y = 1} \sum_{i \ne j} \| y_i - y_j \|^2 S_{ij}$

Problem: it cannot handle new test data.
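In practice the criterion above is solved as a generalized eigenvalue problem $L y = \lambda B y$. Below is a minimal Python sketch; the function name, the use of SciPy, and the assumption that $B$ is symmetric positive definite are illustrative choices, not details from the slides.

```python
import numpy as np
from scipy.linalg import eigh

def direct_graph_embedding(S, B=None, dim=1):
    """Sketch: minimize y^T L y subject to y^T B y = 1.

    S : (N, N) similarity matrix of the intrinsic graph (zero diagonal).
    B : (N, N) constraint matrix (penalty-graph Laplacian, or I for plain
        scale normalization). Assumed symmetric positive definite.
    """
    N = S.shape[0]
    D = np.diag(S.sum(axis=1))   # D_ii = sum_{j != i} S_ij
    L = D - S                    # intrinsic-graph Laplacian
    if B is None:
        B = np.eye(N)            # special case: scale normalization
    # Generalized eigenproblem L y = lambda B y; keep the smallest eigenvalues.
    eigvals, eigvecs = eigh(L, B)
    return eigvecs[:, :dim]
```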

Linearization

Linear mapping function: $y = X^T w$

Objective function in linearization:

$w^* = \arg\min_{w^T X B X^T w = 1} w^T X L X^T w$

(the objective uses the intrinsic-graph Laplacian $L$; the constraint uses the penalty-graph matrix $B$)

Problem: a linear mapping function may not be enough to preserve the real nonlinear structure.
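The linearized criterion is again a generalized eigenvalue problem, now on $X L X^T$ and $X B X^T$. A minimal sketch under the same assumptions as before; the small ridge term is an added numerical safeguard, not part of the slides.

```python
import numpy as np
from scipy.linalg import eigh

def linearized_graph_embedding(X, S, B=None, dim=1):
    """Sketch: minimize w^T X L X^T w subject to w^T X B X^T w = 1.

    X : (d, N) data matrix, one sample per column.
    """
    N = X.shape[1]
    L = np.diag(S.sum(axis=1)) - S
    if B is None:
        B = np.eye(N)
    A = X @ L @ X.T
    C = X @ B @ X.T + 1e-6 * np.eye(X.shape[0])   # ridge keeps C positive definite
    eigvals, eigvecs = eigh(A, C)
    return eigvecs[:, :dim]                       # projection directions w
```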

Kernelization

Nonlinear mapping: $\phi: x \to F$, from the original input space to another, higher-dimensional Hilbert space.

Kernel matrix: $k(x, y) = \phi(x) \cdot \phi(y)$, $K_{ij} = k(x_i, x_j)$

Constraint (representer form): $w = \sum_i \alpha_i \phi(x_i)$

Objective function in kernelization:

$\alpha^* = \arg\min_{\alpha^T K B K \alpha = 1} \alpha^T K L K \alpha$

(intrinsic graph in the objective, penalty graph in the constraint)
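Analogously, kernelization leads to a generalized eigenvalue problem on $K L K$ and $K B K$. A minimal sketch, again with an added ridge term for numerical stability.

```python
import numpy as np
from scipy.linalg import eigh

def kernelized_graph_embedding(K, S, B=None, dim=1):
    """Sketch: minimize alpha^T K L K alpha subject to alpha^T K B K alpha = 1.

    K : (N, N) kernel matrix, K_ij = k(x_i, x_j).
    """
    N = K.shape[0]
    L = np.diag(S.sum(axis=1)) - S
    if B is None:
        B = np.eye(N)
    A = K @ L @ K
    C = K @ B @ K + 1e-6 * np.eye(N)   # ridge for numerical stability
    eigvals, eigvecs = eigh(A, C)
    return eigvecs[:, :dim]            # expansion coefficients alpha
```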

Tensorization

Low-dimensional representation is obtained as: $y_i = X_i \times_1 w_1 \times_2 w_2 \cdots \times_n w_n$

Objective function in tensorization:

$(w_1, \ldots, w_n)^* = \arg\min_{f(w_1, \ldots, w_n) = 1} \sum_{i \ne j} \| X_i \times_1 w_1 \cdots \times_n w_n - X_j \times_1 w_1 \cdots \times_n w_n \|^2 S_{ij}$   (intrinsic graph)

where

$f(w_1, \ldots, w_n) = \sum_i \| X_i \times_1 w_1 \cdots \times_n w_n \|^2 B_{ii}$, or

$f(w_1, \ldots, w_n) = \sum_{i \ne j} \| X_i \times_1 w_1 \cdots \times_n w_n - X_j \times_1 w_1 \cdots \times_n w_n \|^2 S^P_{ij}$   (penalty graph)
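For intuition, the multilinear projection $y_i = X_i \times_1 w_1 \cdots \times_n w_n$ can be computed by contracting one mode at a time; for a second-order tensor (a matrix) it reduces to $w_1^T X_i w_2$. The small sketch below only illustrates this projection; the alternating optimization of the $w_k$ themselves is not shown.

```python
import numpy as np

def multilinear_projection(Xi, ws):
    """Compute y_i = Xi x_1 w_1 x_2 w_2 ... x_n w_n (a scalar for vector w_k)."""
    y = Xi
    for w in ws:
        # Contracting the leading axis each time walks through the modes in order.
        y = np.tensordot(w, y, axes=(0, 0))
    return y

# Second-order check: y = w1^T Xi w2.
Xi = np.random.rand(32, 32)
w1, w2 = np.random.rand(32), np.random.rand(32)
assert np.isclose(multilinear_projection(Xi, [w1, w2]), w1 @ Xi @ w2)
```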

Common Formulation

Intrinsic graph: $G = \{X, S\}$; penalty graph: $G^P = \{X, S^P\}$; $S$, $S^P$: similarity matrices; $L$, $B$: Laplacian matrices from $S$, $S^P$.

Direct graph embedding: $y^* = \arg\min_{y^T B y = 1} y^T L y$

Linearization: $w^* = \arg\min_{w^T X B X^T w = 1} w^T X L X^T w$

Kernelization: $\alpha^* = \arg\min_{\alpha^T K B K \alpha = 1} \alpha^T K L K \alpha$

Tensorization: $(w_1, \ldots, w_n)^* = \arg\min_{f(w_1, \ldots, w_n) = 1} \sum_{i \ne j} \| X_i \times_1 w_1 \cdots \times_n w_n - X_j \times_1 w_1 \cdots \times_n w_n \|^2 S_{ij}$, with $f$ defined from $B$ or $S^P$ as on the previous slide.

A General Framework for Dimensionality Reduction

Algorithm        S & B Definition                                                                   Embedding Type
PCA/KPCA/CSA     $S_{ij} = 1/N,\ i \ne j$; $B = I$                                                  L/K/T
LDA/KDA/DATER    $S_{ij} = \delta_{l_i, l_j} / n_{l_i}$; $B = I - \frac{1}{N} e e^T$                L/K/T
ISOMAP           $S_{ij} = \tau(D_G)_{ij},\ i \ne j$; $B = I$                                       D
LLE              $S = M + M^T - M^T M$; $B = I$                                                     D
LE/LPP           $S_{ij} = \exp\{-\|x_i - x_j\|^2 / t\}$ if $x_i$, $x_j$ are neighbors; $B = D$     D/L

D: Direct Graph Embedding, L: Linearization, K: Kernelization, T: Tensorization
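As one concrete instantiation of the table, the LE/LPP row can be built as follows. The neighborhood rule (k-nearest neighbors) and parameter names are illustrative assumptions for this sketch.

```python
import numpy as np

def lpp_graph(X, k=5, t=1.0):
    """Sketch of the LE/LPP definitions: S_ij = exp(-||x_i - x_j||^2 / t)
    for neighboring pairs, and B = D (the degree matrix).

    X : (d, N) data matrix, one sample per column.
    """
    N = X.shape[1]
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)   # squared distances
    S = np.zeros((N, N))
    for i in range(N):
        nbrs = np.argsort(d2[i])[1:k + 1]                      # skip the point itself
        S[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    S = np.maximum(S, S.T)                                     # symmetrize
    D = np.diag(S.sum(axis=1))
    return S, D                                                # use B = D
```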

New Dimensionality Reduction Algorithm: Marginal Fisher Analysis

Important information for face recognition: 1) label information; 2) local manifold structure (neighborhood or margin).

Intrinsic graph: $S_{ij} = 1$ if $x_i$ is among the $k_1$-nearest neighbors of $x_j$ in the same class; $0$ otherwise.

Penalty graph: $S^P_{ij} = 1$ if the pair $(i, j)$ is among the $k_2$ shortest pairs in the data set; $0$ otherwise.
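A minimal sketch of building the two MFA graphs with binary weights. The between-class selection below is one simple reading of "the k2 shortest pairs" (the published MFA selects marginal between-class pairs per class), so treat it as illustrative.

```python
import numpy as np

def mfa_graphs(X, labels, k1=5, k2=20):
    """Sketch of the MFA intrinsic graph S and penalty graph Sp.

    X : (d, N) data, labels : (N,) class labels.
    """
    N = X.shape[1]
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    S = np.zeros((N, N))
    Sp = np.zeros((N, N))
    for j in range(N):
        same = np.where(labels == labels[j])[0]
        same = same[same != j]
        nn = same[np.argsort(d2[same, j])[:k1]]   # k1 same-class neighbors of x_j
        S[nn, j] = S[j, nn] = 1.0
    # Between-class pairs, keep the k2 closest (each unordered pair appears twice).
    di, dj = np.where(labels[:, None] != labels[None, :])
    order = np.argsort(d2[di, dj])[:2 * k2]
    Sp[di[order], dj[order]] = 1.0
    Sp = np.maximum(Sp, Sp.T)
    return S, Sp
```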

Marginal Fisher Analysis: Advantage

No Gaussian distribution assumption

Experiments: Face Recognition

PIE-1                      G3/P7    G4/P6
PCA+LDA (Linearization)    65.8%    80.2%
PCA+MFA (Ours)             71.0%    84.9%
KDA (Kernelization)        70.0%    81.0%
KMFA (Ours)                72.3%    85.2%
DATER-2 (Tensorization)    80.0%    82.3%
TMFA-2 (Ours)              82.1%    85.2%

ORL                        G3/P7    G4/P6
PCA+LDA (Linearization)    87.9%    88.3%
PCA+MFA (Ours)             89.3%    91.3%
KDA (Kernelization)        87.5%    91.7%
KMFA (Ours)                88.6%    93.8%
DATER-2 (Tensorization)    89.3%    92.0%
TMFA-2 (Ours)              95.0%    96.3%

Summary

• Optimization framework that unifies previous dimensionality reduction algorithms as special cases.

• A new dimensionality reduction algorithm: Marginal Fisher Analysis.

Event Recognition in News Video

Online and offline video search. 56 events are defined in LSCOM.

Example events: Airplane Flying, Exiting Car, Riot.

Challenges: geometric and photometric variances, cluttered backgrounds, and complex camera and object motion. Events are much more diverse!

Earth Mover’s Distance in Temporal Domain (T-MM, Under Review)

[Figure: key frames of two video clips in class "riot"; clip P is represented by frames P1, ..., Pm and clip Q by frames Q1, ..., Qn, and the two sets of frames are matched against each other.]

EMD can efficiently utilize the information from multiple frames.
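For reference, EMD between two clips (sets of key-frame features with weights) can be computed as a small transportation problem. The sketch below uses SciPy's linear-programming routine; the feature representation and the frame weights are left to the caller and are assumptions of this example, not details given on the slide.

```python
import numpy as np
from scipy.optimize import linprog

def emd(P, Q, wP, wQ):
    """Sketch of EMD between two clips represented as sets of key-frame features.

    P : (m, d) features of clip 1, wP : (m,) nonnegative frame weights.
    Q : (n, d) features of clip 2, wQ : (n,) nonnegative frame weights.
    """
    m, n = len(P), len(Q)
    # Ground distance between every pair of frames (Euclidean here).
    cost = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2).ravel()
    # Row sums of the flow <= wP, column sums <= wQ.
    A_ub = np.zeros((m + n, m * n))
    for i in range(m):
        A_ub[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):
        A_ub[m + j, j::n] = 1.0
    b_ub = np.concatenate([wP, wQ])
    # Total flow must equal the smaller total weight.
    A_eq = np.ones((1, m * n))
    total_flow = min(wP.sum(), wQ.sum())
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[total_flow],
                  bounds=(0, None))
    return res.fun / total_flow
```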

Multi-level Pyramid Matching (CVPR 2007, Under Review)

[Figure: each clip is decomposed into a temporal pyramid. At level 0 the whole clips (P0, Q0) are matched; at level 1 each clip is split into subclips (P11, P12 vs. Q11, Q12), and at finer levels into more subclips (P21, ..., P24 vs. Q21, ..., Q24), so that corresponding stages of an event, such as "fire" and "smoke", are compared with each other.]

Solution: Multi-level Pyramid Matching in Temporal Domain

One clip = several subclips (stages of event evolution). There is no prior knowledge about the number of stages in an event, and videos of the same event may include only a subset of the stages.
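A rough sketch of temporal pyramid matching under these assumptions: each level splits a clip into equal-length subclips and scores aligned subclips with some sub-clip similarity (e.g., the EMD above). The level weighting follows the usual pyramid-matching convention and is an assumption of this sketch, not a detail from the slide.

```python
import numpy as np

def temporal_pyramid_similarity(clip_p, clip_q, subclip_sim, levels=2):
    """Sketch: split each clip into 2**l subclips at level l and score aligned
    subclips with `subclip_sim` (e.g., an EMD-based similarity).

    clip_p, clip_q : arrays/lists of per-frame feature vectors.
    """
    total = 0.0
    for l in range(levels + 1):
        parts = 2 ** l
        p_parts = np.array_split(np.asarray(clip_p), parts)
        q_parts = np.array_split(np.asarray(clip_q), parts)
        level_score = np.mean([subclip_sim(p, q) for p, q in zip(p_parts, q_parts)])
        total += level_score / 2 ** (levels - l)   # finer levels weighted more
    return total
```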

Other Publications & Professional Activities

Other Publications:
• Kernel-based Learning: Coupled Kernel-based Subspace Analysis (CVPR 2005); Fisher+Kernel Criterion for Discriminant Analysis (CVPR 2005)
• Manifold Learning: Nonlinear Discriminant Analysis on Embedding Manifold (T-CSVT, accepted)
• Face Verification: Face Verification with Balanced Thresholds (T-IP, accepted)
• Multimedia: Insignificant Shadow Detection for Video Segmentation (T-CSVT 2005); Anchorperson Extraction for Picture-in-Picture News Video (PRL 2005)

Guest Editor:
• Special issue on Video Analysis, Computer Vision and Image Understanding
• Special issue on Video-based Object and Event Analysis, Pattern Recognition Letters

Book Editor: Semantic Mining Technologies for Multimedia Databases, Idea Group Inc. (www.idea-group.com)

Future Work

Spanning Computer Vision, Pattern Recognition, Machine Learning, and Multimedia: event recognition, biometrics, web search, and multimedia content analysis.

Acknowledgement

Shuicheng Yan, UIUC
Steve Lin, Microsoft
Lei Zhang, Microsoft
Xuelong Li, UK
Xiaoou Tang, Hong Kong
Hong-Jiang Zhang, Microsoft
Shih-Fu Chang, Columbia
Zhengkai Liu, USTC

Thank You very much!

What are Gabor Features? Gabor features can improve recognition performance in comparison to grayscale features (Chengjun Liu, T-IP, 2002).

Gabor wavelet kernels: eight orientations × five scales.

Input: grayscale image. Output: 40 Gabor-filtered images.
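A minimal sketch of such a 5-scale × 8-orientation Gabor filter bank using OpenCV; the kernel size and the per-scale wavelength/bandwidth choices below are illustrative, not the exact parameters used by Liu (T-IP 2002).

```python
import cv2
import numpy as np

def gabor_feature_images(gray, n_scales=5, n_orientations=8):
    """Filter a grayscale image with a 5-scale x 8-orientation Gabor bank,
    producing 40 Gabor-filtered images."""
    gray = gray.astype(np.float64)
    responses = []
    for s in range(n_scales):
        lambd = 4.0 * (2.0 ** (s / 2.0))   # wavelength grows with scale (illustrative)
        sigma = 0.56 * lambd               # roughly one-octave bandwidth (illustrative)
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations
            kernel = cv2.getGaborKernel((31, 31), sigma, theta, lambd, 1.0, 0,
                                        cv2.CV_64F)
            responses.append(cv2.filter2D(gray, cv2.CV_64F, kernel))
    return np.stack(responses)             # shape: (40, H, W)
```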

How to Utilize More Correlations?

Pixel rearrangement: sets of highly correlated pixels are rearranged into columns of highly correlated pixels.

Potential assumption in previous tensor-based subspace learning: intra-tensor correlations, i.e., correlations among the features within certain tensor dimensions, such as rows, columns, and Gabor features.

Tensor Representation: Advantages

1. Enhanced Learnability

2. Appreciable reductions in computational costs

3. Large number of available projection directions

4. Utilization of structural information

                          PCA                CSA
Feature dimension         $m^3$ ($100^3$)    $m$ ($100$)
Sample number             $N$                $N m^2$ ($100^2 N$)
Computation complexity    $O(m^9)$           $O(3 m^4)$

(values in parentheses assume each tensor mode has size 100)

Connection to Previous Work: Tensorface (M. Vasilescu and D. Terzopoulos, 2002)

[Figure: (a) Tensorface organizes image vectors along external-factor modes (person, illumination, pose, expression); (b) CSA treats each image itself as a multi-dimensional object, with its own image dimensions as the tensor modes.]

From an algorithmic or mathematical view, CSA and Tensorface are both variants of the Rank-$(R_1, R_2, \ldots, R_n)$ decomposition.

Tensorface vs. CSA:
• Motivation: Tensorface characterizes external factors; CSA characterizes internal factors.
• Input: Tensorface takes gray-level image vectors (it does not address 3rd-order tensors); CSA takes matrices, Gabor-filtered images, or video sequences.
• When equal to PCA: Tensorface, when the number of images per person is one or a prime number; CSA, never.
• Number of images per person for training: Tensorface needs lots of images per person; CSA needs only one image per person.
