Page 1: Ch13
Page 2: Ch13

Chapter 13 Discrete Image Transforms

13.1 Introduction

13.2 Linear transformations

One-dimensional discrete linear transformations

Definition. If x is an N-by-1 vector and T is an N-by-N matrix, then

y = T x, \qquad y_i = \sum_{j=0}^{N-1} t_{i,j}\, x_j

is a linear transformation of the vector x. T is called the kernel matrix of the transformation.
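As a quick check (a minimal NumPy sketch, not part of the original slides), the summation form and the matrix form give the same result:

# 1-D discrete linear transformation: y = T x
import numpy as np

N = 4
rng = np.random.default_rng(0)
T = rng.standard_normal((N, N))      # kernel matrix
x = rng.standard_normal(N)

y_matrix = T @ x                                                              # matrix form
y_sum = np.array([sum(T[i, j] * x[j] for j in range(N)) for i in range(N)])  # summation form
assert np.allclose(y_matrix, y_sum)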

Page 3: Ch13

Linear Transformations

Rotation and scaling are examples of linear transformations. For example, a rotation of the vector (x_1, x_2) through the angle \theta is

\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} =
\begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

If T is a nonsingular matrix, the inverse linear transformation is

x = T^{-1} y
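A similar sketch (illustrative only) for a rotation kernel and its inverse:

# Rotation as a linear transformation and its inverse
import numpy as np

theta = np.pi / 6
T = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])   # rotation kernel

x = np.array([2.0, 1.0])
y = T @ x                          # forward transformation (rotated vector)
x_back = np.linalg.inv(T) @ y      # inverse transformation recovers x
assert np.allclose(x_back, x)
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))   # rotation preserves length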

Page 4: Ch13

Linear Transformations

Unitary Transforms
If the kernel matrix of a linear system is a unitary matrix, the linear system is called a unitary transformation.

A matrix T is unitary if

T^{-1} = (T^*)^t, \quad \text{i.e.,} \quad T (T^*)^t = (T^*)^t T = I

If T is unitary and real, then T is an orthogonal matrix:

T^{-1} = T^t, \quad \text{i.e.,} \quad T T^t = T^t T = I

Page 5: Ch13

Unitary Transforms

The rows (also the columns) of an orthogonal matrix are a set of orthogonal vectors.

The one-dimensional DFT is an example of a unitary transform:

F_i = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} f_k \exp\left(-j\,\frac{2\pi i k}{N}\right) \quad \text{or} \quad F = W f

where the matrix W is a unitary matrix.
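A minimal sketch (assuming NumPy) that builds this W and verifies both claims:

# Unitary DFT matrix W with elements exp(-j*2*pi*i*k/N) / sqrt(N)
import numpy as np

N = 8
i, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
W = np.exp(-2j * np.pi * i * k / N) / np.sqrt(N)

# W is unitary: W (W*)^t = I
assert np.allclose(W @ W.conj().T, np.eye(N))

# F = W f matches the FFT up to the 1/sqrt(N) normalization
f = np.random.default_rng(1).standard_normal(N)
assert np.allclose(W @ f, np.fft.fft(f) / np.sqrt(N))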

Page 6: Ch13

Unitary Transforms

Interpretation
The rows (or columns) of a unitary matrix form an orthonormal basis for the N-dimensional vector space. A unitary transform can be viewed as a coordinate transformation, rotating the vector in N-space without changing its length.

A unitary linear transformation converts an N-dimensional vector to an N-dimensional vector of transform coefficients, each of which is computed as the inner product of the input vector x with one of the rows of the transform matrix T.

The forward transformation is referred to as analysis, and the backward (inverse) transformation is referred to as synthesis.

Page 7: Ch13

Two-dimensional discrete linear transformations

A two-dimensional linear transformation is

G_{m,n} = \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} t(i,k;m,n)\, F_{i,k}

where t(i,k;m,n) forms an N^2-by-N^2 block matrix having N-by-N blocks, each of which is an N-by-N matrix.

If t(i,k;m,n) = t_r(i,m)\, t_c(k,n), the transformation is called separable:

G_{m,n} = \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} t_r(i,m)\, t_c(k,n)\, F_{i,k}
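The separable double sum is just the matrix product T_r^t F T_c; a small NumPy sketch (illustrative, with random kernels) confirming this:

# Separable 2-D linear transformation: double sum vs. matrix form
import numpy as np

N = 4
rng = np.random.default_rng(2)
Tr = rng.standard_normal((N, N))   # row kernel t_r(i, m)
Tc = rng.standard_normal((N, N))   # column kernel t_c(k, n)
F = rng.standard_normal((N, N))    # input image

G_sum = np.zeros((N, N))
for m in range(N):
    for n in range(N):
        for i in range(N):
            for k in range(N):
                G_sum[m, n] += Tr[i, m] * Tc[k, n] * F[i, k]

G_mat = Tr.T @ F @ Tc              # equivalent matrix form
assert np.allclose(G_sum, G_mat)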

Page 8: Ch13

2-D separable symmetric unitary transforms

If t_r(i,m) = t_c(i,m) = t(i,m), the transformation is symmetric:

G_{m,n} = \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} t(i,m)\, t(k,n)\, F_{i,k} \quad \text{or} \quad G = T F T

The inverse transform is

F = T^{-1} G T^{-1} = (T^*)^t G (T^*)^t

Example: the two-dimensional DFT

G = W F W, \qquad F = (W^*)^t G (W^*)^t
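A short NumPy sketch (illustrative) of the symmetric unitary case, using the unitary DFT kernel W defined earlier:

# 2-D DFT as a symmetric separable unitary transform: G = W F W, F = W* G W*
import numpy as np

N = 8
i, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
W = np.exp(-2j * np.pi * i * k / N) / np.sqrt(N)   # symmetric unitary DFT kernel

F = np.random.default_rng(3).standard_normal((N, N))
G = W @ F @ W                                      # forward 2-D transform
assert np.allclose(G, np.fft.fft2(F) / N)          # same as FFT2 up to 1/N scaling

F_back = W.conj() @ G @ W.conj()                   # inverse: (W*)^t = W* since W is symmetric
assert np.allclose(F_back, F)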

Page 9: Ch13

Orthogonal transformations

If the matrix T is real, the linear transformation is called an orthogonal transformation. Its inverse transformation is

F = T^t G T^t

If T is also a symmetric matrix, the forward and inverse transformations are identical:

G = T F T \quad \text{and} \quad F = T G T

Page 10: Ch13

Basis functions and basis images

Basis functions
The rows of a unitary transform matrix form an orthonormal basis for an N-dimensional vector space. Different unitary transformations correspond to different choices of basis vectors. A unitary transformation corresponds to a rotation of a vector in an N-dimensional vector space (N^2-dimensional in the two-dimensional case).

Page 11: Ch13

Basis Images

The inverse unitary transform can be viewed as a weighted sum of N^2 basis images. Each basis image is the inverse transformation of a coefficient matrix containing a single nonzero element,

G^{(p,q)} = \{\, \delta_{i,p}\, \delta_{k,q} \,\}

whose inverse transform has elements

F_{m,n} = \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} \delta_{i,p}\, \delta_{k,q}\, t(i,m)\, t(k,n) = t(p,m)\, t(q,n)

This means that each basis image is the outer product of two rows of the transform matrix. Any image can be decomposed into a set of basis images by the forward transformation, and the inverse transformation reconstitutes the image by summing the basis images, weighted by the transform coefficients.
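A hedged NumPy sketch of this decomposition; the kernel here is an arbitrary random orthogonal matrix, chosen purely for illustration:

# Basis images as outer products of rows of T, and image reconstruction
import numpy as np

N = 4
rng = np.random.default_rng(4)
T, _ = np.linalg.qr(rng.standard_normal((N, N)))   # any real orthogonal kernel

f = rng.standard_normal((N, N))                    # an "image"
G = T @ f @ T.T                                    # forward transform coefficients

# reconstruct as the weighted sum of N^2 basis images: outer products of rows of T
f_rec = np.zeros((N, N))
for p in range(N):
    for q in range(N):
        basis_image = np.outer(T[p, :], T[q, :])
        f_rec += G[p, q] * basis_image
assert np.allclose(f_rec, f)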

Page 12: Ch13

13.4 Sinusoidal Transforms

13.4.1 The discrete Fourier transform
The forward and inverse DFTs are

F = W f \quad \text{and} \quad f = (W^*)^t F

where

W = \begin{bmatrix} w_{0,0} & \cdots & w_{0,N-1} \\ \vdots & \ddots & \vdots \\ w_{N-1,0} & \cdots & w_{N-1,N-1} \end{bmatrix},
\qquad w_{i,k} = \frac{1}{\sqrt{N}}\, e^{-j\, 2\pi i k / N}

Page 13: Ch13

13.4 Sinusoidal Transforms

The spectrum vector
The frequency corresponding to the ith element of F is

s_i =
\begin{cases}
\dfrac{2 i f_N}{N}, & 0 \le i \le N/2 \\
\dfrac{2 (i - N) f_N}{N}, & N/2 \le i \le N-1
\end{cases}

where f_N is the Nyquist frequency. The frequency components are arranged as

i:    0      1      ...    N/2    ...    N-1
s_i:  0   2f_N/N    ...    f_N    ...  -2f_N/N
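For reference, np.fft.fftfreq produces this same arrangement (a small sketch assuming unit sample spacing; note that at i = N/2 it reports -f_N, which is the same Nyquist component as +f_N):

# Frequencies associated with the DFT bins, in units of the Nyquist frequency f_N
import numpy as np

N = 8
frac = np.fft.fftfreq(N)   # cycles per sample: 0, 1/N, ..., then the negative frequencies
s = 2 * frac               # in units of f_N (the Nyquist frequency is half the sampling rate)
print(s)                   # [ 0.  0.25  0.5  0.75  -1.  -0.75  -0.5  -0.25 ]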

Page 14: Ch13

13.4 Sinusoidal Transforms

The frequencies are symmetric about the highest-frequency component. Using a circular right shift by N/2, we can place the zero-frequency component at index N/2, so that frequency increases in both directions from there and the Nyquist frequency (the highest frequency) falls at F_0. This shift can be done by changing the signs of the odd-numbered elements of f(x) prior to computing the DFT, because

Page 15: Ch13

13.4 Sinusoidal Transforms

The two-dimensional DFT

F\left(u + \frac{N}{2}\right) = \sum_{x=0}^{N-1} f(x)\, \exp\left(-j\,\frac{2\pi x (u + N/2)}{N}\right)
= \sum_{x=0}^{N-1} f(x)\, \exp(-j\pi x)\, \exp\left(-j\,\frac{2\pi x u}{N}\right), \qquad \exp(-j\pi x) = (-1)^x

For the two-dimensional DFT, changing the sign of half the elements of the image matrix shifts its zero-frequency component to the center of the spectrum:

F(u,v) \Leftrightarrow f(x,y) \quad \Rightarrow \quad F\left(u + \frac{N}{2},\, v + \frac{N}{2}\right) \Leftrightarrow (-1)^{x+y} f(x,y)

[Figure: the four quadrants of the spectrum (1, 2, 3, 4) are swapped diagonally by the shift.]
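A quick NumPy check (illustrative; for even N the sign trick is equivalent to applying np.fft.fftshift to the spectrum):

# Centering the 2-D spectrum: multiply by (-1)^(x+y) before the DFT
import numpy as np

N = 8
f = np.random.default_rng(5).standard_normal((N, N))
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

F_shifted = np.fft.fft2(f * (-1.0) ** (x + y))     # sign trick
F_centered = np.fft.fftshift(np.fft.fft2(f))       # explicit circular shift by N/2
assert np.allclose(F_shifted, F_centered)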

Page 16: Ch13
Page 17: Ch13

13.4.2 Discrete Cosine Transform

The two-dimensional discrete cosine transform (DCT) is defined as

G(m,n) = \Lambda(m)\, \Lambda(n) \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} g(i,k)\, \cos\left[\frac{(2i+1) m \pi}{2N}\right] \cos\left[\frac{(2k+1) n \pi}{2N}\right]

and its inverse is

g(i,k) = \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} \Lambda(m)\, \Lambda(n)\, G(m,n)\, \cos\left[\frac{(2i+1) m \pi}{2N}\right] \cos\left[\frac{(2k+1) n \pi}{2N}\right]

where

\Lambda(0) = \sqrt{\frac{1}{N}} \quad \text{and} \quad \Lambda(m) = \sqrt{\frac{2}{N}} \;\; \text{for } 1 \le m \le N-1

Page 18: Ch13

The discrete cosine transform

The DCT can be expressed in unitary matrix form as

G = C g C^t

where the kernel matrix C has elements

c_{m,i} = \Lambda(m)\, \cos\left[\frac{(2i+1) m \pi}{2N}\right]

The DCT is useful in image compression.
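A hedged sketch that builds this kernel and compares it with SciPy's DCT (assuming scipy.fft.dctn with norm='ortho' implements the same orthonormal DCT-II, which is my understanding):

# 2-D DCT via the unitary kernel matrix C, checked against scipy.fft.dctn
import numpy as np
from scipy.fft import dctn

N = 8
m, i = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos((2 * i + 1) * m * np.pi / (2 * N))
C[0, :] = np.sqrt(1.0 / N)                      # Lambda(0) = sqrt(1/N)

assert np.allclose(C @ C.T, np.eye(N))          # C is orthogonal (unitary and real)

g = np.random.default_rng(6).standard_normal((N, N))
G = C @ g @ C.T                                 # G = C g C^t
assert np.allclose(G, dctn(g, norm="ortho"))    # matches SciPy's orthonormal DCT-II
assert np.allclose(C.T @ G @ C, g)              # inverse reconstructs the image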

Page 19: Ch13
Page 20: Ch13

The sine transform

The discrete sine transform (DST) is defined as

G(m,n) = \frac{2}{N+1} \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} g(i,k)\, \sin\left[\frac{(i+1)(m+1)\pi}{N+1}\right] \sin\left[\frac{(k+1)(n+1)\pi}{N+1}\right]

and

g(i,k) = \frac{2}{N+1} \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} G(m,n)\, \sin\left[\frac{(i+1)(m+1)\pi}{N+1}\right] \sin\left[\frac{(k+1)(n+1)\pi}{N+1}\right]

The DST has the unitary kernel

t_{i,k} = \sqrt{\frac{2}{N+1}}\, \sin\left[\frac{(i+1)(k+1)\pi}{N+1}\right]
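A minimal NumPy sketch checking that this kernel is symmetric and orthogonal, and hence its own inverse:

# The DST kernel: symmetric, orthogonal, self-inverse
import numpy as np

N = 8
i, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
S = np.sqrt(2.0 / (N + 1)) * np.sin((i + 1) * (k + 1) * np.pi / (N + 1))

assert np.allclose(S, S.T)              # symmetric
assert np.allclose(S @ S.T, np.eye(N))  # orthogonal
# hence the forward and inverse 2-D transforms use the same kernel: G = S g S, g = S G S
g = np.random.default_rng(7).standard_normal((N, N))
G = S @ g @ S
assert np.allclose(S @ G @ S, g)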

Page 21: Ch13

The Hartley transform

The forward two-dimensional discrete Hartley transform (DHT) is

G_{m,n} = \frac{1}{N} \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} g_{i,k}\, \operatorname{cas}\left[\frac{2\pi}{N}(im + kn)\right]

The inverse DHT is

g_{i,k} = \frac{1}{N} \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} G_{m,n}\, \operatorname{cas}\left[\frac{2\pi}{N}(im + kn)\right]

where the basis function is

\operatorname{cas}(\theta) = \cos(\theta) + \sin(\theta) = \sqrt{2}\, \cos(\theta - \pi/4)

Page 22: Ch13

The Hartley transform

The unitary kernel matrix of the Hartley transform has elements

t_{i,k} = \frac{1}{\sqrt{N}}\, \operatorname{cas}\left(\frac{2\pi i k}{N}\right)

The Hartley transform is the real part minus the imaginary part of the corresponding Fourier transform, and the Fourier transform is the even part minus j times the odd part of the Hartley transform.
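A small NumPy check of the stated real-minus-imaginary relationship (1-D, unnormalized sums, illustrative only):

# Hartley transform as Re(DFT) - Im(DFT)
import numpy as np

N = 8
x = np.random.default_rng(8).standard_normal(N)

i, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
cas = np.cos(2 * np.pi * i * k / N) + np.sin(2 * np.pi * i * k / N)
H = cas @ x                                   # unnormalized discrete Hartley transform

F = np.fft.fft(x)                             # unnormalized DFT
assert np.allclose(H, F.real - F.imag)        # real part minus imaginary part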

Page 23: Ch13

13.5 Rectangular wave transforms

13.5.1 The Hadamard Transform
Also called the Walsh transform. The Hadamard transform is a symmetric, separable orthogonal transformation that has only +1 and -1 as elements in its kernel matrix. It exists only for

N = 2^n

For the two-by-two case

H_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}

and for the general case

H_N = \frac{1}{\sqrt{2}} \begin{bmatrix} H_{N/2} & H_{N/2} \\ H_{N/2} & -H_{N/2} \end{bmatrix}
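A minimal sketch building H_N by this recursion (equivalently, a repeated Kronecker product) and checking that it is orthogonal and symmetric:

# Hadamard kernel by recursion / Kronecker product
import numpy as np

def hadamard(N):
    """Unitary Hadamard matrix of order N = 2^n (natural order)."""
    H = np.array([[1.0]])
    while H.shape[0] < N:
        H = np.kron(np.array([[1, 1], [1, -1]]) / np.sqrt(2), H)
    return H

H8 = hadamard(8)
assert np.allclose(H8 @ H8.T, np.eye(8))   # orthogonal
assert np.allclose(H8, H8.T)               # symmetric
print(np.sign(H8))                         # the +/-1 pattern (scaled by 1/(2*sqrt(2)))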

Page 24: Ch13

The Hadamard Transform

H_8 in natural order (row sequencies 0, 7, 3, 4, 1, 6, 2, 5):

H_8 = \frac{1}{2\sqrt{2}}
\begin{bmatrix}
 1 &  1 &  1 &  1 &  1 &  1 &  1 &  1 \\
 1 & -1 &  1 & -1 &  1 & -1 &  1 & -1 \\
 1 &  1 & -1 & -1 &  1 &  1 & -1 & -1 \\
 1 & -1 & -1 &  1 &  1 & -1 & -1 &  1 \\
 1 &  1 &  1 &  1 & -1 & -1 & -1 & -1 \\
 1 & -1 &  1 & -1 & -1 &  1 & -1 &  1 \\
 1 &  1 & -1 & -1 & -1 & -1 &  1 &  1 \\
 1 & -1 & -1 &  1 & -1 &  1 &  1 & -1
\end{bmatrix}

Ordered Hadamard transform (the rows rearranged so that the sequency increases from 0 to 7):

H_8' = \frac{1}{2\sqrt{2}}
\begin{bmatrix}
 1 &  1 &  1 &  1 &  1 &  1 &  1 &  1 \\
 1 &  1 &  1 &  1 & -1 & -1 & -1 & -1 \\
 1 &  1 & -1 & -1 & -1 & -1 &  1 &  1 \\
 1 &  1 & -1 & -1 &  1 &  1 & -1 & -1 \\
 1 & -1 & -1 &  1 &  1 & -1 & -1 &  1 \\
 1 & -1 & -1 &  1 & -1 &  1 &  1 & -1 \\
 1 & -1 &  1 & -1 & -1 &  1 & -1 &  1 \\
 1 & -1 &  1 & -1 &  1 & -1 &  1 & -1
\end{bmatrix}
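A short sketch (illustrative) that computes each row's sequency, i.e., its number of sign changes, and reorders the natural-order matrix into the ordered Hadamard matrix; the hadamard() helper is repeated here so the sketch is self-contained:

# Sequency (sign changes per row) and the ordered Hadamard transform
import numpy as np

def hadamard(N):
    H = np.array([[1.0]])
    while H.shape[0] < N:
        H = np.kron(np.array([[1, 1], [1, -1]]) / np.sqrt(2), H)
    return H

H8 = hadamard(8)
sequency = (np.diff(np.sign(H8), axis=1) != 0).sum(axis=1)
print(sequency)                         # [0 7 3 4 1 6 2 5]
H8_ordered = H8[np.argsort(sequency)]   # rows rearranged by increasing sequency
assert np.allclose(H8_ordered @ H8_ordered.T, np.eye(8))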

Page 25: Ch13

13.5.3 The Slant Transform

The orthogonal kernel matrix for the slant transform is obtained iteratively as

S_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}

S_4 = \frac{1}{2}
\begin{bmatrix}
 1 & 1 & 1 & 1 \\
 \frac{3}{\sqrt{5}} & \frac{1}{\sqrt{5}} & -\frac{1}{\sqrt{5}} & -\frac{3}{\sqrt{5}} \\
 1 & -1 & -1 & 1 \\
 \frac{1}{\sqrt{5}} & -\frac{3}{\sqrt{5}} & \frac{3}{\sqrt{5}} & -\frac{1}{\sqrt{5}}
\end{bmatrix}

Page 26: Ch13

13.5.3 The Slant Transform

In general,

S_N = \frac{1}{\sqrt{2}}
\begin{bmatrix}
 1 & 0 & \mathbf{0}^t & 1 & 0 & \mathbf{0}^t \\
 a_N & b_N & \mathbf{0}^t & -a_N & b_N & \mathbf{0}^t \\
 \mathbf{0} & \mathbf{0} & I & \mathbf{0} & \mathbf{0} & I \\
 0 & 1 & \mathbf{0}^t & 0 & -1 & \mathbf{0}^t \\
 -b_N & a_N & \mathbf{0}^t & b_N & a_N & \mathbf{0}^t \\
 \mathbf{0} & \mathbf{0} & I & \mathbf{0} & \mathbf{0} & -I
\end{bmatrix}
\begin{bmatrix} S_{N/2} & 0 \\ 0 & S_{N/2} \end{bmatrix}

where I is the identity matrix of order N/2 - 2 and

a_N = \sqrt{\frac{3(N/2)^2}{4(N/2)^2 - 1}}, \qquad b_N = \sqrt{\frac{(N/2)^2 - 1}{4(N/2)^2 - 1}}

so that, for example, a_4 = 2/\sqrt{5} and b_4 = 1/\sqrt{5}.
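A sketch of this recursion in NumPy, using the a_N and b_N given above (illustrative; the row layout follows the block structure shown):

# Slant transform kernel built recursively; check orthogonality
import numpy as np

def slant(N):
    if N == 2:
        return np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    M = N // 2
    a = np.sqrt(3 * M * M / (4 * M * M - 1))
    b = np.sqrt((M * M - 1) / (4 * M * M - 1))
    A = np.zeros((N, N))
    A[0, 0] = A[0, M] = 1.0                       # [1 0 ... | 1 0 ...]
    A[1, 0], A[1, 1], A[1, M], A[1, M + 1] = a, b, -a, b
    A[M, 1], A[M, M + 1] = 1.0, -1.0              # [0 1 ... | 0 -1 ...]
    A[M + 1, 0], A[M + 1, 1], A[M + 1, M], A[M + 1, M + 1] = -b, a, b, a
    for r in range(M - 2):                        # identity blocks of order N/2 - 2
        A[2 + r, 2 + r] = A[2 + r, M + 2 + r] = 1.0
        A[M + 2 + r, 2 + r], A[M + 2 + r, M + 2 + r] = 1.0, -1.0
    S_half = slant(M)
    block = np.block([[S_half, np.zeros((M, M))], [np.zeros((M, M)), S_half]])
    return (A / np.sqrt(2)) @ block

S8 = slant(8)
assert np.allclose(S8 @ S8.T, np.eye(8))   # orthogonal
print(np.round(slant(4), 4))               # reproduces the S_4 shown above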

Page 27: Ch13

13.5.3 The Slant Transform

The basis functions for N = 8

[Figure: the eight slant basis functions.]

Page 28: Ch13

13.5.4 The Haar Transform

The basis functions of the Haar transform
For any integer 0 \le k \le N-1, let

k = 2^p + q - 1

where 2^p is the largest power of 2 such that 2^p \le k and q - 1 is the remainder (p = q = 0 when k = 0). The first Haar function is defined by

h_0(x) = \frac{1}{\sqrt{N}}, \qquad 0 \le x < 1

Page 29: Ch13

13.5.4 The Haar Transform

The remaining Haar functions are defined by

h_k(x) = \frac{1}{\sqrt{N}}
\begin{cases}
 2^{p/2}, & \dfrac{q-1}{2^p} \le x < \dfrac{q - 1/2}{2^p} \\
 -2^{p/2}, & \dfrac{q - 1/2}{2^p} \le x < \dfrac{q}{2^p} \\
 0, & \text{otherwise}
\end{cases}

The 8-by-8 Haar orthogonal kernel matrix is

H_r = \frac{1}{2\sqrt{2}}
\begin{bmatrix}
 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
 \sqrt{2} & \sqrt{2} & -\sqrt{2} & -\sqrt{2} & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & \sqrt{2} & \sqrt{2} & -\sqrt{2} & -\sqrt{2} \\
 2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 2 & -2 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 2 & -2
\end{bmatrix}
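A NumPy sketch (illustrative) that builds the Haar kernel for any power-of-two N directly from these definitions and checks it against the matrix above:

# Haar kernel matrix from the basis-function definition
import numpy as np

def haar_matrix(N):
    H = np.zeros((N, N))
    H[0, :] = 1.0                              # h_0(x), before the 1/sqrt(N) factor
    x = (np.arange(N) + 0.5) / N               # sample points in [0, 1)
    for k in range(1, N):
        p = int(np.floor(np.log2(k)))          # 2^p is the largest power of 2 <= k
        q = k - 2**p + 1                       # q - 1 is the remainder
        lo, mid, hi = (q - 1) / 2**p, (q - 0.5) / 2**p, q / 2**p
        H[k, (x >= lo) & (x < mid)] = 2 ** (p / 2)
        H[k, (x >= mid) & (x < hi)] = -(2 ** (p / 2))
    return H / np.sqrt(N)

Hr = haar_matrix(8)
assert np.allclose(Hr @ Hr.T, np.eye(8))       # orthonormal rows
print(np.round(Hr * 2 * np.sqrt(2), 3))        # matches the matrix above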

Page 30: Ch13

13.5.4 The Haar Transform
Basis functions for N = 8

Page 31: Ch13

Basis images of Haar

Page 32: Ch13

Basis images of Hadamard

Page 33: Ch13

Basis images of DCT

Page 34: Ch13

13.6 Eigenvector-based transforms

Eigenvalues and eigenvectors
For an N-by-N matrix A, a scalar \lambda is called an eigenvalue of A if

|A - \lambda I| = 0

A vector v that satisfies

A v = \lambda v

is called an eigenvector of A.

Page 35: Ch13

13.6 Eigenvector-based transforms

13.6.2 Principal-Component Analysis
Suppose x is an N-by-1 random vector. Its mean vector can be estimated from L samples as

m_x = \frac{1}{L} \sum_{l=1}^{L} x_l

and its covariance matrix can be estimated by

C_x = E\{(x - m_x)(x - m_x)^t\} \approx \frac{1}{L} \sum_{l=1}^{L} x_l x_l^t - m_x m_x^t

The matrix C_x is a real and symmetric matrix.

Page 36: Ch13

13.6 Eigenvector-based transforms

Let A be a matrix whose rows are the eigenvectors of C_x. Then

C_y = A C_x A^t

is a diagonal matrix having the eigenvalues of C_x along its diagonal, i.e.,

C_y = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_N \end{bmatrix}

Let the matrix A define a linear transformation by

y = A (x - m_x)
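A minimal NumPy sketch (illustrative, with synthetic data) of this decorrelating transformation:

# PCA: the linear transformation y = A (x - m_x) decorrelates the components
import numpy as np

rng = np.random.default_rng(9)
L, N = 1000, 3
X = rng.standard_normal((L, N)) @ rng.standard_normal((N, N))   # L correlated samples (rows)

m_x = X.mean(axis=0)
C_x = (X - m_x).T @ (X - m_x) / L                # estimated covariance matrix

eigvals, eigvecs = np.linalg.eigh(C_x)           # C_x is real and symmetric
A = eigvecs.T                                    # rows of A are eigenvectors of C_x

Y = (X - m_x) @ A.T                              # y_l = A (x_l - m_x) for every sample
C_y = Y.T @ Y / L
assert np.allclose(C_y, np.diag(eigvals), atol=1e-8)   # diagonal: components are uncorrelated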

Page 37: Ch13

13.6 Eigenvector-based transforms

It can be shown that the covariance matrix of the vector y is C_y = A C_x A^t. Since this matrix is diagonal, its off-diagonal elements are zero and the elements of y are uncorrelated. Thus the linear transformation removes the correlation among the variables.

The reverse transform

x = A^{-1} y + m_x = A^t y + m_x

can reconstruct x from y.

Page 38: Ch13

13.6 Dimension Reduction

We can reduce the dimensionality of the y vector by ignoring one or more of the eigenvectors that have small eigenvalues.

Let B be the M-by-N matrix (M < N) formed by discarding the bottom N - M rows of A (those corresponding to the smallest eigenvalues), and let m_x = 0 for simplicity. Then the transformed vector has smaller dimension:

\hat{y} = B x

Page 39: Ch13

13.6 Dimension Reduction

The vector x can be reconstructed (approximately) by

\hat{x} = B^t \hat{y}

The mean square error of this reconstruction is

\text{MSE} = \sum_{k=M+1}^{N} \lambda_k

i.e., the sum of the eigenvalues corresponding to the discarded eigenvectors. The vector \hat{y} is called the principal component of the vector x.
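A self-contained sketch (illustrative) of the truncation step and the resulting mean square error:

# Dimension reduction: keep the M eigenvectors with the largest eigenvalues
import numpy as np

rng = np.random.default_rng(10)
L, N, M = 2000, 4, 2
X = rng.standard_normal((L, N)) @ rng.standard_normal((N, N))   # correlated samples
X -= X.mean(axis=0)                              # enforce m_x = 0 as assumed above

C_x = X.T @ X / L
eigvals, eigvecs = np.linalg.eigh(C_x)
order = np.argsort(eigvals)[::-1]                # sort eigenvalues in decreasing order
A = eigvecs[:, order].T                          # rows of A ordered by decreasing eigenvalue
B = A[:M]                                        # keep only the top M rows

Y_hat = X @ B.T                                  # reduced-dimension representation
X_hat = Y_hat @ B                                # approximate reconstruction x_hat = B^t y_hat
mse = np.mean(np.sum((X - X_hat) ** 2, axis=1))
assert np.isclose(mse, eigvals[order][M:].sum()) # MSE equals the sum of discarded eigenvalues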

Page 40: Ch13

13.6.3 The Karhunen-Loeve Transform

The K-L transform is defined as

The dimension-reducing capability of the K-L transform makes it quite useful for image compression. When the image is a first-order Markov process, where the correlation between pixels decreases exponentially with their separation distance, the basis images for the K-L transform can be written explicitly.

y = A (x - m_x)

Page 41: Ch13

13.6.3 The Karhunen-Loeve Transform

When the correlation between adjacent pixels approaches unity, the K-L basis functions approach those of the discrete cosine transform. Thus, the DCT is a good approximation to the K-L transform.

Page 42: Ch13

13.6.4 The SVD Transform

Singular value decomposition
Any N-by-N matrix A can be decomposed as

A = U \Sigma V^t

where the columns of U and V are the eigenvectors of A A^t and A^t A, respectively, and \Sigma is an N-by-N diagonal matrix containing the singular values of A.

The forward singular value decomposition (SVD) transform is

\Sigma = U^t A V

The inverse SVD transform is

A = U \Sigma V^t

Page 43: Ch13

13.6.4 The SVD transform

For the SVD transform, the kernel matrices are image-dependent.

The SVD has very high image-compression power. Because the transformed matrix \Sigma is diagonal, an N-by-N image is represented by only N nonzero coefficients, so lossless compression by at least a factor of N is possible; even higher lossy compression ratios can be achieved by discarding the small singular values.
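A hedged NumPy sketch of SVD-based compression by truncating small singular values (a random matrix stands in for the image, purely for illustration):

# SVD transform of an "image" and low-rank (lossy) reconstruction
import numpy as np

N = 64
g = np.random.default_rng(11).standard_normal((N, N))

U, s, Vt = np.linalg.svd(g)                        # g = U diag(s) Vt
assert np.allclose(U @ np.diag(s) @ Vt, g)         # inverse SVD transform is exact

r = 16                                             # keep only the r largest singular values
g_approx = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
rel_err = np.linalg.norm(g - g_approx) / np.linalg.norm(g)
print(rel_err)                                     # error governed by the discarded singular values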

Page 44: Ch13

The SVD transform

Illustration of figure 13-7

Page 45: Ch13

13.7 Transform domain filtering

As in the Fourier transform domain, filters can be designed in other transform domains. Transform-domain filtering involves modifying the transform coefficients prior to reconstructing the image via the inverse transform. If either the desired components or the undesired components of the image resemble one or a few of the basis images of a particular transform, then that transform will be useful in separating the two.

Page 46: Ch13

13.7 Transform domain filtering

The Haar transform is a good candidate for detecting vertical and horizontal lines and edges.
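A minimal sketch of filtering in the Haar domain (assumptions: the haar_matrix() helper from the earlier sketch is repeated here, and the choice of which coefficients to suppress is arbitrary):

# Transform-domain filtering: zero out selected Haar coefficients, then invert
import numpy as np

def haar_matrix(N):
    H = np.zeros((N, N))
    H[0, :] = 1.0
    x = (np.arange(N) + 0.5) / N
    for k in range(1, N):
        p = int(np.floor(np.log2(k)))
        q = k - 2**p + 1
        H[k, (x >= (q - 1) / 2**p) & (x < (q - 0.5) / 2**p)] = 2 ** (p / 2)
        H[k, (x >= (q - 0.5) / 2**p) & (x < q / 2**p)] = -(2 ** (p / 2))
    return H / np.sqrt(N)

N = 8
T = haar_matrix(N)
f = np.random.default_rng(12).standard_normal((N, N))   # stand-in image

G = T @ f @ T.T                    # forward Haar transform
G_filtered = G.copy()
G_filtered[4:, :4] = 0.0           # suppress one block of coefficients (arbitrary choice)
f_filtered = T.T @ G_filtered @ T  # inverse transform gives the filtered image
assert f_filtered.shape == f.shape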

Page 47: Ch13

13.7 Transform domain filtering

Illustration of fig. 13-8.

Page 48: Ch13

13.7 Transform domain filtering

Figure 13-9

Page 49: Ch13