Lecture 4: Linear Filters and Convolution
Slides by: David A. Forsyth, Clark F. Olson, Steven M. Seitz, Linda G. Shapiro



Image noise

In finding the interesting features (such as edges) in an image, the biggest problem is noise.

Noise is:
• Sensor error in acquiring the image
• Anything other than what you are looking for

Noise is often caused by underexposure (low light, high film speed)

Noisy image from Wikipedia page on image noise

Noise

Common types of noise:
• Salt and pepper noise - contains random occurrences of black and white pixels
• Impulse noise - contains random occurrences of white pixels
• Gaussian noise - variations in intensity drawn from a Gaussian (normal) distribution

Image noise

“Simple” noise model:
• Independent stationary additive Gaussian noise
• The noise value at each pixel is given by an independent draw from the same normal (i.e., Gaussian) probability distribution
• The scale (σ) determines how large the effect of the noise is

Î(r,c) = I(r,c) + n(r,c)

where the noise value n(r,c) is found by drawing a random number z between 0 and 1 and solving

z = ∫ from −∞ to n(r,c) of (1/(σ√(2π))) e^(−x²/(2σ²)) dx

Result image = “perfect” image + additive noise
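This model can be sketched in a few lines of NumPy (an illustration, not the authors' code). The uniform draw and CDF inversion above are exactly what a library normal sampler does, so `rng.normal` replaces that step; the clipping to [0, 255] is an added assumption to keep 8-bit pixel values valid.

```python
import numpy as np

def add_gaussian_noise(image, sigma, seed=0):
    """Independent stationary additive Gaussian noise: each pixel gets
    an independent draw from N(0, sigma**2)."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(float) + rng.normal(0.0, sigma, image.shape)
    # The model allows values outside the camera's range, so clip
    # (an added assumption, not part of the model itself).
    return np.clip(noisy, 0, 255).astype(np.uint8)

flat = np.full((4, 4), 128, dtype=np.uint8)   # a flat gray test patch
noisy = add_gaussian_noise(flat, sigma=10.0)
```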

Image noise

Issues:
• This model allows noise values that could be greater than maximum camera output or less than zero.
• For small standard deviations, this isn’t too much of a problem - it’s a fairly good model.
• Independence may not be justified (e.g., damage to lens).
• Noise may not be stationary (e.g., thermal gradients in the CCD).

Advantages:
• Fairly accurate
• Relatively easy to determine response of filters to such noise

Linear filters

We use linear filtering to reduce the effect of noise (among other things).

General process: Form a new image whose pixels are a weighted sum of nearby pixel values in the original image, using the same set of weights at each point.

Properties:
• Output is a linear function of the input
• Output is a shift-invariant function of the input (i.e., shift the input image two pixels to the left, and the output is shifted two pixels to the left)

Linear filtering

Filtering operations use a “kernel” or “mask” composed of weights to determine how to compute the weighted average in a neighborhood.

Usually, the mask is centered on the pixel and the weights are applied by multiplying by the corresponding pixel in the image and summing.

Input image:
36 36 36 36 36
36 36 45 45 45
36 45 45 45 54
36 45 54 54 54
45 45 54 54 54

3x3 mask:
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

Output image:
** ** ** ** **
** 39 ** ** **
** ** ** ** **
** ** ** ** **
** ** ** ** **

Mean filtering

A mean filter (as on the previous slide) averages the pixels in some neighborhood (such as the 3x3 box surrounding the pixel).

For this neighborhood, every pixel in the output (except for the borders) is defined as:

O(r,c) = 1/9·I(r−1,c−1) + 1/9·I(r−1,c) + 1/9·I(r−1,c+1)
       + 1/9·I(r,c−1)   + 1/9·I(r,c)   + 1/9·I(r,c+1)
       + 1/9·I(r+1,c−1) + 1/9·I(r+1,c) + 1/9·I(r+1,c+1)
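A direct translation of this definition into Python (a sketch assuming a NumPy grayscale image; border pixels are simply left untouched), run on the 5x5 input from the earlier slide:

```python
import numpy as np

def mean_filter_3x3(I):
    """3x3 mean filter; border pixels are left unfiltered."""
    O = I.astype(float).copy()
    rows, cols = I.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            O[r, c] = I[r-1:r+2, c-1:c+2].mean()   # average the 3x3 box
    return O

# The 5x5 input image from the filtering example:
I = np.array([[36, 36, 36, 36, 36],
              [36, 36, 45, 45, 45],
              [36, 45, 45, 45, 54],
              [36, 45, 54, 54, 54],
              [45, 45, 54, 54, 54]], dtype=float)
O = mean_filter_3x3(I)   # O[1, 1] is the 39 shown in the output image
```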

Kernel

The kernel is a 2D array, matrix, or image.

The kernel has an origin that represents the location that is multiplied by the pixel at the location of the output pixel.
• Usually at the center of the kernel, but not necessarily
• Kernel for mean filtering in a 3x3 neighborhood (center is bold):

For smoothing or averaging, the kernel coefficients always add up to one.

Larger (sometimes much larger) kernels are common.

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

Image boundaries

At the image boundary, we can’t use the same process, since part of the kernel will be outside of the input image.

Some methods for handling the boundary:
• Shrink the output image (ignore the boundaries)
• Consider every pixel outside of the input to be:
  − Black (zero)
  − The same as the nearest pixel inside the image (extends the borders infinitely)
  − A mirror image of the pixels inside the image (less likely to appear as an edge at the boundary, but second-order effects occur - the second derivative may appear large)
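These boundary strategies map directly onto padding modes; for instance, in NumPy (a sketch using `np.pad` on a tiny 2x2 image):

```python
import numpy as np

I = np.array([[1, 2],
              [3, 4]])

# The boundary strategies above, expressed as padding before filtering:
zero    = np.pad(I, 1, mode="constant")  # outside pixels are black (0)
nearest = np.pad(I, 1, mode="edge")      # replicate the nearest inside pixel
mirror  = np.pad(I, 1, mode="reflect")   # mirror the pixels inside the image
```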

Mean filtering

Input:

0 0 0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 0 0

0 0 0 90 90 90 90 90 0 0

0 0 0 90 90 90 90 90 0 0

0 0 0 90 90 90 90 90 0 0

0 0 0 90 0 90 90 90 0 0

0 0 0 90 90 90 90 90 0 0

0 0 0 0 0 0 0 0 0 0

0 0 90 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 0 0

Output (3x3 mean; borders shrunk):

0 10 20 30 30 30 20 10

0 20 40 60 60 60 40 20

0 30 60 90 90 90 60 30

0 30 50 80 80 90 60 30

0 30 50 80 80 90 60 30

0 20 30 50 50 60 40 20

10 20 30 30 30 30 20 10

10 10 10 0 0 0 0 0

Mean filtering

As the size of the kernel is increased, the noise is more smoothed, but so is the rest of the image.

Linear filtering

Some examples of linear filtering:
• Smoothing by averaging (mean filtering): form the average of pixels in a neighborhood
• Smoothing with a Gaussian: form a weighted average of pixels in a neighborhood
• Finding a derivative (approximation): form a weighted average of pixels in a neighborhood

Convolution

Linear filtering can be performed using a process called discrete convolution.

• Represent the pixel weights as an image, K
• K is usually called the kernel in convolution
• Operation is associative (if defined correctly)

Continuous convolution is common in signal processing (and other fields), but, since images are not continuous, we will use only discrete convolution

Convolution

Algorithmically, convolution corresponds to four nested loops (two over the image, two over the kernel).

For each image row in output image:
    For each image column in output image:
        Set running total to zero.
        For each kernel row:
            For each kernel column:
                Multiply kernel value by appropriate image value
                Add result to running total
        Set output image pixel to value of running total
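The four nested loops above can be written out directly (a reference sketch in Python, using the zero-outside-the-image convention; real implementations use optimized library routines instead):

```python
import numpy as np

def convolve2d(I, K):
    """Discrete 2D convolution by four nested loops.

    The output has the same size as I; pixels outside the input
    are treated as zero, and the kernel origin is at its center.
    """
    rows, cols = I.shape
    krows, kcols = K.shape
    kr0, kc0 = krows // 2, kcols // 2
    O = np.zeros((rows, cols))
    for i in range(rows):                        # two loops over the image
        for j in range(cols):
            total = 0.0                          # running total
            for u in range(-kr0, krows - kr0):   # two loops over the kernel
                for v in range(-kc0, kcols - kc0):
                    r, c = i - u, j - v          # subtraction flips the kernel
                    if 0 <= r < rows and 0 <= c < cols:
                        total += K[u + kr0, v + kc0] * I[r, c]
            O[i, j] = total
    return O

K = np.full((3, 3), 1.0 / 9.0)                   # the 3x3 mean-filter kernel
O = convolve2d(np.full((5, 5), 9.0), K)
```

On a constant image, interior pixels are unchanged, while corner pixels shrink because part of the kernel falls on the zeros outside the image.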

Convolution

Mathematically:

I^K(i,j) = (K * I)(i,j) = Σ_u Σ_v K(u,v) · I(i − u, j − v)

Odd definition preserves associativity and commutativity.

Subtracting u and v from the image indices implies that the kernel is flipped before applying it to the image.

All linear operations can be written as a convolution with some kernel.

Variables u and v range over the size of the kernel.

Note that the kernel origin (0,0) is usually at the center of the kernel (but does not need to be).

Convolution

The “center” of the kernel is at the origin.

For our “mean filter” kernel, we have:

-1 ≤ u ≤ 1

-1 ≤ v ≤ 1

Again, note the change in the sign of u and v – this is flipping the image (or, equivalently, the kernel).

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

(Kernel K; the origin K(0,0) is the center entry, the top-left entry is K(−1,−1), and K(1,0) lies below the center. Here u indexes rows and v indexes columns.)

O(i,j) = Σ_{u=−1..1} Σ_{v=−1..1} K(u,v) · I(i − u, j − v)

Convolution

Convolution is written in shorthand as O = K * I.

The “flipping” preserves commutativity:

K * I = I * K

and associativity:

J * (K * I) = (J * K) * I,

but only if the borders are handled correctly.

Must expand the output, treating values outside the input image as zero.
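Both identities are easy to check numerically in 1D, since NumPy's `np.convolve` defaults to its "full" mode, which expands the output and treats values outside the input as zero:

```python
import numpy as np

# np.convolve's default "full" mode expands the output and treats
# values outside the input as zero, so the identities hold exactly.
I = np.array([1.0, 2.0, 3.0, 4.0])
K = np.array([1.0, -1.0])
J = np.array([0.5, 0.5])

commutative = np.convolve(K, I), np.convolve(I, K)          # K*I vs. I*K
associative = (np.convolve(J, np.convolve(K, I)),           # J*(K*I)
               np.convolve(np.convolve(J, K), I))           # (J*K)*I
```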

Cross-correlation

Cross-correlation is the same as convolution, except that you don’t flip the kernel.

How does this differ from convolution for:
• Mean filtering?
• Gaussian filtering?

Example: smoothing by averaging

(Figure: input image, averaging kernel, and output image)

Smoothing with a Gaussian

Smoothing with an average actually doesn’t compare at all well with a defocused lens (e.g., in an eye).
• A defocused lens smoothes an image symmetrically, which is what we want.
• Most obvious difference is that a single point of light viewed in a defocused lens looks like a fuzzy blob, but the averaging process would give a little square.
• We want smoothing to be the same in all directions.

A Gaussian gives a good model of a fuzzy blob

Plot of the isotropic Gaussian:

G_σ(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))

The image shows a smoothing kernel proportional to a Gaussian (a circularly symmetric fuzzy blob).

Sigma (σ) is often referred to as the scale of the Gaussian.

The constant is necessary so that the function integrates to 1.

Gaussian smoothing

In practice, we must discretize the (continuous) Gaussian function:

We could generate the following 3x3 kernel with σ=1:

(Normally, we would use a larger kernel.)

h(−1,−1) h(−1,0) h(−1,1)        0.059 0.097 0.059
h(0,−1)  h(0,0)  h(0,1)    =    0.097 0.159 0.097
h(1,−1)  h(1,0)  h(1,1)         0.059 0.097 0.059

Gaussian smoothing

Unfortunately, the sum of the values for the kernel on the previous slide is only 0.779.

We need to normalize the kernel by dividing each value by 0.779.

0.059 0.097 0.059        0.075 0.124 0.075
0.097 0.159 0.097   →    0.124 0.204 0.124
0.059 0.097 0.059        0.075 0.124 0.075

The sum is now 1.
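Generating and normalizing such a kernel can be done in one short function (a sketch; sampling the continuous Gaussian on an integer grid is the discretization described above):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Sample the 2D isotropic Gaussian on an integer grid, then
    renormalize so the discrete weights sum to 1."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    h = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return h / h.sum()   # the raw sum is about 0.779 for size=3, sigma=1

K = gaussian_kernel(3, sigma=1.0)
```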

Smoothing with a Gaussian

Averaging vs. Gaussian smoothing

Differentiation

Recall that:

∂f/∂x = lim (ε→0) [ f(x + ε, y) − f(x, y) ] / ε

This is linear and shift invariant, so it must be the result of a convolution.

Differentiation and convolution

We can approximate this as:

∂f/∂x ≈ [ f(x_{n+1}, y) − f(x_n, y) ] / Δx

This is called a “finite difference.”

It is definitely a convolution – what is the kernel?

Often called the gradient when applied to an image.

This finite difference (gradient) measures horizontal change.

By itself, it’s not a very good way to do things, since it is very sensitive to noise.

Gradient kernels

To determine the horizontal image gradient, we could use one of the following kernels:

[ −1  1 ]   or   [ −1  0  1 ]

The first has better “localization,” but shifts the image by half of a pixel.

For vertical image gradients, we use one of:

[ −1 ]        [ −1 ]
[  1 ]   or   [  0 ]
              [  1 ]

Finite differences (horizontal)

Kernel: [ 1  −1 ]

Large (bright) values for light/dark transitions.
Negative (dark) values for dark/light transitions.
Small (grey) values for non-transitions.

Detects only horizontal changes.
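As a quick check, the [1 −1] response to a dark-to-light step can be computed by hand in NumPy (a sketch; the correlation is written out as a shifted subtraction):

```python
import numpy as np

# One row of a step edge: dark (0) on the left, bright (90) on the right.
row = np.array([0.0, 0.0, 0.0, 90.0, 90.0, 90.0])

# Correlating with the [1 -1] kernel pairs each pixel with its right
# neighbor: the dark-to-light transition comes out negative, as stated.
diff = row[:-1] - row[1:]
```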

Finite differences

Finite difference filters respond strongly to noise.
• Image noise results in pixels that look very different from their neighbors.

Generally, the larger the noise, the stronger the response.

Finite differences responding to noise

Low noise Medium noise High noise

Finite differences and noise

What is to be done?
• Intuitively, most pixels in images look quite a lot like their neighbors.
• This is somewhat true even at an edge; along the edge they’re similar, across the edge they’re not.
• This suggests that smoothing the image should help, by forcing pixels that differ from their neighbors (noise pixels?) to look more like their neighbors.

Filter responses are correlated

The filter responses are correlated over scales similar to the scale of the filter.

Filtered noise is sometimes useful.
• It looks like some natural textures and can be used to simulate fire, etc.

Filtered noise

Independent stationary Gaussian noise convolved with a Gaussian kernel. The scores are correlated over the same scale as the kernel.

Median filtering

A median filter takes the median value in the neighborhood of a pixel, rather than a weighted average.

Is this a convolution?

Advantage: It doesn’t smooth over region boundaries.

Noise added to the images is Gaussian.

Median filtering

Median filtering works best with salt and pepper noise.
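A minimal median filter sketch (borders left untouched, as with the mean filter earlier) shows why it handles salt and pepper noise so well: a single outlier pixel is removed entirely rather than smeared.

```python
import numpy as np

def median_filter_3x3(I):
    """Replace each interior pixel by the median of its 3x3 neighborhood
    (not a convolution: the median is not a weighted sum)."""
    O = I.astype(float).copy()
    rows, cols = I.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            O[r, c] = np.median(I[r-1:r+2, c-1:c+2])
    return O

# A flat patch with one "salt" pixel: the median removes it completely,
# where a 3x3 mean would smear it over the whole neighborhood.
I = np.full((5, 5), 10.0)
I[2, 2] = 255.0
O = median_filter_3x3(I)
```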
