A.V.C College of Engineering, Mannampandal 609 305

Department of Information Technology

ME (CSE) -SYLLABUS

CP7004 - IMAGE PROCESSING AND ANALYSIS L T P C

3 0 0 3

UNIT I SPATIAL DOMAIN PROCESSING 9

Introduction to image processing – imaging modalities – image file formats – image sensing and acquisition – image sampling and quantization – noise models – spatial filtering operations – histograms – smoothing filters – sharpening filters – fuzzy techniques for spatial filtering – spatial filters for noise removal

UNIT II FREQUENCY DOMAIN PROCESSING 9

Frequency domain – review of the Fourier Transform (FT), Discrete Fourier Transform (DFT), and Fast Fourier Transform (FFT) – filtering in the frequency domain – image smoothing – image sharpening – selective filtering – frequency domain noise filters – wavelets – Haar Transform – multiresolution expansions – wavelet transforms – wavelet-based image processing

UNIT III SEGMENTATION AND EDGE DETECTION 9

Thresholding techniques – region growing methods – region splitting and merging – adaptive thresholding – threshold selection – global valley – histogram concavity – edge detection – template matching – gradient operators – circular operators – differential edge operators – hysteresis thresholding – Canny operator – Laplacian operator – active contours – object segmentation

UNIT IV INTEREST POINTS, MORPHOLOGY, AND TEXTURE 9

Corner and interest point detection – template matching – second order derivatives – median filter based detection – Harris interest point operator – corner orientation – local invariant feature detectors and descriptors – morphology – dilation and erosion – morphological operators – grayscale morphology – noise and morphology – texture – texture analysis – co-occurrence matrices – Laws' texture energy approach – Ade's eigenfilter approach

UNIT V COLOR IMAGES AND IMAGE COMPRESSION 9

Color models – pseudo colors – full-color image processing – color transformations – smoothing and sharpening of color images – image segmentation based on color – noise in color images. Image compression – redundancy in images – coding redundancy – irrelevant information in images – image compression models – basic compression methods – digital image watermarking.

TOTAL: 45 PERIODS

REFERENCES:

1. E. R. Davies, Computer & Machine Vision, Fourth Edition, Academic Press, 2012.

2. W. Burger and M. Burge, Digital Image Processing: An Algorithmic Introduction using Java, Springer, 2008.

3. John C. Russ, The Image Processing Handbook, Sixth Edition, CRC Press, 2011.

4. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Third Edition, Pearson, 2008.

5. Mark Nixon and Alberto S. Aguado, Feature Extraction & Image Processing for Computer Vision, Third Edition, Academic Press, 2012.

6. D. L. Baggio et al., Mastering OpenCV with Practical Computer Vision Projects, Packt Publishing, 2012.

7. Jan Erik Solem, Programming Computer Vision with Python: Tools and algorithms for analyzing images, O'Reilly Media, 2012.

Image Processing And Analysis

Unit-1

Spatial Domain Processing

Image:

An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.

Image Processing: Image processing is a method of converting an image into digital form and performing some operations on it, in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image.

What is Image Processing:

Image processing is the study of any algorithm that takes an image as input and returns an image as output. It includes:

Image display and printing

Image editing and manipulation

Image enhancement

Feature detection

Image compression.

The Three Basic Steps of Image Processing:

1. Importing the image with an optical scanner or by digital photography.

2. Analyzing and manipulating the image, which includes data compression, image enhancement, and spotting patterns that are not visible to the human eye, as in satellite photographs.

3. Output is the last stage, in which the result can be an altered image or a report based on the image analysis.

Image Processing Techniques:

Before an image is processed, it is converted into digital form. Digitization includes sampling of the image and quantization of the sampled values (a small quantization sketch is given after the list below). After the image has been converted into bit information, processing is performed. The processing technique may be:

Image enhancement

Image restoration

Image compression.
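As a small illustration of the quantization step mentioned above, the following sketch requantizes an 8-bit image to 3 bits. It assumes the Image Processing Toolbox sample image 'cameraman.tif' is available; any 8-bit grayscale image works.

f = imread('cameraman.tif');              % uint8 image, 256 gray levels
b = 3;                                    % target bit depth (2^3 = 8 levels)
step = 2^(8 - b);                         % quantization step size
q = uint8(floor(double(f)/step) * step);  % requantize to 2^b levels
figure, imshow(f)                         % original, 8-bit
figure, imshow(q)                         % visibly coarser gray levels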

Image enhancement:It refers to accentuation, or sharpening, of image features such as boundaries, or contrast to make a graphic display more useful for display & analysis. This process does not increase the inherent information content in data. It includes gray level & contrast manipulation, noise reduction, edge crispening and sharpening, filtering, interpolation and magnification, pseudo coloring, and so on.

Image restoration: It is concerned with filtering the observed image to minimize the effect of degradations. The effectiveness of image restoration depends on the extent and accuracy of the knowledge of the degradation process as well as on the filter design. Image restoration differs from image enhancement in that the latter is concerned more with extraction or accentuation of image features.

Image compression: It is concerned with minimizing the number of bits required to represent an image. Applications of compression include broadcast TV, remote sensing via satellite, military communication via aircraft, radar, teleconferencing, facsimile transmission of educational and business documents, medical images that arise in computed tomography, magnetic resonance imaging and digital radiology, motion pictures, satellite images, weather maps, geological surveys, and so on.

Text compression – CCITT Group 3 & Group 4

Still image compression – JPEG

Video image compression – MPEG

Working Principles of Image Processing:

1. Improving picture quality

2. Machine perception of visual information as used in robotics.

Working Of Image Processing:

To apply image-processing methods, we first digitize a photograph or other picture into an image file. Then digital methods can be applied to rearrange picture parts, to enhance color separations, or to improve the quality of shading. An example of the application of image-processing methods is to enhance the quality of a picture. These techniques are used extensively in commercial art applications that involve the retouching and rearranging of sections of photographs and other artwork. Similar methods are used to analyze satellite photos of the earth and photos of galaxies.

Purpose of Image Processing:

The purpose of image processing is divided into 5 groups. They are:

1. Visualization – Observe objects that are not visible.

2. Image sharpening and restoration – Create a better image.

3. Image retrieval – Seek the image of interest.

4. Measurement of pattern – Measure various objects in an image.

5. Image recognition – Distinguish the objects in an image.

Image Processing Types:

1.Analog Image Processing:

Analog or visual techniques of image processing can be used for hard copies like printouts and photographs. Image analysts use various fundamentals of interpretation while using these visual techniques. The processing is not confined to the area that has to be studied; it also relies on the knowledge of the analyst. Association is another important tool in image processing through visual techniques. Analysts therefore apply a combination of personal knowledge and collateral data to image processing.

2.Digital Image Processing:

Digital processing techniques help in the manipulation of digital images by using computers. Raw data from imaging sensors on a satellite platform contains deficiencies; to overcome such flaws and to recover the original information, the data has to undergo various phases of processing. The three general phases that all types of data undergo when using digital techniques are pre-processing, enhancement and display, and information extraction.

Applications Of Image Processing:

1.Intelligent Transportation Systems -

This technique is used in automatic number plate recognition and traffic sign recognition.

2.Remote Sensing -

For this application, sensors capture pictures of the earth's surface from remote sensing satellites or from a multispectral scanner mounted on an aircraft. These pictures are processed by transmitting them to an Earth station. Techniques used to interpret the objects and regions are applied in flood control, city planning, resource mobilization, agricultural production monitoring, etc.

3.Moving object tracking -

This application makes it possible to measure motion parameters and acquire a visual record of the moving object. The different approaches to tracking an object are:

Motion based tracking

Recognition based tracking

4.Defense surveillance

Aerial surveillance methods are used to continuously keep an eye on the land and oceans. This application is also used to locate the types and formations of naval vessels on the ocean surface. The important task is to divide the various objects present in the water-body part of the image. Different parameters, such as length, breadth, area, perimeter, and compactness, are set up to classify each of the divided objects. It is important to recognize the distribution of these objects in the different directions (east, west, north, south, northeast, northwest, southeast, and southwest) to explain all possible formations of the vessels. We can interpret the entire oceanic scenario from the spatial distribution of these objects.

5.Biomedical Imaging techniques -

For medical diagnosis, different types of imaging tools, such as X-ray, ultrasound, and computed tomography (CT), are used. The diagrams of X-ray, MRI, and CT are given below.

Some of the applications of Biomedical imaging applications are as follows:

Heart disease identification – Important diagnostic features, such as the size of the heart and its shape, need to be known in order to classify heart diseases. To improve the diagnosis of heart diseases, image analysis techniques are applied to radiographic images.

Lung disease identification – In X-rays, the regions that appear dark contain air, while the regions that appear lighter are solid tissues. Bones are more radio-opaque than tissues. The ribs, the heart, the thoracic spine, and the diaphragm that separates the chest cavity from the abdominal cavity are clearly seen on the X-ray film.

Digital mammograms – These are used to detect breast tumours. Mammograms can be analyzed using image processing techniques such as segmentation, shape analysis, contrast enhancement, feature extraction, etc.

6.Automatic Visual Inspection System

This application improves the quality and productivity of the product in the industries.

Automatic inspection of incandescent lamp filaments – This involves examination of the bulb manufacturing process. Due to non-uniformity in the pitch of the wiring in the lamp, the filament of the bulb gets fused within a short duration. In this application, a binary image slice of the filament is created, from which the silhouette of the filament is fabricated. Silhouettes are analyzed to recognize the non-uniformity in the pitch of the wiring in the lamp. This system is used by the General Electric Corporation.

Automatic surface inspection systems – In the metal industries it is essential to detect flaws on surfaces. For instance, it is essential to detect any kind of aberration on the rolled metal surface in the hot or cold rolling mills of a steel plant. Image processing techniques such as texture identification, edge detection, and fractal analysis are used for the detection.

Faulty component identification – This application identifies faulty components in electronic or electromechanical systems. A higher amount of thermal energy is generated by these faulty components. Infrared images are produced from the distribution of thermal energies in the assembly, and the faulty components can be identified by analyzing them.

Image Modalities:

The medical image modalities are:

1. X-ray

2. MRI

3. CT

4. Ultrasound.

1. X-Ray:

X stands for unknown. X-ray imaging is also known as

- radiograph

- Röntgen imaging.

Calcium in bones absorbs X-rays the most, so bones look white on a radiograph

Fat and other soft tissues absorb less, and look gray

Air absorbs the least, so lungs look black on a radiograph

2. CT (Computed Tomography):

Computed tomography (CT) is an integral component of the general radiography department. Unlike conventional radiography, in CT the patient lies on a couch that moves into the imaging gantry housing the X-ray tube and an array of specially designed detectors. Depending upon the system, the gantry rotates either for one revolution around the patient or continuously, in order for the detector array to record the intensity of the remnant X-ray beam. These recordings are then computer-processed to produce images never before thought possible. The familiar radiograph lacks a third dimension; it can only show us a two-dimensional view of the human body. CT, on the other hand, reconstructs images in a variety of body planes, the most usual being the axial or cross-sectional plane. The image created displays CT numbers, which mainly reflect the physical properties of the tissues being investigated. Because of the large range of the CT number scale and the fact that the image is digital, it is possible to manipulate the display to show the underlying soft tissues with enhanced contrast as well as the bony structures.

Characteristics of an early CT scanner:

80 × 80 image

3 mm pixels

13 mm thick slices

Two simultaneous slices

80 s scan time per slice

80 s reconstruction time

Modern scanners, by contrast, produce a 512 × 512 image.

High-pass filter -> preserve high frequencies

Band-pass filter -> preserve frequencies within a band

Band-reject filter -> reject frequencies within a band

The Mechanics of Spatial Filtering:

A spatial filter consists of (1) a neighborhood: for each pixel in the source image, the outcome is written at the same location in the target image; and (2) a predefined operation that is performed on the image pixels encompassed by the neighborhood.

The concept of filtering has its roots in the use of the Fourier transform for signal processing in the so-called frequency domain. Filtering creates a new pixel with coordinates equal to the coordinates of the center of the neighborhood, and whose value is the result of the filtering operation; a processed image is generated as the center of the filter visits each pixel in the input image. If the operation performed on the image pixels is linear, then the filter is called a linear spatial filter; otherwise, the filter is nonlinear.

The mechanics of linear spatial filtering consist simply of moving the filter mask from point to point in an image. At each point (x, y), the response of the filter at that point is calculated using a predefined relationship. For linear spatial filtering, the response is given by a sum of products of the filter coefficients and the corresponding image pixels in the area spanned by the filter mask. For a 3×3 mask,

g(x, y) = w(−1, −1) f(x−1, y−1) + w(−1, 0) f(x−1, y) + ... + w(0, 0) f(x, y) + ... + w(1, 1) f(x+1, y+1)

The result (or response) g of linear filtering with the filter mask at a point (x, y) in the image is thus the sum of products of the mask coefficients with the corresponding pixels directly under the mask. Note in particular that the coefficient w(0, 0) coincides with the image value f(x, y), indicating that the mask is centered at (x, y).

In general, linear filtering of an image f of size M×N with a filter mask of size m×n is given by the expression

g(x, y) = Σ_s Σ_t w(s, t) f(x+s, y+t),

where the sums run over s = −a, ..., a and t = −b, ..., b, with a = (m−1)/2 and b = (n−1)/2.

Spatial Correlation and Convolution:

Correlation: Correlation is the process of moving a filter mask over the image and computing the sum of products at each location.

Convolution: The mechanics of convolution are the same, except that the filter is rotated by 180 degrees.

Note: if w(x, y) is symmetric, that is, w(x, y) = w(−x, −y), then convolution is equivalent to correlation.

The diagram below illustrates both correlation and convolution. Explanations are given below:

Figure (a) shows a one-dimensional function f and a mask w; the origin of f is assumed to be its leftmost point. To perform the correlation of the two functions, we move w so that its rightmost point coincides with the origin of f, as shown in (b). Note that there are points between the two functions that do not overlap. The most common way to handle this problem is to pad f with as many 0s as are necessary to guarantee that there will always be corresponding points for the full excursion of w past f; this situation is shown in figure (c).

Now we perform the correlation. The first value of the correlation is the sum of products of the two functions in the position shown in (c); the sum of products is 0 in this case. Next we move w one location to the right and repeat the process, (d). The sum of products again is 0. After four shifts, (e), we encounter the first nonzero value of the correlation, which is (2)(1) = 2. If we proceed in this manner until w moves completely past f, we get the result in (g).

Now, to perform convolution, we rotate w by 180 degrees and place its rightmost point at the origin of f, as shown in (j). We then repeat the sliding/computing process employed in correlation, as illustrated in Figs. (k) through (n).

The preceding concepts extend easily to images. The origin is at the top left corner of image f(x, y). To perform correlation, we place the bottom rightmost point of w(x, y) so that it coincides with the origin of f(x, y), as illustrated in (c). Note the use of 0 padding for the reasons mentioned in the previous example. To perform correlation, we move w(x, y) to all possible locations so that at least one of its pixels overlaps a pixel in the original image f(x, y).

For convolution, we simply rotate w(x, y) by 180 degrees and proceed in the same manner as in correlation.
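The relationship between correlation and convolution described above can be checked directly in base MATLAB (a sketch with hypothetical test data; filter2 performs correlation, conv2 performs convolution):

f = magic(5);                               % small test "image"
w = [1 2 1; 0 0 0; -1 -2 -1];               % a non-symmetric 3x3 mask
g_corr = filter2(w, f, 'same');             % correlation of w with f
g_conv = conv2(f, w, 'same');               % convolution (mask rotated 180 degrees)
g_chk  = filter2(rot90(w, 2), f, 'same');   % correlation with the rotated mask
max(abs(g_conv(:) - g_chk(:)))              % 0: convolution equals correlation
                                            % with the 180-degree-rotated mask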

Vector representation of Linear filtering:

When interest lies in the response R of an m×n mask at any point (x, y), and not in the mechanics of implementing mask convolution, it is common practice to simplify the notation by using the expression

R = w1 z1 + w2 z2 + ... + w_mn z_mn = Σ_i w_i z_i, i = 1, ..., mn,

where the w's are the mask coefficients, the z's are the values of the image gray levels corresponding to those coefficients, and mn is the total number of coefficients in the mask.

For the 3×3 general mask shown in Fig. 3.33, the response at any point (x, y) in the image is given by

R = w1 z1 + w2 z2 + ... + w9 z9 = Σ_i w_i z_i, i = 1, ..., 9.

Histogram Processing

In Statistics, Histogram is a graphical representation showing a visual impression of the distribution of data.

An Image Histogram is a type of histogram that acts as a graphical representation of the lightness/color distribution in a digital image. It plots the number of pixels for each value.

The (intensity or brightness) histogram shows how many times a particular grey level (intensity) appears in an image. For example (0 = black, 255 = white), consider the small 4×5 image below:

0 1 1 2 4
2 1 0 0 2
5 2 0 0 4
1 1 2 4 1

Its histogram is h(0) = 5, h(1) = 6, h(2) = 5, h(3) = 0, h(4) = 3, h(5) = 1.

The histogram of a digital image with gray levels in the range [0, L−1] is a discrete function h(rk) = nk, where rk is the kth gray level and nk is the number of pixels in the image having gray level rk. It is common practice to normalize a histogram by dividing each of its values by the total number of pixels in the image, denoted by n. Thus, a normalized histogram is given by p(rk) = nk / n, for k = 0, 1, ..., L−1.

Thus, p(rk) gives an estimate of the probability of occurrence of gray level rk. Note that the sum of all components of a normalized histogram is equal to 1.
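A normalized histogram as defined above can be computed in a few lines of base MATLAB (a sketch; 'eight.tif' is the toolbox sample image also used later in this unit):

I  = imread('eight.tif');                        % 8-bit grayscale image
nk = accumarray(double(I(:)) + 1, 1, [256 1]);   % nk = pixels at each level rk
p  = nk / numel(I);                              % p(rk) = nk / n
sum(p)                                           % equals 1, as stated above
bar(0:255, p)                                    % plot of the histogram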

Why Histogram?:

Histograms are the basis for numerous spatial domain processing techniques

Histogram manipulation can be used effectively for image enhancement

Histograms can be used to provide useful image statistics

Information derived from histograms is quite useful in other image processing applications, such as image compression and segmentation.

Histogram Equalization:

Transforms the intensity values so that the histogram of the output image approximately matches the flat (uniform) histogram.

For the discrete case the following formula applies:

s_k = T(r_k) = (L−1) · Σ (n_j / n), with the sum over j = 0, ..., k, for k = 0, 1, 2, ..., L−1

L: number of grey levels in the image (e.g., 256)

n_j: number of times the j-th grey level appears in the image

n: total number of pixels in the image

Assuming that r represents the intensity of an input image in the range [0, L−1] (black to white), we consider the intensity mapping

s = T(r), 0 ≤ r ≤ L−1,

such that:

a) T(r) is monotonically increasing in [0, L−1], to prevent intensity-reversal artifacts;

b) T(r) lies in [0, L−1] for r in [0, L−1], to ensure that the range of the output intensities is the same as that of the input intensities.

The image intensity levels can be viewed as random variables in [0, L−1]. Let p_r(r) and p_s(s) represent the pdfs of r and s. We need a transformation that produces an output image with uniform p_s(s) for an input image with an arbitrary p_r(r).

Note that histograms are approximations of pdfs! The desired transformation is

s = T(r) = (L−1) · ∫ p_r(w) dw, integrated from 0 to r.

It can be shown that this transformation leads to the random variable s being uniformly distributed in [0, L−1].

The discrete form of the equalizing transformation is

s_k = T(r_k) = (L−1) · Σ p_r(r_j), with the sum over j = 0, ..., k, for k = 0, 1, ..., L−1.

This mapping is called histogram equalization or histogram linearization. It is not necessarily a one-to-one mapping.

A simple 3-bit (L = 8) 64-by-64 image (MN = 4096 pixels) has the intensity distribution below; intensity levels are integers in [0, 7]:

r0 = 0: n0 = 790, p_r(r0) = 0.19
r1 = 1: n1 = 1023, p_r(r1) = 0.25
r2 = 2: n2 = 850, p_r(r2) = 0.21
r3 = 3: n3 = 656, p_r(r3) = 0.16
r4 = 4: n4 = 329, p_r(r4) = 0.08
r5 = 5: n5 = 245, p_r(r5) = 0.06
r6 = 6: n6 = 122, p_r(r6) = 0.03
r7 = 7: n7 = 81, p_r(r7) = 0.02

Values of the histogram equalization transform function are

s0 = T(r0) = 7 · p_r(r0) = 7 × 0.19 = 1.33

Similarly:

s1 = T(r1) = 7 × (0.19 + 0.25) = 3.08

and s2 = 4.55, s3 = 5.67, s4 = 6.23, s5 = 6.65, s6 = 6.86, s7 = 7.00.

We notice that the transformation function has fractional values that need to be rounded to obtain a quantized histogram; the rounded levels are 1, 3, 5, 6, 6, 7, 7, and 7.

Dividing the resulting counts by MN, we obtain the new histogram. No new intensity levels are allowed; therefore, perfectly uniform results cannot be achieved for digital images.
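The numbers in this example can be reproduced with a short script; the cumulative sum implements the discrete transformation (values taken from the table above):

p  = [0.19 0.25 0.21 0.16 0.08 0.06 0.03 0.02];  % p_r(r_k) for k = 0..7
L  = 8;                                          % 3-bit image
s  = (L - 1) * cumsum(p)    % 1.33 3.08 4.55 5.67 6.23 6.65 6.86 7.00
sq = round(s)               % quantized levels: 1 3 5 6 6 7 7 7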

Example of Histograms:

As an introduction to the role of histogram processing in image enhancement, consider the figure below, which shows images with four basic gray-level characteristics: dark, light, low contrast, and high contrast.

The right side of the figure shows the histograms corresponding to these images.

The horizontal axis of each histogram plot corresponds to gray level values, rk.

The vertical axis corresponds to values of h(rk)=nk or p(rk)=nk/n if the values are normalized.

Thus, as indicated previously, these histogram plots are simply plots of h(rk)=nk versus rk or p(rk)=nk/n versus rk.


We note in the dark image that the components of the histogram are concentrated on the low (dark) side of the gray scale. Similarly, the components of the histogram of the bright image are biased toward the high side of the gray scale. An image with low contrast has a histogram that is narrow and centered toward the middle of the gray scale; for a monochrome image this implies a dull, washed-out gray look. Finally, we see that the components of the histogram in the high-contrast image cover a broad range of the gray scale and, further, that the distribution of pixels is not too far from uniform, with very few vertical lines being much higher than the others. Intuitively, it is reasonable to conclude that an image whose pixels tend to occupy the entire range of possible gray levels and, in addition, tend to be distributed uniformly, will have an appearance of high contrast and will exhibit a large variety of gray tones.

Histogram matching (specification):

Histogram equalization automatically determines a transformation function seeking to produce an output image with a uniform histogram.

Another method is to generate an image having a specified histogram; this is called histogram matching:

1. Find the histogram p_r(r) of the input image and determine its equalization transformation:

s = T(r) = (L−1) · ∫ p_r(w) dw, integrated from 0 to r.

2. Use the specified pdf p_z(z) of the output image to obtain the transformation function:

G(z) = (L−1) · ∫ p_z(w) dw, integrated from 0 to z, with G(z) = s.

3. Find the inverse transformation z = G⁻¹(s), the mapping from s to z.

4. Obtain the output image by equalizing the input image first; then, for each pixel in the equalized image, perform the inverse mapping to obtain the corresponding pixel of the output image.

A large concentration of pixel values near zero in the original image leads to unsatisfactory results from histogram equalization (a washed-out appearance).

Histogram matching may lead to much better results. However, this technique is a trial-and-error process; there are no step-by-step recipes.
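A minimal sketch of the four steps for 8-bit images is given below. The target pdf p_z is an assumption for illustration (uniform here); any nonnegative 256-bin vector summing to 1 can be substituted, and 'pout.tif' is a toolbox sample image.

I  = imread('pout.tif');                                   % input image
pr = accumarray(double(I(:)) + 1, 1, [256 1]) / numel(I);  % step 1: p_r(r)
T  = cumsum(pr);                                           % equalization transform, scaled to [0,1]
pz = ones(256, 1) / 256;                                   % step 2: specified pdf p_z(z) (assumed)
G  = cumsum(pz);                                           % its transform G(z)
map = zeros(256, 1);                                       % step 3: z = G^(-1)(s)
for r = 1:256
    map(r) = find(G >= T(r) - 1e-12, 1, 'first') - 1;      % smallest z with G(z) >= T(r)
end
J = uint8(map(double(I) + 1));                             % step 4: apply the mapping pixel-wise
figure, imshow(I), figure, imshow(J)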

Local histogram processing:

Sometimes it is desired to apply histogram processing to a portion (or portions) of an image; the transformation is then computed from the histogram of a neighborhood around each pixel rather than from the whole image.

Histogram statistics in image enhancement:

Let the intensity in an image be represented by a discrete random variable r in [0, L−1], and let p(r_i) be the normalized histogram estimate of the pdf of the intensity.

The nth statistical moment is

μ_n(r) = Σ (r_i − m)ⁿ p(r_i), with the sum over i = 0, ..., L−1, where m = Σ r_i p(r_i) is the mean intensity.

For image intensities, the sample mean is

m = (1/MN) Σ_x Σ_y f(x, y),

and the sample variance is

σ² = (1/MN) Σ_x Σ_y [f(x, y) − m]².
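These statistics can be computed either directly from the pixels or from the normalized histogram; a short sketch (using the toolbox sample image 'pout.tif') shows that the two routes agree:

I = im2double(imread('pout.tif'));          % intensities in [0, 1]
m_direct = mean(I(:));                      % sample mean
v_direct = var(I(:), 1);                    % sample variance (1/MN form)
[nk, edges] = histcounts(I(:), 256);        % 256-bin histogram
r = (edges(1:end-1) + edges(2:end)) / 2;    % bin centers r_i
p = nk / numel(I);                          % p(r_i)
m_hist = sum(r .* p);                       % mean from the histogram
v_hist = sum((r - m_hist).^2 .* p);         % second central moment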

Smoothing Spatial Filters

Smoothing filters are used for blurring and for noise reduction. Blurring is used in preprocessing steps, such as removal of small details from an image prior to (large) object extraction, and bridging of small gaps in lines or curves. Noise reduction can be accomplished by blurring with a linear filter and also by nonlinear filtering.

Smoothing reduces the effect of high-frequency variation in functional imaging data by blurring sharp edges.


Why do you want to do it?

Increases signal-to-noise ratio

Enables averaging across subjects

Allows use of Gaussian Field Theory for thresholding.

Increases signal-to-noise ratio:

Depends on relative size of smoothing kernel and effects to be detected

Matched filter theorem: smoothing kernel = expected signal

Practically, a rule of thumb is FWHM ≈ 3 × voxel size

May consider varying kernel size if interested in different brain regions, e.g. hippocampus vs. parietal cortex

Enables averaging across subjects:

Reduces influence of functional and/or anatomical differences between subjects

Even after realignment and normalisation, residual between-subject variability may remain

Smoothing data improves probability of identifying commonalities in activation between subjects, but trade-off with anatomical specificity

Allows use of Gaussian Field Theory for thresholding:

Assumes error terms are roughly Gaussian in form

Requires FWHM to be substantially greater than voxel size

Enables hypothesis testing and dealing with multiple comparison problem in functional imaging

How is it done?

Typically in functional imaging, a Gaussian smoothing kernel is used

Shape similar to normal distribution bell curve

Width is usually described using the full width at half maximum (FWHM) measure.

Smoothing Linear Filters:

The output (response) of a smoothing, linear spatial filter is simply the average of the pixels contained in the neighborhood of the filter mask. These filters sometimes are called averaging filters. The idea behind smoothing filters is straightforward. By replacing the value of every pixel in an image by the average of the gray levels in the neighborhood defined by the filter mask, this process results in an image with reduced sharp transitions in gray levels.

The figure shows two 3×3 smoothing filters. Use of the first filter yields the standard average of the pixels under the mask,

R = (1/9) · (z1 + z2 + ... + z9),

which is the average of the gray levels of the pixels in the 3×3 neighborhood defined by the mask. Note that, instead of being 1/9, the coefficients of the filter are all 1s; the idea is that it is computationally more efficient to have coefficients valued 1, and at the end of the filtering process the entire image is divided by 9. A spatial averaging filter in which all coefficients are equal is sometimes called a box filter. The second mask shown in Fig. 3.34 is a little more interesting: it yields a so-called weighted average, terminology used to indicate that pixels are multiplied by different coefficients, thus giving more importance (weight) to some pixels at the expense of others.

With reference to the general filtering equation given earlier, the implementation for filtering an M×N image with a weighted averaging filter of size m×n (m and n odd) is given by the expression

g(x, y) = [Σ_s Σ_t w(s, t) f(x+s, y+t)] / [Σ_s Σ_t w(s, t)],

with the sums running over s = −a, ..., a and t = −b, ..., b, where a = (m−1)/2 and b = (n−1)/2.
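Both masks can be tried directly. In this sketch, the weighted mask shown is the common 3×3 weighted-average mask in which the center pixel carries the most weight (an assumption standing in for the mask in Fig. 3.34):

f = im2double(imread('eight.tif'));
w_box = ones(3) / 9;                 % box filter: all coefficients equal
w_wt  = [1 2 1; 2 4 2; 1 2 1] / 16;  % weighted average, center weighted most
g_box = filter2(w_box, f);           % standard average
g_wt  = filter2(w_wt,  f);           % weighted average
figure, imshow(f), figure, imshow(g_box), figure, imshow(g_wt)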

Order-Statistics Filters:

Order-statistics filters are nonlinear spatial filters whose response is based on ordering (ranking) the pixels contained in the image area encompassed by

the filter, and then replacing the value of the center pixel with the value determined by the ranking result.The best-known example in this category is the median filter, which, as its name implies, replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel (the original value of the pixel is included in the computation of the median). Median filters are quite popular because, for certain types of random noise, they provide excellent noise-reduction capabilities, with considerably less blurring than linear smoothing filters of similar size. Median filters are particularly effective in the presence of impulse noise, also called salt-and-pepper noise because of its appearance as white and black dots superimposed on an image.

Figure (a) shows an X-ray image of a circuit board heavily corrupted by salt-and-pepper noise. To illustrate the superiority of median filtering over average filtering in situations such as this, Fig. (b) shows the result of processing the noisy image with a 3×3 neighborhood averaging mask, and Fig. (c) the result of using a 3×3 median filter. The image processed with the averaging filter has less visible noise, but the price paid is significant blurring. The superiority in all respects of median over average filtering in this case is quite evident. In general, median filtering is much better suited than averaging for the removal of salt-and-pepper noise.

Image Sharpening:

Human perception is highly sensitive to edges and fine details of an image, and since these are composed primarily of high-frequency components, the visual quality of an image can be enormously degraded if the high frequencies are attenuated or completely removed. In contrast, enhancing the high-frequency components of an image leads to an improvement in visual quality. Image sharpening refers to any enhancement technique that highlights the edges and fine details in an image. It is widely used in the printing and photographic industries for increasing local contrast and sharpening images.

In principle, image sharpening consists of adding to the original image a signal that is proportional to a high-pass filtered version of the original image. Figure (1.1) illustrates this procedure, often referred to as unsharp masking, on a one-dimensional signal. As shown in Fig. (1.1), the original image is first filtered by a high-pass filter that extracts the high-frequency components, and then a scaled version of the high-pass filter output is added to the original image, producing a sharpened image of the original. Note that the homogeneous regions of the signal, i.e., where the signal is constant, remain unchanged. The sharpening operation can be represented by

s(i, j) = x(i, j) + λ · F(x(i, j)),

where x(i, j) is the original pixel value at coordinate (i, j), F(·) is the high-pass filter, λ is a tuning parameter greater than or equal to zero, and s(i, j) is the sharpened pixel at coordinate (i, j). The value taken by λ depends on the grade of sharpness desired: increasing λ yields a more sharpened image.

If color images are used, x and s are three-component vectors, whereas if gray-scale images are used they are single-component values. Thus the process described here can be applied to either gray-scale or color images, the only difference being that vector filters have to be used in sharpening color images, whereas single-component filters are used with gray-scale images.
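A minimal gray-scale sketch of the basic unsharp-masking structure follows; it uses a simple linear high-pass (the original minus a 3×3 box-filtered version) rather than the weighted-median filter discussed next, and 'moon.tif' is a toolbox sample image:

f = im2double(imread('moon.tif'));
lambda = 1.0;                        % tuning parameter, lambda >= 0
lowpass  = filter2(ones(3)/9, f);    % smoothed version of the image
highpass = f - lowpass;              % high-frequency components F(x)
s = f + lambda * highpass;           % s = x + lambda * F(x)
s = min(max(s, 0), 1);               % clip to the valid intensity range
figure, imshow(f), figure, imshow(s)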

The key point in an effective sharpening process lies in the choice of the high-pass filtering operation. Traditionally, linear filters have been used to implement the high-pass filter; however, linear techniques can lead to unacceptable results if the original image is corrupted with noise. A tradeoff between noise attenuation and edge highlighting can be obtained if a weighted median (WM) filter with appropriate weights is used. To illustrate this, consider a WM filter applied to a gray-scale image, where the following filter mask is used.


Figure 1.1

Because of the weight coefficients in the equation, for each position of the moving window, the output is proportional to the difference between the center pixel and the smallest pixel around the center pixel. Thus, the filter output takes relatively large values for prominent edges in an image, and small values in regions that are fairly smooth, being zero only in regions that have a constant gray level.

Although this filter can effectively extract the edges contained in an image, the effect that this filtering operation has on negative-slope edges is different from that obtained for positive-slope edges.

Note: A change from a gray level to a lower gray level is referred to as a negative-slope edge, whereas a change from a gray level to a higher gray level is referred to as a positive-slope edge.

Since the filter output is proportional to the difference between the center pixel and the smallest pixel around the center, for negative-slope edges the filter produces only small values at the output. Moreover, the filter output is zero if the smallest pixel around the center pixel and the center pixel have the same value.

This implies that negative-slope edges are not extracted in the same way as positive-slope edges. To overcome this limitation, the basic image sharpening structure shown in Figure (1.1) must be modified such that positive-slope edges as well as negative-slope edges are highlighted in the same proportion. A simple way to accomplish this is to: (a) extract the positive-slope edges by filtering the original image with the filter described above; (b) extract the negative-slope edges by first preprocessing the original image so that its negative-slope edges become positive-slope edges, and then filtering the preprocessed image with the same filter; and (c) combine appropriately the original image, the filtered version of the original image, and the filtered version of the preprocessed image to form the sharpened image.

Figure 1.2

Thus both positive-slope and negative-slope edges are equally highlighted. This procedure is illustrated in Figure (1.2), where the top branch extracts the positive-slope edges and the middle branch extracts the negative-slope edges. In order to understand the effects of edge sharpening, a row of a test image is plotted in Figure (1.3) together with rows of the sharpened image when only the positive-slope edges are highlighted, when only the negative-slope edges are highlighted, and when both positive-slope and negative-slope edges are jointly highlighted.

In Figure (1.2), λ1 and λ2 are tuning parameters that control the amount of sharpness desired in the positive-slope direction and in the negative-slope direction, respectively. The values of λ1 and λ2 are generally selected to be equal. The output of the prefiltering operation is defined as

x'(i, j) = M − x(i, j),

with M equal to the maximum pixel value of the original image. This prefiltering operation can be thought of as a flipping and shifting operation on the values of the original image, such that the negative-slope edges are converted into positive-slope edges. Since the original image and the prefiltered image are filtered by the same WM filter, the positive-slope edges and negative-slope edges are sharpened in the same way.

Figure 1.3

Fuzzy Techniques for spatial filtering:

Fuzzy Techniques for Intensity Transformations and Spatial Filtering

Contents

Topics covered:

General principles of fuzzy set theory

Intensity transformations using fuzzy sets

Spatial filtering using fuzzy sets

Introduction

Fuzzy sets provide a framework to incorporate human logic in problems with imprecise concepts.

Set membership: crisp sets have a membership function that assigns values of 0 or 1 (the element either belongs to the set or it does not).

Fuzzy sets: the membership function has a gradual transition between 0 and 1 (the element has a degree of membership).

Introduction (cont.)

Example: let Z be the set of all people, and suppose we want to define a subset A, the set of young people.

Crisp set Fuzzy set

We may make statements such as: young, relatively young, not so young...

It is not a probability!

Principles of fuzzy set theory

Let Z = {z} be a set of elements, with a generic element denoted by z.

A fuzzy set A in Z is characterized by a membership function μ_A(z) that associates with each element of Z a real number in [0, 1], the grade of membership.

A fuzzy set is an ordered pair:

A = { (z, μ_A(z)) | z ∈ Z }

Principles of fuzzy set theory (cont.)

Empty fuzzy set: A is empty if and only if μ_A(z) = 0 for all z ∈ Z.

Equality: A = B if and only if μ_A(z) = μ_B(z) for all z ∈ Z.

Complement (NOT): μ_NOT A(z) = 1 − μ_A(z).

Subset: A ⊆ B if and only if μ_A(z) ≤ μ_B(z) for all z ∈ Z.

Union (OR): U = A ∪ B: μ_U(z) = max[μ_A(z), μ_B(z)].

Intersection (AND): I = A ∩ B: μ_I(z) = min[μ_A(z), μ_B(z)].
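These definitions translate directly into max/min operations on sampled membership functions; the shapes in this sketch are assumptions chosen for illustration:

z   = 0:100;                              % universe, e.g. age in years
muA = max(0, min(1, (40 - z) / 20));      % "young": 1 up to 20, 0 beyond 40
muB = max(0, min(1, (z - 30) / 30));      % a second fuzzy set, for contrast
muNotA = 1 - muA;                         % complement (NOT)
muU = max(muA, muB);                      % union (OR)
muI = min(muA, muB);                      % intersection (AND)
plot(z, muA, z, muB, z, muU, z, muI)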

Common membership functions

Using fuzzy sets

Example: Use colour to categorize fruit into three groups: verdant, half-mature and mature.

Observations at various stages of maturity led to the conclusions:

A verdant fruit is green

A half mature fruit is yellow

A mature fruit is red.

The colour is a vague description and has to be expressed in fuzzy format.

A linguistic variable (colour) with a linguistic value (e.g., red) is fuzzified through the membership function.

Using fuzzy sets (cont.)

The problem-specific knowledge may be formalized in the form of fuzzy IF-THEN rules:

R1: IF the colour is green, THEN the fruit is verdant. OR

R2: IF the colour is yellow, THEN the fruit is half-mature. OR

R3: IF the colour is red, THEN the fruit is mature.

The next step is to perform inference or implication, that is, to use the inputs and the knowledge (the IF-THEN rules) to obtain the output.

As the input is fuzzy, the output (maturity) is, in general, also fuzzy.

Using fuzzy sets (cont.)

For the sake of clarity, let's see it through R3: IF the colour is red, THEN the fruit is mature. "Red AND mature" is the intersection (AND) of the membership functions μ_red(z) and μ_mat(v).

Notice that the independent variables are different (z and v), so the result will be two-dimensional (2-D).

The intersection corresponds to the minimum:

μ3(z, v) = min{ μ_red(z), μ_mat(v) }


In general, we are interested in a specific input (e.g., a colour value z0). To find the output variable, we perform the AND operation between μ_red(z0) = c and the general 2-D result μ3(z, v):

Q3(v) = min{ μ_red(z0), μ3(z0, v) }

Equivalently, for the other rules:

Q2(v) = min{ μ_yellow(z0), μ2(z0, v) }

Q1(v) = min{ μ_green(z0), μ1(z0, v) }

The complete fuzzy output is given by

Q = Q1 OR Q2 OR Q3,

which is the union (OR) of the three individual fuzzy sets. Because OR is defined as the max operator:

Q(v) = max over r of { min{ μ_s(z0), μ_r(z0, v) } }, with r = {1, 2, 3} and s = {green, yellow, red}.

Figure: the membership functions evaluated at the input colour z0; the individual output for each rule (the clipped cross-sections discussed previously); and the union of the outputs.

Using fuzzy sets (cont.)

We have the complete output corresponding to a specific input (colour z0).

To obtain a crisp value for the maturity of that colour (defuzzification), one way is to compute the center of gravity:

v0 = [ Σ_v v · Q(v) ] / [ Σ_v Q(v) ]

We may combine more than one input.

Contrast enhancement using fuzzy sets

The problem may be stated using the following rules:

IF a pixel is dark, THEN make it darker.

IF a pixel is gray, THEN make it gray.

IF a pixel is bright, THEN make it brighter.

Both the input and the output are fuzzy terms. Because we are dealing with constant (crisp) outputs in this example, the defuzzification expression simplifies to:

v0 = [ μ_dark(z0) · v_d + μ_gray(z0) · v_g + μ_bright(z0) · v_b ] / [ μ_dark(z0) + μ_gray(z0) + μ_bright(z0) ]

Notice the difference in the hair and forehead with respect to histogram equalization.

The histogram expanded, but its main characteristics were kept, contrary to histogram equalization.
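The rule-based enhancement above fits in a few lines once membership functions are chosen. The triangular shapes and the crisp outputs v_d = 0, v_g = 127, v_b = 255 in this sketch are assumptions chosen for illustration:

I = double(imread('pout.tif'));              % a low-contrast sample image
z = I;                                       % input intensities, 0..255
mu_dark   = max(0, min(1, (127 - z) / 127)); % 1 at black, 0 from mid-gray up
mu_bright = max(0, min(1, (z - 128) / 127)); % 0 up to mid-gray, 1 at white
mu_gray   = max(0, 1 - abs(z - 127) / 64);   % peaks at mid-gray
vd = 0; vg = 127; vb = 255;                  % crisp outputs of the three rules
v0 = (mu_dark*vd + mu_gray*vg + mu_bright*vb) ./ ...
     (mu_dark + mu_gray + mu_bright + eps);  % weighted-average defuzzification
figure, imshow(uint8(I)), figure, imshow(uint8(v0))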

Spatial filtering using fuzzy sets

A boundary extraction algorithm may have the rules:

IF a pixel belongs to a uniform region, THEN make it white; ELSE make it black.

Uniform region, black, and white are fuzzy sets, and we have to define their membership functions.

A simple set of rules (d2, d4, d6, and d8 denote the intensity differences between the four 4-neighbors of the center pixel z5 and z5 itself):

IF d2 is zero AND d6 is zero THEN z5 = white

IF d6 is zero AND d8 is zero THEN z5 = white

IF d8 is zero AND d4 is zero THEN z5 = white

IF d4 is zero AND d2 is zero THEN z5 = white

ELSE z5 = black

Figure: membership functions (for the input: zero; for the outputs: black and white) and the fuzzy rules.

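A sketch of these boundary-extraction rules is given below. The triangular "zero" membership and the crisp white/black outputs (255 and 0) are assumptions; 'coins.png' is a toolbox sample image:

I = double(imread('coins.png'));
[M, N] = size(I);
J = zeros(M, N);
ZE = @(d) max(0, 1 - abs(d) / 64);      % membership of "difference is zero"
for x = 2:M-1
    for y = 2:N-1
        d2 = I(x-1, y) - I(x, y);       % difference with the neighbor above
        d4 = I(x, y-1) - I(x, y);       % left neighbor
        d6 = I(x, y+1) - I(x, y);       % right neighbor
        d8 = I(x+1, y) - I(x, y);       % neighbor below
        r1 = min(ZE(d2), ZE(d6));       % the four IF ... AND ... THEN white rules
        r2 = min(ZE(d6), ZE(d8));
        r3 = min(ZE(d8), ZE(d4));
        r4 = min(ZE(d4), ZE(d2));
        w  = max([r1 r2 r3 r4]);        % OR of the rules -> degree of "white"
        b  = 1 - w;                     % ELSE -> degree of "black"
        J(x, y) = (w*255 + b*0) / (w + b);  % weighted-average defuzzification
    end
end
figure, imshow(uint8(J))                % uniform regions white, boundaries dark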

Spatial filters for noise removal:

Understanding Sources of Noise in Digital Images

Digital images are prone to a variety of types of noise. Noise is the result of errors in the image acquisition process that result in pixel values that do not reflect the true intensities of the real scene. There are several ways that noise can be introduced into an image, depending on how the image is created. For example:

If the image is scanned from a photograph made on film, the film grain is a source of noise. Noise can also be the result of damage to the film, or be introduced by the scanner itself.

If the image is acquired directly in a digital format, the mechanism for gathering the data (such as a CCD detector) can introduce noise.

Electronic transmission of image data can introduce noise.

To simulate the effects of some of the problems listed above, the Image Processing Toolbox provides the imnoise function, which you can use to add various types of noise to an image. The examples in this section use this function.

Removing Noise By Linear Filtering

You can use linear filtering to remove certain types of noise. Certain filters, such as averaging or Gaussian filters, are appropriate for this purpose. For example, an averaging filter is useful for removing grain noise from a photograph. Because each pixel gets set to the average of the pixels in its neighborhood, local variations caused by grain are reduced.

See Designing and Implementing Linear Filters in the Spatial Domain for more information about linear filtering using imfilter.

Removing Noise By Median Filtering

Median filtering is similar to using an averaging filter, in that each output pixel is set based on the pixel values in the neighborhood of the corresponding input pixel. However, with median filtering, the value of an output pixel is determined by the median of the neighborhood pixels, rather than the mean. The median is much less sensitive than the mean to extreme values (called outliers). Median filtering is therefore better able to remove these outliers without reducing the sharpness of the image. The medfilt2 function implements median filtering.

Note: Median filtering is a specific case of order-statistic filtering, also known as rank filtering. For information about order-statistic filtering, see the reference page for the ordfilt2 function.

The following example compares using an averaging filter and medfilt2 to remove salt and pepper noise. This type of noise consists of random pixels being set to black or white (the extremes of the data range). In both cases the size of the neighborhood used for filtering is 3-by-3.

1. Read in the image and display it.

I = imread('eight.tif');
imshow(I)

2. Add noise to it.

J = imnoise(I,'salt & pepper',0.02);
figure, imshow(J)

3. Filter the noisy image with an averaging filter and display the results.

K = filter2(fspecial('average',3),J)/255;
figure, imshow(K)

4. Now use a median filter to filter the noisy image and display the results. Notice that medfilt2 does a better job of removing noise, with less blurring of edges.

L = medfilt2(J,[3 3]);
figure, imshow(L)

Removing Noise By Adaptive Filtering

The wiener2 function applies a Wiener filter (a type of linear filter) to an image adaptively, tailoring itself to the local image variance. Where the variance is large, wiener2 performs little smoothing. Where the variance is small, wiener2 performs more smoothing.

This approach often produces better results than linear filtering. The adaptive filter is more selective than a comparable linear filter, preserving edges and other high-frequency parts of an image. In addition, there are no design tasks; the wiener2 function handles all preliminary computations and implements the filter for an input image. wiener2, however, does require more computation time than linear filtering.

wiener2 works best when the noise is constant-power ("white") additive noise, such as Gaussian noise. The example below applies wiener2 to an image of Saturn that has had Gaussian noise added.

1. Read in an image. Because the image is a truecolor image, the example converts it to grayscale.

RGB = imread('saturn.png');
I = rgb2gray(RGB);

2. Add Gaussian noise to the image and then display it. Because the image is quite large, the figure shows only a portion of it.

J = imnoise(I,'gaussian',0,0.025);
imshow(J)

Portion of the Image with Added Gaussian Noise

3. Remove the noise using the wiener2 function. Again, the figure shows only a portion of the image.

K = wiener2(J,[5 5]);
figure, imshow(K)

Portion of the Image with Noise Removed by Wiener Filter
