A NEW EFFICIENT ALGORITHM FOR REMOVAL OF HIGH DENSITY SALT AND
PEPPER NOISE THROUGH MODIFIED DECISION BASED UNSYMMETRIC TRIMMED
MEDIAN FILTER FOR VIDEO RESTORATION

A Project Report Submitted in partial fulfillment of the
requirement for the award of the Degree of
Master of Technology
in
Communication Engineering & Signal Processing
By
SIDDABATHUNI RAMATULASI (Y11MTEC813)
Under the guidance of
P. SIVA PRASAD, M.Tech
Assistant Professor

Department of Electronics & Communication Engineering
R.V.R. & J.C. COLLEGE OF ENGINEERING (AUTONOMOUS)
(Approved by A.I.C.T.E)
(Affiliated to Acharya Nagarjuna University)
Chandramoulipuram, GUNTUR 522 019, Andhra Pradesh, INDIA
[2011-2013]

Department of Electronics & Communication Engineering
CERTIFICATE
This is to certify that the project report entitled A NEW
EFFICIENT ALGORITHM FOR REMOVAL OF HIGH DENSITY SALT AND PEPPER
NOISE THROUGH MODIFIED DECISION BASED UNSYMMETRIC TRIMMED MEDIAN
FILTER FOR VIDEO RESTORATION, which is being submitted by
Siddabathuni Ramatulasi (Y11MTEC813) in partial fulfillment for the
award of the Degree of Master of Technology in Communication
Engineering & Signal Processing to Acharya Nagarjuna University, is
a record of bona fide work carried out by her under my guidance and
supervision. The results embodied in this project report have not
been submitted to any other University or Institute for the award
of any degree or diploma.
Date:
Signature of Guide                    Signature of HOD
P. SIVA PRASAD, M.Tech                Dr. V.V.K.D.V. PRASAD, Ph.D
Assistant Professor                   Professor & Head
ACKNOWLEDGEMENT

I would like to express my sincere gratitude to my project guide,
P. SIVA PRASAD, for his advice, encouragement, his help in solving
many practical problems, and his patience in answering my many
questions. His deep understanding and immense knowledge helped me
solve many difficult problems. He guided me throughout this
project, even during his extremely busy schedule. I am also
grateful to my Head of Department, Dr. V.V.K.D.V. PRASAD, for his
support and supervision. I also sincerely thank Dr. A. SUDHAKAR,
Principal, for providing all the facilities needed to complete the
project successfully.

This master's thesis project has been a very valuable learning
experience. It has given me the chance to learn better ways of
achieving goals from more experienced personnel. Above all, the
most important asset I have taken from this experience is the
willingness to learn. The working atmosphere, and especially the
kind people of this department, encouraged me in my work, and I
would like to thank them all for their hospitality. I would like to
thank all the lecturers and technical staff of the Electronics and
Communications Department for their warm-hearted support during
difficult times. Last but not least, I extend my deepest love to my
parents, who have always motivated me to continue my higher
studies.
TABLE OF CONTENTS

Abstract
List of Figures
List of Symbols
List of Abbreviations

Chapter 1  INTRODUCTION
    1.1  Introduction
    1.2  Literature Survey

Chapter 2  DIGITAL IMAGE PROCESSING
    2.1  Introduction
    2.2  Digital Image
         2.2.1  Types of Digital Images
                2.2.1.1  Black and White Images
                2.2.1.2  Color Images
    2.3  Image File Sizes
    2.4  Image File Formats
    2.5  Digital Image Processing
    2.6  Advantages of Digital Image Processing
    2.7  Disadvantages of Digital Image Processing
    2.8  Fundamental Steps in Digital Image Processing
         2.8.1  Image Acquisition
         2.8.2  Image Enhancement
         2.8.3  Image Restoration
    2.9  Color Image Processing
         2.9.1  Segmentation
         2.9.2  Image Compression
    2.10 Project Description

Chapter 3  DIGITAL VIDEO
    3.1  Introduction
    3.2  History
    3.3  Advantages of Digital Video
    3.4  Frame Separation
         3.4.1  Number of Frames per Second
         3.4.2  Interlaced vs Progressive
    3.5  Video Quality

Chapter 4  NOISE
    4.1  Introduction
    4.2  Noise
    4.3  Types of Noise
         4.3.1  Shot Noise
         4.3.2  Gaussian Noise
         4.3.3  White Noise
         4.3.4  Additive White Gaussian Noise
         4.3.5  Poisson Noise
         4.3.6  Speckle Noise
         4.3.7  Salt and Pepper Noise
    4.4  Noise Generation
         4.4.1  Signal to Noise Ratio
         4.4.2  Peak Signal to Noise Ratio
    4.5  Detector Noise
    4.6  Crimmins Speckle Removal

Chapter 5  IMAGE RESTORATION
    5.1  Introduction
    5.2  Restoration
    5.3  Requirements for Restoration
    5.4  Degradation
         5.4.1  Image Degradation
         5.4.2  Image Degradation for Blur Parameters
    5.5  Thresholding

Chapter 6  REMOVAL OF HIGH DENSITY SALT AND PEPPER NOISE USING
           DIFFERENT FILTERS
    6.1  Introduction
    6.2  Salt and Pepper Noise
    6.3  Disadvantages of Salt and Pepper Noise
    6.4  Mean Filter
    6.5  Median Filter
         6.5.1  Advantages of the Median Filter
         6.5.2  Disadvantages of the Median Filter
         6.5.3  Comparison between the Median Filter and the
                Average Filter
    6.6  Removal of Salt and Pepper Noise using Different Filters
         6.6.1  Standard Median Filter
         6.6.2  Adaptive Median Filter
         6.6.3  Tolerance Based Switched Median Filter
         6.6.4  Decision Based Algorithm
         6.6.5  Unsymmetric Trimmed Median Filter
         6.6.6  Decision Based Unsymmetric Trimmed Median Filter
         6.6.7  Modified Decision Based Unsymmetric Trimmed Median
                Filter

Chapter 7  APPLICATIONS
    7.1  Photoshop
    7.2  Satellites
    7.3  Medical Line Process

Chapter 8  SIMULATION RESULTS

Chapter 9  CONCLUSIONS

BIBLIOGRAPHY
ABSTRACT
It is important to remove or minimize the degradations and noise
in valuable, ancient, blurred color images. Traditional filtering
methodologies are applicable only for fixed window dimensions and
are not applicable to images of varying scale. In this project we
propose a new technique for digital image restoration in which the
noise-free and noisy pixels are classified based on empirical
multiple threshold values, and the median filtering technique is
then applied, so that noise-free pixels are preserved and only
noisy pixels are restored. In this project, a novel decision-based
filter, called the multiple thresholds switching (MTS) filter, is
proposed to restore images corrupted by salt-and-pepper impulse
noise. The filter is based on a detection-estimation strategy. The
impulse detection algorithm is applied before the filtering
process, and therefore only the noise-corrupted pixels are replaced
with the estimated central noise-free ordered mean value in the
current filter window. The new impulse detector, which uses
multiple thresholds with multiple neighborhood information of the
signal in the filter window, is very precise, while avoiding an
undue increase in computational complexity. For impulse noise
suppression without smearing fine details and edges in the image,
extensive experimental results demonstrate that our scheme performs
significantly better than many existing, well-accepted
decision-based methods. The performance of the proposed algorithm
is analyzed based on PSNR and MSE values.
LIST OF FIGURES
Figure 2.1   Normal and pseudo-photograph images
Figure 2.2   Pixel values of an image
Figure 2.3   Colour values from a 32-bit table
Figure 2.4   Metafile format images
Figure 2.5   Digital image processing block diagram
Figure 2.6   Digital camera
Figure 2.7   Scanners
Figure 2.8   Enhancement image
Figure 2.9   Image restoration
Figure 2.10  Black and white to colour image
Figure 2.11  Image segmentation
Figure 2.12  Block diagram of proposed scheme
Figure 3.1   Example of frame separation
Figure 4.1   1D Gaussian distribution with mean 0 and standard
             deviation 1
Figure 5.1   Blur length
Figure 5.2   Blur angle
Figure 6.1   3x3 averaging kernel often used in mean filtering
Figure 6.2   Example of median filtering
Figure 6.3   Calculating the median value of a pixel neighborhood
Figure 6.4   The original image and the same image after median
             filtering
Figure 6.5   Comparison of the nonlinear median filter and the
             linear mean filter
Figure 6.6   Standard median filter
Figure 6.7   Adaptive median filter
Figure 6.8   Tolerance based switched median filter
Figure 6.9   Decision based algorithm
Figure 6.10  Modified decision based unsymmetric trimmed median
             filter
Figure 8.1   Project execution, first step
Figure 8.2   Selecting the video sample
Figure 8.3   Browsing the video sample
Figure 8.4   Frame separation
Figure 8.5   Frame selection
Figure 8.6   Selecting the frame
Figure 8.7   Generating the noise at the first level
Figure 8.8   Generating the noise at the second level
Figure 8.9   Generating the noise at the third level
Figure 8.10  Generating the noise at the fourth level
Figure 8.11  Generating the noise at the fifth level
Figure 8.12  Output of the standard median filter
Figure 8.13  Output of the adaptive median filter
Figure 8.14  Output of the tolerance based switched median filter
Figure 8.15  Output of the decision based algorithm
Figure 8.16  Output values of the existing methods
Figure 8.17  Output of the decision based unsymmetric trimmed
             median filter
Figure 8.18  Output of the decision based unsymmetric trimmed
             median filter for video restoration
Figure 8.19  Output value of the modified method
LIST OF SYMBOLS
σ²   Noise variance
α    Alpha
K    Noisy approximation
I    Image
LIST OF ABBREVIATIONS
CWM      Center Weighted Median Filter
PSM      Progressive Switching Median Filter
PNG      Portable Network Graphics
JPEG     Joint Photographic Experts Group
GIF      Graphics Interchange Format
TV       Television
CT       Computed Tomography
MRI      Magnetic Resonance Imaging
PSNR     Peak Signal to Noise Ratio
IEF      Image Enhancement Factor
MSE      Mean Square Error
FPS      Frames per Second
CD       Compact Disc
NLE      Non-linear Editing Workstation
HDTV     High Definition Television
MPEG     Moving Picture Experts Group
DVD      Digital Video Disc
CPU      Central Processing Unit
TBC      Time Base Corrector
NEC      Nippon Electric Corporation
DVE      Digital Video Effects
ADO      Ampex Digital Optics
VTR      Video Tape Recorder
EFP      Electronic Field Production
HDV      High Definition Video
PAL      Phase Alternating Line
SECAM    Sequentiel Couleur Avec Memoire
NTSC     National Television Standards Committee
CRT      Cathode Ray Tube
LCD      Liquid Crystal Display
AWGN     Additive White Gaussian Noise
SNR      Signal to Noise Ratio
PSF      Point Spread Function
DBA      Decision Based Algorithm
ATMF     Alpha Trimmed Midpoint Filter
SMF      Standard Median Filter
UTMF     Unsymmetric Trimmed Median Filter
AMF      Adaptive Median Filter
MDBA     Modified Decision Based Algorithm
MDBUTMF  Modified Decision Based Unsymmetric Trimmed Median Filter
FNRM     Fuzzy Noise Reduction Method
PSD      Photoshop Document
PSB      Photoshop Big
RAM      Random Access Memory
ROM      Read Only Memory
ATM      Adaptive Two-pass Median Filter
SVM      Support Vector Machines
CHAPTER-1 INTRODUCTION

1.1 Introduction:
In image processing it is usually necessary to perform a high
degree of noise reduction in an image before performing
higher-level processing steps, such as edge detection. The median
filter is a non-linear digital filtering technique, often used to
remove noise from images or other signals. The idea is to examine a
sample of the input and decide whether it is representative of the
signal. This is performed using a window consisting of an odd
number of samples. The values in the window are sorted into
numerical order, and the median value, the sample in the center of
the sorted window, is selected as the output. The oldest sample is
then discarded, a new sample acquired, and the calculation
repeated. Median filtering is a common step in image processing. It
is particularly useful for reducing speckle noise and salt and
pepper noise, and its edge-preserving nature makes it useful in
cases where edge blurring is undesirable.

Image synthesis is the process of creating new images from some
form of image description. The kinds of images that are typically
synthesized include test patterns (scenes with simple
two-dimensional geometric shapes), image noise (images containing
random pixel values, usually generated from specific parameterized
distributions), and computer graphics (scenes or images based on
geometric shape descriptions; often the models are
three-dimensional, but they may also be two-dimensional). Synthetic
images are often used to verify the correctness of operators by
applying them to known images. They are also often used for
teaching purposes, as the operator output on such images is
generally `clean', whereas noise and uncontrollable pixel
distributions in real images make it harder to demonstrate
unambiguous results. The images may be binary, grey level, or
color.

1.2 Literature Survey:
Median filters based on fuzzy rules and their application to image
restoration: A novel median-type filter
controlled by fuzzy rules is proposed in order to remove impulsive
controlled by fuzzy rules is proposed in order to remove impulsive
noises on signals such as images. Median filter is well known for
removing impulsive noises but this filter distorts the fine
structure of signals as well. The filter proposed here is obtained
as a weighted sum of the input signal and the output of the median
filter, and the weight is set based on fuzzy rules concerning the
states of the input signal sequence. Moreover, this weight is
obtained optimally by a learning method, so that the mean square
error of the filter output for some training signal data is
minimized. Some results of image processing show the high
performance of this filter, and the influences of the training
signal on the filter performance are also examined.

Selective Removal of Impulse Noise Based on Homogeneity Level
Information: We propose a decision-based, signal
adaptive median filtering algorithm for removal of impulse noise.
Our algorithm achieves accurate noise detection and high SNR
measures without smearing the fine details and edges in the image.
The notion of homogeneity level is defined for pixel values based
on their global and local statistical properties. The co-occurrence
matrix technique is used to represent the correlations between a
pixel and its neighbors, and to derive the upper and lower bound of
the homogeneity level. Noise detection is performed at two stages:
noise candidates are first selected using the homogeneity level,
and then a refining process follows to eliminate false detections.
The noise detection scheme does not use a quantitative decision
measure, but uses qualitative structural information, and it is not
subject to burdensome computations for optimization of the
threshold values. Empirical results indicate that our scheme
performs significantly better than other median filters, in terms
of noise suppression and detail preservation.

A new efficient approach for the removal of impulse noise from
highly corrupted images: In this paper, a novel adaptive filter,
called the adaptive
two-pass median (ATM) filter based on support vector machines
(SVMs), is proposed to preserve more image details while
effectively suppressing impulse noise for image restoration. The
proposed filter is composed of a noise decision-maker and two pass
median filters. Our new approach basically uses an SVM impulse
detector to judge whether the input pixel is noise or not. If a
pixel is detected as a corrupted pixel, the noise-free reduction
median filter will be triggered to replace it. Otherwise, it is
kept unchanged. Then, to improve the quality of the restored image, a
decision impulse filter is put to work in the second pass filtering
procedure. As for noise suppression on both fixed-valued and
random-valued impulses without degrading the quality of the fine
details, the results of our extensive experiments demonstrate that
the proposed filter outperforms earlier median-based filters in the
literature. In addition, our new filter also provides excellent
robustness at various percentages of impulse noise.

Application of partition-based median type filters for suppressing
noise in images:
An adaptive median based filter is proposed for removing noise
from images. Specifically, the observed sample vector at each pixel
location is classified into one of M mutually exclusive partitions,
each of which has a particular filtering operation. The observation
signal space is partitioned based on the differences defined
between the current pixel value and the outputs of CWM (center
weighted median) filters with variable center weights. The estimate
at each location is formed as a linear combination of the outputs
of those CWM filters and the current pixel value. To control the
dynamic range of filter outputs, a location-invariance constraint
is imposed upon each weighting vector. The weights are optimized
using the constrained LMS (least mean square) algorithm. Recursive
implementation of the new filter is then addressed. The new
technique consistently outperforms other median based filters in
suppressing both random-valued and fixed-valued impulses, and it
also works satisfactorily in reducing Gaussian noise as well as
mixed Gaussian and impulse noise.
A noise-filtering method using a local information measure: A
nonlinear noise-filtering method for image processing, based on the
entropy concept, is developed and compared to the well-known median
filter and to the center weighted median filter (CWM). The
performance of the proposed method is evaluated through subjective
and objective criteria. It is shown that this method performs
better than the classical median for different types of noise and
can perform better than the CWM filter in some cases.

Progressive Switching Median Filter for the Removal of Impulse
Noise from Highly Corrupted Images: A new median-based filter, the
progressive switching median (PSM) filter, is proposed to restore
images corrupted by salt-pepper impulse noise. The algorithm is
developed around two main points: (1) a switching scheme, in which
an impulse detection algorithm is used before filtering so that
only a proportion of all the pixels are filtered; and (2)
progressive methods, in which both the impulse detection and the
noise filtering procedures are progressively applied through
several iterations. Simulation results demonstrate that the
proposed algorithm is better than traditional median-based filters
and is particularly effective for cases where the images are very
highly corrupted.
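The sliding-window median operation described in Section 1.1 can be sketched in Python for a one-dimensional signal; the two-dimensional image case applies the same idea over a square pixel neighborhood. The function name and the replicated-border handling here are illustrative, not taken from the project code:

```python
def median_filter_1d(signal, window=3):
    """Slide an odd-length window over the signal and output the
    median of each window (borders handled by replicating the
    first and last samples)."""
    assert window % 2 == 1, "window length must be odd"
    half = window // 2
    # Replicate the border samples so every position has a full window.
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    out = []
    for i in range(len(signal)):
        neighborhood = sorted(padded[i:i + window])
        out.append(neighborhood[half])  # middle of the sorted window
    return out

# A lone impulse (the 90) is removed while the step edge is preserved.
print(median_filter_1d([10, 10, 90, 10, 10, 50, 50, 50]))
# -> [10, 10, 10, 10, 10, 50, 50, 50]
```

Note how the isolated impulse never reaches the output, while the 10-to-50 step edge is left exactly where it was; this is the edge-preserving behavior the text attributes to the median filter.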
CHAPTER-2 DIGITAL IMAGE PROCESSING
2.1 Introduction:
When using digital equipment to capture, store, modify and view
photographic images, the images must first be converted to a set of
numbers in a process called digitization or scanning. Computers are
very good at storing and manipulating numbers, so once your image
has been digitized you can use your computer to archive, examine,
alter, display, transmit, or print your photographs in an
incredible variety of ways.

2.2 Digital Image:
An image is a two-dimensional picture which has an appearance
similar to some subject, usually a physical object or a person. An
image may be two-dimensional, such as a photograph or a screen
display, or three-dimensional, such as a statue. Images may be
captured by optical devices such as cameras, mirrors, lenses,
telescopes, and microscopes, or by natural objects and phenomena,
such as the human eye or water surfaces.
Fig 2.1 Normal and pseudo-photograph images
The word image is also used in the broader sense of any
two-dimensional figure such as a map, a graph, a pie chart, or an
abstract painting. In this wider sense, images can also be rendered
manually, such as by drawing, painting, carving, rendered
automatically by printing or computer graphics technology, or
developed by a combination of methods, especially in a
pseudo-photograph. An image is a rectangular grid of pixels. It has
a definite height and a definite width counted in pixels. Each
pixel is square and has a fixed size on a given display. However,
different computer monitors may use different-sized pixels. The
pixels that constitute an image are ordered as a grid (columns and
rows); each pixel consists of numbers representing magnitudes of
brightness and color.
Fig 2.2 Pixel values of an image

Each pixel has a color. The color
is a 32-bit integer. The first eight bits determine the redness of
the pixel, the next eight bits the greenness, the next eight bits
the blueness, and the remaining eight bits the transparency of the
pixel.
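The 32-bit packing described above can be illustrated with a short Python sketch; it assumes the "first eight bits" are the most significant byte, and the function name is illustrative:

```python
def unpack_rgba(color):
    """Split a 32-bit integer into its four 8-bit channels, assuming
    the most significant byte holds redness, then greenness,
    blueness, and transparency."""
    red   = (color >> 24) & 0xFF
    green = (color >> 16) & 0xFF
    blue  = (color >> 8)  & 0xFF
    alpha = color & 0xFF
    return red, green, blue, alpha

print(unpack_rgba(0xFF8040C0))  # -> (255, 128, 64, 192)
```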
Fig 2.3 Colour values from a 32-bit table
2.2.1 Types of Digital Images:
For photographic purposes, there are two important types of digital
images: color and black and white. Color images are made up of
colored pixels, while black and white images are made of pixels in
different shades of gray.

2.2.1.1 Black and White Images:
A black and white image is made up of pixels, each of which holds a
single number corresponding to the gray level of the image at a
particular location. These gray levels span the full range from
black to white in a series of very fine steps, normally 256
different grays, which is sufficient since the eye can barely
distinguish about 200 different gray levels. Assuming 256 gray
levels, each black and white pixel can be stored in a single byte
(8 bits) of memory.

2.2.1.2 Color Images:
A color image is made up of pixels, each of which holds three
numbers corresponding to the red, green, and blue levels of the
image at a particular location. Red, green, and blue (sometimes
referred to as RGB) are the primary colors for mixing light; these
so-called additive primary colors are different from the
subtractive primary colors used for mixing paints (cyan, magenta,
and yellow). Any color can be created by mixing the correct amounts
of red, green, and blue light. Assuming 256 levels for each
primary, each color pixel can be stored in three bytes (24 bits) of
memory. This corresponds to roughly 16.7 million different possible
colors.
2.3 Image File Sizes:
Image file size is expressed as a number of bytes that increases
with the number of pixels composing an image and the color depth of
the pixels. The greater the number of rows and columns, the greater
the image resolution, and the larger the file. Also, each pixel of
an image increases in size when its color depth increases: an 8-bit
pixel (1 byte) stores 256 colors, while a 24-bit pixel (3 bytes)
stores 16 million colors, the latter known as true color.

Image compression uses algorithms to decrease the size of a file.
High resolution cameras produce large image files, ranging from
hundreds of kilobytes to megabytes, depending on the camera's
resolution and the image-storage format. High resolution digital
cameras record images of 12 megapixels (1 MP = 1,000,000 pixels) or
more in true color. For example, consider an image recorded by a
12 MP camera; since each pixel uses 3 bytes to record true color,
the uncompressed image would occupy 36,000,000 bytes of memory, a
great amount of digital storage for one image, given that cameras
must record and store many images to be practical. Faced with large
file sizes, both within the camera and on a storage disc, image
file formats were developed to store such large images.

2.4 Image File Formats:
Image file formats are standardized means of organizing and storing
images. This entry is about digital image formats used to store
photographic and other images. Image files are composed of either
pixel or vector (geometric) data that are rasterized to pixels when
displayed (with few exceptions) on a vector graphic display.
Including proprietary types, there are hundreds of image file
types. The PNG, JPEG, and GIF formats are most often used to
display images on the Internet.
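The uncompressed-size arithmetic from Section 2.3 can be checked with a short Python sketch (the helper name is illustrative):

```python
def uncompressed_size_bytes(megapixels, bytes_per_pixel=3):
    """Uncompressed image size: pixel count times bytes per pixel
    (3 bytes per pixel for 24-bit true color)."""
    return int(megapixels * 1_000_000 * bytes_per_pixel)

# The 12 MP true-color example from the text: 36,000,000 bytes.
print(uncompressed_size_bytes(12))  # -> 36000000
```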
Fig 2.4 Metafile format images

In addition to straight image formats, metafile formats are
portable formats which can include both raster and vector
information. The metafile format is an intermediate format. Most
Windows applications open metafiles and then save them in their own
native format.

2.5 Digital Image Processing:
The field of digital image processing refers to processing digital
images by means of a digital computer. Note that a digital image is
composed of a finite number of elements, each of which has a
particular location and value. These elements are referred to as
picture elements, image elements, pels, and pixels. Pixel is the
term most widely used to denote the elements of a digital image.
Vision is the most advanced of our senses, so it is not surprising
that images play the single most important role in human
perception. Digital image processing is a subcategory or field of
digital signal processing, and it has many advantages over analog
image processing. It allows a much wider range of algorithms to be
applied to the input data and can avoid problems such as the
build-up of noise and signal distortion during processing. Since
images are defined over two dimensions (perhaps more), digital
image processing may be modeled in the form of multidimensional
systems.

2.6 Advantages of Digital Image Processing:
The processing of images is faster and more cost-effective. One
needs less time for processing, as well as less film and other
photographing equipment. When shooting a digital image, one can
immediately see whether the image is good or not. By changing the
image format and resolution, the image can be used in a number of
media. With digital image processing, a digital image can be made
noise free. Digital imaging gives the operator the ability to
post-process the image, that is, to manipulate the pixel shades to
correct image density and contrast. Digital imaging allows the
electronic transmission of images to third-party providers.
Reproduction is faster and cheaper.

2.7 Disadvantages of Digital Image Processing:
The initial cost can be high depending on the system used. Misuse
of copyright is now easier than it was earlier; for instance,
images can be copied from the Internet just by clicking the mouse a
couple of times. Work has become more technical, which may not be a
disadvantage for everyone. If the computer crashes, pictures that
have not been printed and filed into albums are lost.
2.8 Fundamental Steps in Digital Image Processing:

Fig 2.5 Digital image processing block diagram

2.8.1 Image Acquisition:
Image acquisition is the process of acquiring a digital image.
Doing so requires an image sensor and the capability to digitize
the signal produced by the sensor. The sensor could be a monochrome
or color TV camera that produces an entire image of the problem
domain every 1/30 second. The image sensor could also be a
line-scan camera that produces a single image line at a time; in
this case, the object's motion past the line produces a
two-dimensional image.

Fig 2.6 Digital camera

If the output of the camera or other imaging sensor is not in
digital form, an analog-to-digital converter digitizes it. The
nature of the sensor and the image it produces are determined by
the application.

Fig 2.7 Scanners
2.8.2 Image Enhancement:
Image enhancement is among the simplest and most appealing areas of
digital image processing. Basically, the idea behind enhancement
techniques is to bring out detail that is obscured, or simply to
highlight certain features of interest in an image. A familiar
example of enhancement is increasing the contrast of an image
because it looks better. It is important to keep in mind that
enhancement is a very subjective area of image processing.
Fig 2.8 Enhancement Image
2.8.3 Image Restoration:
Image restoration is an area that also deals with improving the
appearance of an image. However, unlike enhancement, which is
subjective, image restoration is objective, in the sense that
restoration techniques tend to be based on mathematical or
probabilistic models of image degradation.

Fig 2.9 Image Restoration

Enhancement, on the other hand, is based on human subjective
preferences regarding what constitutes a good enhancement result.
For example, contrast stretching is considered an enhancement
technique because it is based primarily on the pleasing aspects it
might present to the viewer, whereas removal of image blur by
applying a deblurring function is considered a restoration
technique.

2.9 Colour Image Processing:
The use of colour in image
processing is motivated by two principal factors. First, colour is
a powerful descriptor that often simplifies object identification
and extraction from a scene. Second, humans can discern thousands
of color shades and intensities, compared to about only two dozen
shades of gray. This second factor is particularly important in
manual image analysis.
Fig 2.10 Black and white to colour image
2.9.1 Segmentation:
Segmentation procedures partition an image into its constituent
parts or objects. In general, autonomous segmentation is one of the
most difficult tasks in digital image processing. A rugged
segmentation procedure brings the process a long way toward
successful solution of imaging problems that require objects to be
identified individually.

Fig 2.11 Image Segmentation

On the other hand, weak or erratic segmentation algorithms almost
always guarantee eventual failure. In general, the more accurate
the segmentation, the more likely recognition is to succeed. A
digital image is defined as a two-dimensional function f(x, y),
where x and y are spatial (plane) coordinates, and the amplitude of
f at any pair of coordinates (x, y) is called the intensity or grey
level of the image at that point. The field of digital image
processing refers to processing digital images by means of a
digital computer. The digital image is composed of a finite number
of elements, each of which has a particular location and value. The
elements are referred to as picture elements, image elements, pels,
and pixels. Pixel is the term most widely used.

2.9.2 Image Compression:
Digital image compression addresses the problem of reducing the
amount of data required to represent a digital image. The
underlying basis of the reduction process is the removal of
redundant data. From the mathematical viewpoint, this amounts to
transforming a 2D pixel array into a statistically uncorrelated
data set. Data redundancy is not an abstract concept but a
mathematically quantifiable entity. If n1 and n2 denote the number
of information-carrying units in two data sets that represent the
same information, the relative data redundancy [2] of the first
data set (the one characterized by n1) can be defined as

    RD = 1 - 1/CR

where CR, called the compression ratio [2], is defined as

    CR = n1/n2

In image compression, three basic data redundancies can be
identified and exploited: coding redundancy, interpixel redundancy,
and psychovisual redundancy. Image compression is achieved when one
or more of these redundancies are reduced or eliminated. Image
compression is mainly used for image transmission and storage.
Image transmission applications include broadcast television;
remote sensing via satellite, aircraft, radar, or sonar;
teleconferencing; computer communications; and facsimile
transmission. Image storage is required most commonly for
educational and business documents; medical images that arise in
computed tomography (CT), magnetic resonance imaging (MRI), and
digital radiology; motion pictures; satellite images; weather maps;
geological surveys; and so on.

2.10 Project Description:
According to
recent literature, a modified decision based unsymmetric trimmed
median filter has been introduced for the restoration of gray scale
and color video [10]. This filter removes high density salt and
pepper noise and restores gray scale and color frames that are
highly corrupted by salt and pepper noise, and it gives a better
Peak Signal-to-Noise Ratio (PSNR) and Image Enhancement Factor
(IEF). The filter can remove high density noise, its computational
speed is higher than that of existing filters, and it takes
feedback from noisy or corrupted pixels. It has been tested against
different color video frames, and it gives better Peak
Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE) values.
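The two quality measures named above follow their standard definitions, MSE as the average squared pixel error and PSNR = 10·log10(MAX²/MSE) with MAX = 255 for 8-bit images. The following Python sketch is illustrative, operating on flat pixel lists rather than full image frames:

```python
import math

def mse(original, restored):
    """Mean square error between two equal-length pixel sequences."""
    n = len(original)
    return sum((o - r) ** 2 for o, r in zip(original, restored)) / n

def psnr(original, restored, max_value=255):
    """Peak signal-to-noise ratio in dB (255 is the 8-bit peak)."""
    error = mse(original, restored)
    if error == 0:
        return float("inf")  # identical images: no noise at all
    return 10 * math.log10(max_value ** 2 / error)
```

For example, a restored frame differing from the original by 2 gray levels in two of four pixels has MSE = 2 and a PSNR of about 45.1 dB; a lower MSE always means a higher PSNR.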
Block Diagram:

Input Video
Frame Separation
Input Frame
Addition of Noise
Application of Different Filters
Calculation of MSE and PSNR

Fig 2.12 Block diagram of proposed scheme
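The "Addition of Noise" step of the block diagram can be illustrated with a simple salt-and-pepper noise generator. This Python sketch is illustrative (the function name and seeding are not from the project code): a chosen fraction of pixels is corrupted to either 0 (pepper) or 255 (salt) with equal probability, which is the noise model the later filters are tested against.

```python
import random

def add_salt_pepper(pixels, density, seed=0):
    """Corrupt roughly a fraction `density` of the pixels: each
    corrupted pixel becomes 0 (pepper) or 255 (salt) with equal
    probability. A fixed seed keeps the experiment repeatable."""
    rng = random.Random(seed)
    noisy = list(pixels)
    for i in range(len(noisy)):
        if rng.random() < density:
            noisy[i] = rng.choice((0, 255))
    return noisy

# Corrupt about half the pixels of a flat mid-gray frame.
noisy = add_salt_pepper([128] * 1000, density=0.5)
print(sum(p != 128 for p in noisy), "pixels corrupted")
```

At density 0.5 roughly 500 of the 1000 pixels end up at 0 or 255, which corresponds to the "high density" corruption levels studied in Chapter 8.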
CHAPTER-3DIGITAL VIDEO
3.1 Introduction of Digital Video:
Digital video refers to the capturing, manipulation, and storage of moving images that can be displayed on computer screens. This requires that the moving images be handled digitally by the computer. The word digital refers to a system based on discontinuous events, as opposed to analog, which is continuous. Computers are digital systems; they do not process images the way the human eye does. Before the digital era, to display analog video images on a computer monitor, the video signal first had to be converted from analog to digital form. A special video digitizing overlay board, or hardware on the motherboard, had to be installed in the computer to take the video signal and convert it to digital information. Doing this, however, required a very powerful computer able to read and digitize every frame repeatedly. So the next step in digital video evolution was to eliminate the analog videotape; thus the entire procedure, including the capturing of video, is in digital form.
First, a camera and a microphone capture the picture and sound
of a video session and send analog signals to a video-capture
adapter board. The board only captures half of the number of frames
per second that movies use in order to reduce the amount of data to
be processed. Second, there is an analog-to-digital converter chip
on the video-capture adapter card, and it converts the analog
signals to digital patterns (0s and 1s). Third, a
compression/decompression chip or software reduces the data to a
minimum necessary for recreating the video signals. In this
procedure, no analog was involved, making the process more
efficient.
Digital video comprises a series of orthogonal bitmap digital images displayed in rapid succession at a constant rate. In the context of video these images are called frames. The rate at which frames are displayed is measured in frames per second (FPS). Since every frame is an orthogonal bitmap digital image, it comprises a raster of pixels. If it has a width of W pixels and a height of H pixels, we say that the frame size is WxH. Pixels have only one property, their colour. The colour of a pixel is represented by a fixed number of bits; the more bits, the more subtle the variations of colour that can be reproduced. This is called the colour depth (CD) of the video.
Digital video cameras come in two different image-capture formats: interlaced and progressive scan. Interlaced cameras record the image in alternating sets of lines: the odd-numbered lines are scanned, then the even-numbered lines, then the odd-numbered lines again, and so on. One set of odd or even lines is referred to as a "field", and a consecutive pairing of two fields of opposite parity is called a frame. Progressive-scan cameras record each frame as distinct, with all scan lines captured at the same moment in time. Thus, interlaced video samples the scene motion twice as often as progressive video does, for the same number of frames per second. Progressive-scan camcorders generally produce a slightly sharper image. However, motion may not be as smooth as with interlaced video, which uses 50 or 59.94 fields per second, particularly if they employ the 24 frames per second standard of film.
Digital video
can be copied with no degradation in quality. No matter how many generations of a digital source are copied, it will still be as clear as the original first-generation digital footage. However, a change in parameters such as frame size, or a change of digital format, can decrease the quality of the video due to the new calculations that have to be made. Digital video can be manipulated and edited on an NLE, or non-linear editing, workstation, a computer-based device intended to edit video and audio. More and more, videos are edited on readily available, increasingly affordable consumer-grade computer hardware and software. However, such editing systems require ample disk space for video footage; the many video formats and parameters involved make it difficult to state exactly how much storage a given number of minutes of footage requires.
Digital video has a significantly
lower cost than 35mm film. The tape stock itself is very
inexpensive. Digital video also allows footage to be viewed on
location without the expensive chemical processing required by
film. Physical delivery of tapes and broadcast feeds is also no longer necessary. Digital television (including higher-quality HDTV) started to spread in most developed countries in the early 2000s. Digital video is also used in modern mobile phones and video-conferencing systems, and for Internet distribution of media, including streaming video and peer-to-peer movie distribution. Even within Europe, however, many TV stations do not broadcast in HD, owing to restricted budgets for the new equipment needed to process HD.
Many types of video compression exist for serving
digital video over the internet and on optical disks. The file
sizes of digital video used for professional editing are generally
not practical for these purposes, and the video requires further
compression with codecs such as Sorenson, H.264 and more recently
Apple ProRes especially for HD. Probably the most widely used
formats for delivering video over the internet are MPEG4,
Quicktime, Flash and Windows Media, while MPEG2 is used almost
exclusively for DVDs, providing an exceptional image in minimal
size but resulting in a high level of CPU consumption to
decompress.
As of 2011, the highest resolution demonstrated for digital video generation is 35 megapixels (8192 x 4320). The highest speeds are attained in industrial and scientific high-speed cameras capable of filming 1024x1024 video at up to 1 million frames per second for brief periods of recording.
3.2 History:
Starting in the late 1970s and continuing into the early 1980s, several
types of video production equipment were introduced, such as time
base correctors (TBC) and digital video effects (DVE) units (one of
the former being the Thomson-CSF 9100 Digital Video Processor, an
internally all-digital full-frame TBC introduced in 1980, and two
of the latter being the Ampex ADO and the Nippon Electric Corporation (NEC) DVE). They operated by taking a standard analog
composite video input and digitizing it internally. This made it
easier to either correct or enhance the video signal, as in the
case of a TBC, or to manipulate and add effects to the video, in
the case of a DVE unit. The digitized and processed video
information from these units would then be converted back to
standard analog video. Also in the 1970s, manufacturers of
professional video broadcast equipment, such as Bosch (through
their Fernseh division), RCA, and Ampex developed prototype digital
videotape recorders (VTR) in their research and development labs.
Bosch's machine used a modified 1" Type B transport, and recorded
an early form of CCIR 601 digital video. Ampex's prototype digital
video recorder used a modified 2" Quadruplex VTR (an Ampex AVR-3),
but fitted with custom digital video electronics, and a special
"octaplex" 8-head headwheel (regular analog 2" Quad machines only
used 4 heads). Ampex's prototype digital machine, nicknamed "Annie" by its developers, still recorded the audio in analog form as linear tracks on the tape, like 2" Quad. None of these
machines from these manufacturers were ever marketed commercially,
however.
Digital video was first introduced commercially in 1986
with the Sony D-1 format, which recorded an uncompressed standard
definition component video signal in digital form instead of the
high-band analog forms that had been commonplace until then. Due to
its expense, D-1 was used primarily by large television networks.
It would eventually be replaced by cheaper systems using video
compression, most notably Sony's Digital Betacam (still heavily
used as an electronic field production (EFP) recording format by
professional television producers) that were introduced into the
network's television studios.
One of the first digital video
products to run on personal computers was PACo, the PICS Animation Compiler from The Company of Science & Art in Providence, RI,
which was developed starting in 1990 and first shipped in May 1991.
PACo could stream unlimited-length video with synchronized sound
from a single file on CD ROM. Creation required a Mac; playback was
possible on Macs, PCs, and Sun Sparcstations. In 1992, Bernard
Luskin, Philips Interactive Media, and Eric Doctorow, Paramount
Worldwide Video, successfully put the first fifty videos in digital
MPEG 1 on CD, developed the packaging and launched movies on CD,
leading to advancing versions of MPEG, and to DVD.
QuickTime, Apple
Computer's architecture for time-based and streaming data formats
appeared in June, 1991. Initial consumer-level content creation
tools were crude, requiring an analog video source to be digitized
to a computer-readable format. While low-quality at first, consumer
digital video increased rapidly in quality, first with the
introduction of playback standards such as MPEG-1 and MPEG-2
(adopted for use in television transmission and DVD media), and
then the introduction of the DV tape format allowing recording
direct to digital data and simplifying the editing process,
allowing non-linear editing systems (NLE) to be deployed cheaply
and widely on desktop computers with no external playback/recording
equipment needed. The widespread adoption of digital video has also
drastically reduced the bandwidth needed for a high-definition
video signal (with HDV and AVCHD, as well as several commercial
variants such as DVCPRO-HD, all using less bandwidth than a
standard definition analog signal) and tapeless camcorders based on
flash memory, often recording a variant of MPEG-4.
3.3 Advantages of Digital Video:
What is it about digital video that makes it so attractive? Isn't videotape good enough? Here are three of many reasons why digital video is becoming more popular than ever.
Ease of manipulation. The difference between analog and digital is like the difference between a typewriter and a word processor. Just as the cut-and-paste function is much easier and faster with a word processor, editing is easier and faster with digital video. Also, many effects that were once exclusive to specialized post-production houses are now easily achieved by bringing in files from Photoshop, Flash, and Sound Edit as components in a video mix. In addition, the ability to separate sound from image enables editing one without affecting the other.
Preservation of data. It is not true that DV is better simply because it is digital; big-screen films are not digital and are still highly esteemed as quality images. However, it is easier to maintain the quality of a digital video. Traditional tapes are subject to wear and tear more so than DVDs or hard disks. Also, once finished, a digital video can be copied over and over without losing its original information, whereas analog signals are easily distorted and lose much of the original data after a few transfers.
Internet distribution. A digital video can be sent via the Internet to countless end users without having to make a copy for every viewer. It is easy to store, retrieve, and publish.
Compression. Digital video files can be very large. For example, one single frame from a television image with a resolution of 720 x 576 pixels and a color depth of 16 bits has a size of 1.35 MB (Fisher & Schroeder, 1999). Multiply that by 25 frames per second and then by the duration of a movie! It is not practical, and sometimes impossible, to have videos of this size. Thus compression, the process of reducing file size by eliminating data unnecessary for reconstruction, is a must.
There are
two types of compression, lossless and lossy. Lossless compression retains the original data so that the individual image sequences remain the same; it saves space by removing image areas that use the same color. The compression rate is usually no better than 3:1 (Fisher & Schroeder). This low rate makes most lossless compression less desirable. Lossy compression methods remove image and sound information that is unlikely to be noticed by the viewer. Some information is lost, but since the loss is not perceptible to the human observer, the perceived quality is still the same, while the volume is dramatically decreased. There are many compression formats; here are the few most widely used.
3.4 Frame Separation:
Frame processing is the main step in the modified decision based unsymmetric trimmed median filter. Frame rate (also known as frame frequency) is the frequency (rate) at which an imaging device produces unique consecutive images called frames. The term applies equally well to film and video cameras, computer graphics, and motion-capture systems. Frame rate is most often expressed in frames per second (FPS), and for progressive-scan monitors also in hertz (Hz). After frame separation we obtain a number of frames in our database. We then select one frame from among them and add noise to it in order to measure the performance.
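A minimal sketch of this step (using a synthetic in-memory clip in place of a real video file, so the example stays self-contained):

```python
import numpy as np

def separate_frames(video):
    """Split a video, stored as a (num_frames, height, width) array,
    into a list of individual gray-scale frames."""
    return [video[k] for k in range(video.shape[0])]

# Synthetic 10-frame, 64x64 gray-scale clip standing in for real input.
rng = np.random.default_rng(0)
clip = rng.integers(0, 256, size=(10, 64, 64), dtype=np.uint8)

frames = separate_frames(clip)   # the frame "database"
test_frame = frames[4]           # select one frame for noise testing
```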
Fig 3.1 Example of frame separation
3.4.1 Number Of Frames Per Second:
Frame rate, the number of still pictures per unit of time of video, ranges from six or eight frames per second for old mechanical cameras to 120 or more frames per second for new professional cameras. The PAL (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa, etc.) standards specify 25 frame/s, while NTSC (USA, Canada, Japan, etc.) specifies 29.97 frame/s. Film is shot at the slower rate of 24 frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate needed to achieve the illusion of a moving image is about twelve to fifteen frames per second.
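Frame size, colour depth, and frame rate together fix the raw (uncompressed) data rate of a video stream. A small arithmetic sketch, using an illustrative 640 x 480, 24-bit, 30 frame/s configuration:

```python
# Raw data rate = width x height x colour depth x frame rate.
width, height = 640, 480     # illustrative frame size
colour_depth = 24            # bits per pixel
fps = 30                     # frames per second

bits_per_frame = width * height * colour_depth
bytes_per_frame = bits_per_frame // 8
bits_per_second = bits_per_frame * fps

print(bytes_per_frame)        # 921600 bytes per frame (about 0.9 MB)
print(bits_per_second / 1e6)  # 221.184 megabits per second, uncompressed
```

Numbers of this magnitude are exactly why the compression discussed in Section 3.3 is a must.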
3.4.2 Interlaced Vs Progressive:
Video can be interlaced or progressive. Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second, which would have required sacrificing image detail in order to remain within the limitations of a narrow bandwidth. The horizontal scan lines of each complete frame are treated as if numbered consecutively and captured as two fields: an odd field (upper field) consisting of the odd-numbered lines and an even field (lower field) consisting of the even-numbered lines. Analog display devices reproduce each frame in the same way, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image-capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction (although with halved detail) of rapidly moving parts of the image when viewed on an interlaced CRT display; the display of such a signal on a progressive-scan device, however, is problematic.
In progressive-scan systems, each refresh period updates all of the scan lines of each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is optimum spatial resolution of both the stationary and moving parts of the image. When displaying a natively interlaced signal, however, overall spatial resolution is degraded by simple line doubling, and artifacts such as flickering or "comb" effects in moving parts of the image will be seen unless special signal processing is applied to eliminate them. A procedure known as deinterlacing can be used to optimize the display of an interlaced video signal from an analog, DVD, or satellite source on a progressive-scan device such as an LCD television, digital video projector, or plasma panel. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive-scan source material.
3.5 Video Quality:
Video quality is a characteristic of a video passed through a video transmission/processing system; it is a formal or informal measure of perceived video degradation (typically compared with the original video). Video processing systems may introduce some amount of distortion or artefacts into the video signal, so video quality evaluation is an important problem. Since the time when the world's first video sequence was recorded, many video processing systems have been designed. In the age of analog video systems, it was possible to evaluate the quality of a video processing system by calculating the system's frequency response using a traditional test signal (for example, a collection of colour bars and circles). Nowadays, digital video systems are replacing analog ones, and evaluation methods have changed accordingly. The performance of a digital video processing system can vary significantly and depends on dynamic characteristics of the input video signal (e.g. the amount of motion or spatial detail).
CHAPTER-4
NOISE
4.1 Introduction:
In communication systems, noise is an error or undesired random disturbance of a useful information signal, introduced before or after the detector and decoder. Noise is a summation of unwanted or disturbing energy from natural and sometimes man-made sources. Noise is, however, typically distinguished from interference (e.g. cross-talk, deliberate jamming, or other unwanted electromagnetic interference from specific transmitters), for example in the signal-to-noise ratio (SNR), signal-to-interference ratio (SIR), and signal-to-noise plus interference ratio (SNIR) measures. Noise is also typically distinguished from distortion, which is an unwanted alteration of the signal waveform, for example in the signal-to-noise and distortion ratio (SINAD). In a carrier-modulated passband analog communication system, a certain carrier-to-noise ratio (CNR) at the radio receiver input results in a certain signal-to-noise ratio in the detected message signal. While noise is generally unwanted, it can serve a useful purpose in some applications, such as random number generation or dithering.
4.2 Noise:
In common use the
word noise means unwanted sound or noise pollution. In electronics
noise can refer to the electronic signal corresponding to acoustic
noise (in an audio system) or the electronic signal corresponding
to the (visual) noise commonly seen as 'snow' on a degraded
television or video image. In signal processing or computing it can
be considered data without meaning; that is, data that is not being
used to transmit a signal, but is simply produced as an unwanted
by-product of other activities. In Information Theory, however,
noise is still considered to be information. In a broader sense,
film grain or even advertisements in web pages can be considered
noise. Noise can block, distort, or change the meaning of a message
in both human and electronic communication.
In many of these areas,
the special case of thermal noise arises, which sets a fundamental
lower limit to what can be measured or signaled and is related to
basic physical processes at the molecular level described by well
known simple formulae.
4.3 Types of Noise:
Noise is random, undesirable electrical energy that enters the communications system via the communicating medium and interferes with the transmitted message. However, some noise is also produced in the receiver. Noise can be classified into two categories:
1. External noise: noise whose sources are external to the system. External noise may be classified into three types: atmospheric noise, extraterrestrial noise, and man-made (industrial) noise.
2. Internal noise: noise generated within the receiver or communication system itself. Internal noise may be put into four categories: thermal noise (also called white noise or Johnson noise), shot noise, transit-time noise, and miscellaneous internal noise.
External noise cannot be reduced except by changing the location of the receiver or the entire system. Internal noise, on the other hand, can easily be evaluated mathematically and can be reduced to a great extent by proper design. Because internal noise can be reduced to such an extent, the study of noise characteristics is a very important part of communication engineering.
4.3.1 Shot Noise:
Shot noise is a type of electronic noise which originates from the discrete nature of
electric charge. The term also applies to photon counting in optical devices, where shot noise is associated with the particle nature of light. Shot noise exists because phenomena such as light
and electric current consist of the movement of discrete (also
called "quantized") 'packets'. Consider light as a stream of discrete
photons coming out of a laser pointer and hitting a wall to create
a visible spot. The fundamental physical processes that govern
light emission are such that these photons are emitted from the
laser at random times; but the many billions of photons needed to
create a spot are so many that the brightness, the number of
photons per unit time, varies only infinitesimally with time.
However, if the laser brightness is reduced until only a handful of
photons hit the wall every second, the relative fluctuations in
number of photons, i.e., brightness, will be significant, just as
when tossing a coin a few times. These fluctuations are shot noise.
4.3.2 Gaussian Noise:
Gaussian noise is statistical noise that
has a probability density function of the normal distribution (also
known as Gaussian distribution). In other words, the values that
the noise can take on are Gaussian-distributed. It is most commonly
used as additive white noise to yield additive white Gaussian noise
(AWGN).
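As an illustrative sketch (not part of the original work), zero-mean Gaussian noise can be added to an 8-bit image as follows:

```python
import numpy as np

def add_gaussian_noise(image, sigma, seed=0):
    """Add zero-mean Gaussian noise of standard deviation `sigma`
    to an 8-bit gray-scale image, clipping the result to [0, 255]."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)
```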
4.3.3 White Noise:
In signal processing, white noise is a random signal with a flat (constant) power spectral density; in other words, a signal that contains equal power within any frequency band of a fixed width. White noise refers to a statistical model for signals and signal sources, rather than to any specific signal. The term is also used for a discrete signal whose samples are regarded as a sequence of serially uncorrelated random variables with zero mean and finite variance. Depending on the context, one may also require that the samples be independent and have the same probability distribution. The samples of a white noise signal may be sequential in time, or arranged along one or more spatial dimensions. In digital image processing, the samples (pixels) of a white-noise image are typically arranged in a rectangular grid and are assumed to be independent random variables with uniform probability distribution over some interval. A random signal is considered "white noise" if it is observed to have a flat spectral power density over the visible band.
4.3.4 Additive White Gaussian Noise (AWGN):
AWGN is a channel model in which the only impairment to communication is a linear addition of wideband or white noise with a constant spectral density (expressed as watts per hertz of bandwidth) and a Gaussian distribution of amplitude. The model does not account for fading, frequency selectivity, interference, nonlinearity, or dispersion. However, it produces simple and tractable mathematical models which are useful for gaining insight into the underlying behaviour of a system before these other phenomena are considered. Wideband Gaussian noise comes from many natural sources, such as the thermal vibrations of atoms in conductors (referred to as thermal noise or Johnson-Nyquist noise), shot noise, black-body radiation from the earth and other warm objects, and celestial sources such as the Sun. The AWGN channel is a good model for many satellite and deep-space communication links. It is not a good model for most terrestrial links because of multipath, terrain blocking, interference, etc. However, for terrestrial path modelling, AWGN is commonly used to simulate the background noise of the channel under study, in addition to the multipath, terrain blocking, interference, ground clutter, and self-interference that modern radio systems encounter in terrestrial operation.
4.3.5 Poisson Noise:
Poisson
noise has a probability density function of a Poisson distribution.
The distribution expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event. The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area, or volume.
4.3.6 Speckle Noise:
Speckle noise is a granular noise that inherently exists in and degrades the quality of images. Speckle noise is a multiplicative noise, i.e. it is in direct proportion to the local grey level in any area. The signal and the noise are statistically independent of each other.
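The multiplicative model can be written g(i,j) = s(i,j) * (1 + n(i,j)) with n zero-mean; a short sketch showing that the disturbance scales with the local grey level:

```python
import numpy as np

def add_speckle_noise(image, sigma=0.1, seed=0):
    """Multiplicative speckle model g = s * (1 + n), with n zero-mean
    Gaussian noise, so brighter regions receive larger disturbances."""
    rng = np.random.default_rng(seed)
    n = rng.normal(0.0, sigma, size=image.shape)
    g = image.astype(np.float64) * (1.0 + n)
    return np.clip(g, 0, 255).astype(np.uint8)
```

Running this on a bright region produces visibly larger fluctuations than on a dark one, which is the defining property of speckle.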
4.3.7 Salt & Pepper Noise:
Salt and pepper noise presents itself as randomly occurring white and black pixels. An effective noise-reduction method for this type of noise involves the use of a median filter. Salt and pepper noise creeps into images in situations where quick transients, such as faulty switching, take place. The image after distortion by salt and pepper noise looks like the attached image.
4.4 Noise Generation:
Noises are random background
events which have to be dealt with in every system processing real
signals. They are not part of the ideal signal and may be caused by
a wide range of sources, e.g. variations in the detector
sensitivity, environmental variations, the discrete nature of
radiation, transmission or quantization errors, etc. The
characteristics of noise depend on their source, as does the
operator which best reduces their effects. Many image processing
packages contain operators to artificially add noise to an image.
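Such an operator for salt and pepper noise can be sketched as follows (an illustrative implementation; `density` is the fraction of pixels to corrupt, split evenly between salt and pepper):

```python
import numpy as np

def add_salt_pepper(image, density, seed=0):
    """Corrupt a fraction `density` of pixels, half with salt (255)
    and half with pepper (0)."""
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    mask = rng.random(image.shape)
    noisy[mask < density / 2] = 0                           # pepper
    noisy[(mask >= density / 2) & (mask < density)] = 255   # salt
    return noisy
```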
Deliberately corrupting an image with noise allows us to test the
resistance of an image processing operator to noise and assess the
performance of various noise filters. Noise can generally be grouped into two classes: noise that is independent of the image data, and noise that is dependent on it.
Image-independent noise can often be described by an additive noise model, where the recorded image f(i,j) is the sum of the true image s(i,j) and the noise n(i,j).
The recorded image is given by equation 4.1:
f(i,j) = s(i,j) + n(i,j)    (4.1)
The noise n(i,j) is often zero-mean and described by its variance σn². The impact of the noise on the image is often described by the signal-to-noise ratio (SNR), which is given by equation 4.2:
SNR = σs²/σn² = (σf²/σn²) − 1    (4.2)
where σs² and σf² are the variances of the true image and the recorded image, respectively. In many cases, additive noise is evenly distributed over the frequency domain (i.e. white noise), whereas an image contains mostly low-frequency information. Hence, the noise is dominant at high frequencies and its effects can be reduced using some kind of lowpass filter; this can be done either with a frequency filter or with a spatial filter. In the second case of data-dependent noise (e.g. arising when monochromatic radiation is scattered from a surface whose roughness is of the order of a wavelength, causing wave interference which results in image speckle), it can be possible to model the noise with a multiplicative, or non-linear, model. These models are mathematically more complicated; hence, if possible, the noise is assumed to be data independent.
4.4.1 Signal to Noise Ratio:
Signal-to-noise ratio is a measure used in science and
engineering that compares the level of a desired signal to the level of background noise. It is defined as the ratio of signal power to
the noise power. A ratio higher than 1:1 indicates more signal than
noise.
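As a sketch, the SNR of a noisy image relative to a clean reference can be estimated from the power ratio (the helper names here are illustrative):

```python
import numpy as np

def snr(signal, noisy):
    """Signal-to-noise ratio: signal power over noise power,
    estimated from a clean signal and its noisy version."""
    noise = noisy.astype(np.float64) - signal.astype(np.float64)
    p_signal = np.mean(signal.astype(np.float64) ** 2)
    p_noise = np.mean(noise ** 2)
    return p_signal / p_noise

def snr_db(signal, noisy):
    """The same ratio expressed in decibels."""
    return 10.0 * np.log10(snr(signal, noisy))
```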
4.4.2 Peak Signal to Noise Ratio:
Peak signal-to-noise ratio, often abbreviated PSNR, is an engineering term for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, PSNR is usually expressed on the logarithmic decibel scale. PSNR is most commonly used to measure the quality of reconstruction by lossy compression codecs (e.g., for image compression). The signal in this case is the original data, and the noise is the error introduced by compression. When comparing compression codecs, PSNR is an approximation to human perception of reconstruction quality. Although a higher PSNR generally indicates that the reconstruction is of higher quality, in some cases it may not; one has to be extremely careful with the range of validity of this metric, as it is only conclusively valid when used to compare results from the same codec (or codec type) and the same content. PSNR is most easily defined via the mean squared error (MSE). Given a noise-free m x n monochrome image I and its noisy approximation K, the MSE is defined as equation 4.3.
MSE = (1/(m n)) Σ(i=0..m−1) Σ(j=0..n−1) [I(i,j) − K(i,j)]²    (4.3)
The PSNR is defined as equation 4.4:
PSNR in dB = 10 log10(MAX_I² / MSE)    (4.4)
where MAX_I is the maximum possible pixel value of the image (255 for 8-bit images).
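Equations 4.3 and 4.4 translate directly into code; a sketch for 8-bit images (MAX_I = 255):

```python
import numpy as np

def mse(I, K):
    """Mean squared error between an m x n image I and its
    approximation K (equation 4.3)."""
    diff = I.astype(np.float64) - K.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(I, K, max_i=255.0):
    """Peak signal-to-noise ratio in dB (equation 4.4)."""
    e = mse(I, K)
    if e == 0:
        return float('inf')   # identical images: noise power is zero
    return 10.0 * np.log10(max_i ** 2 / e)
```

These two functions are all that is needed to score the filter outputs in the proposed scheme's MSE/PSNR calculation stage.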
4.5 Detector Noise:
One kind of noise which occurs in all recorded images to a certain extent is detector noise. This kind of noise is due to the discrete nature of radiation, i.e. the fact that each imaging system records an image by counting photons. Under some assumptions (which are valid for many applications) this noise can be modelled with an independent, additive model, where the noise n(i,j) has a zero-mean Gaussian distribution described by its standard deviation σ, or variance σ². (The 1-D Gaussian distribution has the form shown in Fig 4.1.) This means that each pixel in the noisy image is the sum of the true pixel value and a random, Gaussian-distributed noise value.
Fig 4.1 1D Gaussian distribution with mean 0 and standard
deviation 1
4.6 Crimmins Speckle Removal:
Crimmins speckle removal reduces speckle from an image using the Crimmins complementary hulling algorithm. The algorithm has been specifically designed to reduce the intensity of salt and pepper noise in an image. Increased iterations of the algorithm yield increased levels of noise removal, but also introduce a significant amount of blurring of high-frequency details. Crimmins speckle removal works by passing an image through a speckle-removing filter which uses the complementary hulling technique to reduce the speckle index of that image. The algorithm uses a non-linear noise-reduction technique which compares the intensity of each pixel in an image with those of its 8 nearest neighbours and, based upon the relative values, increments or decrements the value of the pixel in question so that it becomes more representative of its surroundings. The noisy-pixel detection and alteration procedure used by Crimmins is more complicated than the ranking procedure used by the non-linear median filter. It involves a series of pairwise operations in which the value of the 'middle' pixel within each neighbourhood window is compared, in turn, with each set of neighbours (N-S, E-W, NW-SE, NE-SW) in a search for intensity spikes. For each iteration, and for each pair of pixel neighbours, the entire image is sent to a pepper filter and a salt filter. In the example case, the pepper filter is first called to determine whether each image pixel is darker than its northern neighbour by more than 2 intensity levels. Where this condition proves true, the intensity value of the pixel under examination is incremented twice (lightened); otherwise no change is made. Once these changes have been recorded, the entire image is passed through the pepper filter again and the same comparison is made between the current pixel and its southern neighbour. This sequence is repeated by the salt filter, where the conditions 'lighter than' and 'darken' are, again, instantiated using 2 intensity levels. Over several iterations, the effects of smoothing in this way propagate out from the intensity spike to its neighbouring pixels. In other words, the algorithm smoothes by reducing the magnitude of a locally inconsistent pixel, as well as increasing the magnitude of pixels in the neighbourhood surrounding the spike. It is important to notice that a spike is defined here as a pixel whose value differs by more than 2 intensity levels from its surroundings. This means that after 2 iterations of the algorithm, the immediate neighbours of such a spike may themselves become spikes with respect to pixels lying in a wider neighbourhood.
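A much-simplified, single-direction sketch of the pairwise comparison idea described above (the full algorithm runs the complementary hulling steps over all four direction pairs; this illustration handles only the north-south pepper/salt pass):

```python
import numpy as np

def crimmins_pass(image, threshold=2):
    """One simplified north-south smoothing pass in the spirit of the
    Crimmins pepper/salt filters: a pixel darker than its northern
    neighbour by more than `threshold` levels is lightened by 2, and
    one lighter by more than `threshold` is darkened by 2. This is a
    sketch of the pairwise-comparison idea, not the full algorithm."""
    img = image.astype(np.int32)
    out = img.copy()
    north = img[:-1, :]   # northern neighbour of each pixel from row 1 down
    cur = img[1:, :]
    out[1:, :][cur < north - threshold] += 2   # pepper: lighten dark spikes
    out[1:, :][cur > north + threshold] -= 2   # salt: darken bright spikes
    return np.clip(out, 0, 255).astype(np.uint8)
```

Repeated application nudges isolated spikes toward their surroundings, which is the smoothing behaviour the text describes.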
CHAPTER-5
IMAGE RESTORATION
5.1 Introduction:
Image restoration is the operation of taking a corrupted/noisy image and estimating the clean original image. Corruption may come in many forms, such as motion blur, noise, and camera misfocus. Image restoration is different from image enhancement in that the latter is designed to emphasize features of the image that make it more pleasing to the observer, but not necessarily to produce realistic data from a scientific point of view. Image enhancement techniques (like contrast stretching or de-blurring by a nearest-neighbour procedure) provided by "imaging packages" use no a priori model of the process that created the image [9]. With image enhancement, noise can effectively be removed by sacrificing some resolution, but this is not acceptable in many applications. In a fluorescence microscope, resolution in the z-direction is poor as it is, so more advanced image processing techniques must be applied to recover the object. Deconvolution is an example of an image restoration method; it is capable of increasing resolution, especially in the axial direction, removing noise, and increasing contrast [9].
5.2 Restoration:
There are many definitions
of restoring a photo. You may want to fix the color, the tone, the
contrast, or you may need to repair some damage on an old family
memory. All of these can make a photo look new, or at lease restore
the appearance that you want. Although some color and tonal
corrections are relatively ease to achieve, repairing damage can
take long hours of slow, methodical work. Most of the restorations
that you might want to do to a photo will be successful if you take
the necessary time [9].
5.3 Requirements for Restoration: The successful restoration of a
blurred image requires accurate estimation of the PSF parameters.
In our project, we deal with images which are blurred by the
relative motion between the imaging system and the original scene.
Thus, given a motion-blurred and noisy image, the task is to
identify the point spread function parameters and apply the
restoration filter to get an approximation to the original scene.
Parameter estimation is based on the observation that image
characteristics along the direction of motion are different from
the characteristics in other directions. The PSF of motion blur is
characterized by two parameters, namely blur direction and blur
length [9].
5.4 Degradation: Restoration improves the appearance of an image by
applying a restoration process that uses a mathematical model of
the image degradation. Types of degradation include blurring caused
by motion or atmospheric disturbance, geometric distortion caused
by imperfect lenses, superimposed interference patterns caused by
mechanical systems, and noise from electronic sources.
5.4.1 Image Degradation: Image restoration suppresses image
degradation using knowledge about its nature. Most image
restoration methods are based on convolution applied globally to
the whole image. Causes of image degradation include defects of
optical lenses, nonlinearity of the electro-optical sensor,
graininess of the film material, relative motion between an object
and the camera, wrong focus, and atmospheric turbulence in remote
sensing or astronomy. The objective of image restoration is to
reconstruct the original image from its degraded version.
5.4.2 Degradation of Blur Parameters: The problem of
restoration of images blurred by relative motion between the camera
and the object scene is important in a large number of
applications. The solution proposed here identifies important
parameters with which to characterize the point spread function
(PSF) of the blur, given only the blurred image itself. This
identification method is based on the concept that image
characteristics along the direction of motion are different from
the characteristics in other directions. Depending on the PSF
shape, the homogeneity and the smoothness of the blurred image in
the motion direction are greater than in other directions.
Furthermore, in this direction correlation exists between the
pixels forming the blur of the original unblurred objects. By
filtering the blurred image we emphasize the PSF characteristics at
the expense of the image characteristics. The method proposed here
identifies the direction and the extent of the PSF of the blur and
evaluates its shape which depends on the type of motion during the
exposure. Correct identification of the PSF parameters permits fast
high-resolution restoration of the blurred image. Blur parameters:
When the intensity of the observed point image is spread over
several pixels, this is known as the Point Spread Function (PSF).
Length: Blur Length is the number of pixels by which the image
is degraded. It is the number of pixel positions by which a pixel
is shifted from its original position.
Fig 5.1 Blur Length (original vs. degraded image)
Angle: Blur Angle is the angle at which the image is degraded.
Fig 5.2 Blur Angle (original vs. degraded image)
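The two blur parameters above, length and angle, fully determine a linear motion-blur PSF. A small sketch of how such a kernel could be constructed; the function name and the simple nearest-pixel discretization are illustrative assumptions, not taken from the report:

```python
import numpy as np

def motion_blur_psf(length, angle_deg):
    """Sketch of a linear motion-blur PSF defined by the two parameters
    named above: blur length (pixels) and blur angle (degrees)."""
    # Lay `length` equal weights along a line through the kernel centre.
    size = length if length % 2 == 1 else length + 1
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, length):
        y = int(round(c + t * np.sin(theta)))
        x = int(round(c + t * np.cos(theta)))
        psf[y, x] = 1.0
    return psf / psf.sum()   # weights sum to 1 so brightness is preserved
```

Convolving an image with this kernel simulates the degradation; restoration then amounts to estimating (length, angle) and deconvolving.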
5.5 Thresholding: The simplest property that pixels in a region
can share is intensity. So, a natural way to segment such regions
is through thresholding, the separation of light and dark regions.
Thresholding creates binary images from grey-level ones by turning
all pixels below some threshold to zero and all pixels above that
threshold to one [8]. The major problem with thresholding is that
we consider only the intensity, not any relationships between the
pixels. There is no guarantee that the pixels identified by the
thresholding process are contiguous. We can easily include
extraneous pixels that aren't part of the desired region, and we
can just as easily miss isolated pixels within the region. These
effects get worse as the noise gets worse, simply because it is
more likely that a pixel's intensity does not represent the normal
intensity in the region. We typically have to tune the threshold,
sometimes losing too much of the region and sometimes including too
many extraneous background pixels.
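The thresholding rule above reduces to a one-line comparison; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def threshold(img, t):
    """Binary thresholding as described above: pixels at or above the
    threshold `t` map to 1, pixels below it map to 0."""
    return (img >= t).astype(np.uint8)
```

Because only intensity is consulted, any bright background pixel passes the test and any dark foreground pixel fails it, which is exactly the contiguity problem the text notes.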
CHAPTER-6 REMOVING OF HIGH DENSITY SALT AND PEPPER NOISE USING
DIFFERENT FILTERS
6.1 Introduction: Images and videos are often corrupted by impulse
noise during acquisition and transmission. This impulse noise is
present in an image due to bit errors in transmission or is
introduced during the signal acquisition stage. There are two types
of impulse noise, distinguished by the noise values: the noise
which is easier to restore is called salt-and-pepper noise, and the
more difficult type is called random-valued impulse noise [1].
Salt-and-pepper noise (also called impulse noise, shot noise or
spike noise) is typically caused by malfunctioning pixel elements
in camera sensors, faulty memory locations, or timing errors in the
digitization process [2]. 6.2 Salt and
Pepper Noise: Impulse noise in images is present due to bit errors
in transmission or is introduced during the signal acquisition
stage. There are two types of impulse noise: salt and pepper noise
and random-valued noise. Salt and pepper noise can corrupt the
images where the corrupted pixel takes either the maximum or
minimum gray level. Several nonlinear filters have been proposed
for restoration of images contaminated by salt and pepper noise
[1]. Among these, the standard median filter has been established
as a reliable method to remove salt and pepper noise without
damaging the edge details [2]. 6.3 Disadvantages of Salt and Pepper
Noise: Salt and pepper noise can corrupt the images where the
corrupted pixel takes either the maximum or minimum gray level.
Several nonlinear filters have been proposed for restoration of
images contaminated by salt and pepper noise.
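The corruption model described above, where a fraction of the pixels is forced to the extreme gray levels 0 or 255, can be sketched as follows; the function name, the half/half salt-pepper split and the fixed seed are assumptions of this sketch:

```python
import numpy as np

def add_salt_pepper(img, density, seed=0):
    """Sketch of the salt-and-pepper corruption model above: a fraction
    `density` of the pixels is replaced by the minimum (0, pepper) or
    maximum (255, salt) gray level, half each."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    r = rng.random(img.shape)
    noisy[r < density / 2] = 0                         # pepper
    noisy[(r >= density / 2) & (r < density)] = 255    # salt
    return noisy
```

Such a generator is the usual way the noise densities quoted later (10%, 50%, 80-90%) are produced when benchmarking the filters.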
6.4 Mean Filter: Mean filtering is a simple, intuitive and easy
to implement method of smoothing images, i.e. reducing the amount
of intensity variation between one pixel and the next. It is often
used to reduce noise in images. The idea of mean filtering is
simply to replace each pixel value in an image with the mean
(`average') value of its neighbors, including itself. This has the
effect of eliminating pixel values which are unrepresentative of
their surroundings. Mean filtering is usually thought of as a
convolution filter. Like other convolutions it is based around a
kernel, which represents the shape and size of the neighborhood to
be sampled when calculating the mean. Often a 3x3 square kernel is
used, as shown in Figure 6.1, although larger kernels (e.g. 5x5
squares) can be used for more severe smoothing.
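The neighbourhood averaging just described can be sketched directly; the function name and the edge-replication border handling are assumptions of this sketch:

```python
import numpy as np

def mean_filter(img, k=3):
    """Sketch of k x k mean filtering: each pixel is replaced by the
    average of the k x k neighbourhood around it (borders handled by
    replicating the edge pixels)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).round().astype(np.uint8)
```

Applied to a flat image with one impulse, the impulse is only attenuated, and it leaks into every neighbouring average, which previews the two problems discussed next.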
Fig 6.1 3x3 averaging kernel often used in mean filtering
The two main problems with mean filtering are: (i) a single pixel
with a very unrepresentative value can significantly affect the
mean value of all the pixels in its neighbourhood; and (ii) when
the filter neighbourhood straddles an edge, the filter will
interpolate new values for pixels on the edge and so will blur that
edge, which may be a problem if sharp edges are required in the
output. Both of these problems are tackled by the median filter,
which is often a better filter for reducing noise than the mean
filter, but it takes longer to compute. 6.5 Median Filter: The
median filter is normally used to reduce noise in an image,
somewhat like the mean filter.
However, it often does a better job than the mean filter of
preserving useful detail in the image. This class of filter belongs
to the class of edge preserving smoothing filters which are
non-linear filters. This means that, for two images A(x) and B(x),
the median of a sum is in general not the sum of the medians, as
shown in equation (6.1): Median [A(x) + B(x)] ≠ Median [A(x)] +
Median [B(x)] (6.1). These filters smooth the data
while keeping the small and sharp details. The median is just the
middle value of all the values of the pixels in the neighbourhood
[7]. The median has half the values in the neighbourhood larger and
half smaller. The median is a stronger "central indicator" than the
average. In particular, the median is hardly affected by a small
number of discrepant values among the pixels in the neighbourhood.
Consequently, median filtering is very effective at removing
various kinds of noise.
Fig 6.2 Illustrates an example of median filtering
Like the mean
filter, the median filter considers each pixel in the image in turn
and looks at its nearby neighbors to decide whether or not it is
representative of its surroundings. Instead of simply replacing the
pixel value with the mean of neighboring pixel values, it replaces
it with the median of those values.
Fig 6.3 calculating the median value of a pixel neighborhood
The median is calculated by first sorting all the pixel values
from the surrounding neighborhood into numerical order and then
replacing the pixel being considered with the middle pixel value.
(If the neighborhood under consideration contains an even number of
pixels, the average of the two middle pixel values is used.) Figure
6.3 illustrates an example calculation [7]. 6.5.1 Advantages of
Median Filter: By calculating the median value of a neighborhood
rather than the mean, the median filter has two main advantages
over the mean filter: the median is a more robust average than the
mean, so a single very unrepresentative pixel in a neighborhood
will not affect the median value significantly; and since the
median value must actually be the value of one of the pixels in the
neighborhood, the median filter does not create new unrealistic
pixel values when the filter straddles an edge. For this reason the
median filter is much better at preserving sharp edges than the
mean filter. 6.5.2 Disadvantage of the Median Filter: Although the
median filter is a useful non-linear image smoothing and
enhancement technique, it also has some disadvantages. The median
filter removes both the noise and the fine detail, since it cannot
tell the difference between the two. Anything relatively small in
size compared to the size of the neighbourhood will have minimal
effect on the value of the median, and will be filtered out. In
other words, the median filter cannot distinguish fine detail from
noise.
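The sort-and-take-the-middle procedure described above can be sketched as follows; the function name and the edge-replication border handling are assumptions of this sketch:

```python
import numpy as np

def median_filter(img, k=3):
    """Sketch of k x k median filtering: collect the neighbourhood
    values of every pixel and keep the middle one (borders handled by
    replicating the edge pixels)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    stack = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(k) for dx in range(k)]   # k*k shifted views
    return np.median(np.stack(stack), axis=0).astype(img.dtype)
```

On the same flat-image-with-impulse example used for the mean filter, the impulse is removed completely rather than merely attenuated, since it is never the middle of a sorted 3x3 window.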
a)Original image; b)Added Impulse Noisy at 10%
Fig 6.4 The original image and the same image corrupted by impulse
noise
Figure 6.4 shows the original image and the same image after it
has been corrupted by impulse noise at 10%. This means that 10% of
its pixels were replaced by full white pixels. Also shown, in
Figure 6.5, are the median filtering results using 3x3 and 5x5
windows, three iterations of the 3x3 median filter applied to the
noisy image and, finally, for comparison, the result of applying a
5x5 mean filter to the noisy image.
a)3x3 Median Filtered b)5x5 Median Filtered
a)3x3 Median Filtered applied 3 times b)5x5 Average Filter
Fig 6.5 Comparison of the nonlinear Median filter and the linear
Mean filter.
6.5.3 Comparison between the median filter and the average
filter: The median filter and the average filter are sometimes
confused, so let us compare them [7]. The median filter is a
non-linear tool, while the average filter is a linear one.
In smooth, uniform areas of the image, the median and the
average will differ by very little. The median filter removes
noise, while the average filter just spreads it around evenly. The
performance of the median filter for removing impulse noise is
particularly better than that of the average filter.
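The contrast drawn above can be seen in a minimal, self-contained 1-D example (hypothetical data, not from the report): a single impulse in otherwise flat data, filtered with 3-sample windows.

```python
import numpy as np

# One impulse (255) in flat data (100): the median rejects it outright,
# while the mean spreads it into every window that touches it.
row = np.array([100, 100, 255, 100, 100], dtype=float)

windows = [row[i:i + 3] for i in range(len(row) - 2)]   # 3-sample windows
mean_out = [w.mean() for w in windows]        # impulse leaks into every mean
median_out = [np.median(w) for w in windows]  # impulse removed entirely
```

Every mean output exceeds the background level, while every median output equals it, which is precisely "removes noise" versus "spreads it around evenly".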
6.6 Removing Of Salt and Pepper Noise Using Different Filters:
This section considers the restoration of gray-scale and colour
images that are highly corrupted by salt and pepper noise, where
the pixel values 0 and 255 are present in the selected window. The
noise pixel is replaced by the mean value of all the elements
present in the selected window. Different types of filters are used
to remove the salt and pepper noise and to give a better Peak
Signal-to-Noise Ratio and Image Enhancement Factor.
6.6.1 Standard Median Filter: The standard median filter has been
established as a reliable method to remove salt and pepper noise
without damaging the edge details. It returns the median value of
the pixels in a neighborhood and is non-linear. It is similar to a
uniform blurring filter, which returns the mean value of the pixels
in a neighborhood of a pixel; unlike a mean-value filter, however,
the median tends to preserve step edges [13].
Fig 6.6 Standard median filter
The major drawback of the standard median filter is that it is
effective only at low noise densities. When the noise level is over
50%, the edge details of the original image will not be preserved
by the standard median filter.
6.6.2 Adaptive Median Filter: Adaptive median filtering has been
applied widely as an advanced method compared with standard median
filtering [7]. The Adaptive Median Filter performs spatial
processing to determine which pixels in an image have been affected
by impulse noise.
Fig 6.7 Adaptive Median Filter
The Adaptive Median Filter classifies pixels as noise by comparing
each pixel in the image to its surrounding neighbour pixels. The
size of the neighbourhood is adjustable, as is the threshold for
the comparison. A pixel that is different from a majority of its
neighbours, as well as being not structurally aligned with those
pixels to which it is similar, is labelled as impulse noise. These
noise pixels are then replaced by the median pixel value of the
pixels in the neighbourhood that have passed the noise labelling
test. The purposes of the adaptive median filter are to remove
impulse noise, to smooth other noise, and to reduce distortion such
as excessive thinning or thickening of object boundaries. Comparing
the standard median filter with the adaptive median filter: the
standard median filter does not perform well when the impulse noise
density is greater than 0.2, while the adaptive median filter can
better handle such noise; and the adaptive median filter preserves
detail and smooths non-impulsive noise, while the standard median
filter does not. 6.6.3 Tolerance Based
Switched Median Filter: Here the decision is based on a predefined
threshold value. The major drawback of the switching median filter
is that defining a robust decision threshold is difficult. The
edges may not be recovered satisfactorily, especially when the
noise level is high. To overcome this drawback we can use the
decision based algorithm [5].
Fig 6.8 Tolerance Based Switched Median Filter
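The switching idea, filter only where the pixel deviates from its local median by more than a tolerance, can be sketched as follows; the function name, the parameter name `tol` and the edge-replication border handling are assumptions of this sketch:

```python
import numpy as np

def switched_median(img, tol=50, k=3):
    """Sketch of tolerance-based switched median filtering: a pixel is
    replaced by its k x k neighbourhood median only when it deviates
    from that median by more than `tol`; otherwise it is kept as-is."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    stack = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in range(k) for dx in range(k)])
    med = np.median(stack, axis=0)
    noisy = np.abs(img.astype(np.int32) - med) > tol   # switching decision
    return np.where(noisy, med, img).astype(img.dtype)
```

The drawback the text names is visible in the signature itself: the quality of the result hinges on choosing `tol` well, and no single value suits all images.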
6.6.4 Decision Based Algorithm (DBA): The image is de-noised by
using a 3 X 3 window. If the processing pixel value is 0 or 255 it
is processed; otherwise it is left unchanged. At high noise density
the median value itself will be 0 or 255, which is noisy [6]. In
such a case, a neighbouring pixel is used for replacement. This
repeated replacement of neighbouring pixels produces a streaking
effect. To overcome this problem the decision based unsymmetric
trimmed median filter is used.
Fig 6.9 Decision Based Algorithm
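The DBA steps above can be sketched as follows. This is an illustrative reading of the algorithm, not the paper's code: in particular, the choice of the previously processed left neighbour as the fallback replacement is an assumption of this sketch.

```python
import numpy as np

def dba_denoise(img):
    """Sketch of the Decision Based Algorithm: only pixels equal to 0 or
    255 are treated as noise; each is replaced by the median of its 3x3
    window, falling back to the previously processed (left) neighbour
    when that median is itself 0 or 255."""
    out = np.pad(img.astype(np.int32), 1, mode="edge")
    h, w = img.shape
    for y in range(1, h + 1):
        for x in range(1, w + 1):
            if out[y, x] in (0, 255):                  # noisy pixel?
                med = int(np.median(out[y-1:y+2, x-1:x+2]))
                out[y, x] = med if med not in (0, 255) else out[y, x - 1]
    return out[1:-1, 1:-1].astype(img.dtype)
```

The streaking effect the text mentions comes from the fallback branch: at high densities the same neighbour value is copied repeatedly along a row.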
6.6.5 Unsymmetric Trimmed Median Filter: The crux behind the
above filter is to eliminate the outliers inside the current
window. Certain types of non-linear filters, such as the Alpha
Trimmed Mean Filter (ATMF) and the Alpha Trimmed Midpoint (ATMP),
work on the above principle. These filters use a parameter, called
alpha, which decides the number of pixels to be eliminated. It was
found that when alpha is increased, the filter fared well; for high
noise densities, however, it does not preserve the image
information, due to the elimination of outlier values [11].
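The symmetric alpha-trimming just described can be sketched on a single window; the function name is illustrative:

```python
import numpy as np

def alpha_trimmed_mean(window, alpha):
    """Sketch of the symmetric alpha-trimmed mean described above: drop
    `alpha` values from each end of the sorted window, then average the
    rest. The trimming is blind, so uncorrupted extreme values are lost
    along with the noise, which is the drawback discussed next."""
    vals = np.sort(np.asarray(window).ravel())
    trimmed = vals[alpha:len(vals) - alpha]
    return trimmed.mean()
```

With alpha = 1 on a 3x3 window containing one pepper (0) and one salt (255) sample, both impulses are trimmed, but so would two legitimate extremes be in a clean window.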
The idea behind a trimmed filter is to reject the noisy pixel
from the selected 3 X 3 window. Alpha Trimmed Mean Filtering (ATMF)
is a symmetrical filter where the trimming is symmetric at either
end. In this procedure, even the uncorrupted pixels are also
trimmed. This leads to loss of image details and blurring of the
image. In order to overcome this drawback, an Un-symmetric Trimmed
Median Filter (UTMF) is proposed. In this UTMF, the selected 3 X 3
window elements are arranged in either increasing or decreasing
order. Then the pixel values 0 and 255 in the window (i.e., the
pixel values responsible for the salt and pepper noise) are
removed, and the median of the remaining pixels is taken. 6.6.6
Decision Based Un-symmetric Trimmed Median
Filter: Digital images are contaminated by impulse noise during
image acquisition or transmission due to malfunctioning pixels in
camera sensors, faulty memory locations in hardware, or
transmission over a noisy channel. Salt and pepper noise is one
type of impulse noise which can corrupt the image, where the noisy
pixels can take only the maximum and minimum gray values in the
dynamic range. Linear filters like the mean filter and related
filters are not effective in removing impulse noise. Non-linear
filtering techniques like the Standard Median Filter (SMF) and the
Adaptive Median Filter (AMF) are widely used to remove salt and
pepper noise due to their good de-noising power and computational
efficiency. SMF is effective only at low noise densities. Several
methods have been proposed for removal of impulse noise at higher
noise densities. The window size used in these methods is small,
which results in minimum computational complexity; however, a small
window size leads to insufficient noise reduction. Switching based
median filtering has been proposed as an effective alternative for
reducing computational complexity. Recent methods like the Decision
Based Algorithm (DBA) and the Modified Decision Based Algorithm
(MDBA) are among the fastest and most efficient algorithms capable
of impulse noise removal at noise densities as high as 80%. A major
drawback of these algorithms is a streaking effect at higher noise
densities. To overcome this drawback, the Modified Decision Based
Unsymmetric Trimmed Median Filter (MDBUTMF) is used to remove salt
and pepper noise at densities as high as 80-90%. In this algorithm,
at high noise density, the processing pixel is replaced by the mean
value of the elements within the window. This can lead to blurring
of fine details in the image. To avoid this problem, fuzzy
thresholding is introduced in this work to preserve the edges and
fine details. These filters remove salt and pepper noise at medium
noise densities of 50-60%. 6.6.7 Modified Decision Based
Un-symmetric Trimmed Median Filter:At high noise densities, if the
selected window contains all 0s or 255s or both, then the trimmed
median value cannot be obtained, so this algorithm does not give
better results at very high noise densities, i.e. at 80% to 90%.
The proposed Modified Decision Based Un-symmetric Trimmed Median
Filter (MDBUTMF) algorithm removes this drawback at high noise
density and gives better Peak Signal-to-Noise Ratio (PSNR) and
Image Enhancement Factor (IEF) values than the existing algorithm.
In the proposed method, the noisy image is first read; then, based
on a decision rule, salt and pepper noise detection takes place. At
the end of the detection stage the noisy and noise-free pixels are
separated. Each noise-free pixel is left unchanged and each noisy
pixel is given to the Modified Decision Based Unsymmetric Trimmed
Median Filter (MDBUTMF). The MDBUTMF produces as its output a
partially noise-removed image, which is further processed by the
Fuzzy Noise Reduction Method (FNRM). Finally the FNRM provides a
restored image that is fully free from noise. The Modified Decision
Based Un-symmetric Trimmed Median Filter provides the final output
image with a higher PSNR value and a lower Mean Square Error [16].
Fig 6.10 Modified Decision Based Un-symmetric Trimmed Median
Filter
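The MDBUTMF filtering step described above can be sketched as follows. This is an illustrative reading of the algorithm, not the report's implementation; the edge-replication border handling is an assumption of this sketch.

```python
import numpy as np

def mdbutmf(img):
    """Sketch of the MDBUTMF step: for each pixel equal to 0 or 255,
    trim all 0/255 values from its 3x3 window and take the median of
    the remaining pixels; if the window contains only 0s and 255s, use
    the window mean instead (the case this modification addresses)."""
    padded = np.pad(img.astype(np.int32), 1, mode="edge")
    out = img.astype(np.int32).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if out[y, x] in (0, 255):                    # noisy pixel
                win = padded[y:y+3, x:x+3].ravel()
                trimmed = win[(win != 0) & (win != 255)]  # drop salt/pepper
                out[y, x] = (int(np.median(trimmed)) if trimmed.size
                             else int(win.mean()))        # all-0/255 window
    return out.astype(img.dtype)
```

The else-branch is the modification: where the plain trimmed median is undefined because every window element is noise, the window mean still yields a usable estimate, which is why the method keeps working at 80-90% density.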
CHAPTER-7 APPLICATIONS
7.1 Photoshop: Photoshop files have the default file extension
.PSD, which stands for "Photoshop Document." A PSD file stores an
image with support for most imaging options available in Photoshop.
These include layers with masks, transparency, text, alpha channels
and spot colors, clipping paths, and duotone settings. This is in
contrast to many other file formats (e.g. .JPG or .GIF) that
restrict content to provide streamlined, predictable functionality.
A PSD file has a maximum height and width of 30,000 pixels, and a
length limit of 3 Gigabytes. Photoshop files sometimes have the
file extension .PSB, which stands for "Photoshop Big" (also known
as "large document format"). A PSB file extends the PSD file
format, increasing the maximum height and width to 300,000 pixels
and the length limit to around 4 Exabytes. The dimension limit was
apparently chosen arbitrarily by Adobe, not based on computer
arithmetic constraints (it is not close to a power of two, as is
30,000) but for ease of software testing. The PSD and PSB formats
are documented. Because of Photoshop's popularity, PSD files are
widely used and supported to some extent by most competing
software. The .PSD file format can be exported to and from Adobe's
other apps like Adobe Illustrator, Adobe Premiere Pro, and After
Effects, to make professional standard DVDs and provide non-linear
editing and special effects services, such as backgrounds,
textures, and so on, for television, film, and the web. Photoshop's
primary strength is as a pixel-based image editor, unlike
vector-based image editors. Photoshop also enables vector graphics
editing through its Paths, Pen tools, Shape tools, Shape Layers,
Type tools, Import command, and Smart Object functions. These tools
and commands make it convenient to combine pixel-based and
vector-based images in one Photoshop document, because it may not
be necessary to use more than one program. To create very complex
vector graphics with numerous shapes and colours, it may be easier
to use software that was created primarily for that purpose, such
as Adobe Illustrator or CorelDraw. Photoshop's non-destructive
Smart Objects can also import complex vector shapes. 7.2
Satellites: In the context
of spaceflight, a satellite is an object which has been placed into
orbit by human endeavour. Such objects are sometimes called
artificial satellites to distinguish them from natural satellites
such as the Moon. The world's first artificial satellite, Sputnik
1, was launched by the Soviet Union in 1957. Since then, thousands
of satellites have been launched into orbit around the Earth. Some
satellites, notably space stations, have been launched in parts and
assembled in orbit. Artificial satellites originate from more than
50 countries and have used the satellite launching capabilities of
ten nations. A few hundred satellites are currently operational,
whereas thousands of unused satellites and satellite fragments
orbit the Earth as space debris. A few space probes have been
placed into orbit around other bodies and become artificial
satellites to the Moon, Mercury, Venus, Mars, Jupiter, Saturn, and
the Sun. Satellites are used for a large number of purposes. Common
types include military and civilian Earth observation satellites,
communications satellites, navigation satellites, weather
satellites, and research satellites. Space stations and human
spacecraft in orbit are also satellites. Satellite orbits vary
greatly, d