IMAGE ENHANCEMENT TECHNIQUES FOR
FORENSIC CRIME SCENE INVESTIGATIONS
A Project report
Submitted in partial fulfillment of the
Requirement for the award of the degree of
BACHELOR OF TECHNOLOGY(B.Tech)
In
ELECTRONICS AND COMMUNICATION
ENGINEERING
By
A.SRIKANTH O. YUGANDHAR
(Y5EC281) (Y5EC301)
N.RAM PAVAN T.KARTEEK
(Y5EC307) (L6EC333)
Under the Esteemed Guidance of
Mr. M.S.G. PRASAD
Asst. Professor
Department of
ELECTRONICS AND COMMUNICATION ENGINEERING
KONERU LAKSHMAIAH COLLEGE OF ENGINEERING
(AUTONOMOUS)
GREEN FIELDS, VADDESWARAM
AFFILIATED TO ACHARYA NAGARJUNA UNIVERSITY,
APPROVED BY AICTE.
Department of
ELECTRONICS AND COMMUNICATION ENGINEERING
KONERU LAKSHMAIAH COLLEGE OF ENGINEERING
CERTIFICATE
This is to certify that the project report entitled IMAGE
ENHANCEMENT TECHNIQUES FOR FORENSIC CRIME
SCENE INVESTIGATIONS is a bonafide record of work done by
A.SRIKANTH, O.YUGANDHAR, N.RAM PAVAN, T.KARTEEK
in partial fulfillment of the requirements for the award of Bachelor
of Technology in "Electronics and Communication Engineering"
during the period 2005-09.
Guide Head Of The Department
Mr. M.S.G. PRASAD Dr. P. Siddaiah
Asst. Professor, Professor and Head,
ECE, KLCE ECE, KLCE
Acknowledgement
The satisfaction that one gets on completing a project is not
complete without mentioning, with gratitude, the people who made it
possible. I am very much grateful to the Almighty, who helped us all
the way throughout the project and who has molded us into what we
are today.
It is a great pleasure to acknowledge my profound sense of
gratitude to my project guide Asst. Prof. Mr. M.S.G. PRASAD for his
valuable and inspiring guidance, comments, suggestions and
encouragement throughout this project.
I would also like to thank all the staff members of the
Department of Electronics and Communication Engineering of KLCE
for their sound support. We are also greatly thankful to Dr. P. SIDDAIAH,
Dr. K.S. RAMESH, Dr. S. LAKSHMI NARAYANA and Dr. HABIBULAH
KHAN for their valuable suggestions for the development of this
report.
A.SRIKANTH O.YUGANDHAR
(Y5EC281) (Y5EC301)
N.RAM PAVAN T. KARTEEK
(Y5EC307) (L6EC333)
ABSTRACT
A problem with collecting forensic evidence at a crime scene is that the
evidence is often masked by background patterns. This makes it difficult to extract key
components from the evidence. Types of evidence on which this can occur include
fingerprints and shoe prints found at the crime scene. To correct this problem, image
enhancement techniques can be used to obtain the relevant information that the
investigators need.
[Figures: a fingerprint and a shoe print]
In this project we enhance such forensic images, for example fingerprints and shoe
prints, to give a clear view by removing the unwanted background patterns, using
different enhancement techniques such as the Fast Fourier Transform (FFT), Image
Subtraction, Gamma Correction, Contrast Stretching and Histogram Equalization. In
addition, we develop MATLAB code for these enhancement techniques.
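As a rough preview of the MATLAB work, the following sketch chains several of
these techniques on a grayscale evidence image. It is a minimal sketch, not the
project's actual code: the file name evidence.jpg is a placeholder, and the function
names (imadjust, histeq, imsubtract, imopen) assume the Image Processing Toolbox.

```matlab
% Sketch of the enhancement chain (assumes Image Processing Toolbox).
% 'evidence.jpg' is a placeholder file name.
I = imread('evidence.jpg');
G = rgb2gray(I);                    % work on an intensity image

% Frequency-domain view (FFT): periodic background patterns appear as
% bright spikes that can be masked out before inverting the transform.
F = fftshift(fft2(double(G)));

% Gamma correction (gamma < 1 brightens mid-tones).
Gam = imadjust(G, [], [], 0.5);

% Contrast stretching to the full intensity range.
Str = imadjust(G, stretchlim(G), []);

% Histogram equalization.
Heq = histeq(G);

% Image subtraction: remove an estimated background pattern.
bg  = imopen(G, strel('disk', 15));  % one way to estimate the background
Sub = imsubtract(G, bg);
```

Each of these techniques is described in its own chapter below.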
IMAGE ENHANCEMENT TECHNIQUES FOR FORENSIC
CRIME SCENE INVESTIGATIONS
CONTENTS:
CHAPTER 1
INTRODUCTION TO DIGITAL IMAGE PROCESSING
1.1 Basics of Digital Image Processing
1.2 Fundamentals steps in digital image processing
1.3 Components of an image processing system
CHAPTER 2
INTRODUCTION TO MATLAB
CHAPTER 3
FAST FOURIER TRANSFORM(FFT)
CHAPTER 4
IMAGE SUBTRACTION
CHAPTER 5
GAMMA CORRECTION
CHAPTER 6
CONTRAST STRETCHING
CHAPTER 7
HISTOGRAM EQUALIZATION
MATLAB CODES
CHAPTER 1
INTRODUCTION TO DIGITAL IMAGE PROCESSING
Introduction
1.1 Basics of Digital Image Processing:
An image may be defined as a two-dimensional function, f(x, y),
where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of
coordinates (x, y) is called the intensity or gray level of the image at that point. When
x, y and the amplitude values of f are all finite, discrete quantities, we call the image a
digital image. The field of digital image processing refers to processing digital images
by means of a digital computer. Note that a digital image is composed of a finite number
of elements, each of which has a particular location and value. These elements are
referred to as picture elements, image elements, pels and pixels. Pixel is the term most
widely used to denote the elements of a digital image.
Vision is the most advanced of our senses, so it is not surprising
that images play the single most important role in human perception. However, unlike
humans, who are limited to the visual band of the electromagnetic spectrum, imaging
machines cover almost the entire EM spectrum, ranging from gamma to radio waves.
They can operate on images generated by sources that humans are not accustomed to
associating with images. These include ultrasound, electron microscopy, and computer-
generated images. Thus, digital image processing encompasses a wide and varied field
of applications.
There is no general agreement among authors regarding where
image processing stops and other related areas, such as image analysis and computer
vision, start. Sometimes a distinction is made by defining image processing as a
discipline in which both the input and output of a process are images. We believe this to
be a limiting and somewhat artificial boundary. For example, under this definition, even
the trivial task of computing the average intensity of an image would not be considered
an image processing operation. On the other hand, there are fields such as computer
vision whose ultimate goal is to use computers to emulate human vision, including
learning and being able to make inferences and take action based on visual inputs. This
area itself is a branch of artificial intelligence, whose objective is to emulate human
intelligence. The field of AI is in its earliest stages of infancy in terms of development,
with progress having been much slower than originally anticipated. The area of image
analysis lies between image processing and computer vision.
There are no clear-cut boundaries in the continuum from image
processing at one end to computer vision at the other. However, one useful paradigm is
to consider three types of computerized processes in this continuum: low-, mid- and
high-level processes. Low-level processes involve primitive operations such as image
pre-processing to reduce noise, contrast enhancement and image sharpening. A low-level
process is characterized by the fact that both its inputs and outputs are images. Mid-level
processing of images involves tasks such as segmentation, description of those objects
to reduce them to a form suitable for computer processing, and classification of
individual objects. A mid-level process is characterized by the fact that its inputs
generally are images, but its outputs are attributes extracted from those images (e.g.
edges, contours). Higher-level processing involves "making sense" of an ensemble of
recognized objects, as in image analysis, and, at the far end of the continuum,
performing the cognitive functions normally associated with vision.
1.2 Fundamentals steps in digital image processing:
Image acquisition is the first process shown in Figure 1.1. Image
acquisition could be as simple as being given an image that is already in digital
form. Generally, image acquisition involves pre-processing such as scaling.
Image enhancement is among the simplest and most appealing
areas of digital image processing. Basically, the idea behind enhancement
techniques is to bring out detail that is obscured, or simply to highlight certain features
of interest in an image. A familiar example of enhancement is when we increase the
contrast of an image because "it looks better". It is important to keep in mind that
enhancement is a very subjective area of image processing. Image restoration is an area
that also deals with improving the appearance of an image. However, unlike
enhancement, which is subjective, image restoration is objective, in the sense that
restoration techniques tend to be based on mathematical or probabilistic models of
image degradation. Enhancement, on the other hand, is based on human subjective
preferences regarding what constitutes a "good" enhancement result.
Figure 1.1: Fundamental Steps in Digital Image Processing
Color image processing is an area that has been gaining in
importance because of the significant increase in the use of digital images over the
Internet.
Wavelets are the foundation for representing images in various
degrees of resolution. In particular, this material is used for image data compression and
for pyramidal representation, in which images are subdivided successively into smaller
regions.
Compression, as the name implies, deals with techniques for reducing
the storage required to save an image, or the bandwidth required to transmit it. Although
storage technology has improved significantly over the past decade, the same cannot be
said for transmission capacity. This is particularly true in uses of the Internet, which are
characterized by significant pictorial content. Image compression is familiar to most
users of computers in the form of image file extensions, such as the jpg file extension
used in the JPEG (Joint Photographic Experts Group) image compression standard.
Morphological processing deals with tools for extracting image
components that are useful in the representation and description of shape. Segmentation
procedures partition an image into its constituent parts or objects. In general,
autonomous segmentation is one of the most difficult tasks in digital image processing.
A rugged segmentation procedure brings the process a long way toward successful
solution of imaging problems that require objects to be identified individually; weak or
erratic segmentation algorithms almost always guarantee eventual failure. In general,
the more accurate the segmentation, the more likely recognition is to succeed.
Representation and description almost always follow the output of a
segmentation stage, which usually is raw pixel data, constituting either the boundary of a
region or all the points in the region itself. In either case, converting the data to a form
suitable for computer processing is necessary. The first decision that must be made is
whether the data should be represented as a boundary or as a complete region. Boundary
representation is appropriate when the focus is on external shape characteristics, such as
corners and inflections. Regional representation is appropriate when we focus on internal
properties such as texture or skeletal shape. In some applications these representations
complement each other. Choosing a representation is only part of the solution for
transforming raw data into a form suitable for subsequent computer processing. A
method must also be specified for describing the data so that features of interest are
highlighted. Description, also called feature selection, deals with extracting attributes
that result in some quantitative information of interest or are basic for differentiating one
class of objects from another.
Recognition is the process that assigns a label (e.g., "vehicle") to an
object based on its descriptors. We conclude our coverage of digital image processing
with the development of methods for recognition of individual objects.
So far we have said nothing about the need for prior knowledge or about
the interaction between the knowledge base and the processing modules in Figure 1.1.
Knowledge about a problem domain is coded into an image processing system in the
form of a knowledge base. This knowledge may be as simple as detailing regions of
an image where the information of interest is known to be located, thus limiting the
search that has to be conducted in seeking that information. The knowledge base also
can be quite complex, such as an interrelated list of all major possible defects in a
materials inspection problem, or an image database containing high-resolution satellite
images of a region in connection with change-detection applications. In addition to
guiding the operation of each processing module, the knowledge base also controls the
interaction between modules. This distinction is made in Figure 1.1 by the use of double-
headed arrows between the processing modules and the knowledge base, as opposed to
single-headed arrows linking the processing modules.
Although we do not discuss image display explicitly at this point, it is
important to keep in mind that viewing the result of image processing can take place at
the output of any stage in Figure 1.1. We also note that not all image-processing
applications require the complexity of interaction implied by Figure 1.1. In fact, not
even all of those modules are needed in some cases. For example, image enhancement
for human visual interpretation seldom requires use of any of the other stages in Figure
1.1. In general, however, as the complexity of an image processing task increases, so
does the number of processes required to solve the problem.
1.3 Components of an image processing system:
As recently as the mid-1980s, numerous models of image processing
systems being sold throughout the world were rather substantial peripheral devices that
attached to equally substantial host computers. Late in the 1980s and early in the 1990s,
the market shifted to image processing hardware in the form of single boards designed
to be compatible with industry-standard buses and to fit into engineering workstation
cabinets and personal computers. In addition to lowering costs, this market shift also
served as a catalyst for a significant number of companies whose specialty is the
development of software written specifically for image processing.
Although large-scale image processing systems still are being sold for
massive imaging applications, such as processing of satellite images, the trend continues
toward miniaturization and the blending of general-purpose small computers with
specialized image processing hardware. Figure 1.2 shows the basic components
comprising a typical general-purpose system used for digital image processing. The
function of each component is discussed in the following.
With reference to sensing, two elements are required to acquire digital
images. The first is a physical device that is sensitive to the energy radiated by the
object we wish to image. The second, called a digitizer, is a device for converting the
output of the physical sensing device into digital form. For instance, in a digital video
camera, the sensors produce an electrical output proportional to light intensity. The
digitizer converts these outputs to digital data.
Specialized image processing hardware usually consists of the digitizer
and hardware that performs other primitive operations, such as an arithmetic logic unit
(ALU), which performs arithmetic and logic operations in parallel on entire images.
This type of hardware sometimes is called a front-end subsystem, and its most
distinguishing characteristic is speed. In other words, this unit performs functions that
require fast data throughput (e.g., digitizing and averaging video images at 30
frames/sec) that the typical main computer cannot handle.
Figure 1.2 : Components of an Image Processing System
The computer in an image processing system is a general-purpose
computer and can range from a PC to a supercomputer. In dedicated applications,
sometimes specially designed computers are used to achieve a required level of
performance, but our interest here is in general-purpose image processing systems. In
these systems, almost any well-equipped PC-type machine is suitable for offline image
processing tasks.
Software for image processing consists of specialized modules that
perform specific tasks. A well-designed package also includes the capability for the user
to write code that, as a minimum, utilizes the specialized modules. More sophisticated
software packages allow the integration of those modules and general-purpose software
commands from at least one computer language.
Mass storage capability is a must in image processing applications. An
image of size 1024x1024 pixels, in which the intensity of each pixel is an 8-bit quantity,
requires one megabyte of storage space if the image is not compressed. When dealing
with thousands, or even millions, of images, providing adequate storage in an image
processing system can be a challenge. Digital storage for image processing applications
falls into three principal categories: (1) short-term storage for use during processing,
(2) online storage for relatively fast recall, and (3) archival storage, characterized by
infrequent access. Storage is measured in bytes, kilobytes, megabytes, gigabytes and
terabytes.
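The one-megabyte figure quoted above can be checked directly in MATLAB (a quick
sanity check, not part of the original report):

```matlab
% An uncompressed 1024-by-1024 image at 8 bits (1 byte) per pixel:
bytes = 1024 * 1024 * 1   % = 1048576 bytes
MB = bytes / 2^20         % = 1 megabyte
```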
One method of providing short-term storage is computer memory;
another is by specialized boards called frame buffers that store one or more images and
can be accessed rapidly, usually at video rates. The latter method allows virtually
instantaneous image zoom, as well as scroll and pan. Frame buffers usually are housed
in the specialized image processing hardware unit shown in Figure 1.2. Online storage
generally takes the form of magnetic discs or optical media storage. The key factor
characterizing online storage is frequent access to the stored data. Finally, archival
storage is characterized by massive storage requirements but infrequent need for access.
Magnetic tapes and optical discs housed in 'jukeboxes' are the usual media for archival
applications.
Image displays in use today are mainly color TV monitors. Monitors
are driven by the outputs of image and graphics display cards that are an integral part of
the computer system. Seldom are there requirements for image display applications that
cannot be met by display cards available commercially as part of the computer system.
In some cases, it is necessary to have stereo displays, and these are implemented in the
form of headgear containing two small displays embedded in goggles worn by the user.
Hard copy devices for recording images include laser printers, film
cameras, heat-sensitive devices, ink-jet units, and digital units such as optical and
CD-ROM discs. Film provides the highest possible resolution, but paper is the obvious
medium of choice for written material. For presentations, images are displayed on film
transparencies or in a digital medium if image projection equipment is used. The latter
approach is gaining acceptance as the standard for image presentations.
Networking is almost a default function in any computer system in use
today. Because of the large amount of data inherent in image processing applications,
the key consideration in image transmission is bandwidth. In dedicated networks, this
typically is not a problem, but communications with remote sites via the Internet are not
always as efficient. Fortunately, this situation is improving quickly as a result of optical
fibre and other broadband technologies.
CHAPTER-2
INTRODUCTION TO MATLAB
2.1 Basic Matlab Operations
MATLAB was originally developed as a MATrix LABoratory in the late
seventies. Today it is much more powerful, and it remains convenient and fast for
numerical matrix computations; it is also powerful for graphics. MATLAB has many
toolboxes, such as optimization, signal processing and wavelet transforms.
MATLAB is case sensitive, e.g., pi is 3.14159..., but Pi isn't. To get help on a function
such as diff, you can use either the help menu, or type help diff or help('diff').
% starts a comment in a line
; (at the end of a statement) suppresses the display of the result
... (at the end of a line) continues the statement to the next line
\ left division, e.g., 3\15 equals 15/3, and A\b gives A^(-1) b
  if A is invertible; A\b is also meaningful when A is singular
ans default variable name of the result, available to the next statement
i (or j) sqrt(-1)
eps machine precision
realmin the smallest positive floating point number
realmax the largest positive floating point number
Inf infinity, e.g., 1/0
NaN Not-a-Number, e.g., 0/0
2.1.1 Numbers
A number can be displayed in various formats in MATLAB. For example, the
answer to x = 100/7 is usually displayed as 14.2857, which is the default display format
short. However, it is stored with 16 digits (double precision). If you type format long
you will see x = 14.28571428571429. But all later outputs will also be displayed in
format long. If you want to resume the original format, just type format short % or
simply format, format rat % returns the rational expression 100/7, help format % more
information about format. The functions round(x), floor(x), ceil(x), fix(x) return integer
approximations to a floating point number x.
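The four rounding functions differ only in the direction they round. A small
illustrative session (the commented results follow MATLAB's definitions):

```matlab
x = 14.2857;
round(x)      % nearest integer: 14
floor(x)      % round toward minus infinity: 14
ceil(x)       % round toward plus infinity: 15
fix(x)        % round toward zero: 14
fix(-14.2857) % round toward zero: -14 (floor would give -15)
```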
2.1.2 Arrays and Matrices
To create a one-dimensional array of five elements -2, 3, 0, 4.5, -1.5 (also known
as a row vector or a 1-by-5 matrix) named v, you can use
v = [-2 3 0 4.5 -1.5] % or
v = [-2, 3, 0, 4.5, -1.5]
v(1) % the first element, -2
v(2:4) % the sub-array consisting of v(2), v(3) and v(4)
v(4 : -1 : 2) % the sub-array consisting of v(4), v(3) and v(2)
4 : -1 : 2 % array 4, 3, 2 with increment -1
8 : 1 : 12 % array 8, 9, 10, 11, 12
8 : 12 % same as 8 : 1 : 12
x = linspace(-pi, pi, 21) % 21 is the number of elements in the row vector
A = [1 2 3; 4 5 6] % a 2-by-3 matrix
A = [1 2 3
4 5 6] % the same matrix, entered row by row
A(1,2) % the element in row 1 and column 2
A(:,2) % the second column
A(2, :) % the second row
A(2,1:2) % a row vector with two elements A(2, 1) and A(2, 2).
A(:) % all elements in A as a single column
% If A(:) is on the left side of an assignment, it fills A
% and the size of A remains the same as before.
A+B % matrix addition
A - B % matrix subtraction
2 * B % scalar multiplication
A * B % matrix multiplication
A .* B % element-by-element multiplication
A ./ B % element-by-element division
A .\ B % element-by-element left division
A.^B % element-by-element power
A' % complex conjugate transpose
A.' % transpose; when A is real, A.' = A'
det(A) % determinant
A^(-1) % inverse
inv(A) % inverse
A =[1 2 3; 2 5 3; 1 0 8] % a square matrix
b = [2; 1; 0] % a column vector
x = inv(A) * b % solve A x = b if A is non-singular
x = A \ b % a better way to solve A x = b
x = A \ b is better because it uses the LU factorization (a modification of Gaussian elimination)
which is much more efficient when the matrix is large, and because it can give least squares
solutions when A is singular or when A is not a square matrix.
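As a concrete illustration of this point (the matrix values here are made up for the
example):

```matlab
A = [2 0; 1 3];   % a small non-singular matrix
b = [4; 7];
x1 = inv(A) * b;  % works, but forms the inverse explicitly
x2 = A \ b;       % preferred: uses LU factorization, no explicit inverse
% For a tall (over-determined) system, A \ b returns the least squares
% solution, which inv(A) * b cannot do since inv needs a square matrix.
```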
[V, D] = eig(A) % returns the eigenvectors of A in the matrix V
% and the eigenvalues as the diagonal elements of
% diagonal matrix D
[V, D] = eig(A) % same as
[V D] = eig(A)
size(A)
length(A)
rank(A)
norm(A) % 2-norm, same as norm(A, 2)
norm(A, 1) % 1-norm
norm(A, inf) % infinity norm
poly(A) % characteristic polynomial of matrix A
diag(v) % change a vector v to a diagonal matrix
diag(A) % get the diagonal elements of the matrix A
eye(n) % identity matrix of order n
zeros(m, n) % form an m-by-n zero matrix
ones(m, n) % form an m-by-n matrix with all entries equal 1
2.1.3 Control Flow Examples
sum = 0; factorial = 1; % an example of loop
for n = 1 : 10
sum = sum +n;
factorial = factorial * n;
end
sum = 0; factorial = 1; n = 1; % an example of while-loop
while n <= 10
sum = sum + n; factorial = factorial * n; n = n + 1;
end
if x > 0 % an example of if-else-if-structure
disp('x is positive')
elseif x< 0
disp('x is negative')
else
disp('x is neither positive nor negative')
end
d = eig(A);
if isequal(A, A') && all(d > 0)
disp('A is positive definite')
end
2.1.4 Loading and saving variables in Matlab
This section explains how to load and save variables in Matlab. Once you have
read an image file, you probably convert it into an intensity image (a matrix) and work
with this matrix. Once you are done you may want to save the matrix representing the
image in order to continue to work with it at another time. This is easily done using the
commands save and load.
Loading and saving variables
Operation: Matlab command
Save the variable x: save x
Load the variable x: load x
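For example (save x writes to a MAT-file named after the variable, here x.mat):

```matlab
x = magic(4);  % some variable worth keeping
save x         % writes x to the file x.mat
clear x        % remove x from the workspace
load x         % restores x from x.mat
```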
2.1.5 M-file Scripts
To solve complicated problems, you don't have to use MATLAB
interactively. Instead, you can type all commands in an ASCII file named with a .m
extension. It is called a script file or an M-file. For instance, you can type the while-loop
example in a file called example.m using any text editor, or using the Open M-file item
in the File menu. In MATLAB, use cd path to change to the directory containing
example.m, then just type example, and the commands in this file will be executed. You
can also use pwd in MATLAB to see your present working directory. When you choose
a filename for an M-file, avoid the variable names you may use and the names of
MATLAB built-in functions. You can use the command who to see all variables you
have used in a session, and use help to see if a name is a built-in function.
2.1.6 M-file Functions
You can write your own functions in an M-file starting with a line such as
function y = functionname(x1, x2, x3) % with one output, or
function [y1, y2] = functionname(x1, x2, x3) % with two outputs y1 and y2.
The function name must be the same as the M-file name. The comment lines
immediately following the first line can be seen with help functionname. Variables
inside a function are local if they do not appear in the first line. All output variables
should be assigned values. The number of arguments passed to a function is stored in
the variable nargin. The following is an example M-file function, and it should be stored
in a file called circle.m.
function [c, area] = circle(r)
% [c, area] = circle(r) returns the circumference and interior area
% of a circle with radius r.
if nargin ~= 1 % error checking
    error('There should be one input argument.')
end
if r < 0
    error('Radius should be non-negative.')
end
c = 2 * pi * r;
area = pi * r^2;
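Once circle.m is on the path, the function is called like any built-in function:

```matlab
[c, a] = circle(1)
% c = 6.2832 (the circumference, 2*pi)
% a = 3.1416 (the area, pi)
circle(-1)   % raises the error 'Radius should be non-negative.'
```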
2.1.7 Input/Output
The following is an example to read ten double precision numbers from an ASCII file
named in.dat into a column vector v.
fid = fopen('in.dat', 'rt') % open a text file for reading
v = fscanf(fid, '%lg', 10) % read 10 numbers from the file
fclose(fid) % close the file
You may use fprintf( ) for output. When fid is 1 or is omitted in fprintf( ), it outputs to
the standard output, i.e., the screen.
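A matching output example (a sketch writing the vector v from above back out to a
file; out.dat is a placeholder name):

```matlab
fid = fopen('out.dat', 'wt');  % open a text file for writing
fprintf(fid, '%g\n', v);       % write one number per line
fclose(fid);                   % close the file
fprintf('v has %d elements\n', length(v)) % fid omitted: print to screen
```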
2.2 Image Processing Toolbox In Matlab
This is an introduction on how to handle images in Matlab. When working with
images in Matlab, there are many things to keep in mind, such as loading an image,
using the right format, saving the data as different data types, how to display an image,
conversion between different image formats, etc. This worksheet presents some of the
commands designed for these operations. Most of these commands require you to have
the Image processing tool box installed with Matlab. To find out if it is installed, type
ver at the Matlab prompt. This gives you a list of the tool boxes that are installed on
your system. For further reference on image handling in Matlab use Matlab's help
browser. There is an extensive (and quite good) on-line
manual for the Image processing tool box that can be accessed via Matlab's help browser.
A digital image is composed of pixels, which can be thought of as small dots on
the screen. A digital image is an instruction of how to color each pixel. A typical size of
an image is 512-by-512 pixels. It is convenient to let the dimensions of the image be a
power of 2; for example, 2^9 = 512. In the general case we say that an image is of size
m-by-n if it is composed of m pixels in the vertical direction and n pixels in the
horizontal direction.
Let us say that we have an image of the format 512-by-1024 pixels. This means
that the data for the image must contain information about 524288 pixels, which
requires a lot of memory! Hence, compressing images is essential for efficient image
processing.
When you store an image, you should store it as a uint8 image, since this
requires far less memory than double. When you are processing an image (that is,
performing mathematical operations on an image) you should convert it into a double.
Converting back and forth between these classes is easy.
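With the Image processing tool box, the conversion can be done as follows (a sketch;
myimage.jpg is the placeholder file name used elsewhere in this worksheet, and
im2double/im2uint8 also rescale the values between the two ranges):

```matlab
I8 = imread('myimage.jpg'); % imread returns a uint8 image
Id = im2double(I8);         % uint8 [0, 255] -> double [0, 1]
J  = Id .^ 2;               % arithmetic like this needs the double class
J8 = im2uint8(J);           % double [0, 1] -> uint8 [0, 255]
whos I8 Id                  % Id uses 8 times the memory of I8
```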
2.2.1 Image formats supported by Matlab
The following image formats are supported by Matlab:
BMP, HDF, JPEG, PCX, TIFF, XWD
Most images you find on the Internet are JPEG images, which is the name of
one of the most widely used compression standards for images. If you have stored an
image you can usually see from the suffix what format it is stored in. For example, an
image named myimage.jpg is stored in the JPEG format, and we will see later on that we
can load an image of this format into Matlab. If an image is stored as a JPEG image on
your disc we first read it into Matlab. However, in order to start working with an image,
for example to perform a wavelet transform on the image, we must convert it into a
different format. This section explains four common formats.
a) Intensity image (gray scale image)
This is the equivalent to a "gray scale image" and this is the image we will mostly
work with in this project. It represents an image as a matrix where every element has a
value corresponding to how bright/dark the pixel at the corresponding position should
be colored. There are two ways to represent the number that describes the brightness of
a pixel. The first is the double class (or data type), which assigns a floating point
number ("a number with decimals") between 0 and 1 to each pixel. The value 0
corresponds to black and the value 1 corresponds to white. The other class is called
uint8, which assigns an integer between 0 and 255 to represent the brightness of a pixel.
The value 0 corresponds to black and 255 to white. The class uint8 only requires
roughly 1/8 of the storage compared to the class double. On the other hand, many
mathematical functions can only be applied to the double class.
b) Binary Image
This image format also stores an image as a matrix, but can only color a pixel
black or white (and nothing in between). It assigns a 0 for black and a 1 for white.
c) Indexed Image
This is a practical way of representing color images. An indexed image stores an
image as two matrices. The first matrix has the same size as the image, with one number
for each pixel. The second matrix is called the color map and its size may be different
from the image. The numbers in the first matrix are instructions for which entry of the
color map matrix to use for each pixel.
d) RGB Image
This is another format for color images. It represents an image with three
matrices of sizes matching the image format. Each matrix corresponds to one of the colors red,
green or blue and gives an instruction of how much of each of these colors a certain pixel should
use.
The following table shows how to convert between the different formats given
above.
Image Format Conversion
Operation Matlab Command
Convert an intensity/indexed/RGB image to binary format. dither( )
Convert intensity format to indexed format. gray2ind( )
Convert indexed format to intensity format. ind2gray( )
Convert indexed format to RGB format. ind2rgb( )
Convert a regular matrix to intensity format by scaling. mat2gray( )
Convert RGB format to intensity format. rgb2gray( )
Convert RGB format to indexed format. rgb2ind( )
The command mat2gray is useful if you have a matrix representing an image but the values
representing the gray scale range between, let's say, 0 and 1000. The command mat2gray
automatically rescales all entries so that they fall within 0 and 255 (if you use the uint8 class) or
0 and 1 (if you use the double class).
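The rescaling that mat2gray performs can be sketched in plain Python for the double convention (a minimal illustration, not the Toolbox implementation; the name mat2gray_like is made up):

```python
def mat2gray_like(matrix):
    """Rescale all entries of a matrix linearly so the smallest value
    becomes 0.0 and the largest becomes 1.0 (the 'double' convention)."""
    flat = [v for row in matrix for v in row]
    lo, hi = min(flat), max(flat)
    span = hi - lo if hi != lo else 1  # avoid division by zero for flat images
    return [[(v - lo) / span for v in row] for row in matrix]

m = [[0, 250], [500, 1000]]
print(mat2gray_like(m))  # [[0.0, 0.25], [0.5, 1.0]]
```

The same linear map scaled by 255 and rounded would give the uint8 convention.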
2.4 Reading Image Files
When you encounter an image you want to work with, it is usually in the form of a file (for
example, if you download an image from the web, it is usually stored as a JPEG file). Once we
are done processing an image, we may want to write it back to a JPEG file so that we
can, for example, post the processed image on the web. This is done using the imread
and imwrite commands. These commands require the Image Processing Toolbox!
Reading and writing image files
Operation Matlab Command
Read an image. (Within the parentheses you type the name of the image file you wish to read. Put the file name within single quotes ' '.)
imread( )
Write an image to a file.
(As the first argument within the parentheses you type the name of the image you have worked with.
As the second argument you type the file name and format that you want to write the image to.
Put the file name within single quotes ' '.)
imwrite( )
Make sure to use a semicolon (;) after these commands; otherwise you will get lots of numbers scrolling on your screen.
2.5 Displaying Images in MATLAB
Here are a couple of basic Matlab commands (which do not require any toolbox) for displaying an image.
Displaying an image given on matrix form
Operation Matlab command
Display an image represented as a matrix X. imagesc(X)
Adjust the brightness. (s is a parameter such that -1 < s < 0 gives a darker image and 0 < s < 1 gives a brighter image.) brighten(s)
Change the colors to gray scale. colormap(gray)
Sometimes your image may not be displayed in gray scale even though you might have converted it into a gray scale image. You can then use the command colormap(gray) to "force" Matlab to use a gray scale when displaying an image.
If you are using Matlab with the Image Processing Toolbox installed, you can use the command imshow to display an image.
Displaying an image given on matrix form (with Image Processing Toolbox)
Operation Matlab command
Display an image represented as a matrix X. imshow(X)
Zoom in (using the left and right mouse button). zoom on
Turn off the zoom function. zoom off
2.6 Commands in the Image Processing Toolbox
colorbar - Display colorbar (MATLAB Toolbox).
getimage - Get image data from axes.
image - Create and display image object (MATLAB Toolbox).
imagesc - Scale data and display as image (MATLAB Toolbox).
immovie - Make movie from multiframe image.
imshow - Display image.
montage - Display multiple image frames as rectangular montage.
movie - Play recorded movie frames (MATLAB Toolbox).
subimage - Display multiple images in single figure.
truesize - Adjust display size of image.
warp - Display image as texture-mapped surface.
a) Image file I/O.
dicominfo - Read metadata from a DICOM message.
dicomread - Read a DICOM image.
dicomwrite - Write a DICOM image.
dicom-dict.txt - Text file containing DICOM data dictionary.
imfinfo - Return information about image file (MATLAB Toolbox).
imread - Read image file (MATLAB Toolbox).
imwrite - Write image file (MATLAB Toolbox).
b) Image arithmetic.
imabsdiff - Compute absolute difference of two images.
imadd - Add two images, or add constant to image.
imcomplement - Complement image.
imdivide - Divide two images, or divide image by constant.
imlincomb - Compute linear combination of images.
immultiply - Multiply two images, or multiply image by constant.
imsubtract - Subtract two images, or subtract constant from image.
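For uint8 images these arithmetic commands saturate at the ends of the 0-255 range rather than wrapping around. That behavior can be sketched in plain Python (the *_like names are invented for illustration; MATLAB's actual implementations are vectorized):

```python
def imsubtract_like(a, b):
    """Pixel-wise a - b with results clipped at 0, mirroring
    saturating uint8 subtraction."""
    return [[max(0, x - y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def imadd_like(a, b):
    """Pixel-wise a + b, saturating at 255."""
    return [[min(255, x + y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

with_print = [[120, 200], [30, 90]]
background = [[100, 210], [30, 40]]
print(imsubtract_like(with_print, background))  # [[20, 0], [0, 50]]
```

Note how 200 - 210 clips to 0 instead of wrapping to 246; this clipping is why the background-subtraction results later in the report come out dark where the two photographs agree.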
c) Geometric transformations
checkerboard - Create checkerboard image.
findbounds - Find output bounds for geometric transformation.
fliptform - Flip the input and output roles of a TFORM struct.
imcrop - Crop image.
imresize - Resize image.
imrotate - Rotate image.
imtransform - Apply geometric transformation to image.
Suppose the transfer function between the input and output density levels, e.g. D_B = f(D_A), is single-valued and monotonically increasing (as is
the case in histogram equalization) so that it is possible to define the inverse law
D_A = f^(-1)(D_B). An example of such a transfer function is illustrated in Figure 1.
Figure 1: A histogram transformation function.
All pixels in the input image with densities in the region D_A to D_A + dD_A will have
their pixel values re-assigned such that they assume an output pixel density value in the
range from D_B to D_B + dD_B. The surface areas h_A(D_A) dD_A and h_B(D_B) dD_B will
therefore be equal, yielding:

h_B(D_B) = h_A(D_A) / (dD_B / dD_A)

where D_A = f^(-1)(D_B).
This result can be written in the language of probability theory if the histogram h is
regarded as a continuous probability density function p describing the distribution of the
(assumed random) intensity levels:

p_B(D_B) = p_A(D_A) / (dD_B / dD_A)
In the case of histogram equalization, the output probability densities should all be an
equal fraction of the maximum number of intensity levels D_M in the input image (where
the minimum level considered is 0). The transfer function (or point operator) necessary
to achieve this result is simply:

dD_B / dD_A = D_M p_A(D_A)

Therefore,

f(D_A) = D_M * integral from 0 to D_A of p_A(u) du = D_M F_A(D_A)

where F_A(D_A) is simply the cumulative probability distribution (i.e. cumulative
histogram) of the original image. Thus, an image which is transformed using its
cumulative histogram yields an output histogram which is flat!
A digital implementation of histogram equalization is usually performed by defining a
transfer function of the form:

f(D_A) = max(0, round(D_M * n_k / N) - 1)

where N is the number of image pixels and n_k is the number of pixels at intensity level k
or less.
In the digital implementation, the output image will not necessarily be fully equalized
and there may be 'holes' in the histogram (i.e. unused intensity levels). These effects are
likely to decrease as the number of pixels and intensity quantization levels in the input
image are increased.
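The digital transfer function just defined can be sketched in plain Python (an illustration, not the report's implementation; the image is taken as a flat list of integer gray levels):

```python
def equalize(levels, num_levels):
    """Histogram-equalize a flat list of integer gray levels using the
    transfer function f = max(0, round(D_M * n_k / N) - 1), where n_k
    is the count of pixels at level k or less."""
    n = len(levels)
    d_m = num_levels  # maximum number of intensity levels D_M
    # histogram of the input
    hist = [0] * num_levels
    for v in levels:
        hist[v] += 1
    # cumulative counts n_k
    cum, total = [], 0
    for h in hist:
        total += h
        cum.append(total)
    # transfer function: adding 0.5 before truncating implements rounding
    lut = [max(0, int(d_m * nk / n + 0.5) - 1) for nk in cum]
    return [lut[v] for v in levels]

pixels = [0, 0, 0, 1, 1, 2, 3, 3]
print(equalize(pixels, 4))  # [1, 1, 1, 2, 2, 2, 3, 3]
```

Even in this tiny example the output shows the 'holes' mentioned above: level 0 is unused because rounding cannot split the pixels that share a level.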
7.3 Common Variants
Histogram Specification
Histogram equalization is limited in that it is capable of producing only one result: an
image with a uniform intensity distribution. Sometimes it is desirable to be able to
control the shape of the output histogram in order to highlight certain intensity levels in
an image. This can be accomplished by the histogram specification operator, which
maps a given intensity distribution into a desired distribution using a
histogram equalized image as an intermediate stage.
The first step in histogram specification is to specify the desired output density
function and write a transformation g(c). If g^(-1)(c) is single-valued (which is true when
there are no unfilled levels in the specified histogram or errors in the process of
rounding off g^(-1)(c) to the nearest intensity level), then c = g^(-1)(b) defines a mapping from
the equalized levels b = f(a) of the original image. It is possible to combine
these two transformations such that the image need not be histogram equalized
explicitly:

c = g^(-1)(b) = g^(-1)(f(a))
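This two-stage mapping can be sketched in plain Python for small images given as flat lists of integer levels (the function names are invented for illustration; inverting g is done by searching the target cumulative distribution):

```python
def cdf(levels, num_levels):
    """Cumulative distribution of a flat list of integer gray levels."""
    hist = [0] * num_levels
    for v in levels:
        hist[v] += 1
    out, total = [], 0
    for h in hist:
        total += h
        out.append(total / len(levels))
    return out

def specify_histogram(source, target, num_levels):
    """Map source levels so their histogram approximates the target's:
    compute b = f(a) from the source CDF, then apply g^(-1), i.e. pick
    the smallest target level whose CDF reaches f(a)."""
    f = cdf(source, num_levels)
    g = cdf(target, num_levels)
    lut = []
    for k in range(num_levels):
        c = next(j for j in range(num_levels) if g[j] >= f[k])
        lut.append(c)
    return [lut[v] for v in source]

print(specify_histogram([0, 0, 1, 1], [2, 2, 3, 3], 4))  # [2, 2, 3, 3]
```

With a flat target distribution this reduces to ordinary histogram equalization, which is why the equalized image serves as the intermediate stage.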
Local Enhancements
The histogram processing methods discussed above are global in the
sense that they apply a transformation function whose form is based on the
intensity level distribution of an entire image. Although this method can enhance
the overall contrast and dynamic range of an image (thereby making certain
details more visible), there are cases in which enhancement of details over small
areas (i.e. areas whose total pixel contribution to the total number of image
pixels has a negligible influence on the global transform) is desired. The solution
in these cases is to derive a transformation based upon the intensity distribution
in the local neighborhood of every pixel in the image.
The histogram processes described above can be adapted for local
enhancement. The procedure involves defining a neighborhood around each pixel and,
using the histogram characteristics of this neighborhood, to derive a transfer function
which maps that pixel into an output intensity level. This is performed for each pixel in
the image. (Since moving across rows or down columns only adds one new pixel to the
local histogram, updating the histogram from the previous calculation with new data
introduced at each motion is possible.) Local enhancement may also define transforms
based on pixel attributes other than histogram, e.g. intensity mean (to control variance)
and variance (to control contrast) are common.
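The neighborhood procedure can be sketched in plain Python as a slow, rank-based local equalization (an illustration of the idea only; it does not use the incremental histogram update mentioned above, and the function name is invented):

```python
def local_equalize(img, radius, num_levels):
    """For each pixel, build the (2*radius+1)-square neighborhood,
    clipped at the image border, and map the pixel through the local
    cumulative histogram -- the local analogue of global equalization."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            neigh = [img[y][x]
                     for y in range(max(0, i - radius), min(h, i + radius + 1))
                     for x in range(max(0, j - radius), min(w, j + radius + 1))]
            # rank = local cumulative histogram evaluated at this pixel
            rank = sum(1 for v in neigh if v <= img[i][j])
            out[i][j] = int((num_levels - 1) * rank / len(neigh) + 0.5)
    return out
```

Each pixel is thus re-mapped relative only to its neighbors, so faint local detail survives even when it is negligible in the global histogram.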
Histogram Equalization
To transform the gray levels so that the histogram of the resulting image is equalized to be a constant:

H'(y) = constant for all output levels y

The purposes:
- to equally use all available gray levels;
- for further histogram specification.
This figure shows that for any given mapping function y = f(x) between the input and output images, the following holds:

H'(y) dy = H(x) dx

i.e., the number of pixels mapped from x to y is unchanged.
To equalize the histogram of the output image, we let H'(y) be a constant. In particular, if the gray levels are assumed to be in the range between 0 and 1 (0 <= x, y <= 1), then the constant is H'(y) = A0, the total number of pixels in the image. Then we have:

dy/dx = H(x) / A0 = p(x)
i.e., the mapping function for histogram equalization is:

y = f(x) = P(x) = integral from 0 to x of p(u) du

where P(x) is the cumulative probability distribution of the input image, which monotonically increases.
Intuitively, histogram equalization is realized by the following:
- If p(x) is high, P(x) has a steep slope, and dy will be wide, causing H'(y) to be low to keep H'(y) dy = H(x) dx;
- If p(x) is low, P(x) has a shallow slope, and dy will be narrow, causing H'(y) to be high.
For discrete gray levels, the gray level of the input x takes one of the L discrete values
x_k = k/(L-1), k = 0, 1, ..., L-1, and the continuous mapping function
becomes discrete:

y_k = f(x_k) = sum over j = 0..k of p_j

where p_j = n_j / N is the probability for the gray level of any given pixel to be x_j
(N is the total number of pixels and n_j the number of pixels at level x_j).
The resulting values y_k are in the range 0 <= y_k <= 1 and need to be converted to the L
gray levels by one of the two following ways:

1. Y_k = floor[(L-1) y_k + 0.5]
2. Y_k = floor[(L-1)(y_k - y_min)/(1 - y_min) + 0.5]

where floor(x) is the floor, or the integer part, of a real number x, and adding 0.5 is for
proper rounding. Note that while both conversions map y_max = 1 to the highest gray
level L-1, the second conversion also maps y_min to 0 to stretch the gray levels of
the output image to occupy the entire dynamic range [0, L-1].
Example: Assume a 64x64 image (N = 4096 pixels) with 8 gray levels. The following table shows the equalization process corresponding to the two conversion methods above (x_k: input level; n_k: count; p_k: probability; y_k: cumulative; Y_k: output level; the merged p and cumulative columns are given on the last row that maps to each output level):

x_k   n_k    p_k   y_k    Y_k(1)  p     cum.    Y_k(2)  p     cum.
0/7    790   0.19  0.19    1/7    0.19  0.19     0/7    0.19  0.19
1/7   1023   0.25  0.44    3/7    0.25  0.44     2/7    0.25  0.44
2/7    850   0.21  0.65    5/7    0.21  0.65     4/7    0.21  0.65
3/7    656   0.16  0.81    6/7                   5/7    0.16  0.81
4/7    329   0.08  0.89    6/7    0.24  0.89     6/7    0.08  0.89
5/7    245   0.06  0.95    7/7                   7/7
6/7    122   0.03  0.98    7/7                   7/7
7/7     81   0.02  1.00    7/7    0.11  1.00     7/7    0.11  1.00
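The output columns of the table can be reproduced from the counts with a short Python check (illustrative only; N = 4096 follows from summing the counts):

```python
# counts n_k for gray levels k/7, k = 0..7, from the table above
counts = [790, 1023, 850, 656, 329, 245, 122, 81]
N = sum(counts)  # 4096
L = 8

# cumulative probabilities y_k
y, total = [], 0.0
for n in counts:
    total += n / N
    y.append(total)

# conversion 1: Y = floor((L-1) * y + 0.5)
conv1 = [int((L - 1) * v + 0.5) for v in y]

# conversion 2 stretches y_min to 0 before scaling
y_min = y[0]
conv2 = [int((L - 1) * (v - y_min) / (1 - y_min) + 0.5) for v in y]

print(conv1)  # [1, 3, 5, 6, 6, 7, 7, 7]
print(conv2)  # [0, 2, 4, 5, 6, 7, 7, 7]
```

The printed lists match the Y_k(1) and Y_k(2) columns, including the merged levels 6/7 and 7/7.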
In the following example, the histogram of a given image is equalized. Although the resulting histogram may not look constant, the cumulative histogram is an exact linear ramp, indicating that the density histogram is indeed equalized. The density histogram is not guaranteed to be constant because pixels of the same gray level cannot be split to satisfy a constant distribution.
Programming Hints:
Find histogram of given image:
Build lookup table:
Image Mapping:
Histogram equalization is a contrast enhancement technique whose objective is
to obtain a new enhanced image with a uniform histogram. This can be
achieved by using the normalized cumulative histogram as the gray scale
mapping function.
The intermediate steps of the histogram equalization process are:
1. Take the cumulative histogram of the image to be equalized
2. Normalize the cumulative histogram to 255
3. Use the normalized cumulative histogram as the mapping function of the original
image
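The three steps above can be sketched in plain Python for 8-bit images given as flat lists of levels (an illustration under that assumption, not the exact implementation used in the report):

```python
def equalize_256(pixels):
    """Equalize 8-bit gray levels using the normalized cumulative
    histogram as the mapping function."""
    n = len(pixels)
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    # step 1: cumulative histogram
    cum, total = [], 0
    for h in hist:
        total += h
        cum.append(total)
    # step 2: normalize the cumulative counts to the range 0..255
    lut = [255 * c // n for c in cum]
    # step 3: use the normalized cumulative histogram as the mapping
    return [lut[v] for v in pixels]

print(equalize_256([0, 0, 128, 255]))  # [127, 127, 191, 255]
```

The lookup table lut is exactly the normalized cumulative histogram, so each input level is sent to a height on that curve, which is what flattens the output histogram.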
These intermediate steps are illustrated below.
a) b)
a) original image; b) its histogram
a) b)
a) normalized cumulative histogram of the original image; b) histogram-equalized image
a) b)
a) histogram of the equalized image; b) cumulative histogram of the equalized image
Due to the discrete nature of the problem, the resultant histogram is not uniform as
desired, but you can see from the cumulative equalized histogram that it does
approximate a straight line.
You can get a much more uniform histogram if you artificially increase the quantization of
the original image before applying the equalization. You can check this in the "Other
Examples" section.
APPLICATIONS: Histogram Equalization increases the contrast of images
Histogram equalized images
The resulting figures for the second task are shown in figures 5 and 6. The
images in figure 5 show image subtraction, with gamma correction and contrast
stretching to further enhance the image. The images in figure 6 go further than the ones
shown in figure 5: they apply image subtraction, contrast stretching,
gamma correction, and histogram equalization. The shoeprint shows up best with a
gamma correction value of 2.1.
Figure 5: Image Subtraction with gamma correction
Figure 6: Image Subtraction, gamma correction and Histogram Equalization
MATLAB CODES:
IMAGE SUBTRACTION
Appendix A:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Image Subtraction
%
% This will read in two separate pictures, one with and without a finger
% print.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Read in both files
print = imread('treePrint.jpg');
noprint = imread('treeNoPrint.jpg');
white = imread('white_image_1992x1362.jpg');
size(white)
% Convert both pictures to grayscale
grayPrint = rgb2gray(print);
grayNoPrint = rgb2gray(noprint);
grayWhite = rgb2gray(white);
% Subtract the image without a finger print from the image with one.
result = imsubtract(grayPrint, grayNoPrint);
invertResult = imsubtract(grayWhite, result);
% Show the results
figure;
subplot(2,2,1), imshow(noprint), title('Original image without a finger print');
subplot(2,2,2), imshow(print), title('Original image with a finger print');
subplot(2,2,3), imshow(result), title('The result of background subtraction');
subplot(2,2,4), imshow(invertResult), title('The inverted result of background subtraction');
INPUTS:
treeNoPrint.jpg treePrint.jpg
white_image_1992x1362.jpg
OUTPUTS:
MATLAB CODE FOR GAMMA ENHANCEMENT AND
GAMMA ENHANCED HISTOGRAM EQUALIZATION
Appendix B:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% General Enhancement
%
% This program does gamma enhancement and gamma enhancement with histogram
% equalization with a number of different parameters to get the best results.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
clear all
%read in the image of footprint
i=imread('project1.jpg');
imshow(i);
%read in the image with all pixel values equal to 255