Number Plate Recognition

CONTENTS

TITLE PAGE

ABSTRACT

1. INTRODUCTION TO DIGITAL IMAGE PROCESSING
   1.1 Fundamental Steps in Image Processing

2. INTRODUCTION TO MATLAB

3. SYSTEM OVERVIEW
   3.1 Specification
   3.2 Constraints

4. SYSTEM DESIGN
   4.1 Image Acquisition
   4.2 Plate Extraction
       4.2.1 Binarization
       4.2.2 Find Edges
       4.2.3 Hough Transform

5. CHARACTER SEGMENTATION
   5.1 Scaling
   5.2 Horizontal Projection
   5.3 Vertical Projection

6. CHARACTER RECOGNITION
   6.1 Template Matching
   6.2 Component Labelling
   6.3 Vertical Projection

SNAPSHOTS

EXPERIMENTAL RESULTS & IMPROVEMENTS

CONCLUSION

BIBLIOGRAPHY


ABSTRACT

The vehicle number plate constitutes an unambiguous identifier of a vehicle participating in road traffic. An efficient number plate recognition process may become the core of fully computerized road traffic monitoring systems, electronic fee collection solutions, surveillance devices and safety supervision systems.


Commercially available optical character recognition (OCR) systems generally fail when confronted with textured or poorly contrasted backgrounds, creating the need for proper image segmentation prior to classification. Moreover, such systems are often developed in .NET and other programming languages that require large amounts of source code for each image-processing function, whereas in MATLAB much of that code can be replaced by single preloaded functions.

Our aim is therefore to develop an efficient, standardized and low-cost vehicle number plate recognition system using MATLAB.


INTRODUCTION TO DIGITAL IMAGE PROCESSING


DIGITAL IMAGE PROCESSING

A signal is an information-carrying function of time. Real-time signals can be audio or video (image) signals: a still picture is called an image, while a moving sequence of images is called a video. The difference between digital image processing (DIP) and signals and systems is that there is no time axis in DIP; the x and y coordinates in DIP are spatial coordinates, because a photograph does not change with time.

IMAGE: An image is defined as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude f at any point (x, y) is known as the intensity of the image at that point.

PIXEL: A pixel (short for picture element) is a single point in a graphic image. Each such information element is not really a dot, nor a square, but an abstract sample. Each element of the image matrix is known as a pixel, where dark = 0 and light = 1. A pixel with only 1 bit can represent a black-and-white image. If the number of bits is increased, the number of gray levels increases and a better picture quality is achieved.

All naturally occurring images are analog in nature. The more pixels, the greater the clarity. An image is represented as a matrix in DIP, whereas in DSP we use only row matrices. Naturally occurring images must be sampled and quantized to obtain a digital image. A good image has about 1024 * 1024 pixels, which is known as 1k * 1k = 1M pixels.


FUNDAMENTAL STEPS IN DIP:

1. IMAGE ACQUISITION: Digital image acquisition is the creation of digital images

typically from a physical object. A digital image may be created directly from a

physical scene by a camera or similar device. Alternatively it can be obtained from

another image in an analog medium such as photographs, photographic film, or

printed paper by a scanner or similar device. Many technical images acquired with

tomographic equipment, side-looking radar, or radio telescopes are actually obtained

by complex processing of non-image data.

2. IMAGE ENHANCEMENT : The process of image acquisition frequently leads

to image degradation due to mechanical problems, out-of-focus blur, motion,

inappropriate illumination and noise. The goal of image enhancement is to start from

a recorded image and to produce the most visually pleasing image.

3. IMAGE RESTORATION: The goal of image restoration is to start from a recorded, degraded image and to recover an estimate of the original scene. Whereas the goal of enhancement is beauty, the goal of restoration is truth. The measure of success in restoration is usually an error measure between the original and the estimated image, although no mathematical error function is known that corresponds to human perceptual assessment of error.

4. COLOUR IMAGE PROCESSING: Colour image processing is based on the fact that any colour can be obtained by mixing the three basic colours red, green and blue. Hence three matrices are necessary, one representing each colour.

5. WAVELET AND MULTIRESOLUTION PROCESSING: Many times

a particular spectral component occurring at any instant can be of particular interest.

In these cases it may be very beneficial to know the time intervals these particular

spectral components occur. For example, in EEGs the latency of an event-related

potential is of particular interest. Wavelet transform is capable of providing the time

Page | 3

Page 7: Number Plate Recognition

and frequency information simultaneously, hence giving a time-frequency

representation of the signal. Although the time and frequency resolution problems are

results of a physical phenomenon ( the Heisenberg uncertainty principle ) and exist

regardless of the transform used, it is possible to analyze any signal by using an alternative

approach called the multiresolution analysis (MRA). MRA analyzes the signal at

different frequencies with different resolutions. MRA is designed to give good time

resolution and poor frequency resolution at high frequencies and good frequency

resolution and poor time resolution at low frequencies.

6. COMPRESSION : Image compression is the application of data compression on

digital images. Its objective is to reduce redundancy of the image data in order to be

able to store or transmit data in an efficient form.

7. MORPHOLOGICAL PROCESSING: Morphological processing is a

collection of techniques for DIP based on mathematical morphology. Since these

techniques rely only on the relative ordering of pixel values, not on their numerical

values they are especially suited to the processing of binary images and grayscale

images.

8. SEGMENTATION : In the analysis of the objects in images it is essential that we

can distinguish between the objects of interest and “the rest”. This latter group is also

referred to as the background. The techniques that are used to find the objects of

interest are usually referred to as segmentation techniques.

9. OBJECT RECOGNITION: Object recognition is the visual perception of familiar objects. In computer vision, it is the task of finding a given object in an image or video

sequence. Humans recognize a multitude of objects in images with little effort, despite

the fact that the image of the objects may vary somewhat in different viewpoints, in

many different sizes / scale or even when they are translated or rotated. Objects can

even be recognized when they are partially obstructed from view. This task is still a

challenge for computer vision systems in general.


The main parts of image sensing are :

1. Sensor( converts optical to electrical energy)

2. Digitizer(It converts analog signal to digital signal by sampling and quantization)

An interesting phenomenon: our eye is capable of differentiating between various levels of intensity. The range of the human eye extends from the scotopic threshold to the glare limit, a range of about 10^10 in intensity (measured in lamberts). It accomplishes this variation by changes in overall sensitivity, a phenomenon called "brightness adaptation". Subjective brightness (intensity as perceived by the human eye) is a log function of the light intensity incident on the eye. A visual system cannot operate over such a large range simultaneously; hence our eye cannot see very bright and very dim images at the same time.

Light and the electromagnetic spectrum: Light is the part of the electromagnetic spectrum that can be seen and sensed by the human eye. Light travels at a speed of 3*10^8 m/s. Visible light can be split into VIBGYOR, ranging from violet (0.43 micrometres) to red (0.79 micrometres). A substance that absorbs all colours appears black, and one that absorbs no colour appears white; a substance that reflects blue will appear blue. Colour is the part of the light spectrum that is not absorbed. Light that is void of colour is called monochromatic or achromatic light; the only property of this light is intensity, or gray level.

Properties of light :

1. Radiance : The total energy that flows from a light source is called radiance. Units are

watts. Example of sources are sun, bulb etc.

2. Luminance : The amount of energy that an observer perceives from the source is

called luminance. Units are lumens. Example is seeing the sun with black glasses.

3. Brightness: Brightness is an attribute of visual perception in which a source appears to emit a given amount of light. It has no units, as it is practically impossible to measure.


IMAGE SENSING AND ACQUISITION :

In image sensing, light energy is converted into voltage. Image acquisition can be done using three principal sensor arrangements:

1. Single sensor

2. Line sensor/strip sensor

3. Array sensor

If something changes more than about 17 times per second (a frequency above roughly 17 Hz), the human eye cannot differentiate the individual changes.

SIMPLE IMAGE FORMATION MODEL: Images are represented as 2-D functions f(x,y). The value of f(x,y) at any point (x,y) is a positive quantity:

0 < f(x,y) < infinity

f(x,y) depends on two parameters :

1. Amount of light incident on the scene = i(x,y)

2. Amount of light reflected from the scene = r(x,y)

f(x,y) = i(x,y) * r(x,y)

0 < i(x,y) < infinity

0 < r(x,y) < 1

In the last inequality 0 represents total absorption and no reflection (black) and 1

represents total reflection and no absorption(white).

0 < f(x,y) < infinity

imin < i(x,y) < imax

rmin < r(x,y) < rmax

imin * rmin < f(x,y) < imax * rmax

For monochromatic light, the intensity l = f(x,y) lies in Lmin < l < Lmax, where L is the number of gray levels. Representing L levels requires k bits with 2^k >= L; for example, if L = 30 then k = 5 (2^5 = 32 > 30).

The interval (Lmin, Lmax) is called the grayscale, which is shifted to (0, L-1), where l = 0 is black and l = L-1 is white. All intermediate values are shades of gray.


All real-time signals are analog in nature, but we need a digital image, so we acquire an analog image and digitize it. For this we need an A/D converter. To convert naturally

occurring images into digital form we must digitize both coordinates and amplitudes.

Digitizing coordinate values is called sampling and digitizing amplitude values is called

quantization. Hence the quality of digital image will depend on the number of

samples (sampling) and the number of gray levels (quantization). The more samples, the better the quality.

REPRESENTING DIGITAL IMAGES:

The result of sampling and quantization of an image is an array of numbers according

to x and y coordinates which is nothing but a matrix of m rows and n columns. Columns and

rows depend on number of samples. Brightness depends on number of bits used to represent

each pixel. For an image of size m rows and n columns total number of pixels = m*n

If each pixel is represented using k bits then total amount of memory(in bits) required to store

the image = m*n*k

A 1M-pixel image requires 1024 rows and 1024 columns. If each pixel is represented using 8 bits (1 byte), the total memory required to store the image = 1k * 1k * 1 byte = 1 MB.
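The storage formula above can be checked with a short calculation (sketched in Python for illustration; the project itself uses MATLAB):

```python
def image_storage_bits(rows, cols, bits_per_pixel):
    """Total memory in bits for an uncompressed image of rows x cols pixels."""
    return rows * cols * bits_per_pixel

# A 1M-pixel image (1024 x 1024) at 8 bits (1 byte) per pixel:
bits = image_storage_bits(1024, 1024, 8)
print(bits // 8)  # 1048576 bytes = 1k * 1k * 1 byte = 1 MB
```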


RESOLUTION : Resolution is classified into 2 types :

1. spatial

2. graylevel

1. SPATIAL RESOLUTION:

It is the smallest discernible detail in the image. Consider vertical lines of width w with spaces between them also of width w. A line pair consists of one such line and its adjacent space, so the width of a line pair is 2w and there are 1/(2w) line pairs per unit distance. Thus spatial resolution is the number of discernible line pairs per unit distance, for example 100 line pairs per unit distance.

2. GRAYLEVEL RESOLUTION:

It is the smallest discernible change in gray level. The number of gray levels should be a power of 2; the commonly used number is 256.

L = 256

2^k = 256

k = 8 (8 bits per pixel)
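The relation between the number of gray levels L and the bits per pixel k (2^k >= L) can be sketched as follows (Python illustration, matching the L = 256 and L = 30 examples in the text):

```python
import math

def bits_for_gray_levels(L):
    """Smallest k such that 2**k >= L, i.e. bits per pixel needed for L gray levels."""
    return math.ceil(math.log2(L))

print(bits_for_gray_levels(256))  # 8 bits per pixel
print(bits_for_gray_levels(30))   # 5, since 2**5 = 32 >= 30
```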


INTRODUCTION TO MATLAB


MATLAB Introduction

MATLAB is a high-performance language for technical computing. It integrates computation, visualization and programming in an easy-to-use environment.

MATLAB stands for "matrix laboratory". It was written originally to provide easy access to matrix software developed by the LINPACK (linear system package) and EISPACK (Eigen system package) projects.

MATLAB is therefore built on a foundation of sophisticated matrix software in which the basic element is a matrix that does not require pre-dimensioning.

Typical uses of MATLAB

1. Math and computation

2. Algorithm development

3. Data acquisition

4. Data analysis, exploration and visualization

5. Scientific and engineering graphics

The main features of MATLAB

1. Advanced algorithms for high-performance numerical computation, especially in the field of matrix algebra

2. A large collection of predefined mathematical functions and the ability to define one’s

own functions.

3. Two-and three dimensional graphics for plotting and displaying data

4. A complete online help system

5. Powerful, matrix or vector oriented high level programming language for individual

applications.

6. Toolboxes available for solving advanced problems in several application areas


Features and capabilities of MATLAB (figure): the MATLAB programming language (user-written and built-in functions); graphics (2-D and 3-D graphics, color and lighting, animation); computation (linear algebra, signal processing, quadrature, etc.); external interface (interfaces to C and FORTRAN programs); toolboxes (signal processing, image processing, control systems, neural networks, communications, robust control, statistics).


The MATLAB System

The MATLAB system consists of five main parts:

Development Environment.

This is the set of tools and facilities that help you use MATLAB functions and files. Many of

these tools are graphical user interfaces. It includes the MATLAB desktop and Command

Window, a command history, an editor and debugger, and browsers for viewing help, the

workspace, files, and the search path.

The MATLAB Mathematical Function Library.

This is a vast collection of computational algorithms ranging from elementary functions, like

sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

The MATLAB Language.

This is a high-level matrix/array language with control flow statements, functions, data

structures, input/output, and object-oriented programming features. It allows both

"programming in the small" to rapidly create quick and dirty throw-away programs, and

"programming in the large" to create large and complex application programs.

Graphics.

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as

annotating and printing these graphs. It includes high-level functions for two-dimensional and

three-dimensional data visualization, image processing, animation, and presentation graphics.

It also includes low-level functions that allow you to fully customize the appearance of

graphics as well as to build complete graphical user interfaces on your MATLAB

applications.

The MATLAB Application Program Interface (API).

This is a library that allows you to write C and Fortran programs that interact with MATLAB.

It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB

as a computational engine, and for reading and writing MAT-files.

Starting MATLAB

On Windows platforms, start MATLAB by double-clicking the MATLAB shortcut icon on

your Windows desktop. On UNIX platforms, start MATLAB by typing matlab at the


operating system prompt. You can customize MATLAB startup. For example, you can

change the directory in which MATLAB starts or automatically execute MATLAB

statements in a script file named startup.m

MATLAB Desktop

When you start MATLAB, the MATLAB desktop appears, containing tools (graphical user

interfaces) for managing files, variables, and applications associated with MATLAB. The

following illustration shows the default desktop. You can customize the arrangement of tools and documents to suit your needs.

Implementations

1. Arithmetic operations

2. Matrix operations

3. Graphical Representation


SYSTEM OVERVIEW


Fig. 3 System Flowchart: Image Acquisition, Plate Extraction, Character Segmentation, Character Recognition, Display Number.

SPECIFICATION:

This project is an offline application. Car images are taken with a digital or traditional camera, and a program written in MATLAB is then used to identify the number plate (Fig. 3).

CONSTRAINTS:

1. All pictures of cars are taken from a fixed angle, parallel to the horizon.

2. The car is stationary when the image is taken.

3. Car number plates should conform to the Central Motor Vehicle Rules, 1989.

4. Images whose intensity is too high or too low will not be dealt with.


SYSTEM DESIGN


IMAGE ACQUISITION:

We use the yellow license plate at the back of the car as the input image. We take two sets of photos, with resolutions of 1024 * 768 pixels and 640 * 480 pixels. The photos are then resized to a resolution of 256 * 192 pixels. Although the program does not take long to execute, reducing the resolution reduces the computation time while the results remain good enough.


PLATE EXTRACTION:


BINARIZATION:

We need to change the color image to a binary image first: yellow is changed to white and non-yellow to black. As the RGB color space is greatly affected by lighting, we cannot use it to determine colors; instead, we use the CIE-xyz color space [1].

There are two sets of variables:

Set 1 – for brighter / higher resolution images:

X = 0.412453 R + 0.357580 G + 0.180423 B
Y = 0.212671 R + 0.715160 G + 0.072169 B
Z = 0.019334 R + 0.119193 G + 0.950227 B

Set 2 – for darker / lower resolution images:

X = 0.607 R + 0.174 G + 0.201 B
Y = 0.299 R + 0.587 G + 0.114 B
Z = 0.000 R + 0.066 G + 1.117 B
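As a sketch of this binarization step, the Set 1 coefficients can be applied per pixel. The yellow test below is a hypothetical rule chosen for illustration (the report does not state its exact threshold), and Python stands in here for the project's MATLAB code:

```python
def rgb_to_xyz(r, g, b):
    """CIE-XYZ conversion using the Set 1 coefficients (brighter images)."""
    x = 0.412453*r + 0.357580*g + 0.180423*b
    y = 0.212671*r + 0.715160*g + 0.072169*b
    z = 0.019334*r + 0.119193*g + 0.950227*b
    return x, y, z

def is_yellow(r, g, b, z_limit=0.2):
    """Hypothetical classifier: yellow pixels are bright (high Y) but poor in blue (low Z)."""
    x, y, z = rgb_to_xyz(r/255, g/255, b/255)
    return y > 0.4 and z < z_limit

# A saturated yellow pixel maps to white (True); a blue pixel maps to black (False).
print(is_yellow(255, 215, 0), is_yellow(30, 60, 200))
```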


FIND EDGES:

We can use closing operator to outline the four edges of the license plate in the binary image.

1. Canny Edge Detection:

detect gradient change (from black to white; from white to black)

The main criteria of the Canny edge detection algorithm are:

The first and most obvious is a low error rate.

The second criterion is that the edge points be well localized.

A third criterion is to have only one response to a single edge.

2. Dilation:

add a border

3. Filling:


fill in small holes

4. Erosion:

remove most of the dilated pixels (border pixels); once the interior holes are filled, they cannot be removed, so the original size of the license plate is unchanged

5. Canny edge detection with threshold

detect gradient change

filter some of the unwanted noise (two dots are removed)
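The dilation and erosion steps can be sketched on a small binary matrix (a pure-Python illustration with a 3 x 3 cross structuring element, not the project's actual MATLAB code):

```python
NEIGHBOURS = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]  # 3x3 cross

def dilate(img):
    """Binary dilation: a pixel becomes 1 if it or any 4-neighbour is 1."""
    h, w = len(img), len(img[0])
    out = [[0]*w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            for di, dj in NEIGHBOURS:
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and img[ni][nj]:
                    out[i][j] = 1
    return out

def erode(img):
    """Binary erosion: a pixel stays 1 only if it and all 4-neighbours are 1."""
    h, w = len(img), len(img[0])
    out = [[0]*w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if all(0 <= i+di < h and 0 <= j+dj < w and img[i+di][j+dj]
                   for di, dj in NEIGHBOURS):
                out[i][j] = 1
    return out

# Dilation followed by erosion (closing) fills a one-pixel interior hole.
img = [[1, 1, 1],
       [1, 0, 1],
       [1, 1, 1]]
closed = erode(dilate(img))
print(closed[1][1])  # 1: the hole is filled
```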

HOUGH TRANSFORMATION:


The Hough Transformation is used to detect lines. We use two domains to explain it. One

is spatial domain (Fig. 4) and the other is parameter domain (Fig. 5). A line in a spatial

domain is represented by a point in the parameter domain; a point in a spatial domain is

represented by a line in the parameter domain.

Fig. 4 Spatial Domain

Fig. 5 Parameter Domain

We can see that the line in Fig. 4 is represented by a point in Fig. 5. If we take 20

points on the line in Fig. 4, they are represented by 20 lines in Fig. 5. All the 20 lines pass

through one point. We set up an accumulator cell for every pixel (point). For each line

passing the pixel, the accumulator cell is increased by one. Therefore, the accumulator cell is

added up to 20.


Fig. 6 Accumulator Cells

By looking at the accumulator cells, we can see that there is a peak equal to 20. This is the position of the line in the spatial domain.

A program is written to set up an accumulator array A(r, theta).

r is the longest line possible in the 256 * 192 pixel image, which is given by

floor(sqrt(256^2 + 192^2)) = 320.

360 gives the range of theta we want to examine, from 1 to 360 degrees.

In the image, all the values of x and y are searched through to find any feature points.

Feature points are intensity = 1. That is white in color.

For each (x,y) feature point, all the corresponding (r, theta) points are found.

r negative is not ‘voted for’ in the accumulator cell, otherwise, the accumulator cell is

increased by one.

The maximum value of the A(rm, thetam) array is found and the corresponding (rm, thetam)

values of this maximum value are found.

Then the slope and y-intercept of the found line is calculated.

rm = x cos thetam + y sin thetam [2]
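The voting procedure described above can be sketched as follows (a Python illustration of the accumulator A(r, theta); for a 256 * 192 image, floor(sqrt(256^2 + 192^2)) = 320):

```python
import math

def hough_lines(points, width=256, height=192):
    """Vote in an accumulator A[r][theta] for r = x cos(theta) + y sin(theta)."""
    r_max = math.floor(math.sqrt(width**2 + height**2))  # 320 for 256 x 192
    acc = [[0]*361 for _ in range(r_max + 1)]
    for x, y in points:                      # feature points (intensity = 1)
        for theta in range(1, 361):          # theta in degrees, 1 to 360
            t = math.radians(theta)
            r = round(x*math.cos(t) + y*math.sin(t))
            if 0 <= r <= r_max:              # negative r is not voted for
                acc[r][theta] += 1
    return acc

# 20 feature points on the horizontal line y = 50 all vote for (r = 50, theta = 90).
pts = [(x, 50) for x in range(20)]
acc = hough_lines(pts)
print(acc[50][90])  # peak of 20 at the line's (r, theta)
```

The peak of the accumulator gives (rm, thetam), from which the slope and y-intercept of the line follow.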


The Hough transform will only find the longest / strongest line in an image. To detect the second longest / strongest line, we first set the intensity of the found line to 0 (black); then we can find the next longest / strongest line [3].

Finally, the plate position is found and extracted.


CHARACTER SEGMENTATION


CHARACTER SEGMENTATION:

SCALING:

We use pixel count to remove noise and segment characters. However, the characters in a photo taken from nearer are larger and contain more pixels (Fig. 8), while the characters in a photo taken from further away are smaller and contain fewer pixels (Fig. 10). Therefore, we need to scale the extracted license plate to a pre-defined dimension so as to give every character roughly the same height and width. As can be seen, after scaling, the size of the characters in Fig. 8 is roughly the same as in Fig. 10. Then we can use pixel count.
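Scaling an extracted plate to a pre-defined dimension can be sketched with nearest-neighbour resampling (an illustration only; the report does not state which interpolation its MATLAB resize used):

```python
def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize of a 2-D image stored as a list of rows."""
    h, w = len(img), len(img[0])
    return [[img[i*h // new_h][j*w // new_w] for j in range(new_w)]
            for i in range(new_h)]

small = [[1, 2],
         [3, 4]]
big = resize_nearest(small, 4, 4)  # each source pixel becomes a 2x2 block
print(big[0])  # [1, 1, 2, 2]
```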


Fig. 7 Photo taken nearer

Fig. 8 Extracted license plate

Fig. 9 Photo taken further

Fig. 10 Extracted license plate after scaling


HORIZONTAL PROJECTION:

There is always noise around the extracted license plate (Fig. 11), therefore we need to remove or reduce the noise before segmenting the characters.

We can use horizontal projection to see the distribution of the pixels (Fig. 12). X-axis is the

height of the license plate (from top to bottom). Y-axis is the pixel count per row of the

license plate [4].

Single Line License Plate:

The noise above and below the characters usually has a pixel count of less than 15 per row. Therefore, we choose 15 as the threshold. Also, there must be more than 10 consecutive rows with a count greater than 15 for a region to be considered useful.

As in Fig. 12, although the first column in the graph (first row in the license plate) is greater

than 15, the second column in the graph (second row in the license plate) is less than 15.

Therefore, these two columns are considered to be noise and discarded.

After removing the noise, the license plate will be cleaner

Fig. 11 Extracted license plate with noise

Fig. 12 Horizontal Projection

Fig. 13 License plate with noise removed
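The row-wise pixel count and the threshold rule for a single-line plate can be sketched as follows (Python illustration of the rule above: keep only runs of more than 10 consecutive rows whose count exceeds 15):

```python
def horizontal_projection(img):
    """Pixel count per row of a binary image (list of rows)."""
    return [sum(row) for row in img]

def useful_rows(img, threshold=15, min_run=10):
    """Mark rows belonging to runs longer than min_run with counts above threshold."""
    proj = horizontal_projection(img)
    keep = [False] * len(proj)
    run = []
    for i, count in enumerate(proj + [0]):   # sentinel 0 closes the final run
        if i < len(proj) and count > threshold:
            run.append(i)
        else:
            if len(run) > min_run:
                for r in run:
                    keep[r] = True
            run = []
    return keep

# 12 character rows of 30 pixels each, with a 10-pixel noise row above and below:
# only the long run above the threshold survives.
plate = [[1]*10 + [0]*20] + [[1]*30 for _ in range(12)] + [[1]*10 + [0]*20]
print(useful_rows(plate))  # [False, True x 12, False]
```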


Double Line License Plate:

The noise above the characters usually has a pixel count of less than 5, so we choose 5 as the threshold; there must be more than 5 consecutive rows with a count greater than 5 to be considered useful.

The noise below the characters usually has a pixel count of less than 15, so we choose 15 as the threshold; there must be more than 5 consecutive rows with a count greater than 15 to be considered useful.

After removing the noise above and below the plate, we need to separate the top set of

characters and the bottom set of characters.

Each set of characters occupied about half of the plate (Fig. 14, Fig. 15).

To locate the bottom of the top set of characters, we search from the middle of the projection and trace left (toward the top of the plate); the first row whose pixel count exceeds 5 is the bottom of the top set of characters.

To locate the top of the bottom set of characters, we search from the middle and trace right (toward the bottom of the plate); the first row whose pixel count exceeds 15 is the top of the bottom set of characters.

After removing the noise and separating the license plate into two sets, we obtain Fig. 16.

Fig. 14 Extracted license plate with noise

Fig. 15 Horizontal Projection

Fig. 16 Top and bottom sets of characters with noise removed


VERTICAL PROJECTION:

There is always noise around the characters, so we need to distinguish the characters

from the noise.

We can use vertical projection to see the distribution of the pixels (Fig. 18). X-axis is the

width of the license plate (from left to right). Y-axis is the pixel count per column of the

license plate [4].

Ideal Case

There is no noise between the characters (Fig. 17), so there is a sharp valley where the pixel count = 0 (Fig. 18). This is the break point between characters.

To be more precise, we choose 2 as the threshold. Pixel count greater than 2 in a column will

be considered as useful.

By locating these valleys, the positions of the characters are found and segmented (Fig. 19).

Fig. 17 No noise between characters

Fig. 18 Vertical Projection

Fig. 19 Isolated characters
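Finding the valleys between characters can be sketched as follows (Python illustration; break points are columns whose count does not exceed the threshold of 2, as in the ideal case above):

```python
def vertical_projection(img):
    """Pixel count per column of a binary image (list of rows)."""
    return [sum(col) for col in zip(*img)]

def segment_columns(img, threshold=2):
    """Return (start, end) column ranges whose counts stay above the threshold."""
    proj = vertical_projection(img)
    segments, start = [], None
    for j, count in enumerate(proj + [0]):   # sentinel closes the last segment
        if count > threshold and start is None:
            start = j
        elif count <= threshold and start is not None:
            segments.append((start, j - 1))
            start = None
    return segments

# Two 3-column "characters" separated by an empty valley column.
plate = [[1, 1, 1, 0, 1, 1, 1]] * 5
print(segment_columns(plate))  # [(0, 2), (4, 6)]
```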


Connected Characters

There is noise between the characters (Fig. 20), so there is no sharp valley and it is difficult to

find the break points.

We need to check the width of each separated character. If the width of a segmented character is greater than 20 pixels, it is composed of several characters and needs further segmentation.

Fig. 20 Connected characters

Fig. 21 Vertical Projection

Fig. 22 Isolated characters

Character ‘1’ or noise

Character ‘1’ is easily mixed up with noise. In Fig. 23, the noise at the end of the license

plate may be a character ‘1’.

We need to check the width of each separated character; we choose 15 as the pixel-count threshold. If the width of a segmented character is less than 5 pixels and its pixel count is greater than 15, it is the character '1'. If the width is less than 5 pixels and the pixel count is less than 15, it is noise and is discarded.


Finally, the positions of the characters are found and segmented (Fig. 25).

Fig. 23 Character '1' or noise

Fig. 24 Vertical Projection

Fig. 25 Isolated characters

G U 8 7 6 7

FINE TUNING

After segmenting the characters, we need to further remove the noise and the white lines above and below the characters, which arise from their non-parallel arrangement in the photo. Also, we need to scale every character to the same dimension so that it can be compared with the templates stored in the database in the next procedure.

To remove the white lines above and below each character, we use horizontal projection and

set a threshold. If the pixel count is less than the threshold, the row is discarded (characters

‘2’ and ‘5’ in Fig. 26 & characters ‘5’, ‘8’ and ‘4’ in Fig. 28).

To remove the noise on the left and right of the character, we use vertical projection and set a

threshold. If the pixel count is less than the threshold, the column is discarded (character ‘J’

in Fig. 26).

Then the character is scaled to a pre-defined dimension (Fig. 27).

Fig. 26 Isolated characters

Fig. 27 Characters after fine tuning


CHARACTER RECOGNITION


CHARACTER RECOGNITION:

TEMPLATE MATCHING:

A list of templates is stored in two databases, one for alphabetic characters and the other for numeric characters. These templates are chosen from the isolated characters; there are two to three templates for each character, each of high quality and with outstanding features.

The first two isolated characters are compared with the templates stored in the alphabet

database; the last three to four isolated characters are compared with the templates stored in

the numeric database.

We measure the degrees of matching between the isolated character and each of the stored

templates, and select the highest degree of match. The maximum value of 2D correlation is

equal to 1.

Template characters
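The degree-of-match measure can be sketched as a normalized 2-D correlation, whose maximum value is 1 for a perfect match (a Python illustration of the measure; the tiny 2 x 2 "templates" below are hypothetical stand-ins for real character templates):

```python
import math

def corr2(a, b):
    """Normalized 2-D correlation coefficient of two same-sized images; 1.0 when identical."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    ma = sum(flat_a) / len(flat_a)
    mb = sum(flat_b) / len(flat_b)
    num = sum((x - ma) * (y - mb) for x, y in zip(flat_a, flat_b))
    den = math.sqrt(sum((x - ma)**2 for x in flat_a) *
                    sum((y - mb)**2 for y in flat_b))
    return num / den

def best_match(char, templates):
    """Pick the template label with the highest correlation to the isolated character."""
    return max(templates, key=lambda label: corr2(char, templates[label]))

t1 = [[1, 0], [0, 1]]   # hypothetical template for '1'
t7 = [[1, 1], [0, 1]]   # hypothetical template for '7'
print(best_match([[1, 0], [0, 1]], {"1": t1, "7": t7}))  # "1"
```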


COMPONENT LABELING:

By using Template Matching, some characters are often mixed up. They are ‘8’ &

‘3’, ‘8’ & ‘6’, ‘8’ & ‘9’ and ‘3’ & ‘9’. Therefore, if a character is matched as ‘3’, ‘6’, ‘8’ or

‘9’ and its correlation value is less than 0.7, it needs to be further recognized by Component

Labeling.

Labeling method

Character ‘3’ has one stroke and no holes – labeled as 1

Character ‘6’ has one stroke and one hole – labeled as 2

Character ‘8’ has one stroke and two holes – labeled as 3

Character ‘9’ has one stroke and one hole – labeled as 2

As can be seen, characters ‘3’ and ‘8’ have different labels, so they can be recognized.

However, characters ‘6’ and ‘9’ have the same label, so they need to be further recognized by

Vertical Projection.
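Counting holes can be sketched with a flood fill from the border: background regions not reachable from outside the glyph are interior holes (a pure-Python illustration; the tiny '8'-like glyph is hypothetical):

```python
def count_holes(img):
    """Number of background (0) regions not connected to the image border (4-connectivity)."""
    h, w = len(img), len(img[0])
    seen = [[False]*w for _ in range(h)]

    def flood(si, sj):
        stack = [(si, sj)]
        while stack:
            i, j = stack.pop()
            if 0 <= i < h and 0 <= j < w and not seen[i][j] and img[i][j] == 0:
                seen[i][j] = True
                stack += [(i+1, j), (i-1, j), (i, j+1), (i, j-1)]

    for i in range(h):                 # mark outside background from every border pixel
        flood(i, 0); flood(i, w - 1)
    for j in range(w):
        flood(0, j); flood(h - 1, j)

    holes = 0
    for i in range(h):
        for j in range(w):
            if img[i][j] == 0 and not seen[i][j]:
                holes += 1
                flood(i, j)            # absorb the whole hole so it is counted once
    return holes

# A tiny '8'-like glyph with two interior holes.
eight = [[1, 1, 1],
         [1, 0, 1],
         [1, 1, 1],
         [1, 0, 1],
         [1, 1, 1]]
print(count_holes(eight))  # 2, so the glyph gets label 3 in the scheme above
```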


VERTICAL PROJECTION:

To distinguish between characters ‘6’ and ‘9’, we can use Vertical Projection to see

the distribution of the pixels (Fig. 30 and Fig. 31). X-axis is the width of the character (from

left to right). Y-axis is the pixel count per column of the character.

We add up the pixel counts of the first four columns and of the last four columns. If the pixel count of the first four columns is greater than that of the last four, the character is '6' (Fig. 30); otherwise it is '9' (Fig. 31).

Fig. 30 Vertical Projection of character '6'

Fig. 31 Vertical Projection of character '9'
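The '6' / '9' rule follows directly from the column sums (Python illustration; the 8-column glyph is a hypothetical stand-in for a scaled character):

```python
def column_sums(img):
    """Pixel count per column of a binary image (list of rows)."""
    return [sum(col) for col in zip(*img)]

def six_or_nine(img):
    """'6' is heavier on the left: compare the first four column sums with the last four."""
    sums = column_sums(img)
    return '6' if sum(sums[:4]) > sum(sums[-4:]) else '9'

# An 8-column glyph whose pixels sit mostly in the left four columns.
left_heavy = [[1, 1, 1, 1, 1, 0, 0, 0]] * 5
print(six_or_nine(left_heavy))  # '6'
```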


SNAPSHOTS


1. Loading car image

2. Grayscale image with noise

3. Noise removed


4. Binarization performed

5. Edges detected

6. Plate region detected


7. Noise removed

8. Number plate region extracted

9. Characters recognized


FINAL OUTPUT:

10.Final result

RESULT:

Text characters are recognized and displayed to the user using WordPad.


EXPERIMENTAL RESULTS & IMPROVEMENTS


IMAGE ACQUISITION:

Photos with a higher resolution (1024 * 768 pixels) give better results than those with a lower resolution (640 * 480 pixels).

Photos taken from nearer give better results than photos taken from further away, since the latter are dimmer and have more dark areas on the license plates, which produce noise.

Improvement:

Apply Histogram Equalization to the image. It can spread out the pixel intensities so as to

cover a larger range and produce a higher contrast image.

CHARACTER ISOLATION:

Scaling will distort the characters, so it will lower the accuracy.

A single-line license plate gives better results than a double-line license plate, because the top and bottom sets of characters may not be segmented correctly.

Improvement:

Rotate the license plate so as to make it parallel to the horizon.

CHARACTER RECOGNITION:

On higher resolution pictures, the accuracy is 88% when the Template Matching algorithm is used.


CONCLUSION:

Our project combines many methods to extract the license plate, segment the

characters and identify the characters. Further improvement should focus on not only the

accuracy of identification, but also the accuracy of license plate extraction, character

segmentation and standardization.

A program coded in MATLAB that goes through all the stages of license plate recognition has been built. It is helpful for understanding the procedures of license plate recognition step by step: image acquisition, plate extraction, character segmentation and character recognition. We have successfully demonstrated the Vehicle Number Plate Segmentation and Recognition System.


BIBLIOGRAPHY:

The MathWorks – Image Processing Toolbox – Demos

http://www.mathworks.com/products/image/demos.jsp#

Optical Character Recognition

http://www.cat.csiro.au/cmst/AC/expertise/Expertise.php?ocr

Template Matching

http://en.wikipedia.org/wiki/Template_matching#cite_note-0

http://www-cs-students.stanford.edu/~robles/ee368/matching.html

Textbooks:

Digital Image Processing by Gonzalez & Woods.

Digital Image Processing Using MATLAB by Gonzalez, Woods & Eddins.
