
A

PROJECT REPORT

ON

Wavelet Based Palmprint Authentication System


Abstract:

Palmprint based personal verification has quickly entered the biometric family due to its

ease of acquisition, high user acceptance and reliability. This paper proposes a palm print based

identification system using the textural information, employing different wavelet transforms. The

transforms employed have been analyzed for their individual as well as combined performances

at feature level. The wavelets used for the analysis are Biorthogonal, Symlet and Discrete Meyer.

The analysis of these wavelets is carried out on 500 images, acquired through an indigenously made image acquisition system.

Acknowledgement


We are very grateful to our head of the Department of Electronics and Communication Engineering, Mr. -------, ------------ College of Engineering & Technology, for having provided the opportunity for taking up this project.

We would like to express our sincere gratitude and thanks to Mr. ---------, Department of Computer Science & Engineering, ------- College of Engineering & Technology, for having allowed us to do this project.

Special thanks to Krest Technologies for permitting us to do this project work in their esteemed organization, and also for guiding us through the entire project.

We also extend our sincere thanks to our parents and friends for their moral support throughout the project work. Above all, we thank God Almighty for His manifold mercies in carrying out the project successfully.

INTRODUCTION:


Biometrics based personal identification is getting wide acceptance in the networked society, replacing passwords and keys due to its reliability, uniqueness and the ever increasing demand for security. Common modalities in use are fingerprint and face, but face authentication still struggles with the problems of pose and illumination invariance, whereas fingerprint does not have a good psychological effect on the user because of its wide use in crime investigations. If any biometric modality is to succeed in the future, it should have traits like uniqueness, accuracy, richness, ease of acquisition, reliability and, above all, user acceptance. Palmprint based personal identification is a new biometric modality which is getting wide acceptance and has all the necessary traits to make it a part of our daily life.

This project investigates the use of the palmprint for personal identification using a combination of different wavelets. The palmprint not only has unique information, as the fingerprint does, but carries far more detail in terms of principal lines, wrinkles and creases. Moreover, it can easily be combined with hand shape biometrics so as to form a highly accurate and reliable biometric based personal identification system.

Palmprint based personal verification has become an increasingly active research topic over the years. The palmprint is rich in information and has been analyzed for various discriminating features; prior work in which the wavelet transform has been used for feature extraction has motivated us to investigate the effectiveness of using a combination of multiple wavelets for the textural analysis of the palmprint.

Personal identification is ubiquitous in our daily lives. For example, we often have to prove our identity to get access to a bank account, enter a protected site, draw cash from an ATM, log in to a computer, and so on. Conventionally, we identify ourselves and gain access by physically carrying passports, keys, access cards or by remembering passwords, secret codes, and personal identification numbers (PINs).

Unfortunately, passports, keys and access cards can be lost, duplicated, stolen, or forgotten; and passwords, secret codes, and personal identification numbers (PINs) can easily be forgotten, compromised, shared, or observed. Such loopholes or deficiencies of conventional personal identification techniques have caused major problems to all concerned. For example, hackers often disrupt computer networks, and credit card fraud is estimated at billions of dollars per year worldwide. The cost of forgotten passwords is high: they account for 40%-80% of all IT help desk calls, and resetting a forgotten or compromised password costs as much as US$ 340/user/year. Therefore, robust, reliable, and foolproof personal identification solutions must be sought in order to address the deficiencies of conventional techniques, something that could verify that someone is physically the person he/she claims to be.

A biometric is a unique, measurable characteristic or trait of a human being used for automatically recognizing or verifying identity. With biometric identification, individual verification can be done through statistical analysis of a biological characteristic. This measurable characteristic can be physical, e.g. eye, face, finger image and hand, or behavioral, e.g. signature and typing rhythm.

Besides bolstering security, biometric systems also enhance user convenience by

alleviating the need to design and remember multiple complex passwords. No wonder large-scale systems have been deployed in such diverse applications as US-VISIT and entry to the Disney park in Orlando.

In spite of the fact that automatic biometric recognition systems based on fingerprints

(called AFIS) have been used by law enforcement agencies worldwide for over 40 years,

biometric recognition continues to remain a very difficult pattern recognition problem. A

biometric system has to contend with problems related to noisy images (failure to enroll), lack of

distinctiveness (finite error rate), large intra-class variations (false reject), and spoof attacks

(system security). Therefore, a proper system design is needed to verify a person quickly and

automatically.

In this project, a multibiometric system is proposed for human verification, i.e., authenticating the identity of an individual. The proposed multibiometric system uses both hand and finger stripe geometry for the verification process. An Artificial Neural Network (ANN) is applied for feature learning and verification.

DEVELOPMENT OF IMAGE ACQUISITION PLATFORM:

There are two types of systems available for capturing the palmprint of individuals, i.e., scanners and pegged systems. Scanners are hygienically not safe, whereas pegged systems cause considerable inconvenience to the user. Hence both of these systems suffer from low user acceptability. The attributes of ease of acquisition and hygienic safety are of paramount importance for any biometric modality. The proposed image acquisition setup satisfies the mentioned criteria by proposing a contactless, peg-free system, Figure 2. It is an enclosed black box, simple in architecture, and employs a ring light source for uniform illumination. Two plates are kept inside the image acquisition setup. The upper plate holds the camera and the light source while the bottom plate is used to place the individual's hand. The distance between these two plates is kept constant to avoid any mismatch due to scale variation; after empirical testing it was fixed at 14 inches. The palmprint images have been collected from 50 individuals with 10 images each, making a total dataset of 500 images. The dataset contains images of males only, with an age distribution between 22 and 56 years and a high percentage between 22 and 25 years. A low resolution of 72 dpi has been used, employing a Sony DSC-W35 Cybershot camera for palmprint image acquisition.


IMAGE REGISTRATION

Our image registration approach follows a previously proposed technique and is summarized as follows. The acquired color (RGB) parameters of the palmprint are changed to HSI parameters. The hue value of skin is nearly the same for everyone, so it was safely neglected along with the less discriminating saturation value. The palmprint has been analyzed for its texture using the gray level or intensity value, I, among the HSI values. Gray level images retain all the useful discriminating information required for personal identification, along with a considerable reduction in processing time. The color images are changed to gray level images using the equation:

I = (0.2989 R) + (0.5870 G) + (0.1140 B)   (1)

The gray level images are normalized and thresholded to get a binary image. Hysteresis thresholding has been adopted due to its effectiveness under varying illumination conditions and undesirable background noise.
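As a rough illustration (not the exact implementation used in the project), the conversion and hysteresis thresholding could be written in MATLAB as follows; the file name and the two threshold values are placeholders:

rgb = im2double(imread('palm.jpg'));                              % acquired palmprint image (hypothetical file name)
I = 0.2989*rgb(:,:,1) + 0.5870*rgb(:,:,2) + 0.1140*rgb(:,:,3);    % equation (1)
I = (I - min(I(:))) / (max(I(:)) - min(I(:)));                    % normalize to [0, 1]

% Hysteresis thresholding: keep weak foreground pixels only if they are
% connected to strong foreground pixels (thresholds chosen empirically).
Tlow   = 0.4;
Thigh  = 0.6;
marker = I > Thigh;                       % strong (seed) pixels
mask   = I > Tlow;                        % weak candidate pixels
BW = imreconstruct(marker, mask);         % morphological reconstruction (Image Processing Toolbox)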

Although the user training ensures optimal and standard acquisition of the palmprint, a rotational alignment is incorporated in our proposed approach to cater for inadvertent rotations. The longest line in a palm passes through the middle finger, and any rotation is considered with reference to this line. The second order moments help in analyzing the elongation or eccentricity of any binary shape. By finding the eigenvalues and eigenvectors, we can determine the eccentricity of the shape from the ratio of the eigenvalues, and we can determine the direction of elongation from the direction of the eigenvector with the highest corresponding eigenvalue. The parameters of the best fitting ellipse have been extracted using second order statistical moments on the binarized palmprint, corresponding to the longest line. Consequently, the offset (theta) between the normal axis and the longest line passing through the middle finger is calculated. Theta is calculated using the following equation:

theta = (1/2) * arctan( 2b / (a - c) )


where a, b and c are the second order normalized central moments of the pixels of the binarized palm (a and c being the variances along the two axes and b the covariance).
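A minimal MATLAB sketch of this orientation estimate, assuming BW is the binarized palm image from the previous step (variable names are illustrative):

[r, c] = find(BW);                    % pixel coordinates of the binary palm region
x = c - mean(c);                      % centre the coordinates
y = r - mean(r);
N = numel(x);
a  = sum(x.^2) / N;                   % normalized second order moments
b  = sum(x .* y) / N;
cm = sum(y.^2) / N;
theta = 0.5 * atan2(2*b, a - cm);     % orientation of the best fitting ellipse (radians)
% regionprops(BW, 'Orientation') provides a comparable estimate in degrees.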

Palmprint matching is adversely affected by variations in illumination. The problem has been addressed by computing the normalized energy of the decomposition blocks so as to minimize feature variance due to non-uniform illumination. The energy computed from each block for the three wavelet types is concatenated to form a feature vector of length 27 for an individual palmprint. The energy of an ROI image block B associated with a subband is the sum of the squared wavelet coefficients in that block, and its normalized energy is this block energy divided by the total energy of all blocks, where 'n' equals the total number of blocks present in the image.

Matching is performed by calculating the Euclidean distance between the input feature vector and the template feature vector. The Euclidean distance is calculated by summing the squared differences between corresponding elements and taking the square root. For points p = (x, y) and q = (s, t), the Euclidean distance between p and q is defined as:

D(p, q) = [ (x - s)^2 + (y - t)^2 ]^(1/2)
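A minimal MATLAB sketch of this matching step (the vectors fInput and fTemplate and the threshold value are hypothetical placeholders):

% Given two 1x27 energy feature vectors, one from the input palm and one stored template:
dInput    = fInput    / sum(fInput);       % normalize so block energies sum to one
dTemplate = fTemplate / sum(fTemplate);

d = sqrt(sum((dInput - dTemplate).^2));    % Euclidean distance between the feature vectors

threshold = 0.1;                           % illustrative decision threshold only
isMatch = d < threshold;                   % accept the claimed identity if the distance is small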


A detailed analysis of the results revealed that rotation of the palmprint caused considerable blur in the vertically aligned images due to interpolation.

FEATURE EXTRACTION AND CLASSIFICATION

We obtained ten images of each individual, of which five were used for training and the rest were used for validation. The registered palmprint image has been analyzed for its texture using different symmetrical wavelet families, namely Biorthogonal 3.9, Symlet 8 and discrete Meyer. The 256x256 palmprint region has been decomposed into three scales for each wavelet type.

An intelligent solution to the interpolation blur problem is devised by rotating the axis of the region of interest instead of the palm itself.


A reverse transformation is computed from the affine transform, as follows:

Using the above equations, a rotation invariant region of interest is cropped from the palmprint. The approximation or interpolation error still exists, but the results show improved performance and accuracy. The selected wavelets have also been analyzed for their individual performance by formulating similar energy based feature vectors of length 27, using a 9-level decomposition.
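A minimal MATLAB sketch of this kind of feature extraction, assuming ROI is the registered 256x256 grayscale region of interest (the report's block scheme is simplified here to one energy per detail subband, giving 9 values per wavelet):

wavelets = {'bior3.9', 'sym8', 'dmey'};     % the three wavelet families used
levels   = 3;                               % three decomposition scales
feature  = [];

for w = 1:numel(wavelets)
    [C, S] = wavedec2(ROI, levels, wavelets{w});    % 2-D wavelet decomposition
    E = zeros(1, 3*levels);
    k = 0;
    for lev = 1:levels
        [H, V, D] = detcoef2('all', C, S, lev);     % detail subbands at this scale
        E(k+1) = sum(H(:).^2);
        E(k+2) = sum(V(:).^2);
        E(k+3) = sum(D(:).^2);
        k = k + 3;
    end
    feature = [feature, E / sum(E)];                % normalized energies (9 per wavelet)
end
% feature is a 1x27 vector: 9 normalized subband energies for each of the 3 wavelets.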

NORMALIZED ENERGY:

We denote the additional energy for performing certain computation-intensive jobs in each system as the energy unit (EU). The computation-intensive job we chose was the jpeg_fdct_islow routine in the file jfdctint.c from the Independent JPEG Group's implementation of JPEG, which comes with the MiBench benchmarks. It performs a forward discrete cosine transform (DCT) on an eight-by-eight block of integers. Three different sets of inputs are randomly chosen from the large image file included in MiBench. The job is memory-intensive as well, since input data are read from memory each time before a DCT is performed. To obtain the additional energy for


performing one such DCT, we repeat the DCT a total of times over the set of chosen

input data. This is assumed to be the target, which takes our systems about four seconds to

complete. The context in this case simply involves making the system idle. The energy of every

benchmark is measured with a companion measurement of the EU. In most cases, we report

experimental results normalized to the EU thus obtained. This accounts for differences in the

hardware and OS. The EU for the three systems we studied is between and Joules. The

benefits of using the EU are as follows. Experiments were conducted on different days for

different benchmarks. The absolute energy figure for an event varied slightly from day to day.

However, the energy remained quite constant if normalized to the corresponding EU (within

1%). Moreover, since the EU is only dependent on the SOC and memory, the comparison of

non-LCD energy consumption of different systems is fairer after normalization.

CONCEPT OF PALM JUST LIKE FINGER:

Palm identification, just like fingerprint identification, is based on the aggregate of

information presented in a friction ridge impression. This information includes the flow of the

friction ridges, the presence or absence of features along the individual friction ridge paths and

their sequences, and the intricate detail of a single ridge. To understand this recognition concept,

one must first understand the physiology of the ridges and valleys of a fingerprint or palm. When

recorded, a fingerprint or palm print appears as a series of dark lines and represents the high,

peaking portion of the friction ridged skin while the valley between these ridges appears as a

white space and is the low, shallow portion of the friction ridged skin. This is shown in the figure below.

Figure: Fingerprint Ridges (Dark Lines) vs. Fingerprint Valleys (White Lines).


Palm recognition technology exploits some of these palm features. Friction ridges do not

always flow continuously throughout a pattern and often result in specific characteristics such as

ending ridges or dividing ridges and dots. A palm recognition system is designed to interpret the

flow of the overall ridges to assign a classification and then extract the minutiae detail — a

subset of the total amount of information available, yet enough information to effectively search

a large repository of palm prints. Minutiae are limited to the location, direction, and orientation

of the ridge endings and bifurcations (splits) along a ridge path. The images in Figure 2 present a

pictorial representation of the regions of the palm, two types of minutiae, and examples of other

detailed characteristics used during the automatic classification and minutiae extraction

processes.

Palm showing two types of minutiae and other detailed characteristics

TEXTURE ANALYSIS:

In many machine vision and image processing algorithms, simplifying assumptions are

made about the uniformity of intensities in local image regions. However, images of real objects

often do not exhibit regions of uniform intensities. For example, the image of a wooden surface

is not uniform but contains variations of intensities which form certain repeated patterns called

visual texture. The patterns can be the result of physical surface properties such as roughness or

oriented strands which often have a tactile quality, or they could be the result of reflectance

differences such as the color on a surface. We recognize texture when we see it but it is very


difficult to define. This difficulty is demonstrated by the number of different texture definitions

attempted by vision researchers. Coggins has compiled a catalogue of texture definitions in the

computer vision literature and we give some examples here.

• “We may regard texture as what constitutes a macroscopic region. Its structure is simply

attributed to the repetitive patterns in which elements or primitives are arranged according to a

placement rule.”

• “A region in an image has a constant texture if a set of local statistics or other local properties

of the picture function are constant, slowly varying, or approximately periodic.”

• “The image texture we consider is nonfigurative and cellular... An image texture is described

by the number and types of its (tonal) primitives and the spatial organization or layout of its

(tonal) primitives... A fundamental characteristic of texture: it cannot be analyzed without a

frame of reference of tonal primitive being stated or implied. For any smooth gray-tone surface,

there exists a scale such that when the surface is examined, it has no texture. Then as resolution

increases, it takes on a fine texture and then a coarse texture.”

• “Texture is defined for our purposes as an attribute of a field having no components that appear

enumerable. The phase relations between the components are thus not apparent. Nor should the

field contain an obvious gradient. The intent of this definition is to direct attention of the

observer to the global properties of the display i.e., its overall “coarseness,” “bumpiness,” or

“fineness.” Physically, nonenumerable (aperiodic) patterns are generated by stochastic as

opposed to deterministic processes. Perceptually, however, the set of all patterns without obvious

enumerable components will include many deterministic (and even periodic) textures.”

• “Texture is an apparently paradoxical notion. On the one hand, it is commonly used in the early

processing of visual information, especially for practical classification purposes. On the other

hand, no one has succeeded in producing a commonly accepted definition of texture. The

resolution of this paradox, we feel, will depend on a richer, more developed model for early

visual information processing, a central aspect of which will be representational systems at many


different levels of abstraction. These levels will most probably include actual intensities at the

bottom and will progress through edge and orientation descriptors to surface, and perhaps

volumetric descriptors. Given these multi-level structures, it seems clear that they should be

included in the definition of, and in the computation of, texture descriptors.”

This collection of definitions demonstrates that the “definition” of texture is formulated

by different people depending upon the particular application and that there is no generally

agreed upon definition. Some are perceptually motivated, and others are driven completely by

the application in which the definition will be used. Image texture, defined as a function of the

spatial variation in pixel intensities (gray values), is useful in a variety of applications and has

been a subject of intense study by many researchers.

One immediate application of image texture is the recognition of image regions using

texture properties. Texture is the most important visual cue in identifying these types of

homogeneous regions. This is called texture classification. The goal of texture classification then

is to produce a classification map of the input image where each uniform textured region is

identified with the texture class it belongs to. We could also find the texture boundaries even if we

could not classify these textured surfaces. This is then the second type of problem that texture

analysis research attempts to solve — texture segmentation. Texture synthesis is often used for

image compression applications. It is also important in computer graphics where the goal is to

render object surfaces which are as realistic looking as possible. The shape from texture problem

is one instance of a general class of vision problems known as “shape from X”. This was first

formally pointed out in the perception literature by Gibson. The goal is to extract three-

dimensional shape information from various cues such as shading, stereo, and texture. The

texture features (texture elements) are distorted by the imaging process and the perspective projection, and these distortions provide information about surface orientation and shape.


WAVELETS

FOURIER ANALYSIS

Signal analysts already have at their disposal an impressive arsenal of tools. Perhaps the

most well-known of these is Fourier analysis, which breaks down a signal into constituent

sinusoids of different frequencies. Another way to think of Fourier analysis is as a mathematical

technique for transforming our view of the signal from time-based to frequency-based.


For many signals, Fourier analysis is extremely useful because the signal’s frequency content is

of great importance. So why do we need other techniques, like wavelet analysis?

Fourier analysis has a serious drawback. In transforming to the frequency domain, time

information is lost. When looking at a Fourier transform of a signal, it is impossible to tell when

a particular event took place. If the signal properties do not change much over time — that is, if

it is what is called a stationary signal—this drawback isn’t very important. However, most

interesting signals contain numerous nonstationary or transitory characteristics: drift, trends,

abrupt changes, and beginnings and ends of events. These characteristics are often the most

important part of the signal, and Fourier analysis is not suited to detecting them.

SHORT-TIME FOURIER ANALYSIS:

In an effort to correct this deficiency, Dennis Gabor (1946) adapted the Fourier

transform to analyze only a small section of the signal at a time—a technique called windowing

the signal. Gabor’s adaptation, called the Short-Time Fourier Transform (STFT), maps a signal

into a two-dimensional function of time and frequency.


The STFT represents a sort of compromise between the time- and frequency-based views

of a signal. It provides some information about both when and at what frequencies a signal event

occurs. However, you can only obtain this information with limited precision, and that precision

is determined by the size of the window. While the STFT compromise between time and

frequency information can be useful, the drawback is that once you choose a particular size for

the time window, that window is the same for all frequencies. Many signals require a more

flexible approach—one where we can vary the window size to determine more accurately either

time or frequency.

WAVELET ANALYSIS

Wavelet analysis represents the next logical step: a windowing technique with variable-

sized regions. Wavelet analysis allows the use of long time intervals where we want more precise

low-frequency information, and shorter regions where we want high-frequency information.

Here’s what this looks like in contrast with the time-based, frequency-based, and STFT

views of a signal:


You may have noticed that wavelet analysis does not use a time-frequency region, but rather a

time-scale region. For more information about the concept of scale and the link between scale

and frequency, see “How to Connect Scale to Frequency?”

What Can Wavelet Analysis Do?

One major advantage afforded by wavelets is the ability to perform local analysis, that

is, to analyze a localized area of a larger signal. Consider a sinusoidal signal with a small

discontinuity — one so tiny as to be barely visible. Such a signal easily could be generated in the

real world, perhaps by a power fluctuation or a noisy switch.

A plot of the Fourier coefficients (as provided by the fft command) of this signal shows

nothing particularly interesting: a flat spectrum with two peaks representing a single frequency.

However, a plot of wavelet coefficients clearly shows the exact location in time of the

discontinuity.
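A small MATLAB sketch of this experiment (the signal, the scales and the wavelet are illustrative choices, and the classic Wavelet Toolbox cwt(signal, scales, wavelet) form is assumed):

t = linspace(0, 1, 1000);
s = sin(2*pi*60*t);                    % pure sinusoid
s(500) = s(500) + 0.02;                % tiny discontinuity, barely visible in a plot

% Fourier view: peaks at the sinusoid frequency, but nothing reveals where the glitch is.
plot(abs(fft(s)));

% Wavelet view: small-scale coefficients light up at the location of the glitch.
coefs = cwt(s, 1:32, 'db4');
imagesc(abs(coefs)); xlabel('time (samples)'); ylabel('scale');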


Wavelet analysis is capable of revealing aspects of data that other signal analysis techniques

miss, aspects like trends, breakdown points, discontinuities in higher derivatives, and self-

similarity. Furthermore, because it affords a different view of data than those presented by

traditional techniques, wavelet analysis can often compress or de-noise a signal without

appreciable degradation. Indeed, in their brief history within the signal processing field, wavelets

have already proven themselves to be an indispensable addition to the analyst’s collection of

tools and continue to enjoy a burgeoning popularity today.

WHAT IS WAVELET ANALYSIS?

Now that we know some situations when wavelet analysis is useful, it is worthwhile

asking “What is wavelet analysis?” and even more fundamentally,

“What is a wavelet?”

A wavelet is a waveform of effectively limited duration that has an average value of zero.

Compare wavelets with sine waves, which are the basis of Fourier analysis.


Sinusoids do not have limited duration — they extend from minus to plus infinity. And

where sinusoids are smooth and predictable, wavelets tend to be irregular and asymmetric.

Figure 2.7

Fourier analysis consists of breaking up a signal into sine waves of various frequencies.

Similarly, wavelet analysis is the breaking up of a signal into shifted and scaled versions of the

original (or mother) wavelet. Just looking at pictures of wavelets and sine waves, you can see

intuitively that signals with sharp changes might be better analyzed with an irregular wavelet

than with a smooth sinusoid, just as some foods are better handled with a fork than a spoon. It

also makes sense that local features can be described better with wavelets that have local extent.

NUMBER OF DIMENSIONS:

Thus far, we’ve discussed only one-dimensional data, which encompasses most ordinary

signals. However, wavelet analysis can be applied to two-dimensional data (images) and, in

principle, to higher dimensional data. This toolbox uses only one and two-dimensional analysis

techniques.

THE CONTINUOUS WAVELET TRANSFORM:

Mathematically, the process of Fourier analysis is represented by the Fourier transform:

F(w) = ∫ f(t) e^(-jwt) dt

which is the sum over all time of the signal f(t) multiplied by a complex exponential.

(Recall that a complex exponential can be broken down into real and imaginary sinusoidal

components.) The results of the transform are the Fourier coefficients F(w), which when

multiplied by a sinusoid of frequency w yields the constituent sinusoidal components of the

original signal. Graphically, the process looks like:

Figure 2.8

Similarly, the continuous wavelet transform (CWT) is defined as the sum over all time of the signal multiplied by scaled, shifted versions of the wavelet function ψ:

C(scale, position) = ∫ f(t) ψ(scale, position, t) dt

The result of the CWT is a series of many wavelet coefficients C, which are a function of scale and position.

Multiplying each coefficient by the appropriately scaled and shifted wavelet yields the

constituent wavelets of the original signal:


Figure 2.9

SCALING

We’ve already alluded to the fact that wavelet analysis produces a time-scale view of a

signal and now we’re talking about scaling and shifting wavelets.

What exactly do we mean by scale in this context?

Scaling a wavelet simply means stretching (or compressing) it.

To go beyond colloquial descriptions such as “stretching,” we introduce the scale factor,

often denoted by the letter a.

If we’re talking about sinusoids, for example the effect of the scale factor is very easy to

see:

The scale factor works exactly the same with wavelets. The smaller the scale factor, the

more “compressed” the wavelet.


Figure 2.11

It is clear from the diagrams that for a sinusoid sin (wt) the scale factor ‘a’ is related

(inversely) to the radian frequency ‘w’. Similarly, with wavelet analysis the scale is related to the

frequency of the signal.

SHIFTING

Shifting a wavelet simply means delaying (or hastening) its onset.

THE DISCRETE WAVELET TRANSFORM:

Calculating wavelet coefficients at every possible scale is a fair amount of work, and it

generates an awful lot of data. What if we choose only a subset of scales and positions at which

to make our calculations? It turns out rather remarkably that if we choose scales and positions

based on powers of two (so-called dyadic scales and positions), then our analysis will be much


more efficient and just as accurate. We obtain such an analysis from the discrete wavelet

transform (DWT).

An efficient way to implement this scheme using filters was developed in 1988 by

Mallat. The Mallat algorithm is in fact a classical scheme known in the signal processing

community as a two-channel subband coder. This very practical filtering algorithm yields a fast

wavelet transform — a box into which a signal passes, and out of which wavelet coefficients

quickly emerge. Let’s examine this in more depth.

ONE-STAGE FILTERING: APPROXIMATIONS AND DETAILS:

For many signals, the low-frequency content is the most important part. It is what gives

the signal its identity. The high-frequency content on the other hand imparts flavor or nuance.

Consider the human voice. If you remove the high-frequency components, the voice sounds

different but you can still tell what’s being said. However, if you remove enough of the low-

frequency components, you hear gibberish. In wavelet analysis, we often speak of

approximations and details. The approximations are the high-scale, low-frequency components

of the signal. The details are the low-scale, high-frequency components.

The filtering process at its most basic level looks like this:


The original signal S passes through two complementary filters and emerges as two

signals. Unfortunately, if we actually perform this operation on a real digital signal, we wind up

with twice as much data as we started with. Suppose, for instance that the original signal S

consists of 1000 samples of data. Then the resulting signals will each have 1000 samples, for a

total of 2000.

These signals A and D are interesting, but we get 2000 values instead of the 1000 we had.

There exists a more subtle way to perform the decomposition using wavelets. By looking

carefully at the computation, we may keep only one point out of two in each of the two 2000-

length samples to get the complete information. This is the notion of downsampling. We produce

two sequences called cA and cD.

The process on the right, which includes downsampling, produces the DWT coefficients. To gain a better appreciation of this process, let's perform a one-stage discrete wavelet transform of a signal. Our signal will be a pure sinusoid with high-frequency noise added to it.

Here is our schematic diagram with real signals inserted into it:
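Since the diagram itself is not reproduced here, a minimal MATLAB sketch of the same one-stage decomposition follows (the test signal and the db2 wavelet are illustrative choices):

t = linspace(0, 1, 1000);
s = sin(2*pi*20*t) + 0.2*randn(size(t));   % pure sinusoid plus high-frequency noise

[cA, cD] = dwt(s, 'db2');                  % one-stage DWT: approximation and detail coefficients

% cA follows the smooth sinusoid; cD is small and contains mostly the noise.
subplot(3,1,1); plot(s);  title('original signal S');
subplot(3,1,2); plot(cA); title('approximation coefficients cA');
subplot(3,1,3); plot(cD); title('detail coefficients cD');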


Notice that the detail coefficients cD are small and consist mainly of high-frequency noise, while the approximation coefficients cA contain much less noise than does the original signal.

You may observe that the actual lengths of the detail and approximation coefficient

vectors are slightly more than half the length of the original signal. This has to do with the

filtering process, which is implemented by convolving the signal with a filter. The convolution

“smears” the signal, introducing several extra samples into the result.

MULTIPLE-LEVEL DECOMPOSITION:

The decomposition process can be iterated, with successive approximations being

decomposed in turn, so that one signal is broken down into many lower resolution components.

This is called the wavelet decomposition tree.
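For example, a three-level decomposition of the noisy sinusoid from the earlier sketch could be computed as follows (db2 and three levels are illustrative choices):

[C, L] = wavedec(s, 3, 'db2');        % multilevel 1-D wavelet decomposition

cA3 = appcoef(C, L, 'db2', 3);        % level-3 approximation coefficients
cD3 = detcoef(C, L, 3);               % detail coefficients at levels 3, 2 and 1
cD2 = detcoef(C, L, 2);
cD1 = detcoef(C, L, 1);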


Looking at a signal’s wavelet decomposition tree can yield valuable information.

NUMBER OF LEVELS:

Since the analysis process is iterative, in theory it can be continued indefinitely. In

reality, the decomposition can proceed only until the individual details consist of a single sample

or pixel. In practice, you’ll select a suitable number of levels based on the nature of the signal, or

on a suitable criterion such as entropy.


WAVELET RECONSTRUCTION:

We’ve learned how the discrete wavelet transform can be used to analyze or decompose

signals and images. This process is called decomposition or analysis. The other half of the story

is how those components can be assembled back into the original signal without loss of

information. This process is called reconstruction, or synthesis. The mathematical manipulation

that effects synthesis is called the inverse discrete wavelet transform (IDWT). To synthesize a

signal in the Wavelet Toolbox, we reconstruct it from the wavelet coefficients:

Where wavelet analysis involves filtering and down sampling, the wavelet reconstruction

process consists of up sampling and filtering. Up sampling is the process of lengthening a signal

component by inserting zeros between samples:

The Wavelet Toolbox includes commands like idwt and waverec that perform single-

level or multilevel reconstruction respectively on the components of one-dimensional signals.

These commands have their two-dimensional analogs, idwt2 and waverec2.
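Continuing the earlier sketch, a multilevel reconstruction and a quick check of perfect reconstruction might look like this (C and L come from the wavedec call above):

sRec = waverec(C, L, 'db2');               % multilevel reconstruction (IDWT)
reconstructionError = max(abs(s - sRec))   % essentially zero: lossless reconstruction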


RECONSTRUCTION FILTERS:

The filtering part of the reconstruction process also bears some discussion, because it is

the choice of filters that is crucial in achieving perfect reconstruction of the original signal. The

down sampling of the signal components performed during the decomposition phase introduces a

distortion called aliasing. It turns out that by carefully choosing filters for the decomposition and

reconstruction phases that are closely related (but not identical), we can “cancel out” the effects

of aliasing.

The low- and high pass decomposition filters (L and H), together with their associated

reconstruction filters (L' and H'), form a system of what is called Quadrature mirror filters:

RECONSTRUCTING APPROXIMATIONS AND DETAILS:

We have seen that it is possible to reconstruct our original signal from the coefficients of

the approximations and details.


It is also possible to reconstruct the approximations and details themselves from their

coefficient vectors.

As an example, let’s consider how we would reconstruct the first-level approximation A1

from the coefficient vector cA1. We pass the coefficient vector cA1 through the same process we

used to reconstruct the original signal. However, instead of combining it with the level-one detail

cD1, we feed in a vector of zeros in place of the detail coefficients

vector:

The process yields a reconstructed approximation A1, which has the same length as the

original signal S and which is a real approximation of it. Similarly, we can reconstruct the first-

level detail D1, using the analogous process:

The reconstructed details and approximations are true constituents of the original signal.

In fact, we find when we combine them that:


A1 + D1 = S

Note that the coefficient vectors cA1 and cD1—because they were produced by down

sampling and are only half the length of the original signal — cannot directly be combined to

reproduce the signal.

It is necessary to reconstruct the approximations and details before combining them.

Extending this technique to the components of a multilevel analysis, we find that similar

relationships hold for all the reconstructed signal constituents.
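A sketch of this reconstruction using the Wavelet Toolbox wrcoef function (single level and db2, matching the earlier examples):

[C, L] = wavedec(s, 1, 'db2');        % single-level decomposition of signal s

A1 = wrcoef('a', C, L, 'db2', 1);     % reconstructed first-level approximation
D1 = wrcoef('d', C, L, 'db2', 1);     % reconstructed first-level detail

max(abs(s - (A1 + D1)))               % essentially zero: A1 + D1 = S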

That is, there are several ways to reassemble the original signal:

Relationship of Filters to Wavelet Shapes:

In the section “Reconstruction Filters”, we spoke of the importance of choosing the right

filters. In fact, the choice of filters not only determines whether perfect reconstruction is

possible, it also determines the shape of the wavelet we use to perform the analysis. To construct

a wavelet of some practical utility, you seldom start by drawing a waveform. Instead, it usually

makes more sense to design the appropriate Quadrature mirror filters, and then use them to create

the waveform. Let’s see how this is done by focusing on an example.

Consider the low pass reconstruction filter (L') for the db2 wavelet.

Figure: db2 wavelet function (plotted against position).


The filter coefficients can be obtained from the dbaux command:

Lprime = dbaux(2)

Lprime = 0.3415    0.5915    0.1585   -0.0915

If we reverse the order of this vector (see wrev), and then multiply every even sample by

-1, we obtain the high pass filter H':

Hprime = -0.0915   -0.1585    0.5915   -0.3415

Next, up sample Hprime by two (see dyadup), inserting zeros in alternate positions:

HU = -0.0915   0   -0.1585   0   0.5915   0   -0.3415   0

Finally, convolve the up sampled vector with the original low pass filter:

H2 = conv(HU,Lprime);

plot(H2)

If we iterate this process several more times, repeatedly up sampling and convolving the

resultant vector with the four-element filter vector Lprime, a pattern begins to emerge:


The curve begins to look progressively more like the db2 wavelet. This means that the

wavelet’s shape is determined entirely by the coefficients of the reconstruction filters. This

relationship has profound implications. It means that you cannot choose just any shape, call it a

wavelet, and perform an analysis. At least, you can’t choose an arbitrary wavelet waveform if

you want to be able to reconstruct the original signal accurately. You are compelled to choose a

shape determined by Quadrature mirror decomposition filters.

The Scaling Function:

We’ve seen the interrelation of wavelets and Quadrature mirror filters. The wavelet

function is determined by the high pass filter, which also produces the details of the wavelet

decomposition.

There is an additional function associated with some, but not all wavelets. This is the so-

called scaling function. The scaling function is very similar to the wavelet function. It is

determined by the low pass Quadrature mirror filters, and thus is associated with the

approximations of the wavelet decomposition. In the same way that iteratively up- sampling and

convolving the high pass filter produces a shape approximating the wavelet function, iteratively


up-sampling and convolving the low pass filter produces a shape approximating the scaling

function.

Multi-step Decomposition and Reconstruction:

A multi step analysis-synthesis process can be represented as:

This process involves two aspects: breaking up a signal to obtain the wavelet coefficients,

and reassembling the signal from the coefficients. We’ve already discussed decomposition and

reconstruction at some length. Of course, there is no point breaking up a signal merely to have

the satisfaction of immediately reconstructing it. We may modify the wavelet coefficients before

performing the reconstruction step. We perform wavelet analysis because the coefficients thus

obtained have many known uses, de-noising and compression being foremost among them. But

wavelet analysis is still a new and emerging field. No doubt, many uncharted uses of the wavelet

coefficients lie in wait. The Wavelet Toolbox can be a means of exploring possible uses and

hitherto unknown applications of wavelet analysis. Explore the toolbox functions and see what

you discover.


DIGITAL IMAGE PROCESSING

BACKGROUND:

Digital image processing is an area characterized by the need for extensive experimental

work to establish the viability of proposed solutions to a given problem. An important

characteristic underlying the design of image processing systems is the significant level of

testing & experimentation that normally is required before arriving at an acceptable solution.

This characteristic implies that the ability to formulate approaches & quickly prototype candidate

solutions generally plays a major role in reducing the cost & time required to arrive at a viable

system implementation.

What is DIP

An image may be defined as a two-dimensional function f(x, y), where x & y are spatial

coordinates, & the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray

level of the image at that point. When x, y & the amplitude values of f are all finite discrete

quantities, we call the image a digital image. The field of DIP refers to processing digital image

by means of a digital computer. A digital image is composed of a finite number of elements, each of

which has a particular location & value. The elements are called pixels.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the EM spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images.

There is no general agreement among authors regarding where image processing stops & other related areas such as image analysis & computer vision start. Sometimes a distinction is made by defining image processing as a discipline in which both the input & output of a process are images. This is a limiting & somewhat artificial boundary. The area of image analysis (image understanding) is in between image processing & computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, & high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement & image sharpening. A low-level process is characterized by the fact that both its inputs & outputs are images. Mid-level processes on images involve tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, & classification of individual objects. A mid-level process is characterized by the fact that its inputs generally are images but its outputs are attributes extracted from those images. Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, &, at the far end of the continuum, performing the cognitive functions normally associated with human vision.

Digital image processing, as already defined, is used successfully in a broad range of areas of exceptional social & economic value.

What is an image?

An image is represented as a two dimensional function f(x, y) where x and y are spatial

co-ordinates and the amplitude of ‘f’ at any pair of coordinates (x, y) is called the intensity of the

image at that point.

Gray scale image:

A grayscale image is a function I(x, y) of the two spatial coordinates of the image plane. I(x, y) is the intensity of the image at the point (x, y) on the image plane. I(x, y) takes non-negative values; assuming the image is bounded by a rectangle [0, a] × [0, b], we have

I: [0, a] × [0, b] → [0, ∞)


Color image:

It can be represented by three functions, R(x, y) for red, G(x, y) for green and B(x, y) for blue.

An image may be continuous with respect to the x and y coordinates and also in

amplitude. Converting such an image to digital form requires that the coordinates as well as the

amplitude be digitized. Digitizing the coordinate values is called sampling. Digitizing the

amplitude values is called quantization.

Coordinate convention:

The result of sampling and quantization is a matrix of real numbers. We use two

principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the

resulting image has M rows and N columns. We say that the image is of size M X N. The values

of the coordinates (x, y) are discrete quantities. For notational clarity and convenience, we use

integer values for these discrete coordinates. In many image processing books, the image origin

is defined to be at (x, y) = (0, 0). The next coordinate values along the first row of the image are (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) is used to signify the second

sample along the first row. It does not mean that these are the actual values of physical

coordinates when the image was sampled. Following figure shows the coordinate convention.

Note that x ranges from 0 to M-1 and y from 0 to N-1 in integer increments.

The coordinate convention used in the toolbox to denote arrays is different from the preceding paragraph in two minor ways. First, instead of using (x, y), the toolbox uses the notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates is the same as the order discussed in the previous paragraph, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M and c from 1 to N in integer increments. The IPT documentation refers to these as pixel coordinates. Less frequently the toolbox also employs another coordinate convention, called spatial coordinates, which uses x to refer to columns and y to refer to rows. This is the opposite of our use of the variables x and y.


Image as Matrices:

The preceding discussion leads to the following representation for a digitized image

function:

f(x, y) = [ f(0,0)      f(0,1)      ...   f(0,N-1)
            f(1,0)      f(1,1)      ...   f(1,N-1)
            ...         ...         ...   ...
            f(M-1,0)    f(M-1,1)    ...   f(M-1,N-1) ]

The right side of this equation is a digital image by definition. Each element of this array

is called an image element, picture element, pixel or pel. The terms image and pixel are used

throughout the rest of our discussions to denote a digital image and its elements.

A digital image can be represented naturally as a MATLAB matrix:

f = [ f(1,1)    f(1,2)    ...   f(1,N)
      f(2,1)    f(2,2)    ...   f(2,N)
      ...       ...       ...   ...
      f(M,1)    f(M,2)    ...   f(M,N) ]

where f(1,1) = f(0,0) (note the use of a monospace font to denote MATLAB quantities). Clearly the two representations are identical, except for the shift in origin. The notation f(p, q) denotes the element located in row p and column q. For example, f(6,2) is the element in the

sixth row and second column of the matrix f. Typically we use the letters M and N respectively

to denote the number of rows and columns in a matrix. A 1xN matrix is called a row vector

whereas an Mx1 matrix is called a column vector. A 1x1 matrix is a scalar.


Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array and so on. Variables must begin with a letter and contain only letters, numerals and underscores. As noted in the previous paragraph, all MATLAB quantities are written using monospace characters. We use conventional Roman, italic notation such as f(x, y) for mathematical expressions.

Reading Images:

Images are read into the MATLAB environment using function imread whose syntax is

imread('filename')

Format name   Description                        Recognized extensions
TIFF          Tagged Image File Format           .tif, .tiff
JPEG          Joint Photographic Experts Group   .jpg, .jpeg
GIF           Graphics Interchange Format        .gif
BMP           Windows Bitmap                     .bmp
PNG           Portable Network Graphics          .png
XWD           X Window Dump                      .xwd

Here filename is a string containing the complete name of the image file (including any applicable extension). For example, the command line

>> f = imread('8.jpg');

reads the JPEG image 8.jpg (see the table above) into image array f. Note the use of single quotes (') to delimit the string filename. The semicolon at the end of a command line is used by MATLAB for suppressing output. If a semicolon is not included, MATLAB displays the results of the operation(s) specified in that line. The prompt symbol (>>) designates the beginning of a command line, as it appears in the MATLAB command window.


Data Classes:

Although we work with integer coordinates, the values of the pixels themselves are not restricted to be integers in MATLAB. The table below lists the various data classes supported by MATLAB and IPT for representing pixel values. The first eight entries in the table are referred to as numeric data classes. The ninth entry is the char class and, as shown, the last entry is referred to as the logical data class.

All numeric computations in MATLAB are done in double quantities, so this is also a frequently encountered data class in image processing applications. Class uint8 is also encountered frequently, especially when reading data from storage devices, as 8-bit images are the most common representation found in practice. These two data classes, class logical, and, to a lesser degree, class uint16 constitute the primary data classes on which we focus. Many IPT functions, however, support all the data classes listed in the table. Data class double requires 8 bytes to represent a number, uint8 and int8 require one byte each, uint16 and int16 require 2 bytes each, and uint32, int32 and single require 4 bytes each.

Name      Description
double    Double-precision, floating-point numbers in the approximate range ±10^308 (8 bytes per element).
uint8     Unsigned 8-bit integers in the range [0, 255] (1 byte per element).
uint16    Unsigned 16-bit integers in the range [0, 65535] (2 bytes per element).
uint32    Unsigned 32-bit integers in the range [0, 4294967295] (4 bytes per element).
int8      Signed 8-bit integers in the range [-128, 127] (1 byte per element).
int16     Signed 16-bit integers in the range [-32768, 32767] (2 bytes per element).
int32     Signed 32-bit integers in the range [-2147483648, 2147483647] (4 bytes per element).
single    Single-precision floating-point numbers with values in the approximate range ±10^38 (4 bytes per element).
char      Characters (2 bytes per element).
logical   Values are 0 or 1 (1 byte per element).

The char data class holds characters in Unicode representation. A character string is merely a 1*n array of characters. A logical array contains only the values 0 and 1, with each element stored in memory using one byte. Logical arrays are created by using the function logical or by using relational operators.

Image Types:

The toolbox supports four types of images:

1. Intensity images;
2. Binary images;
3. Indexed images;
4. RGB images.

Most monochrome image processing operations are carried out using binary or intensity images, so our initial focus is on these two image types. Indexed and RGB colour images are discussed below.

Intensity Images:

An intensity image is a data matrix whose values have been scaled to represent intensities. When the elements of an intensity image are of class uint8 or class uint16, they have integer values in the range [0, 255] and [0, 65535], respectively. If the image is of class double, the values are floating-point numbers. Values of scaled, class double intensity images are in the range [0, 1] by convention.

Binary Images:

Binary images have a very specific meaning in MATLAB. A binary image is a logical array of 0s and 1s. Thus, an array of 0s and 1s whose values are of a numeric data class, say uint8, is not considered a binary image in MATLAB. A numeric array is converted to binary using function logical. Thus, if A is a numeric array consisting of 0s and 1s, we create a logical array B using the statement


B = logical(A)

If A contains elements other than 0s and 1s, use of the logical function converts all nonzero quantities to logical 1s and all entries with value 0 to logical 0s. Using relational and logical operators also creates logical arrays.

To test whether an array is logical, we use the islogical function: islogical(C). If C is a logical array, this function returns a 1; otherwise it returns a 0. Logical arrays can be converted to numeric arrays using the data class conversion functions.

Indexed Images:

An indexed image has two components:

A data matrix of integers, X
A color map matrix, map

Matrix map is an m*3 array of class double containing floating-point values in the range [0, 1]. The length m of the map is equal to the number of colors it defines. Each row of map specifies the red, green and blue components of a single color. An indexed image uses "direct mapping" of pixel intensity values to color map values. The color of each pixel is determined by using the corresponding value of the integer matrix X as a pointer into map. If X is of class double, then all of its components with values less than or equal to 1 point to the first row in map, all components with value 2 point to the second row, and so on. If X is of class uint8 or uint16, then all components with value 0 point to the first row in map, all components with value 1 point to the second row, and so on.
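A tiny illustrative example of this direct mapping (the matrix and colormap values below are made up):

X   = uint8([0 1; 2 3]);    % integer data matrix (pointers into the map, uint8 class)
map = [0 0 0;               % row 1: black  (X == 0)
       1 0 0;               % row 2: red    (X == 1)
       0 1 0;               % row 3: green  (X == 2)
       0 0 1];              % row 4: blue   (X == 3)
imshow(X, map)              % display the indexed image using the color map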

RGB Image:

An RGB color image is an M*N*3 array of color pixels, where each color pixel is a triplet corresponding to the red, green and blue components of an RGB image at a specific spatial location. An RGB image may be viewed as a "stack" of three gray scale images that, when fed into the red, green and blue inputs of a color monitor, produce a color image on the screen. By convention, the three images forming an RGB color image are referred to as the red, green and blue component images. The data class of the component images determines their range of values. If an RGB image is of class double, the range of values is [0, 1]. Similarly, the range of values is [0, 255] or [0, 65535] for RGB images of class uint8 or uint16, respectively. The number of bits used to represent the pixel values of the component images determines the bit depth of an RGB image. For example, if each component image is an 8-bit image, the corresponding RGB image is said to be 24 bits deep.

Generally, the number of bits in all component images is the same. In this case the number of possible colors in an RGB image is (2^b)^3, where b is the number of bits in each component image. For the 8-bit case the number is 16,777,216 colors.


INTRODUCTION TO MATLAB

What Is MATLAB?

MATLAB® is a high-performance language for technical computing. It integrates

computation, visualization, and programming in an easy-to-use environment where problems and

solutions are expressed in familiar mathematical notation. Typical uses of MATLAB include:

Math and computation

Algorithm development

Data acquisition

Modeling, simulation, and prototyping

Data analysis, exploration, and visualization

Scientific and engineering graphics

Application development, including graphical user interface building.


The main features of MATLAB

1. Advanced algorithms for high-performance numerical computation, especially in the field of matrix algebra.
2. A large collection of predefined mathematical functions and the ability to define one's own functions.
3. Two- and three-dimensional graphics for plotting and displaying data.
4. A complete online help system.
5. A powerful, matrix or vector oriented, high-level programming language for individual applications.
6. Toolboxes available for solving advanced problems in several application areas.


Features and capabilities of MATLAB

MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar non-interactive language such as C or FORTRAN.

Figure: Features and capabilities of MATLAB - the MATLAB programming language (user-written and built-in functions), graphics (2-D and 3-D graphics, color and lighting, animation), computation (linear algebra, signal processing, quadrature, etc.), external interfaces to C and FORTRAN programs, and toolboxes (signal processing, image processing, control systems, neural networks, communications, robust control, statistics).

The name MATLAB stands for matrix laboratory. MATLAB was originally written to

provide easy access to matrix software developed by the LINPACK and EISPACK projects.

Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of

the art in software for matrix computation.

MATLAB has evolved over a period of years with input from many users. In university

environments, it is the standard instructional tool for introductory and advanced courses in

mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-

productivity research, development, and analysis.

MATLAB features a family of add-on application-specific solutions called toolboxes.

Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized

technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that

extend the MATLAB environment to solve particular classes of problems. Areas in which

toolboxes are available include signal processing, control systems, neural networks, fuzzy logic,

wavelets, simulation, and many others.

The MATLAB System:

The MATLAB system consists of five main parts:

Development Environment:

This is the set of tools and facilities that help you use MATLAB functions and files. Many

of these tools are graphical user interfaces. It includes the MATLAB desktop and Command

Window, a command history, an editor and debugger, and browsers for viewing help, the

workspace, files, and the search path.


The MATLAB Mathematical Function Library:

This is a vast collection of computational algorithms ranging from elementary functions

like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix

inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.
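A few of these library functions are illustrated below; the numeric values are assumed only for the sketch:

x = 0:0.1:2*pi;        % sample points
s = sin(x);            % elementary function
F = fft(s);            % fast Fourier transform of the samples
b = besselj(0, x);     % Bessel function of the first kind, order 0
A = [2 1; 1 3];        % a small nonsingular matrix
Ainv = inv(A);         % matrix inverse
e = eig(A);            % matrix eigenvalues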

The MATLAB Language:

This is a high-level matrix/array language with control flow statements, functions, data

structures, input/output, and object-oriented programming features. It allows both "programming

in the small" to rapidly create quick and dirty throw-away programs, and "programming in the

large" to create complete large and complex application programs.

Graphics:

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as

annotating and printing these graphs. It includes high-level functions for two-dimensional and

three-dimensional data visualization, image processing, animation, and presentation graphics. It

also includes low-level functions that allow you to fully customize the appearance of graphics as

well as to build complete graphical user interfaces on your MATLAB applications.

The MATLAB Application Program Interface (API):

This is a library that allows you to write C and Fortran programs that interact with

MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling

MATLAB as a computational engine, and for reading and writing MAT-files.


MATLAB WORKING ENVIRONMENT:

MATLAB DESKTOP:-

The MATLAB Desktop is the main MATLAB application window. The desktop contains five sub-windows: the Command Window, the Workspace Browser, the Current Directory window, the Command History window, and one or more Figure windows, which are shown only when the user displays a graphic.


The command window is where the user types MATLAB commands and expressions at

the prompt (>>) and where the output of those commands is displayed. MATLAB defines the

workspace as the set of variables that the user creates in a work session. The workspace browser

shows these variables and some information about them. Double clicking on a variable in the

workspace browser launches the Array Editor, which can be used to obtain information about, and in some instances edit, certain properties of the variable.

The Current Directory tab above the Workspace tab shows the contents of the current directory, whose path is shown in the Current Directory window. For example, in the Windows operating system the path might be as follows: C:\MATLAB\Work, indicating that the directory "work" is a subdirectory of the main directory "MATLAB", which is installed in drive C. Clicking on the arrow in the Current Directory window shows a list of recently used paths. Clicking on the button to the right of the window allows the user to change the current directory.
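The same operations can also be performed from the Command Window; the path below is only a hypothetical example:

cd('C:\MATLAB\Work')   % change the current directory
pwd                    % display the path of the current directory
dir                    % list the contents of the current directory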

MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in directories in the computer file system. Any file run in MATLAB must reside in the current directory or in a directory that is on the search path. By default, the files supplied with MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see which directories are on the search path, or to add or modify the search path, is to select Set Path from the File menu on the desktop, and then use the Set Path dialog box. It is good practice to add any commonly used directories to the search path to avoid repeatedly having to change the current directory.
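As a brief sketch, the search path can also be modified from the Command Window (the directory name is hypothetical):

addpath('C:\MATLAB\Work\palmprint')   % add a commonly used directory to the search path
path                                  % display the current search path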

The Command History Window contains a record of the commands a user has entered in

the command window, including both current and previous MATLAB sessions. Previously

entered MATLAB commands can be selected and re-executed from the command history

window by right-clicking on a command or sequence of commands. This action launches a menu from which the user can select various options in addition to executing the commands. This is a useful feature when experimenting with various commands in a work session.


Implementations

Arithmetic operations

Entering Matrices

The best way for you to get started with MATLAB is to learn how to handle matrices. Start

MATLAB and follow along with each example.

You can enter matrices into MATLAB in several different ways:

• Enter an explicit list of elements.

• Load matrices from external data files.

• Generate matrices using built-in functions.

• Create matrices with your own functions in M-files.

Start by entering Dürer’s matrix as a list of its elements. You only have to follow a few basic

conventions:

• Separate the elements of a row with blanks or commas.

• Use a semicolon (;) to indicate the end of each row.

• Surround the entire list of elements with square brackets, [ ].

To enter the matrix, simply type in the Command Window

A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]

MATLAB displays the matrix you just entered:

A =

16 3 2 13

5 10 11 8

9 6 7 12


4 15 14 1

This matrix matches the numbers in the engraving. Once you have entered the matrix, it is automatically remembered in the MATLAB workspace. You can refer to it simply as A. Now that you have A in the workspace, you can examine what makes it special.

Sum, transpose, and diag

You are probably already aware that the special properties of a magic square have to do with the

various ways of summing its elements. If you take the sum along any row or column, or along

either of the two main diagonals, you will always get the same number. Let us verify that using

MATLAB.

The first statement to try is

sum(A)

MATLAB replies with

ans =

34 34 34 34


When you do not specify an output variable, MATLAB uses the variable ans, short for answer,

to store the results of a calculation. You have computed a row vector containing the sums of the

columns of A. Sure enough, each of the columns has the same sum, the magic sum, 34. How

about the row sums? MATLAB has a preference for working with the columns of a matrix, so

one way to get the row sums is to transpose the matrix, compute the column sums of the

transpose, and then transpose the result. For an additional way that avoids the double transpose

use the dimension argument of the sum function. MATLAB has two transpose operators. The apostrophe operator (e.g., A') performs a complex conjugate transposition: it flips a matrix about its main diagonal and also changes the sign of the imaginary component of any complex elements of the matrix. The dot-apostrophe operator (e.g., A.') transposes without affecting the sign of complex elements. For matrices containing all real elements, the two operators return the same result.
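As a brief illustration of the alternative mentioned above, the dimension argument of sum returns the row sums of A directly, without the double transpose:

sum(A, 2)     % sum along dimension 2, i.e. along each row

ans =

34

34

34

34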

So

A'

Produces

ans =

16 5 9 4

3 10 6 15

2 11 7 14

13 8 12 1

and

sum(A')'

Produces a column vector containing the row sums

ans =

34


34

34

34

The sum of the elements on the main diagonal is obtained with the sum and diag functions:

diag(A)

Produces

ans =

16

10

7

1

and

sum(diag(A))

Produces

ans =

34

The other diagonal, the so-called antidiagonal, is not so important mathematically, so MATLAB does not have a ready-made function for it. But a function originally intended for use in graphics, fliplr, flips a matrix from left to right:

sum(diag(fliplr(A)))


ans =

34

You have verified that the matrix in Dürer's engraving is indeed a magic square and, in the process, have sampled a few MATLAB matrix operations.

Operators

Expressions use familiar arithmetic operators and precedence rules.

+ Addition

- Subtraction

* Multiplication

/ Division

\ Left division (described in "Matrices and Linear Algebra" in the MATLAB documentation)

^ Power

' Complex conjugate transpose

( ) Specify evaluation order

Generating Matrices

MATLAB provides four functions that generate basic matrices.

zeros All zeros

ones All ones

rand Uniformly distributed random elements

randn Normally distributed random elements


Here are some examples:

Z = zeros (2, 4)

Z =

0 0 0 0

0 0 0 0

F = 5*ones (3, 3)

F =

5 5 5

5 5 5

5 5 5

N = fix (10*rand (1, 10))

N =

9 2 6 4 8 7 4 0 8 4

R = randn (4, 4)

R =

0.6353 0.0860 -0.3210 -1.2316

-0.6014 -2.0046 1.2366 1.0556

0.5512 -0.4931 -0.6313 -0.1132

-1.0998 0.4620 -2.3252 0.3792

Using the MATLAB Editor to create M-Files:


The MATLAB editor is both a text editor specialized for creating M-files and a graphical

MATLAB debugger. The editor can appear in a window by itself, or it can be a sub window in

the desktop. M-files are denoted by the extension .m, as in pixelup.m. The MATLAB editor

window has numerous pull-down menus for tasks such as saving, viewing, and debugging files.

Because it performs some simple checks and also uses color to differentiate between various

elements of code, this text editor is recommended as the tool of choice for writing and editing M-

functions. To open the editor, type edit at the prompt; typing edit filename opens the M-file filename.m in an editor window, ready for editing. As noted earlier, the file must be in the current directory, or in a directory on the search path.
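As a minimal sketch (the file name and contents are only illustrative), a simple M-function can be created with the editor and then called from the Command Window:

edit addtwo.m          % opens (or offers to create) addtwo.m in the editor

% contents of addtwo.m:
function y = addtwo(a, b)
% ADDTWO Return the sum of its two inputs.
y = a + b;

% after saving addtwo.m in the current directory, at the prompt:
addtwo(3, 4)           % returns 7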

Getting Help:

The principal way to get help online is to use the MATLAB help browser, opened

as a separate window either by clicking on the question mark symbol (?) on the desktop toolbar,

or by typing helpbrowser at the prompt in the Command Window. The Help Browser is a web browser integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. The Help Browser consists of two panes: the Help Navigator pane, used to find information, and the display pane, used to view the information. Self-explanatory tabs in the Help Navigator pane are used to perform a search.
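Help is also available directly from the Command Window; for example (the search keyword below is only illustrative):

help sum          % prints the help text for the sum function in the Command Window
doc sum           % opens the corresponding reference page in the Help Browser
lookfor wavelet   % searches the first help line of all M-files on the path for the keyword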


CONCLUSION:

This project investigates the combination of multiple wavelets at feature level for a palmprint based authentication system, using an indigenously developed peg-free image acquisition platform. The results depict the superiority of combined wavelets over individual wavelet features for palmprint authentication using coarse-level information. The project also presented a new approach for rotation invariance, which proved its effectiveness by enhancing the genuine acceptance rate.


REFERENCES:

[1] Xiangqian Wu, David Zhang, and Kuanquan Wang. "Palm Line Extraction and Matching for Personal Authentication." IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 36, no. 5, September 2006.

[2] Ajay Kumar, David C. M. Wong, Helen C. Shen, Anil K. Jain. "Personal Verification using Palmprint and Hand Geometry Biometric." Audio- and Video-Based Biometric Person Authentication, vol. 2688/2003, Springer-Verlag Berlin Heidelberg, 2003.

[3] Junwei Tao, Wei Jiang, Zan Gao, Shuang Chen, and Chao Wang. "Palmprint Recognition Based on Improved 2D PCA." Agent Computing and Multi-Agent Systems, vol. 4088/2006, Springer-Verlag Berlin Heidelberg, 2006. School of Information Science & Engineering, Shandong University, Jinan.

[4] Tee Connie, Andrew Teoh, Michael Goh, David Ngo. "Palmprint Recognition with PCA and ICA." Image and Vision Computing New Zealand 2003, Palmerston North, New Zealand, 3 (2003) 232-227.

[5] Jiang, W., Tao, J., Wang, L. "A Novel Palmprint Recognition Algorithm Based on PCA and FLD." IEEE Int. Conference on Digital Telecommunications, IEEE Computer Society Press, Los Alamitos (2006).


[6] Murat Ekinci and Murat Aykut. "Palmprint Recognition by Applying Wavelet Subband Representation and Kernel PCA." Computer Vision Lab, Department of Computer Engineering, Karadeniz Technical University, Trabzon, Turkey. Machine Learning and Data Mining in Pattern Recognition, vol. 4571/2007, Springer-Verlag Berlin Heidelberg, 2007.

[7] Li Shang, De-Shuang Huang, Ji-Xiang Du, and Zhi-Kai Huang. "Palmprint Recognition Using ICA Based on Winner-Take-All Network and Radial Basis Probabilistic Neural Network." Advances in Neural Networks - ISNN 2006, vol. 3972/2006, Springer-Verlag Berlin Heidelberg, 2006.

[8] G. Lu, D. Zhang, K. Q. Wang. "Palmprint recognition using eigenpalms features." Pattern Recognition Letters, vol. 24, no. 9-10, pp. 1473-1477, 2003.

[9] David Zhang, Wai-Kin Kong, Jane You and Michael Wong. "Online Palmprint Identification." IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, September 2003.

[10] Kumar, A., Shen, H. C. "Recognition of Palmprints Using Wavelet-based Features." Proc. Intl. Conf. Sys., Cybern., SCI-2002, Orlando, Florida (2002).
