Page 1: My Project Report

Institute for Plasma Research

Bhat, Gandhinagar.

Summer School Project – 2011

on

Image Fusion

Submitted by

Peddiraju Bhargava

SVNIT, Surat.

Project Guides:

Dr. J. Govindarajan. (Group Leader)

Santosh P. Pandya

Swetang N. Pandya

Certificate

This is to certify that Mr. Peddiraju Bhargava from Sardar

Vallabhbhai National Institute of Technology, Surat, has successfully

completed his project on Image Fusion & Techniques, as a part of

Summer School Program 2011, conducted at the Institute for Plasma

Research. His overall performance in the program was found to be

excellent.

Project Guides: Santosh P. Pandya, Swetang N. Pandya

Group Leader: Dr. J. Govindarajan

Acknowledgement

First of all, I render my wholehearted thanks to my project leader Dr. J.

Govindarajan and my project guides Mr. Santosh Pandya and Mr. Swetang

Pandya for their guidance and encouragement.

They provided me with an interesting project which helped me to gain

much more knowledge in the field of Image processing. It would not have been

possible without their support.

I would also like to thank Dr. Mohan Das, who helped us a lot with the

photography part of the project, without which the experiments couldn't have

been performed.

Further, I would like to express my sincere thanks to the administration of

the Institute for Plasma Research, Gandhinagar, for giving me this wonderful

opportunity of participating in the Summer School Programme 2011. I also

thank my parents and fellow SSP students for the encouragement they gave me

to boost my confidence. The entire experience was truly magnificent.

Peddiraju Bhargava

B. Tech. III

Electronics & Communication Engineering

SVNIT

Surat

Gujarat.

Abstract

Image Fusion is the process of combining images from different

sources to obtain a single composite image with extended and

enhanced image content, superior to the base images. Fusion may be for

extension in time, space or spectral region. Present day digital imagers capture

and store the images as arrays of pixels. The captured image of a scene may be

in the X-ray, UV, visible, infrared or microwave region of the spectrum

depending upon the imaging source. Alternatively, the individual images may
differ in depth of focus, viewing angle or time of capture. The

Image fusing techniques aim at fusing such images to improve the perceptibility

or information content of a scene. Image fusion can occur at the pixel level or

transformed image level or highest decision level.

In this project, Image Fusion techniques will be learnt and algorithms will

be developed with MATLAB as the working environment and will be tested with

simulated images. This will be followed by the acquisition of visible and

infrared images using special cameras. These images will then be enhanced and fused using the algorithms

developed and the quality assessed.

Contents

1. Introduction

2. What is an Image?

3. Methods of image processing

3.1 Multi resolution Analysis

4. Image Fusion

4.1 Types of Image Fusion

4.1.1 Single sensor image fusion systems

4.1.2 Multiple sensor image fusion systems

4.2 Why MATLAB for Image Processing?

4.3 Multi resolution Analysis using image pyramids.

4.3.1 Laplacian pyramids.

4.3.2 Why pyramid analysis?

5. Fusion methods used

6. Procedure for fusion

7. Experiments performed

8. Results

9. Applications

10. GUI and Miscellaneous Conclusions

11. References

Appendix 1- Code for Command line execution in MATLAB.

Appendix 2- Code for GUI creation in MATLAB.

List of tables and figures:

Fig. 1. Single Sensor Image Fusion Systems.

Fig. 2. Multi Sensor Image Fusion Systems.

Fig. 3. Gaussian Decomposition of an image.

Fig. 4. Gaussian image pyramid.

Fig. 5. Flow chart for implementation of Gaussian pyramid.

Fig. 6. Comparison of direct spatial processing and pyramid processing

Fig. 7. Problem faced while pyramid decomposing and reconstruction.

Fig. 8. Fusion schematic diagram.

Fig. 9. Fusing two visible images out of focus.

Fig. 10. Fusing two IR images out of focus.

Fig. 11. Fusion of visible images out of focus.

Fig. 12. Fusion of images with non-uniform illumination.

Fig. 13. Fusion of astronomical images.

Fig. 14. Non destructive testing.

Fig. 15. Fusion of two different images of brain.

Fig. 16. Fusion in surveillance.

Fig. 17. Hidden weapon detection.

Fig. 18. Enhanced detail by fusion.

Fig. 19. Fusion of images of solar surface.

Fig. 20. Graphical User Interface Developed.

Table 1. Functions of different buttons in GUI.

1. Introduction

Of all the five senses, vision is the one which gives the majority of

knowledge we gain about our surroundings. The amount of knowledge we

gain by the interpretation of a Picture, the product of the process of vision,

depends on the detail in the picture itself. Being sensitive to the visible region of

the electromagnetic spectrum, the human eye can see only those bodies which

interact in this region. In everyday life, this is more than sufficient. But in

many scientific and industrial applications like material testing, astronomical

imaging, surveillance etc., there is a need to get more details about the

surroundings in other wavelength regions as well. Also, as previously

mentioned in the abstract, images can be out of focus when they are taken from

different sources or at different times. All these limitations mean that only a

certain amount of detail is captured, not all of it.

Also, with the advent of robotics and automation, the deep necessity for

compact images with more interpretable information has been felt. The

emerging field of data compression when applied to images requires a variety of

techniques. All these problems collectively gave birth to the field of Image

Processing.

Image Fusion is a special segment of Image Processing which aims at

combining two or more images into a single image with superior image content

than all the individual images. This helps in quick interpretation, locating and

comparison as well. With the rapid growth in the field of electronics over the

past decades, images have come to take digital form, displacing their

photographic ancestors.

This project describes the process of image fusion using a technique

called Image Pyramids, within the MATLAB environment, using digital images. All the

procedures described here are software based and can also be implemented on

hardware.

2. What is an Image?

An Image may be defined as a two dimensional function, f(x,y), where x

and y are the spatial co-ordinates, and the amplitude of f at any pair of co-

ordinates is called the intensity of the image at that location. The term

‘Graylevel’ is frequently used to refer to the intensity of monochrome images.

Color images are formed as a combination of individual 2D images, each

representing Red, Blue, Green levels of the image.

A digital image is a 2D signal in essence, and is the digital version of the

2D manifestation of the real world 3D scene. Although the words picture and

image are quite synonymous, we will make the useful distinction that the

‘picture’ is the analog version of the ‘image’.

The modern digital era demands everything in a digital format i.e.,

discrete and limited. So, the former analog pictures are now replaced by the

present digital images by processes called Sampling and Quantisation. The

sampling is simply the process of taking the intensity values at periodic

spacings along the image, converting the spatial continuity of the picture to a

discrete form. Quantisation is the process of limiting the intensity levels to

certain pre-defined levels, again discrete, which can be handled by an image

processing system like a computer. To achieve both of the above requirements,

the photographic plates previously used have been replaced by arrays of

electronic detectors like CCDs, CIDs and CMOS devices, all of which are made

of photosensitive elements. Each of these elements, when represented in an

image, is called a Pixel (short form of Picture Element). The quality of the

image depends on the number of sensing elements in the array or the number of

pixels and also the number of quantisation levels used for the intensity values.

This measure is called the Resolution. The greater the resolution, the clearer the

image.
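The two steps can be illustrated with a toy sketch; the intensity profile, sampling spacing and level count below are invented purely for illustration:

```python
import numpy as np

# A hypothetical "analog" 1-D intensity profile in [0, 1).
analog = np.sin(np.linspace(0.0, np.pi, 1000)) ** 2

# Sampling: take intensity values at periodic spacings (every 10th point),
# converting spatial continuity to a discrete form.
sampled = analog[::10]

# Quantisation: limit intensities to 256 pre-defined levels (8 bits),
# the range handled by typical image processing systems.
levels = 256
digital = np.floor(sampled * levels).clip(0, levels - 1).astype(np.uint8)

print(digital.shape, int(digital.min()), int(digital.max()))
```

Increasing either the number of samples or the number of quantisation levels raises the resolution in the sense described above.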

Images, as mentioned above can be of many kinds. They range from

simple binary images to complex 3D images. The devices used for their

acquisition also range from simple photographic plates to complex X ray

cameras. All the above devices use direct light from the source to make an

image. Indirect methods like ultrasound scanning, MRI etc. are also widely popular in

medicine, industrial non-destructive testing etc. In the present project, visible

and IR cameras are being used for experimentation.

3. Methods of image processing:

Image processing can be done either in the spatial domain, on a pixel-by-pixel

basis, or in a transformed domain, on frequencies. The spatial domain is very

easy to implement but takes more time to execute. The techniques become more

complex as one goes to higher level processing in spatial domain since it

involves matrices and convolutions most of the time. Moreover, we cannot get

all the details from the image by directly processing in spatial domain.

Instead, we can go for the transformed domain, the frequency domain,

which represents the information in the image in a very compact form. Different

frequently used operations like filtering, compression etc can be done very

easily using frequency domain analysis. The Fourier series took the first

step in spectral image processing. Though it is very versatile, it can give the

information only about frequency. Many applications demand the frequency

content at a specific instant of time (like in EEGs) or in a specific region of

space (like in image registration). These requirements have taken Fourier series

a step further ahead, giving rise to multi resolution analysis, a product of

observation and analysis. Here, we discuss this analysis in terms of frequency

and space (though equally applicable for time).

3.1. Multi resolution analysis:

In an image, just like in daily life, we encounter objects of different sizes

on the basis of the imaging distance, angle etc. So, a single resolution cannot

resolve all the objects adequately. We need different resolutions for objects of

different sizes. This is exactly the way the human eye works. Inspired by this

observation, various methods of multi resolution analysis have been developed

like the Short Time Fourier Transform, Wavelet transform, Pyramid transform,

S transform, all having their own merits and demerits according to the

application in hand. The present project is based on the pyramid decomposition

method.

4. Image Fusion:

As said before, this is the technique of combining two or more images of

different information content to get another image with superior content than

both the individual images. The images in Fig. 6 show this. The images

‘a’ and ‘b’ have some area which couldn’t be focused well. In ‘d’, the entire

defocus problem has been solved. In the same way, images can be taken in a

variety of electromagnetic wavelengths like IR, X ray, UV etc. Each of them

gives details about some specific objects in the field of view. For example, IR

imaging gives the information about invisible but hot objects in the scene. To

spot these objects relative to what we see, we combine them with the visible image to

get a clear identification.

Also, in this project, image fusion algorithm has been applied for

illumination problems i.e., to solve the non-uniform illumination of objects in

the scene. The results of the said experiments will be clearly understood once

we go through the next few sections.

4.1. Types of Image Fusion:

On the basis of how we acquire the images, image fusion is of two types

namely, Single sensor image fusion system and Multi sensor image fusion

system, explained here.

4.1.1. Single sensor image fusion system:

An illustration of a single sensor image fusion system is shown in the

figure below. The sensor shown could be a visible band sensor such as a digital

camera. The sensor captures the real world as a sequence of images. This

sequence is fused into one single image and used either by a human operator or

by a computer to do some task. For example, in object detection, a human

operator searches the scene to detect objects such as an intruder in a secured area.

Fig. 1. Single sensor Image Fusion Systems

This kind of system has some limitation due to the capability of the

imaging sensor that is being used. The conditions, under which the system can

operate, the dynamic range, resolution etc., are all limited by the capability of

the sensor. For example, a visible band sensor such as a digital camera is

appropriate for a very brightly illuminated environment such as daylight scenes

but is not suitable for poorly illuminated scenes like scenes including fog or

taken during the night time etc.

4.1.2. Multi sensor image fusion systems:

A multi sensor image fusion system overcomes the limitations of its single

sensor counterpart by combining images from different sensors to form a

composite image. A rough schematic is shown in the figure given below. As an

example, one can use a visible camera to picture visible features of a night

scene while an infra red camera (also suitable for poorly illuminated images)

can be used to get a better detail of the scene. When fused, the composite image

gives the information about the objects in the view and also their relative spatial

locations.

The benefits of a multi sensor image fusion system include:

Extended range of operation- multiple sensors capable of operation in

different conditions can be deployed to extend the effective range of

operation.

Fig. 2. Multi sensor Image Fusion System

Extended spatial and temporal coverage – joint information from sensors

that differ in spatial resolution can increase the spatial coverage. The

same is true for temporal dimensions.

Reduced uncertainty – joint information from multiple sensors can reduce

the uncertainty associated with sensing or decision process.

Increased reliability – the fusion of multiple measurements can reduce

noise and therefore improve the reliability of the measured quantity.

Robust system performance – redundancy in multiple measurements can

help in system robustness. In case one or more sensors fail to perform or

if the performance deteriorates, the system can depend on other sensors.

Compact representation of information – fusion leads to compact

representation of large amount of data present in many different images.

For example, in remote sensing, instead of storing imagery from several

spectral bands, it is comparatively more efficient to store the fused image

information.

4.2. Why MATLAB for image processing?

MATLAB stands for MATrix LABoratory. It is highly efficient software

designed to handle matrix operations. Though it was initially confined to

small data-handling needs, it was soon realised that the capability of

MATLAB extends well beyond what it was designed for. Since an image is an

array of intensities, it can also be treated as a matrix and can be easily

manipulated in MATLAB. The power and extensive usage of MATLAB is due to its

specialised 'toolboxes', each designed by experts in a specific field to

handle that field's high-level tasks. In the same way, MATLAB

provides a very robust Image Processing Toolbox for all sorts of image related

operations. In addition, toolboxes for wavelets, signal processing, data

acquisition etc. all complement the Image Processing Toolbox to give a

powerful image processing tool. However, since MATLAB code is interpreted,

its execution speed is limited. It is therefore better suited to the initial

development of an algorithm, without much coding overhead, than to practical

implementation. Once development is complete, the code can be converted into

another language like C for deployment in practical systems. These reasons

have made MATLAB the best choice for this project.

Coming to the technique used in this project, as already said, pyramid

analysis has been used. This is one of the different ways of multi resolution

analysis and is described below.

4.3. Multi resolution analysis using image pyramids:

In this method, the source image is progressively converted into smoother

and smaller images with lesser and lesser frequency components. The basic

procedure is outlined as follows:

The image is smoothed using a mask that passes the low frequency

components and eliminates the high frequency components. Since the

high frequency components have less information, their filtering results in

no appreciable loss.

Then, the filtered image is down sampled (decimated) by a factor of two

along the rows and columns, giving an image with one quarter the number

of pixels of the previous image. Thus, if the initial image is M×N, the

resulting image would be (M/2)x(N/2). Thus, both the pixel density and

frequency density are decreasing.

The same procedure is applied to this image to get another condensed

image.

This process is repeated for L times where L is decided on some

statistical criterion. When all the images obtained likewise are stacked

upon each other, they look like a pyramid, and hence the name. The

decomposition procedure can be named ‘Reduce’. Each level has a range

of frequencies towards the low frequency range.

The Filter used here is of a lot of importance since it decides the

information content in the output image. Mostly the filters used in digital

image processing applications are ‘Gaussian’ filters, since they give the

minimum possible ringing effects and noise. The Gaussian filter used in

the above process gives the name, ‘the Gaussian Pyramid’.

The example of a Gaussian pyramid is shown here:

Fig. 3. Gaussian decomposition of an image. Here 0 is the original image with all the others being the reduced levels.

Fig. 4. Gaussian Image Pyramid.

The flow chart for Gaussian decomposition is shown in Fig. 5. The level-to-level
averaging process is performed by the function Reduce,

g_k = Reduce(g_{k-1})    (1)

which means, for levels 0 < l <= N and nodes i, j, 0 <= i < C_l, 0 <= j < R_l,

g_l(i, j) = Σ (m = -2..2) Σ (n = -2..2) w(m, n) g_{l-1}(2i + m, 2j + n).

Here N refers to the number of levels in the pyramid, while C_l and R_l are the
dimensions of the l-th level. Note in Fig. 4 that the density of nodes is reduced
by half in one dimension, or by a fourth in two dimensions, from level to level.

Fig. 5. Flow chart for implementation of the Gaussian pyramid: get the image;
smooth the current image using the Gaussian filter; down sample it by 2 to get
the reduced image; store the image; repeat until the required number of levels
is reached.

The operation of reduction of the image into pyramid levels is done using

the function impyramid of MATLAB. This performs reduction of an image b when used in

the form

>> a = impyramid(b, 'reduce');

and stores the resulting image in a. When repeated in a loop, this command

creates the whole pyramid. If the argument ‘reduce’ is replaced by ‘expand’, the

reduced image gets expanded using interpolation, the process of which is

discussed shortly.
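For readers without access to impyramid, the Reduce and Expand operations can be sketched directly; this is a hypothetical NumPy translation, assuming the common 5-tap generating kernel w = [1, 4, 6, 4, 1]/16 rather than the exact filter impyramid applies:

```python
import numpy as np

# 5-tap Gaussian-like generating kernel, applied separably along rows and columns.
W = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def smooth(img):
    """Low-pass filter the image with the separable 5-tap mask."""
    rows = np.apply_along_axis(lambda r: np.convolve(r, W, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, W, mode="same"), 0, rows)

def reduce_(img):
    """One pyramid level: smooth, then down sample by 2 in each dimension."""
    return smooth(img)[::2, ::2]

def expand(img):
    """Upsample by zero insertion, then interpolate with the same filter.
    The factor 4 compensates for the inserted zeros."""
    up = np.zeros((2 * img.shape[0], 2 * img.shape[1]))
    up[::2, ::2] = img
    return 4.0 * smooth(up)

img = np.random.rand(64, 64)      # stand-in for a real image
g1 = reduce_(img)                 # (M/2) x (N/2), as described above
print(g1.shape, expand(g1).shape)
```

Repeating reduce_ in a loop builds the whole Gaussian pyramid, just as looping over impyramid(b, 'reduce') does.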

4.3.1. Laplacian pyramid:

The Gaussian pyramid has all the low frequencies in each level. A more

compact form can be achieved by using the Laplacian pyramid method. Since

the Laplacian can be approximated by a difference of Gaussians, it follows that

to get the Laplacian pyramid, each level's image in the Gaussian pyramid must be

subtracted from its previous level's image after proper expanding. This results in the band pass

information in each level which, being of very low power and content (valuable

for edge detections, object identification etc), requires less number of bits and

resolution to represent. Before proceeding, expansion methods are discussed in

the following section.

Using the same procedure and the same filter used for reduction, the

reduced image can be expanded. It is done by convolution of image to be

expanded with the filter. This interpolates and also doubles the size of the

image. More considerations and problems often faced in this expansion will be

discussed in the later sections. Mathematically, the process is denoted by

‘Expand’, expressed as follows:

G_{l,k} = EXPAND[G_{l,k-1}], or, to be precise, G_{l,0} = G_l and, for k > 0,

G_{l,k}(i, j) = 4 Σ (m = -2..2) Σ (n = -2..2) w(m, n) G_{l,k-1}((i - m)/2, (j - n)/2),

where only terms for which (i - m)/2 and (j - n)/2 are integers are included in
the sum. The expand operation doubles the size of the image with each iteration,
so that G_{l,1} is the size of G_{l-1}, and G_{l,l} is the same size as that of
the original image.

This is how the reduced image is expanded again. It is further used to

reconstruct the original image from the laplacian pyramid exactly by the

algorithm as follows:

First, the laplacian is constructed from the Gaussian by using expansion

followed by subtraction. The base image is kept as a reference. The next

reduced level is first expanded and subtracted from the base image. This

gives one level of the laplacian pyramid. This is repeated for lower and

lower pairs of levels to finally obtain the full pyramid. Note that the top

most level is left as it is since there is no further level to subtract from.

Mathematically,

L_i = G_i - Expand(G_{i+1}); L_N = G_N, where L_i denotes the Laplacian of the

i-th level and i ranges from 0 to N-1 for a pyramid of N levels.

Since each image in a laplacian pyramid corresponds to difference

between two Gaussian images, at a discrete level, it is a band pass version

of the image. Each level has information corresponding to a certain

frequency band.

To obtain the original image, all the levels starting from the top are

expanded once and added to the lower level one after the other. This is

equivalent to

G_i = L_i + Expand(G_{i+1}), where N-1 >= i >= 0

Multi resolution decomposition and reconstruction are thus complete. If

decomposed information is available, then actual image can be reconstructed

almost exactly.
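The decomposition and reconstruction algorithm above can be sketched end to end; this is a hypothetical NumPy version (the report's actual MATLAB code is in the appendices), assuming a separable 5-tap [1, 4, 6, 4, 1]/16 smoothing kernel and a power-of-two image size so every level halves cleanly:

```python
import numpy as np

W = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # 5-tap smoothing mask

def smooth(img):
    rows = np.apply_along_axis(lambda r: np.convolve(r, W, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, W, mode="same"), 0, rows)

def reduce_(img):
    return smooth(img)[::2, ::2]

def expand(img):
    up = np.zeros((2 * img.shape[0], 2 * img.shape[1]))
    up[::2, ::2] = img
    return 4.0 * smooth(up)

def laplacian_pyramid(img, n):
    """L_i = G_i - Expand(G_{i+1}); the top level L_N keeps the coarsest Gaussian G_N."""
    gauss = [img]
    for _ in range(n):
        gauss.append(reduce_(gauss[-1]))
    lap = [g - expand(nxt) for g, nxt in zip(gauss[:-1], gauss[1:])]
    lap.append(gauss[-1])
    return lap

def reconstruct(lap):
    """G_i = L_i + Expand(G_{i+1}), applied from the top level downwards."""
    img = lap[-1]
    for level in reversed(lap[:-1]):
        img = level + expand(img)
    return img

img = np.random.rand(64, 64)
pyr = laplacian_pyramid(img, 3)
print(np.allclose(reconstruct(pyr), img))   # reconstruction is exact here
```

Reconstruction is exact here because each Laplacian level stores precisely the detail that the corresponding Expand step discards.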

4.3.2. Why pyramid analysis?

One may sometimes ask why we need pyramid

analysis for image processing. As mentioned earlier, it gives information about

frequency content at particular space in the image resulting in multi dimensional

analysis. Further, when an image in raw form is processed, gray levels get

changed at the pixel level and high frequency components can get dissolved since

each pixel is affected by its entire neighborhood.

When pyramid decomposition is done, each band of frequencies is treated

separately and they can all be restored very well since no interference from the

neighborhood is observed as the neighboring pixels are mostly dark due to the

absence of certain lower frequencies in each level. An example is shown here

pertaining to image fusion. The two images ‘a’ and ‘b’ to be fused give the

image ‘c’ when directly manipulated but result in ‘d’ when combined using

multi resolution analysis. Also, since this is a kind of frequency division

technique, filtering can be done by selecting or deleting one or more

intermediate levels in the pyramid and then reconstructing the image from the

residual images.

Fig. 6. Comparison of direct spatial processing and pyramid processing. (a) and (b) are
the images to be combined to get a complete image. (c) is the result of a direct
averaging operation on the images with each other. (d) is the result of pyramid
processing using the same averaging technique on each level.

5. Fusion methods used:

There are four different methods used in this project during development

of the algorithm. They are:

1. Averaging

2. Maximum

3. Minimum

4. Weighted average.

Any one of these methods is supported by the program so that a careful

selection can be done among them to get the best details in fused image. But,

there are some problems faced during fusion. Along with the solutions

implemented for them, a brief description about them is as follows:

During the reduction of an image, we are constrained by the fact that as

the image level increases, the size of the image reduces by half in each

dimension. If the number of levels of decomposition is more, then after

sometime, the top most level will completely lose one of its dimensions. It may

end up as a row or column vector or sometimes even as a single pixel.

Decomposition after this level is just wastage of valuable execution time since

all the levels from this point have no further information. This problem may not

sound serious, but it becomes prominent while dealing with high resolution images,

since it increases the processing time of the program. In order to avoid this, the

decomposition is stopped whenever the image is about to lose one of its

dimensions.

The number of levels can be controlled in the program by specifying two

inputs namely ‘threshold’ and ‘graysc’. Threshold is just another parameter

denoting the square root of minimum power in an image level. If this power

goes below the threshold, decomposition stops.

threshold² ≤ (1/(M·N)) Σx Σy f(x, y)²
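The stopping rule can be sketched as a one-line check; this is a hypothetical NumPy version, where the function name below_threshold is invented and threshold mirrors the program parameter of the same name:

```python
import numpy as np

def below_threshold(level, threshold):
    """Stop decomposing when the mean power of a pyramid level drops below
    threshold^2, i.e. when threshold^2 <= (1/(M*N)) * sum of f(x, y)^2
    no longer holds."""
    mean_power = np.mean(level.astype(float) ** 2)
    return mean_power < threshold ** 2

flat = np.full((4, 4), 10.0)            # constant level with RMS gray level 10
print(below_threshold(flat, 20))        # mean power 100 < 400: stop
print(below_threshold(flat, 5))         # mean power 100 >= 25: keep decomposing
```

In the program this check would sit alongside the dimension test, so that decomposition also stops before a level collapses to a row or column vector.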

Another problem is faced during expansion and reconstruction:

expansion is mainly done through an interpolation operation using the

Gaussian filter described above. As mentioned earlier, low frequencies are

present in the top levels of the pyramid. As these levels are successively

expanded and added, these low frequency components get spread and give the

image a blurry view. To avoid this, the graylevels in each level are scaled to a

lower range using the parameter ‘graysc’ (0<graysc<=1) along with inbuilt

function ‘imadjust’. To increase the contrast of the image at each level, gamma

correction parameter ‘adjpar’ is also available as an optional argument in the

program. More details can be found in the documentation provided with the

program. A small example of how these problems affect image fusion is given

below. A clear inspection of these images will clarify the problem involved.

Fig. 7. Problem faced while pyramid decomposing and reconstruction. (a) is the original

image. (b) is the image formed by direct averaging. (c) is formed by using image

reconstruction after adjusting graylevels in each stage.
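The gray-level rescaling described above can be sketched as follows; this is a hypothetical NumPy stand-in for MATLAB's imadjust, where graysc and adjpar mirror the program parameters of the same names:

```python
import numpy as np

def adjust(level, graysc=0.8, adjpar=1.0):
    """Rescale a pyramid level into [0, graysc] (0 < graysc <= 1), with
    optional gamma correction adjpar, before the level is added back in."""
    lo, hi = level.min(), level.max()
    if hi == lo:
        return np.zeros_like(level, dtype=float)
    norm = (level - lo) / (hi - lo)        # normalise to [0, 1]
    return graysc * norm ** adjpar         # compress the range, apply gamma

level = np.linspace(0.0, 2.0, 5)           # stand-in for one pyramid level
print(adjust(level, graysc=0.5, adjpar=2.0))
```

Compressing each expanded level into a narrower gray range before summation is what suppresses the blurry, washed-out look of image (b) in Fig. 7.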

6. Procedure for fusion:

The algorithm and procedural flow chart are described here.

1. Take both of the images to be fused and reduce them into their respective

Laplacian pyramids, each with the same number of levels.

2. Now, take images from the top most level of both pyramids and apply the

required method, where 0 <= k <= N

(a) In averaging method, each pixel in the final image is the average

of the corresponding pixels in the two images.

f(x, y) = 0.5[g_k(x, y) + h_k(x, y)] for all x, y

(b) In maximum method, each pixel in the final image is the

maximum of gray levels of corresponding pixels in the two images.

f(x, y) = MAX[g_k(x, y), h_k(x, y)] for all x, y

(c) In minimum method, do the same operation mentioned in (b) but

take minimum this time.

f(x, y) = MIN[g_k(x, y), h_k(x, y)] for all x, y

(d) In weighted average method, each pixel in the fused image is the

weighted average of corresponding pixels in individual images.

f(x, y) = [a·g_k(x, y) + b·h_k(x, y)]/(a + b) for all x, y,

where a and b are the weights for each image.

3. Do the same for all the lower levels also all the way up to level 0 to get a

single fused pyramid.

4. Now, reconstruct the image from the fused levels to get the final image.

The schematic diagram for the technique is given below.

Fig. 8. Fusion Schematic Diagram
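The four rules in step 2 can be written down compactly; this is a minimal NumPy sketch, with the function name fuse_level and the method strings invented for illustration:

```python
import numpy as np

def fuse_level(g, h, method="average", a=1.0, b=1.0):
    """Fuse corresponding levels g and h of two Laplacian pyramids,
    pixel by pixel, using one of the four rules described above."""
    if method == "average":
        return 0.5 * (g + h)
    if method == "max":
        return np.maximum(g, h)       # keep the stronger detail at each pixel
    if method == "min":
        return np.minimum(g, h)
    if method == "weighted":
        return (a * g + b * h) / (a + b)
    raise ValueError("unknown method: " + method)

g = np.array([[1.0, 4.0]])
h = np.array([[3.0, 2.0]])
print(fuse_level(g, h, "average"))
print(fuse_level(g, h, "max"))
print(fuse_level(g, h, "weighted", a=2.0, b=5.0))
```

Applying fuse_level to every pair of levels (step 3) and then reconstructing from the fused pyramid (step 4) yields the final image.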

7. Experiments performed:

The algorithm as outlined above is used to write a MATLAB code given

in the appendix 1. To test the performance of the algorithm, an experimental

verification is essential. In the initial stages, images already existing in

MATLAB's database were used for testing. Later on, specialized instruments were used

to get the images of some particular objects. These instruments are Visible

cameras with manual focus and IR Camera.

A visible camera, as everyone knows, takes the visible photograph of a

scene. All the attributes seen in this image can be seen by the naked eye except

for the zooming capacity it provides. Using manual focus, getting a clearer

picture of a specific part of the scene is possible. On the other hand, IR camera

is a lot more sophisticated instrument, a brief description of which is given here

under:

IR camera: In this project, to acquire images in the infra red band, a Medium

Wavelength IR camera has been used which senses IR radiations in 3-5 µm

(filtered to 3.9 µm). Though this can sense IR radiations beyond these

wavelengths, it is much dominantly sensitive for this region. This takes images

of objects at a maximum rate of 13 kHz and produces images of 320x240

pixels. The focal length of the camera used is 25 mm, with a field of view of

21° × 16°. The detectors are of MCT type (Mercury Cadmium Telluride)

incorporating Focal Plane Array technology.

Using these, visible and infrared photographs of the same

scene have been taken and fused to find out the defects in it, as part of

non-destructive testing. Different images have been taken both in visible and

infrared with partial focus of the scene, fused together to get a uniform

perceptible focus.

The results are presented in the following section along with the method

used.

8. Results:

The following images summarize the results of the program implementation:

(a) Fusing two images out of focus using averaging:

Fig. 9. (a) and (b) are images with focus only in some portion. (c) is the fused image.

Method used: Averaging with all other program parameters set to default.

(b) Fusing of two IR images using MIN method.

Fig. 10. Fusing two IR images out of focus. (a) IR image with focus on front object. (b)

IR image with focus on back object. (c) Fused image with uniform focus.

Method used: Minimum gray level method with all other program parameters

set to default.

(c) Fusing of two visible images out of focus to give an average focus image:

Method used: Minimum gray level method with all other default parameters.

Fig. 11. Fusion of visible images out of focus. (a) Visible image with focus on front

object. (b) Visible image with focus on back object. (c) Fused image with uniform

focus.

(d) Fusing of two images with incomplete illumination to get total

illumination.

Fig. 12. (a), (b) Images with non uniform illumination. (c), (d) fused images.

Method used: In (c), weighted averaging is used with default parameters and a weight ratio of 2:5 for (a) and (b). In (d), the same is used with a gamma correction of 2.
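A gamma correction of 2 here means that each normalised gray level g in [0, 1] is mapped to g^gamma, following the convention of MATLAB's imadjust (gamma > 1 darkens mid-tones, gamma < 1 brightens them). A minimal single-pixel sketch, assuming that convention:

```python
# Weighted average of two normalised gray levels followed by gamma
# correction. gamma > 1 darkens mid-tones and gamma < 1 brightens
# them, matching the convention of MATLAB's imadjust.

def weighted_fuse_gamma(p1, p2, w1, w2, gamma=1.0):
    fused = (w1 * p1 + w2 * p2) / (w1 + w2)  # weighted average in [0, 1]
    return fused ** gamma                    # gamma correction

# Weight ratio 2:5 and gamma 2, as used for Fig. 12(d)
print(weighted_fuse_gamma(0.4, 0.8, 2, 5, gamma=2))
```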

(e) Fusing two images taken at two different wavelengths to give a composite image:


Fig. 13. Fusion of astronomical images. (a) Visible image. (b) IR Image. (c) Fused

image using Averaging. (d) Fused image using Maximum method. (Courtesy: Spitzer

space telescope, NASA, JPL)

Method used: Maximum gray level with default parameters.


(f) Non-destructive testing:

In non-destructive testing, the object under consideration is examined without destroying its original form. Unlike destructive testing, which analyzes one of several similarly prepared samples as a representative of all the others, non-destructive testing examines each individual specimen with high accuracy using various techniques, without dismantling it. X-ray inspection, for example, is a form of non-destructive testing. In industry, it is frequently employed to monitor the wear and tear of machinery. An example of the application of image fusion in non-destructive testing is given below.


Fig. 14. Non-destructive testing. (a) Visible image of the plate. (b) IR image of the same plate showing the defects. (c) Fused image showing both the texture and the defects.

Figure (a) shows the original plate with the defect, and figure (b) shows its IR image. (c) is the combined image showing both the defects and the texture. This kind of analysis is used in medicine, industry, etc.


9. Applications:

In medical imaging, such as X-rays and EEGs, image fusion helps identify tumors, blood vessels, etc., with respect to the visible portion of the body, enabling quick identification of the area to be treated and thereby reducing diagnostic time. The images below show this application.


Fig. 15. (a), (b) two different images of brain tissue obtained using different methods.

(c) Fused image using Weighted average technique.

In surveillance and security, when monitoring an area at night, visible photography combined with IR imaging gives information about the presence and relative locations of foreign objects and trespassers. An example of such an application is shown below.


Fig. 16. (a) Visible photograph of a scene. (b) IR image of same scene showing a person.

(c) Combined image showing more details.

A very important and interesting application is surveillance through CCTV cameras. In a densely populated area such as a railway station or an airport, continuously checking people for hidden weapons or harmful objects is crucial. In such a survey, an IR photograph alone does not permit clear identification of a suspect's facial features. So, the visible and IR images are fused to form a final image in which the person posing a threat can be identified. The images shown below are examples of hidden-object detection.


Fig. 17. (a) Visible image. (b) IR image showing a man with a gun. (c) Combined image

showing who possesses the gun. (Hidden weapon detection)

The images in the figure below relate to image enhancement in industrial applications such as non-destructive testing. The final fused image shows all the visible features along with the cracks formed.


Fig. 18. (a) Visible image. (b) IR image showing the cracks much more clearly. (c) Fused image showing both clearly.

Image fusion also plays a very important role in astronomy. In space science, imaging the same field of view at different wavelengths is very common. For example, when observing a supernova, visible images show just a smudge of light spread across the galaxy, but UV, X-ray and IR imaging combined with radio imaging reveals the voids, the regions of new star formation and the distribution of mass, along with the remnant of the parent star at the centre. Observed collectively, such information gives a beautiful picture of what exactly happens in those monstrous blasts. In the images shown below, the solar corona has been pictured at different wavelengths (Courtesy: www.spaceweather.com). Fusion of these images shows the flux loops and the heat distribution around them.


Fig. 19. (a) IR image of the solar surface. (b) Visible image of the solar surface showing flux loops. (c) Fused image showing both flux loops and heat distribution.

10. GUI and Miscellaneous:

As already described, the application of image fusion to non-uniform illumination problems in image acquisition has been successfully studied. In addition, a Graphical User Interface has been created for the fusion algorithm developed, so that no programming skills are required to use it. Fusion of more than two images in sequence is also possible through this interactive interface.


The various buttons and their functions are described below; the operations they perform can also be studied from the help text of the program provided in Appendix 1.

Fig. 20. Graphical User Interface Developed

MATLAB must be installed on the system, since the GUI runs on the MATLAB platform. Typing

>> imafus

in the command window opens the GUI, which can then be used independently for as long as the MATLAB window remains open.

Different buttons and their functions are described in the following table:

Table 1:

Browse buttons: Open a browsing window for choosing the image file to be loaded. By default, the window lists all folders and '.jpg' files; this filter can be changed.

Axes 1 and Axes 2 (the two smaller plotting windows on top): Show the image selected by the corresponding browse button below them, but only when the user clicks on them.

File path boxes: The boxes adjoining the browse buttons show the path of the image currently in the axes window. If only a file name is displayed, the file is in MATLAB's current working directory; this happens only when the intermediate file saved as 'imafustemp.jpg' is brought into the axes using the 'Get Fused Image Here' button. In all other cases, the entire path is shown.

'Get Fused Image Here' buttons: When pressed, the corresponding image in the axes window is cleared and the fused image in the Axes 3 window is loaded. A new axes window is displayed, and the image appears when the user clicks on it. Especially useful for fusing more than two images.

'Reset x' buttons: Clear the associated axes window 'x' and the path of the image currently displayed, creating a fresh area in which to browse and display an image.

Method: A drop-down menu for selecting one of the four available methods: Average, Maximum, Minimum or Weighted Average. Default: Average.

Threshold: Allows the user to specify the threshold limit to be considered. Default: 0. The use of the threshold is described earlier in the report.

Gray scale factor: Allows the user to give the fraction of the range [0 255] to use at each level. It should be between 0 and 1. Default: 0.5.

Gamma correction factor: Allows the user to give the gamma correction coefficient as input. Default: 1.

Weight: Usable only when the Weighted Average method is selected. Takes the weight of each image from the respective box and averages accordingly.

Fuse Images Now: Executes the function fusloop.m to fuse the images in Axes 1 and Axes 2, and stores the fused image as 'imafustemp.jpg' in the current working directory.

Axes 3: When clicked, displays the fused image stored in 'imafustemp.jpg'.

Save as..: Opens a save dialog box for saving the fused image at the desired location.

Master Reset: Clears all the data in the current 'imafus' GUI window, making it ready for fresh use.

Conclusion:

The scope of image fusion has been explored, and both command-line and GUI-based implementations have been developed. Both serve the same purpose, but GUI-based fusion is far more convenient since it demands no programming skills. The illumination problem has been addressed, and the image fusion technique has been successfully applied to solve it. The importance of the complementary nature of IR and visible imaging has been identified; it is useful in many security applications and in non-destructive testing.

11. References:

1. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd edition, Prentice Hall.

2. R. C. Gonzalez, R. E. Woods and S. L. Eddins, Digital Image Processing Using MATLAB, 2nd edition, Pearson Education.

3. P. Marchand and O. T. Holland, Graphics and GUIs with MATLAB, 3rd edition, CRC Press.

4. E. H. Adelson, C. H. Anderson, J. R. Bergen, P. J. Burt and J. M. Ogden, "Pyramid methods in image processing."

5. E. F. Canga, "Image Fusion," University of Bath.


Appendix 1

Command line program for image fusion:

1. For fusion of two gray scale images. (Fusion of colour images is accomplished by running a loop for each of the Red, Green and Blue channels.)

function y=fusion(a,b,varargin)

% FUSION is a function which is used to fuse two images 'a' and 'b'

% on the basis of the input arguments. Though these arguments are

% not given explicitly, some default values are previously assumed.

%

% The images a,b are the inputs, either grayscale or coloured. The
% 'adjpar' parameter is used to decide the number of levels we want.
% This is the gamma parameter of the imadjust function. Giving it a
% value between 0 and 1 will increase the number of levels in the
% image, but reducing it to a very low value can make the image much
% brighter; hence, a judicious choice has to be made. This is an
% optional argument with default value equal to 1.

%

% The parameter 'threshold' is used also to limit the number of

% levels. The function stops pyramid decomposition once the RMS

% power of a certain level falls below this threshold. No further

% decomposition will be done.

%

% This function supports four methods, namely, Average, Weighted

% average, Minimum and Maximum. When Weighted average method is

% required, the corresponding weights of each image must be

% specified in the 'weight' argument array; otherwise, only

% averaging will be done with appropriate message. The corresponding

% strings to be used are:

% Average---> 'av'

% Weighted average---> 'wav'

% Maximum--->'max'

% Minimum--->'min' default being 'average'.

%

% Also refer to imresize,impyramid,imadjust and see articles on

% image pyramids.

%

% References: 'Image Fusion' by Eduardo Fernandez Canga, University

% of Bath.

% 'Pyramid methods in image processing', E. H. Adelson et al.

%-------------------------------------------------------------------

% Parsing the inputs.....

p=inputParser;

p.addRequired('a',@isreal);

p.addRequired('b',@isreal);

p.addOptional('adjpar',1,@(x)x>=0);

p.addOptional('threshold',0,@(x)x>=0);

p.addOptional('graysc',.5,@(x)x>=0);


p.addOptional('method','av',@(x)any(strcmpi(x,{'av','wav','max',...
    'min'})));

p.addOptional('weight',[1 1],@isreal);

p.addOptional('display','no',@(x)any(strcmpi(x,{'yes'})));

p.parse(a,b, varargin{:});

a=p.Results.a;b=p.Results.b;

adjpar=p.Results.adjpar;threshold=p.Results.threshold;

method=p.Results.method;weight=p.Results.weight;

display=p.Results.display;

graysc=p.Results.graysc;

%-------------------------------------------------------------------

% Routine to resize the images to a common scale. If both of them

% have same size, then nothing will be done.

[g1 h1]=size(a);

[g2 h2]=size(b);

if g1*h1 > g2*h2

b=imresize(b,[g1 h1],'bicubic');

elseif g2*h2 > g1*h1

a=imresize(a,[g2 h2],'bicubic');

end

%-------------------------------------------------------------------

%-------------------------------------------------------------------

% Creating the laplacian pyramids.......

% Calls the function inpyl.

c=inpyl(a,threshold,adjpar,graysc);

d=inpyl(b,threshold,adjpar,graysc);

%-------------------------------------------------------------------

%-------------------------------------------------------------------

% Now, fusing the two images..........

l=length(c);

if strcmp(method,'max')

for x=1:l

e{x}=inmax(c{x},d{x});

end

elseif strcmp(method,'min')

for x=1:l

e{x}=inmin(c{x},d{x});

end

elseif strcmp(method,'wav')

if isequal(weight,[1 1])
    error(' Please mention different weights or use the average argument');
end

for x=1:l

e{x}=(weight(1)*c{x}+weight(2)*d{x})./sum(weight);

end

else

for x=1:l

e{x}=.5*(c{x}+d{x});


end

end

%-------------------------------------------------------------------

%-------------------------------------------------------------------

% Now, reconstructing the image...........

f=inpyrec(e);

%-------------------------------------------------------------------

%-------------------------------------------------------------------

% Assign the output to output argument 'Y'.

y=f;

end

%-------------------------------------------------------------------

%-------------------------------------------------------------------

% Function sub routines...............

%-------------------------------------------------------------------

% Function subroutine for inpyl.

function v=inpyl(a,threshold,adjpar,graysc)

% Computes the laplacian image pyramid of given image and gives the

% output to the variable 'V' as a cell array. It first decomposes

% the image into gaussian pyramids and takes the difference of pairs

% of levels.

k=1;

[x y]=size(a);

x=(round(x/2))*2;

y=(round(y/2))*2;

a=imresize(a,[x y],'bicubic');

c{1}=a;

% Computing the gaussian pyramid……….

for d=2:100

c{d}=impyramid(c{d-1},'reduce');

c{d}=imadjust(c{d},[0 1],[0 graysc],adjpar);

[g h]=size(c{d});

k=k+1;

t=inpower(c{d});

if t<threshold || g<=1 || h<=1

break;

end

end

clear d

d=c;

% Computing the laplacian pyramid………

for h=1:k-1

x=impyramid(d{h+1},'expand');

[g f]=size(d{h});


x1=imresize(x,[g f],'bicubic');

d{h}=d{h}-x1;

end

v=d;

end

%-------------------------------------------------------------------

% Function subroutine for inmax

function c=inmax(a,b)
% Element-wise maximum of two equally sized images.
c=max(a,b);
end

%-------------------------------------------------------------------

% Function subroutine for inmin

function c=inmin(a,b)
% Element-wise minimum of two equally sized images.
c=min(a,b);
end

%-------------------------------------------------------------------

% Function subroutine for inpyrec for pyramid reconstruction.

% Reconstructs the image from the input cell array 'V' consisting of

% laplacian pyramid images.

function f=inpyrec(v)

l=length(v);

b=v;

b{l}=v{l};

for x=l-1:-1:1

e=impyramid(b{x+1},'expand');

[row col]=size(b{x});

e1=imresize(e,[row col],'bicubic');

b{x}=b{x}+e1;

end

f=b{1};

end

%-------------------------------------------------------------------

% Function subroutine for inpower to calculate the root of power in

% the image.

function x=inpower(a)
% Computes the RMS gray-level power of the image. (The accumulator is
% named 's' so as not to shadow the built-in function 'sum'.)
[g h]=size(a);
s=0;
for i=1:g
    for j=1:h
        s=s+double(a(i,j))^2;
    end
end
x=sqrt(s/(g*h));
end

%-------------------------------------------------------------

%-------------------------------------------------------------
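To make the structure of the listing above easier to follow, here is a one-dimensional Python illustration of the Laplacian pyramid build/fuse/reconstruct cycle. It is only a sketch under simplifying assumptions: the real code uses 2-D Gaussian filtering via MATLAB's impyramid, whereas the reduce/expand helpers below are crude stand-ins.

```python
# 1-D illustration of the Laplacian pyramid flow: reduce to build a
# Gaussian pyramid, take differences between levels to get Laplacian
# coefficients, then expand and add back to reconstruct the signal.

def reduce(s):                 # halve length: average adjacent pairs
    return [(s[i] + s[i + 1]) / 2 for i in range(0, len(s) - 1, 2)]

def expand(s, n):              # grow back to length n by repetition
    out = []
    for v in s:
        out.extend([v, v])
    return out[:n]

def laplacian_pyramid(s, levels):
    gauss = [s]
    for _ in range(levels - 1):
        gauss.append(reduce(gauss[-1]))
    # Each Laplacian level is a Gaussian level minus the expansion of
    # the next (coarser) level; the coarsest level is kept as-is.
    lap = [[a - b for a, b in zip(g, expand(gauss[i + 1], len(g)))]
           for i, g in enumerate(gauss[:-1])]
    return lap + [gauss[-1]]

def reconstruct(lap):
    s = lap[-1]
    for level in reversed(lap[:-1]):
        s = [a + b for a, b in zip(level, expand(s, len(level)))]
    return s

sig = [1.0, 3.0, 2.0, 6.0, 4.0, 4.0, 0.0, 2.0]
pyr = laplacian_pyramid(sig, 3)
print(reconstruct(pyr))        # recovers the original signal
```

In the fusion program, the level-by-level combination rule (average, min, max or weighted average) is applied between the two pyramids before reconstruction.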

(2) To use this function for colour images as well, the entire program is run in a loop over the Red, Green and Blue channels, storing the outcome of each pass as an element of a cell structure. This cell structure finally becomes the fused image, which is returned through an output argument.
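The per-channel loop described above can be sketched as follows (a pure-Python illustration, with a trivial averaging rule standing in for the full pyramid fusion, and a list standing in for the MATLAB cell structure):

```python
# Fusing colour images channel by channel: the grayscale fusion
# routine is run once for each of the R, G, B planes and the results
# are collected into one container.

def fuse_gray(a, b):
    # Stand-in for the grayscale fusion function: plain averaging.
    return [[(x + y) / 2 for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def fuse_color(a, b):
    # a and b are [R, G, B] lists of 2-D channel arrays.
    return [fuse_gray(ca, cb) for ca, cb in zip(a, b)]

img_a = [[[100, 200]], [[0, 0]], [[50, 50]]]
img_b = [[[200, 100]], [[10, 10]], [[50, 50]]]
print(fuse_color(img_a, img_b)[0])   # fused red channel: [[150.0, 150.0]]
```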

(3) Individual functions to create and reconstruct laplacian pyramids:

For decomposition:

function v=impyl(a,display)

% This is a function m file to calculate or decompose an image given

% as an input by 'a' into different levels of image pyramids. This m

% file uses the built in function of image reduction, 'impyramid'

%

% The pyramid decomposition is done by passing the image through a

% low pass filter and filtering the high frequency components. This

% is effectively achieved by using a spatial gaussian mask of 5x5

% size and after this filtering, the size of the image is reduced to

% half of the previous one. Later on the successive levels are

% subtracted from each other to give the laplacian pyramid.

%

% The function gives the output sequence of images as a cell array

% 'v', which contains the 'k' number of images, each half of the

% size of the preceding one. At the same time, the image is displayed

% in the figure window if the argument 'display' is 'yes'. Any
% other argument, or no argument, doesn't display the image
% sequence.

%

% For example, the command sequence
%   a=imread('pout.tif');
%   h=impyl(a,'yes');
% will decompose the image a into a laplacian pyramid and display it.
% Omitting 'yes' suppresses the figure output.

k=1;

[x y]=size(a);

x=(round(x/2))*2;

y=(round(y/2))*2;


a=imresize(a,[x y],'bicubic');

% Generating the cell array required to store the image sequence.

c{1}=a;

for d=2:20

c{d}=impyramid(c{d-1},'reduce');

c{d}=imadjust(c{d},[0 1],[0 1/(2^(d-1))]);

t=impower(c{d});

k=k+1;

[g h]=size(c{d});

if g<=1 || h<=1

break;

end

end

k1=ceil(sqrt(k));

clear d

for h=1:k-1

x=impyramid(c{h+1},'expand');

[g f]=size(c{h});

x1=imresize(x,[g f],'bicubic');

d{h}=c{h}-x1;

end

% If the arguement of the display option is also given, then take

% the action accordingly. If display='yes', show the figure output.

% Anything else doesn't give figure output.

if (exist('display')==1)

if strcmp(display,'yes')

figure

for x=1:k

subplot(k1,k1,x)

imshow(d{x});

end

end

end

v=d;

end

%-------------------------------------------------------------------

For reconstruction:

function a=pyrec(v,display)

% This is a function to combine a given input pyramid of images

% created previously by laplacian of a certain image, back into a

% single unique image. The input cell array is given by 'v' and the

% output image is stored in the array 'a'.

%

% For example, the following commands perform the reverse laplacian
% combination of a given cell array of images 'c':

% l=impyl(a);

% v=pyrec(l);

% The usage of the impyramid command to expand the previously

% reduced image in the pyramid can be seen here. For more

% information, see the documentation of impyramid.


%-------------------------------------------------------------------

l=length(v);

b=v;

b{l}=v{l};

for x=l-1:-1:1

e=impyramid(b{x+1},'expand');

[row col]=size(b{x});

e1=imresize(e,[row col]);

b{x}=b{x}+e1;

figure

imshow(b{x});

end

a=b{1};

end

%-------------------------------------------------------------------


Appendix 2

For GUI, the call back functions are formed as given here in the program

imafus.m

function varargout = imafus(varargin)

% IMAFUS M-file for imafus.fig

% IMAFUS, by itself, creates a new IMAFUS or raises the

% existing singleton*.

%

% H = IMAFUS returns the handle to a new IMAFUS or the handle

% to the existing singleton*.

%

% IMAFUS('CALLBACK',hObject,eventData,handles,...) calls the

% local function named CALLBACK in IMAFUS.M with the given

% input arguments.

%

% IMAFUS('Property','Value',...) creates a new IMAFUS or raises

% the existing singleton*. Starting from the left, property

% value pairs are applied to the GUI before imafus_OpeningFcn

% gets called. An unrecognized property name or invalid value

% makes property application stop. All inputs are passed to

% imafus_OpeningFcn via varargin.

%

% *See GUI Options on GUIDE's Tools menu. Choose "GUI allows

% only one instance to run (singleton)".

%

% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help imafus

% Last Modified by GUIDE v2.5 03-Jul-2011 16:18:13

% Begin initialization code - DO NOT EDIT

gui_Singleton = 1;

gui_State = struct('gui_Name', mfilename, ...

'gui_Singleton', gui_Singleton, ...

'gui_OpeningFcn', @imafus_OpeningFcn, ...

'gui_OutputFcn', @imafus_OutputFcn, ...

'gui_LayoutFcn', [] , ...

'gui_Callback', []);

if nargin && ischar(varargin{1})

gui_State.gui_Callback = str2func(varargin{1});

end

if nargout

[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});

else

gui_mainfcn(gui_State, varargin{:});

end

% End initialization code - DO NOT EDIT


% --- Executes just before imafus is made visible.

function imafus_OpeningFcn(hObject, eventdata, handles, varargin)

% This function has no output args, see OutputFcn.

% hObject handle to figure

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% varargin command line arguments to imafus (see VARARGIN)

% Choose default command line output for imafus

handles.output = hObject;

% Update handles structure

guidata(hObject, handles);

% UIWAIT makes imafus wait for user response (see UIRESUME)

% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.

function varargout = imafus_OutputFcn(hObject, eventdata, handles)

% varargout cell array for returning output args (see VARARGOUT);

% hObject handle to figure

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure

varargout{1} = handles.output;

function edit1_Callback(hObject, eventdata, handles)

% hObject handle to edit1 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit1 as text

% str2double(get(hObject,'String')) returns contents of edit1

% as a double

% --- Executes during object creation, after setting all properties.

function edit1_CreateFcn(hObject, eventdata, handles)

% hObject handle to edit1 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles empty - handles not created until after all CreateFcns

% called

% Hint: edit controls usually have a white background on Windows.

% See ISPC and COMPUTER.

if ispc && isequal(get(hObject,'BackgroundColor'),...
        get(0,'defaultUicontrolBackgroundColor'))

set(hObject,'BackgroundColor','white');

end

function edit2_Callback(hObject, eventdata, handles)

% hObject handle to edit2 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit2 as text

% str2double(get(hObject,'String')) returns contents of edit2

% as a double


% --- Executes during object creation, after setting all properties.

function edit2_CreateFcn(hObject, eventdata, handles)

% hObject handle to edit2 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles empty - handles not created until after all CreateFcns

% called

% Hint: edit controls usually have a white background on Windows.

% See ISPC and COMPUTER.

if ispc && isequal(get(hObject,'BackgroundColor'),...
        get(0,'defaultUicontrolBackgroundColor'))

set(hObject,'BackgroundColor','white');

end

function edit3_Callback(hObject, eventdata, handles)

% hObject handle to edit3 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit3 as text

% str2double(get(hObject,'String')) returns contents of edit3

% as a double

% --- Executes during object creation, after setting all properties.

function edit3_CreateFcn(hObject, eventdata, handles)

% hObject handle to edit3 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles empty - handles not created until after all CreateFcns

% called

% Hint: edit controls usually have a white background on Windows.

% See ISPC and COMPUTER.

if ispc && isequal(get(hObject,'BackgroundColor'),...
        get(0,'defaultUicontrolBackgroundColor'))

set(hObject,'BackgroundColor','white');

end

% --- Executes on selection change in popupmenu1.

function popupmenu1_Callback(hObject, eventdata, handles)

% hObject handle to popupmenu1 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% Determine the selected data set.

str=get(handles.popupmenu1,'String');

val = get(handles.popupmenu1,'Value');

switch str{val};

case 'Average'

set(handles.popupmenu1,'UserData','av');

case 'Weighted Average'

set(handles.popupmenu1,'UserData','wav');

case 'Maximum'

set(handles.popupmenu1,'UserData','max');

case 'Minimum'

set(handles.popupmenu1,'UserData','min');

end

% Hints: contents = cellstr(get(hObject,'String')) returns

% popupmenu1 contents as cell array contents{get(hObject,'Value')}

% returns selected item from popupmenu1


% --- Executes during object creation, after setting all properties.

function popupmenu1_CreateFcn(hObject, eventdata, handles)

% hObject handle to popupmenu1 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles empty - handles not created until after all CreateFcns

% called

% Hint: popupmenu controls usually have a white background on

% Windows.

% See ISPC and COMPUTER.

if ispc && isequal(get(hObject,'BackgroundColor'),...
        get(0,'defaultUicontrolBackgroundColor'))

set(hObject,'BackgroundColor','white');

end

function edit4_Callback(hObject, eventdata, handles)

% hObject handle to edit4 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit4 as text

% str2double(get(hObject,'String')) returns contents of edit4

% as a double

user_string = get(hObject,'String');

set(handles.edit4,'UserData',str2num(user_string));

% --- Executes during object creation, after setting all properties.

function edit4_CreateFcn(hObject, eventdata, handles)

% hObject handle to edit4 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles empty - handles not created until after all CreateFcns

% called

% Hint: edit controls usually have a white background on Windows.

% See ISPC and COMPUTER.

if ispc && isequal(get(hObject,'BackgroundColor'),...
        get(0,'defaultUicontrolBackgroundColor'))

set(hObject,'BackgroundColor','white');

end

function edit5_Callback(hObject, eventdata, handles)

% hObject handle to edit5 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit5 as text

% str2double(get(hObject,'String')) returns contents of edit5

% as a double

user_string = get(hObject,'String');

set(handles.edit5,'UserData',str2num(user_string));

% --- Executes during object creation, after setting all properties.

function edit5_CreateFcn(hObject, eventdata, handles)

% hObject handle to edit5 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles empty - handles not created until after all CreateFcns

% called

% Hint: edit controls usually have a white background on Windows.

% See ISPC and COMPUTER.


if ispc && isequal(get(hObject,'BackgroundColor'),...
        get(0,'defaultUicontrolBackgroundColor'))

set(hObject,'BackgroundColor','white');

end

function edit6_Callback(hObject, eventdata, handles)

% hObject handle to edit6 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit6 as text

% str2double(get(hObject,'String')) returns contents of edit6

% as a double

user_string = get(hObject,'String');

set(handles.edit6,'UserData',str2num(user_string));

% --- Executes during object creation, after setting all properties.

function edit6_CreateFcn(hObject, eventdata, handles)

% hObject handle to edit6 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles empty - handles not created until after all CreateFcns

% called

% Hint: edit controls usually have a white background on Windows.

% See ISPC and COMPUTER.

if ispc && isequal(get(hObject,'BackgroundColor'),...
        get(0,'defaultUicontrolBackgroundColor'))

set(hObject,'BackgroundColor','white');

end

function edit7_Callback(hObject, eventdata, handles)

% hObject handle to edit7 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit7 as text

% str2double(get(hObject,'String')) returns contents of edit7

% as a double

user_string = get(hObject,'String');

set(handles.edit7,'UserData',str2num(user_string));

% --- Executes during object creation, after setting all properties.

function edit7_CreateFcn(hObject, eventdata, handles)

% hObject handle to edit7 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles empty - handles not created until after all CreateFcns

% called

% Hint: edit controls usually have a white background on Windows.

% See ISPC and COMPUTER.

if ispc && isequal(get(hObject,'BackgroundColor'),...
        get(0,'defaultUicontrolBackgroundColor'))

set(hObject,'BackgroundColor','white');

end

function edit8_Callback(hObject, eventdata, handles)

% hObject handle to edit8 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit8 as text


% str2double(get(hObject,'String')) returns contents of edit8

% as a double

user_string = get(hObject,'String');

set(handles.edit8,'UserData',str2num(user_string));

% --- Executes during object creation, after setting all properties.

function edit8_CreateFcn(hObject, eventdata, handles)

% hObject handle to edit8 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles empty - handles not created until after all CreateFcns

% called

% Hint: edit controls usually have a white background on Windows.

% See ISPC and COMPUTER.

if ispc && isequal(get(hObject,'BackgroundColor'),...
        get(0,'defaultUicontrolBackgroundColor'))

set(hObject,'BackgroundColor','white');

end

% --- Executes on button press in pushbutton2.

function pushbutton2_Callback(hObject, eventdata, handles)

% hObject handle to pushbutton2 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

set(handles.pushbutton2,'Visible','on');

threshold=get(handles.edit4,'UserData');

adjpar=get(handles.edit6,'UserData');

graysc=get(handles.edit5,'Userdata');

w1=get(handles.edit7,'UserData');

w2=get(handles.edit8,'UserData');

method=get(handles.popupmenu1,'UserData');

if isempty(method)

method='av';

end

l=get(handles.edit1,'String');

a=imread(l);

m=get(handles.edit2,'String');

b=imread(m);

weight=[w1 w2];

c=fusloop(a,b,method,weight,adjpar,threshold,graysc);

imwrite(c,'imafustemp.jpg');

% --- Executes on button press in pushbutton3.

function pushbutton3_Callback(hObject, eventdata, handles)

% hObject handle to pushbutton3 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

[fil1 pat1]=uigetfile({'*.jpg';'*.jpeg';'*.bmp';'*.tif';'*.*'},'File Selector');

set(handles.edit1,'String',[pat1 fil1]);

clear fil1 pat1

% --- Executes on button press in pushbutton4.

function pushbutton4_Callback(hObject, eventdata, handles)

% hObject handle to pushbutton4 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB


% handles structure with handles and user data (see GUIDATA)

[fil2 pat2]=uigetfile({'*.jpg';'*.jpeg';'*.bmp';'*.tif';'*.*'},'File Selector');

set(handles.edit2,'String',[pat2 fil2]);

clear fil2 pat2

% --- Executes on button press in pushbutton5.

function pushbutton5_Callback(hObject, eventdata, handles)

% hObject handle to pushbutton5 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

[fil3 pat3]=uiputfile({'*.jpg';'*.jpeg';'*.bmp';'*.tif';'*.*'},'Save Image as..');

set(handles.edit3,'String',[pat3 fil3]);

d=imread('imafustemp.jpg');

imwrite(d,[pat3 fil3]);

clear pat3 fil3

delete imafustemp.jpg

% --- Executes on mouse press over axes background.

function axes1_ButtonDownFcn(hObject, eventdata, handles)

% hObject handle to axes1 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

c=get(handles.pushbutton12,'UserData');

if c==1

set(handles.pushbutton12,'UserData',0);

end

d=get(handles.edit1,'String');

a=imread(d);

imshow(a);

% --- Executes on mouse press over axes background.

function axes2_ButtonDownFcn(hObject, eventdata, handles)

% hObject handle to axes2 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

c=get(handles.pushbutton13,'UserData');

if c==1

set(handles.pushbutton13,'UserData',0);

end

d=get(handles.edit2,'String');

b=imread(d);

imshow(b);

% --- Executes on button press in pushbutton8.

function pushbutton8_Callback(hObject, eventdata, handles)

% hObject handle to pushbutton8 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

set(handles.edit1,'String',[]);

set(handles.axes1,'NextPlot','replacechildren');

pai1=newplot(handles.axes1);

handles.axes1=pai1;

set(handles.axes1,'Visible','on');


% --- Executes on button press in pushbutton9.

function pushbutton9_Callback(hObject, eventdata, handles)

% hObject handle to pushbutton9 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

set(handles.edit2,'String',[]);

set(handles.axes2,'NextPlot','replacechildren');

pai2=newplot(handles.axes2);

handles.axes2=pai2;

set(handles.axes2,'Visible','on');

% --- Executes on button press in pushbutton10.

function pushbutton10_Callback(hObject, eventdata, handles)

% hObject handle to pushbutton10 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

set(handles.edit3,'String',[]);

set(handles.axes3,'NextPlot','replacechildren');

pai3=newplot(handles.axes3);

handles.axes3=pai3;

set(handles.axes3,'Visible','on');

% --- Executes during object deletion, before destroying properties.

function popupmenu1_DeleteFcn(hObject, eventdata, handles)

% hObject handle to popupmenu1 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

% --- Executes on mouse press over axes background.

function axes3_ButtonDownFcn(hObject, eventdata, handles)

% hObject handle to axes3 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

d=imread('imafustemp.jpg');

imshow(d);

% --- Executes on button press in pushbutton11.

function pushbutton11_Callback(hObject, eventdata, handles)

% hObject handle to pushbutton11 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

close all

imafus

% --- Executes on button press in pushbutton12.

function pushbutton12_Callback(hObject, eventdata, handles)

% hObject handle to pushbutton12 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

st=get(handles.edit2,'String');

if ~strcmp(st,'imafustemp.jpg')

set(handles.pushbutton12,'UserData',1);

set(handles.edit1,'String','imafustemp.jpg');

set(handles.axes1,'NextPlot','replacechildren');

pai4=newplot(handles.axes1);


handles.axes1=pai4;

set(handles.axes1,'Visible','on');

end

% --- Executes on button press in pushbutton13.

function pushbutton13_Callback(hObject, eventdata, handles)

% hObject handle to pushbutton13 (see GCBO)

% eventdata reserved - to be defined in a future version of MATLAB

% handles structure with handles and user data (see GUIDATA)

st=get(handles.edit1,'String');

if ~strcmp(st,'imafustemp.jpg')

set(handles.pushbutton13,'UserData',1);

set(handles.edit2,'String','imafustemp.jpg');

set(handles.axes2,'NextPlot','replacechildren');

pai5=newplot(handles.axes2);

handles.axes2=pai5;

set(handles.axes2,'Visible','on');

end