GEOLOGY
Paper: Remote Sensing and GIS
Module: Digital Image Fusion
Subject Geology
Paper No and Title Remote Sensing and GIS
Module No and Title Digital Image Fusion
Module Tag RS & GIS XI
Principal Investigator Co-Principal Investigator Co-Principal Investigator
Prof. Talat Ahmad
Vice-Chancellor
Jamia Millia Islamia
Delhi
Prof. Devesh K Sinha
Department of Geology
University of Delhi
Delhi
Prof. P. P. Chakraborty
Department of Geology
University of Delhi
Delhi
Paper Coordinator Content Writer Reviewer
Dr. Atiqur Rahman
Department of Geography,
Faculty of Natural Sciences,
Jamia Millia Islamia
Delhi
Dr. Iqbal Imam
Aligarh Muslim University
Aligarh
Dr. Atiqur Rahman
Department of Geography,
Faculty of Natural Sciences,
Jamia Millia Islamia
Delhi
Table of Contents
1. Introduction
2. Concept of image fusion
2.1. At Pixel level
2.2. At Feature level
2.3. At Decision level
3. Objectives of image fusion
4. Image fusion techniques
4.1. Numerical Method
4.1.1. Multiplicative Algorithm
4.1.2. The Brovey transform image fusion technique
4.1.3. Fusion technique based on subtractive method
4.1.4. Wavelet image fusion technique
4.2. Colour-related technique
4.2.1. The intensity-hue-saturation (IHS) image fusion technique
4.3. Statistical Method
4.3.1. Principal Component Analysis (PCA)
4.3.2. Fusion technique based on high-pass filter
4.4. Feature level technique
4.4.1. Ehlers method
5. Application of image fusion
5.1. Object identification
5.2. Classification
5.3. Change Detection
1. Introduction
In remote sensing, image fusion is the combination of two or more different images
into a new image, by means of a chosen algorithm, in order to obtain more and better
information about an object or a study area.
Remote sensing image fusion is an effective way to exploit the large volume of data
from multisensor images. Most Earth-observation satellites, such as SPOT, Landsat 7,
IKONOS and QuickBird, provide panchromatic (PAN) images at a higher spatial
resolution and multispectral (MS) images at a lower spatial resolution, while many
remote sensing applications, especially GIS-based ones, require both high spatial and
high spectral resolution. An effective image fusion technique can produce such
remotely sensed images. Image fusion offers several benefits: wider spatial and
temporal coverage, decreased uncertainty, improved reliability, and increased
robustness of system performance.
The objective of information fusion is to improve the accuracy of image interpretation
and analysis by making use of complementary information. Many image fusion
techniques have been developed to merge a PAN image and an MS image into a
single multispectral image with both high spatial and high spectral resolution. An ideal
image fusion technique should satisfy three essential requirements: high computational
efficiency, preservation of high spatial resolution, and low colour distortion.
Image fusion is performed at three different processing levels, namely pixel
level, feature level and decision level, according to the stage at which the fusion takes
place. In the past few years, many image fusion methods have been proposed, such as
intensity-hue-saturation (IHS), the Brovey transform (BT), principal component
analysis (PCA) and wavelets.
2. Concept of image fusion
Data fusion is a process dealing with data and information from multiple sources to
achieve refined and improved information for decision-making. A general definition of
image fusion is: 'Image fusion is the combination of two or more different
images to form a new image by using a certain algorithm'. Image fusion is performed
at three different processing levels, according to the stage at which the fusion takes place:
2.1. At Pixel level
Image fusion at the pixel level means fusion at the lowest processing level,
referring to the merging of measured physical parameters. It uses raster data
that is at least co-registered, and most commonly geocoded. Geocoding
plays an essential role because mis-registration causes artificial colours or
features in multisensor data sets, which falsify the interpretation. Pixel-level
fusion also involves resampling the image data to a common pixel spacing and
map projection.
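The resampling step mentioned above can be sketched as follows; this is a minimal illustration, and the band count, pixel spacings, random values and choice of bilinear interpolation are all assumptions made for the example:

```python
import numpy as np
from scipy.ndimage import zoom

# Assume a co-registered 4-band MS image at 30 m and a PAN image at 15 m
# (shapes and values are illustrative only).
ms = np.random.rand(4, 100, 100)    # (bands, rows, cols) at 30 m spacing
pan = np.random.rand(200, 200)      # single band at 15 m spacing

# Resample each MS band to the PAN pixel spacing with bilinear interpolation
factor = pan.shape[0] / ms.shape[1]           # 2.0 in this example
ms_resampled = zoom(ms, (1, factor, factor), order=1)

print(ms_resampled.shape)  # (4, 200, 200) -- now pixel-aligned with PAN
```

In practice the map projection would be handled by a geospatial library rather than a bare array zoom; the sketch only shows the common-pixel-spacing idea.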
2.2. At Feature level
Fusion at the feature level requires the extraction of objects recognised in the
various data sources, e.g. using segmentation procedures. Features correspond
to characteristics extracted from the initial images that depend on their
environment, such as extent, shape and neighbourhood. Similar
objects from the multiple sources are assigned to each other and then fused for
further assessment using statistical approaches or artificial neural networks
(ANN).
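As a hedged sketch of feature-level processing, the fragment below segments two illustrative arrays by simple thresholding, labels the resulting objects and extracts per-object features (area and centroid). The threshold, shapes and random inputs are assumptions for the example; a real workflow would use a proper segmentation procedure and then match corresponding objects across the sources:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img_a = rng.random((50, 50))                   # stand-in for sensor A
img_b = img_a + 0.05 * rng.random((50, 50))    # sensor B, similar scene

# Toy "segmentation": threshold, then label connected objects
labels_a, n_a = ndimage.label(img_a > 0.5)
labels_b, n_b = ndimage.label(img_b > 0.5)

# Per-object features in image A: area (extent) and centroid
areas = ndimage.sum(np.ones_like(labels_a), labels_a, index=range(1, n_a + 1))
centroids = ndimage.center_of_mass(img_a > 0.5, labels_a, range(1, n_a + 1))

print(n_a, len(centroids))
```

Objects from the two label images would then be assigned to each other (e.g. by centroid proximity) before the fused assessment.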
2.3. At Decision level
Decision- or interpretation-level fusion is a method that uses value-added
data: the input images are processed individually for information
extraction. The obtained information is then combined by applying decision rules
to reinforce a common interpretation, resolve differences, and furnish a better
understanding of the observed objects.
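As an illustration of such a decision rule, a minimal sketch that fuses three independently produced classifications by a per-pixel majority vote; the tiny class maps and the vote rule itself are assumptions made for the example:

```python
import numpy as np

# Each sensor's image has been classified independently into classes 0..2
# (arrays are illustrative stand-ins for real per-pixel class maps).
cls_optical = np.array([[0, 1], [2, 2]])
cls_radar   = np.array([[0, 1], [1, 2]])
cls_thermal = np.array([[0, 2], [2, 2]])

stack = np.stack([cls_optical, cls_radar, cls_thermal])  # (sensors, rows, cols)

def majority_vote(stack):
    """Decision rule: per-pixel majority vote across the classifications."""
    n_classes = stack.max() + 1
    counts = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)

fused = majority_vote(stack)
print(fused)  # [[0 1] [2 2]]
```

Any other decision rule (weighted voting, Dempster-Shafer combination, etc.) would slot in at the same point.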
3. Objectives of image fusion
Image fusion is a tool to combine multisource imagery using advanced image
processing techniques. It aims at integrating disparate and complementary data
to enhance the information apparent in the images and to increase the reliability
of the interpretation, leading to more accurate data and increased utility. Fused
data also support robust operational performance, i.e. increased confidence,
reduced ambiguity, improved reliability and improved classification.
Image fusion is applied to digital imagery in order to:
Sharpen images
Improve geometric corrections
Provide stereo-viewing capabilities for stereo-photogrammetry
Enhance features that are not visible in either of the single data sets alone
Complement data sets for improved classification
Detect changes using multi-temporal data
Substitute missing information in one image with signals from another sensor image
Replace defective data.
4. Image Fusion Techniques
The standard methods of image fusion are based on the Red-Green-Blue (RGB) to
Intensity-Hue-Saturation (IHS) transformation. The usual steps involved in satellite
image fusion are as follows:
Resize the low-resolution multispectral image to the same size as the
panchromatic image and co-register the two so that they coincide on a
pixel-by-pixel basis, taking into account the height variations in the area
covered by the data. Subsequently, the data can be fused using one of the
fusion techniques described below.
Transform the R, G and B bands of the multispectral image into IHS
components.
Modify the panchromatic image with respect to the multispectral image. This
is usually done by histogram matching of the panchromatic image, using the
intensity component of the multispectral image as the reference.
Replace the intensity component with the panchromatic image and perform the
inverse transformation to obtain a high-resolution multispectral image.
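The steps above can be sketched as follows. This is a minimal illustration only: it takes the band mean as the intensity of a linear IHS transform, uses a simple mean/variance match in place of full histogram matching, and exploits the fact that substituting PAN for I in a linear IHS transform is equivalent to adding (PAN - I) to every band. The array shapes and random inputs are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
# Illustrative inputs: 3-band MS image already resampled to the PAN grid
ms = rng.random((3, 128, 128))        # (R, G, B)
pan = rng.random((128, 128))

# Intensity component of the linear IHS transform: the band mean
intensity = ms.mean(axis=0)

# Moment matching stands in here for full histogram matching of PAN to I
pan_matched = (pan - pan.mean()) / pan.std() * intensity.std() + intensity.mean()

# Substituting PAN for I and inverting the linear transform is equivalent
# to adding the difference (PAN - I) to every band
fused = ms + (pan_matched - intensity)

print(fused.shape)  # (3, 128, 128)
```

By construction, the intensity (band mean) of the fused image equals the matched panchromatic image, which is exactly what the substitution step is meant to achieve.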
Following are some methods used for image fusion:
4.1. Numerical Method
4.1.1. Multiplicative Algorithm: The multiplicative transform is a simple
multiplication-based fusion method intended to improve the quality of both
spatial and spectral information. The fused image reflects the combined
content of the low-resolution and high-resolution images. The fusion
algorithm can be written as:
ML_ijk = (XS_ijk × PN_ij)^(1/2)
where
ML_ijk is the fused image pixel value,
XS_ijk is the pixel value of the multispectral image,
PN_ij is the pixel value of the panchromatic image.
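A minimal NumPy sketch of this formula (the array shapes and random values are illustrative only, and the inputs are assumed already co-registered on one pixel grid):

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative co-registered inputs on the same pixel grid
xs = rng.random((4, 64, 64))    # multispectral bands, values in [0, 1)
pan = rng.random((64, 64))      # panchromatic band

# Multiplicative fusion: ML_ijk = (XS_ijk * PN_ij) ** (1/2)
ml = np.sqrt(xs * pan)          # pan broadcasts across the band axis

print(ml.shape)  # (4, 64, 64)
```

The square root keeps the fused values in the same range as the inputs, but because PN ≤ 1 here the product still pulls values down, which matches the darker tone noted below.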
The multiplicative algorithm is the simplest fusion technique. It stretches the
histogram of all the MS bands and decreases their standard deviation values.
The technique also helps in the detection of small targets, such as cars and
trees, and facilitates the mapping of buildings. However, multiplicative
fusion changes the colours of the original images and makes
photo-interpretation more difficult: when the blue band is used in a natural
colour combination, the colour of vegetation shifts from green to blue. The
multiplicative algorithm improves the spatial resolution of the input MS
image, but the resultant image has a darker tone than the input MS image,
which results in a loss of shadow detail (Fig. 2).
Fig. 2 (a) Original PAN image, (b) Original MS image, (c) MLT-fused