Use of spectral and temporal
unmixing for crop identification
using multi-spectral data
Samia Ali
March, 2002
Use of spectral and temporal unmixing for crop identification using multi-spectral data
by
Samia Ali
Thesis submitted to the International Institute for Geo-information Science and
Earth Observation in partial fulfilment of the requirements for the degree in Master
of Science in Geoinformatics.
Degree Assessment Board
Chairman Dr. Ing. Yola Georgiadou
First supervisor Mr. Tal Feingersh, M.Sc.
Second supervisor Prof. Dr. F.D. van der Meer
External examiner Dr. Ir. B.G.H. Gorte
INTERNATIONAL INSTITUTE FOR GEO-INFORMATION SCIENCE AND EARTH OBSERVATION
ENSCHEDE, THE NETHERLANDS
Disclaimer
This document describes work undertaken as part of a programme of study at the
International Institute for Geo-information Science and Earth Observation (ITC).
All views and opinions expressed therein remain the sole responsibility of the author, and do not necessarily represent those of the institute.
Acknowledgements
I would like to thank Synoptics bv, The Netherlands, for providing Earth Observation and
reference data used for this study. Thanks are due to Mr. Leon Schouten at Synoptics for
sharing his knowledge, and for the patience he showed during this period in answering
my questions and queries whenever I needed help.
I am thankful to the International Water Management Institute (IWMI) and Dr. S. A. Prathapar
(the then Director of the IWMI Pakistan National Program) for the study leave, during which I
could complete my M.Sc. at ITC.
I sincerely thank my supervisors, Mr. Tal Feingersh and Prof. Dr. F.D. van der Meer, for
their constant and valuable guidance during the thesis period. The provocative discussions
and their constructive comments helped me to complete this study.
I am grateful to all the staff of the Geoinformatics division for the useful knowledge I
gained at ITC.
Thanks are extended to Dr. Rolf de By for introducing LaTeX to us, saving the time
otherwise needed for laying out and formatting a thesis. His constant help in solving
related problems is highly appreciated.
I thank Drs. Wan Bakx and Ard Blenke for their help whenever it was needed. I extend my
thanks to Mr. Gerrit Huurneman, who was ever ready to solve any problem that bothered
us.
Thanks are extended to all my GFM-2000 classmates for their unforgettable and friendly
company. I enjoyed sharing with them a mix of fun and frustration of the study at ITC.
Besides tough and long study hours at ITC, there have been some stress-free and memorable
moments during my stay in the Netherlands. I appreciate a great deal the company of my
friends, Rubina, Mobin, Sadia, little Manal, Paul, Saim, Somia, Mounira, Zhao, Aftab,
Mubashar, Falak, Adel, Abdel-Rehman. I appreciate the love and effort of my brother Tariq
and my friend Zeenath, who made the journey to visit me in the Netherlands.
My deepest gratitude and thanks are extended to my family: my parents, sisters and
brothers, for the strength imparted to me by their regular messages, prayers and love.
I dedicate this work to my beloved parents.
Abstract
The reflectance values of pixels, recorded by remote sensors, often result
from a spectral mixture of a number of ground spectral classes constituting
the area of a pixel. This, the so-called mixed pixel problem, has always been
an obstacle to deriving accurate land cover classes in image classification.
This study suggests the use of a sub-pixel classification technique, linear
spectral unmixing, for improved crop classification. If mixing is considered
linear, then the resulting pixel reflectance is a linear summation of the
individual material reflectances, each multiplied by the surface fraction it
constitutes.
In addition to the problem of mixed pixels, limited spectral separability
among different agricultural crop types is another cause of inaccuracy in
classification. For this reason, linear unmixing is applied to multitemporal
Landsat images, to take advantage of the spectral discrepancies shown by
crops over the course of their growing cycle. It is expected that the
multitemporal profiles of the crops' fraction values will be distinct from
each other, due to their respective growth cycles, and hence be an additional
aid to improving the unmixing classification results.
These experiments are applied to the municipality of Maasbree in the south
of The Netherlands. Three Landsat images, dated May 14, August 01 and
August 26, are used for this study. The experiments have shown unique
information over time in the spectral-reflectance profiles and vegetation
indices of the agricultural crops. The Root Mean Square (RMS) images of
May 14 and Aug 01 have shown better accuracy in the unmixing results than
that of Aug 26.
Keywords
Sub-pixel, Linear spectral unmixing, multitemporal profile, endmember
Contents
Acknowledgements i
Abstract iii
List of Tables ix
List of Figures xi
1 Introduction 1
1.1 Problem definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Research question . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Outline of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Subpixel Classification Methods 5
2.1 The Pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Sub-pixel classification . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2.1 Spectral mixture analysis . . . . . . . . . . . . . . . . . . . . . 7
2.2.2 Fuzzy classification . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 A selection from different models . . . . . . . . . . . . . . . . . . . . . 11
2.4 One approach to perform Linear unmixing . . . . . . . . . . . . . . . 14
2.4.1 Endmember selection . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.2 Output of linear unmixing . . . . . . . . . . . . . . . . . . . . . 18
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3 Study Area and Data 19
3.1 Study area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2.1 Earth Observation data . . . . . . . . . . . . . . . . . . . . . . 20
3.2.2 Field reference data . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2.3 Data from the Central Bureau for Statistics (CBS) . . . . . . . . 21
3.2.4 Crop calendar . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2.5 Selection criteria . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3 Data preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3.1 Geometric corrections . . . . . . . . . . . . . . . . . . . . . . . 23
3.3.2 Radiometric corrections . . . . . . . . . . . . . . . . . . . . . . 23
3.3.3 Reference data is refined . . . . . . . . . . . . . . . . . . . . . . 25
4 Temporal Analysis 27
4.1 Available Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.1.1 Vegetation Indices . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.2 Temporal-spectral unmixing profile . . . . . . . . . . . . . . . . . . . 29
4.2.1 Temporal-spectral profiles . . . . . . . . . . . . . . . . . . . . . 29
4.2.2 Temporal-NDVI profiles . . . . . . . . . . . . . . . . . . . . . . 29
4.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5 Results and Discussion 33
5.1 Endmember Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.1.1 Pixel Purity Index (PPI) . . . . . . . . . . . . . . . . . . . . . . 33
5.1.2 Principal Component Analysis (PCA) . . . . . . . . . . . . . . 34
5.1.3 Crop statistics from CBS . . . . . . . . . . . . . . . . . . . . . . 35
5.1.4 Spectral Angle Mapping (SAM) . . . . . . . . . . . . . . . . . . 35
5.2 Linear spectral unmixing . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.2.1 Spectra collection . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.2.2 Unmixing results . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6 Conclusion and Recommendations 49
6.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6.2 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Bibliography 53
A Endmember selection through PCA 55
B Statistics from linear spectral unmixing 57
List of Tables
2.1 Applicability of mixture models to different applications [11] . . . . 12
3.1 Satellite remote sensing data . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2 Landsat ETM+ spectral channels . . . . . . . . . . . . . . . . . . . . . 21
5.1 Confusion matrix for SAM classification . . . . . . . . . . . . . . . . . 39
5.2 Unmixing results showing main statistics for 3 Landsat images . . . 42
B.1 Pixel fractions constituted by Maize . . . . . . . . . . . . . . . . . . . 57
List of Figures
2.1 Four cases of mixed pixels [7] . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Geometric representation of mixing model [1] . . . . . . . . . . . . . . 9
2.3 The linear model of spectral mixing [1] . . . . . . . . . . . . . . . . . . 14
2.4 Two dimensional example of the Spectral Angle Mapper [1] . . . . . 17
3.1 Study Area [9] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.1 Mean field spectra for eight endmembers for three dates (a) May14
(b) Aug01 (c) Aug26 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.2 Mean NDVI on three dates at field level for (a) maize(b) wheat . . . 31
4.3 Temporal-NDVI profiles for (a) potato (b) wheat (c) maize (d) sugarbeet 31
5.1 PPI image showing the purest pixels (in white) with an overlay of
crop reference polygons. . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.2 SAM rule images for (a) Asperges (b) Maize (c) Potato (d) Leek. . . . 36
5.3 A classification result of the SAM (Aug01) . . . . . . . . . . . . . . . . 37
5.4 Endmember spectra for Maize, Grass, Sugarbeet and Soil . . . . . . . 41
5.5 Fraction images for May14 [(a),(b),(c)] and Aug01 [(d),(e),(f)] . . . . . 43
5.6 Statistics of 3 fraction maps and RMS image for Aug01 calculated
with (a) 6VIR bands (b) bands 5432 . . . . . . . . . . . . . . . . . . . . 44
5.7 RMS images for (a) May14 (b) Aug01 (c) Aug26 . . . . . . . . . . . . . 45
5.8 Composite of 3 endmember fraction images for (a)May14 (b)Aug01
(c)Aug26 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
A.1 Eigenvalue Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Chapter 1
Introduction
Earth observation from space, commonly called remote sensing, offers unrivalled
capabilities for understanding, monitoring, forecasting, managing and making decisions
about our planet's resources. It provides a rich harvest of information on our
planet and environment. United Nations statistics show that conventional maps
cover only 44% of the world's landmasses. Many are also obsolete, unreliable or
inaccurate, and are often difficult to obtain. In contrast, satellite imagery is
comprehensive, reliable, regularly updated, and has been acquired practically
worldwide. Nevertheless, the trustworthiness and reliability of this information
depend on the way we interpret it.
Interpretation of remote sensing data is a challenging task. A popular and practised
way of interpreting digital remote sensing data is image classification. The purpose
of image classification is to assign each pixel in a digital image to one of a
predefined and limited number of classes (a theme). Hence we obtain a thematic map
from a digital image, where each theme is a particular land cover class in the scene.
Each pixel is allotted a particular theme depending on its digital number, so the
pixel is used as the classification unit.
1.1 Problem definition
Using traditional classification methods, information can be extracted to the pixel
level, as described in the previous section. If the Instantaneous Field of View (IFOV)
of a sensor is larger than the feature of interest on the ground, pixels cover more
than one feature or cover type. Scientists deal with the problem by labelling such a
pixel a "mixed pixel". As a result, these pixels cannot give us information about
their association with any specific feature, and we lose data without obtaining
information. This is because the reflection of a mixed pixel is not representative of
a particular feature, but rather a composite of the features within that pixel. A
mixed pixel can be interpreted through an understanding of the spectral components
within it. The procedure carried out to separate and identify these spectral
components is called subpixel classification.
Subpixel processing/analysis is defined as the capability to detect or classify objects
that are smaller than the size of an image pixel [10].
The following section briefly describes the significance of subpixel classification
for agricultural monitoring and management, and hence the motivation for this
research.
1.2 Motivation
The importance of monitoring agricultural production and acquiring relevant
statistics is increasing with today's rapid development and the consequent
considerable changes on the Earth's surface. The significance of this monitoring for
better management of land resources and food production is even more acute for
countries where agriculture is one of the main pillars of the economy.
One important parameter in assessing and evaluating an agricultural system is the
identification of various crop types. In many countries, taxation and subsidy systems
depend on the crop type that a farmer grows. Accurate estimates of crop acreage can
be used to calculate crop yield, which itself is an important parameter for
monitoring the efficiency of an agricultural system.
Classification of satellite remote sensing data has been widely used for crop
identification and for classifying agricultural land into different crop types. Each
pixel in the image is assigned to the most similar crop type, and a thematic map is
obtained. However, pixels on the edges of fields, and sometimes in small cropping
systems where crop fields are fragmented into each other, result in mixed
pixels.
In addition to the problem of mixed pixels, limited spectral separability among
different crop types is another problem, which causes difficulty and errors in
classification. These problems are addressed in this thesis and are investigated
by applying subpixel classification on remote sensing data.
This study is carried out in cooperation with Synoptics bv, the Netherlands
(Integrated Remote Sensing and GIS Applications), which provided the remote sensing
data, field data and experience in crop mapping. One of their products is 'PiriReis',
a digital crop map of The Netherlands that is produced on a yearly basis. 'PiriReis'
provides detailed information on which crop has been grown on a particular parcel
during the past growing season. This experience from Synoptics bv is expected to be
useful for this study as well.
1.3 Research question
Subpixel classification is performed by comparing the observed spectra of pixels in
an image with the pure spectra of endmembers (an endmember is a pixel component that
is a pure cover type). Basically, it is performed by segregating a pixel into its
components on the basis of their different spectral characteristics. The purpose of
this study is to discriminate different crop types at the sub-pixel level.
It is anticipated that the spectral characteristics of many crops are not very
distinct from each other. It is planned, therefore, to exploit the temporal
characteristics of crops in addition to their spectral characteristics, to make the
classification of the crops more significant and accurate.
It is expected that the spectral profile (reflectance curve) of each crop follows a
different pattern when studied at different instants in time. Based on this
assumption, this research attempts to answer the following questions:
• How can extra benefit be drawn from multitemporal data for crop (sub-pixel)
classification?
• Will it improve subpixel classification results and make the identification of
crops more significant?
1.4 Objectives
The main objective of this study is to explore the temporal and spectral
characteristics of different crop types towards an improved crop classification
methodology.
This objective is planned to be achieved in the following sequence of steps:
• Exploration of different theoretical and technical issues in subpixel
classification procedures (to choose an appropriate one),
• Exploring the temporal significance in an agricultural environment,
• Selection of the correct number of cover types (endmembers),
• Obtaining pure spectra from the images for the selected endmembers,
• Applying subpixel classification to a series of remote sensing images, in order to
obtain abundance (fraction) maps for each crop,
• Temporal analysis of the spectral unmixing results.
1.5 Outline of the thesis
The organization of this work is described below:
General factors that cause mixed pixels in an image are described in Chapter 2, which
also reviews the existing methods for soft (subpixel) classification. Chapter 3 gives
a brief description of the study area, data and materials used for the study.
Chapter 4 elaborates on the importance of the temporal factor in crop monitoring and
classification. An implementation of linear unmixing on Landsat images is presented
in Chapter 5 and its results are discussed there. Chapter 6 summarizes and concludes
the study, and recommendations for future research are suggested.
Chapter 2
Subpixel Classification Methods
2.1 The Pixel
A digital image consists of a two-dimensional array of individual picture elements
called pixels. Each pixel represents an area on the Earth's surface and has an
intensity value, represented by a digital number. This intensity value is a measure
of the energy reflected (or emitted) from the ground, and is normally an average
over the whole ground area covered by the pixel.
The resolution of an image is constrained by the pixel size, and this pixel size is
determined by the Instantaneous Field of View (IFOV) of the sensor's optical system.
The IFOV is a measure of the ground area viewed by a single detector element at a
given instant in time. Therefore, more than one land cover type or feature may be
included in an IFOV, resulting in mixed pixels. The number of mixed pixels in an
image is a function of the IFOV of the instrument and the spatial complexity of the
phenomenon being imaged [13].
A mixed pixel in an image can be the consequence of any of the following situations
on the ground [7]:
• Boundaries between two or more mapping units (e.g. a field-woodland boundary),
• The intergrade between central concepts of the mappable phenomena (ecotone),
• Linear sub-pixel objects (e.g. a narrow road), or
• Small sub-pixel objects (e.g. a house or a tree).
These situations are shown in Figure 2.1.
Figure 2.1: Four cases of mixed pixels [7]
The presence of these mixed pixels is a nuisance when performing classification,
because in conventional classification procedures a pixel is considered an elementary
unit for the analysis. Each pixel is therefore assigned to a single feature or cover
type, even though this is not true for a mixed pixel. This introduces inaccuracy and
imprecision into the classification results.
The problem of mixed pixels is dealt with by subpixel classification. In Section 2.2
and its sub-sections, subpixel classification and general approaches to performing it
are described. In Section 2.3, the applicability of these models in different fields
of application is presented. In Section 2.4, the method that will be implemented in
this study for crop classification is explained.
2.2 Sub-pixel classification
A considerable amount of work is now available in the literature that rejects the
idea that a pixel can be assigned to a single cover type only [7].
These sub-pixel procedures attempt to extract the components of a pixel, recognizing
that more than one land cover type may exist within it. These components are
referred to as endmembers, as they represent the cases where 100 percent of the
sensor's IFOV is occupied by a single homogeneous cover type [13].
Subpixel classification is mainly performed by two methods: spectral mixture analysis
and fuzzy classification.
2.2.1 Spectral mixture analysis
The usual approach to spectral mixture analysis is the modelling of spectral
mixtures. Mixture modelling is the process of deriving mixed signals from pure
endmember spectra, while spectral unmixing aims at the reverse: deriving the
fractions of the pure endmembers from the mixed pixel [20].
Several models have been proposed over the years to unmix pixels and determine the
proportions of their components. The most notable are linear, probabilistic,
geometric (or geometric-optical) and stochastic-geometric models [11]. These models
comprise known and unknown parameters. The known parameters are always the observed
reflectance of the pixel and the pure spectra of the pixel components (or
endmembers). In most cases, the unknown parameters (which have to be determined by
proper use of the known parameters) are the areal proportions of the endmembers.
In the following sections, these mixture models are described briefly.
The Linear model
In linear mixing, it is assumed that the observed pixel reflectance is a linear
mixture of the pure spectra of all of its component endmembers (in that pixel), each
multiplied by its respective areal proportion, in each spectral band [18, 16, 11].
Mathematically, the observed reflectance ri for a pixel in band i is
ri = f1 ai,1 + f2 ai,2 + · · · + fc ai,c + ei, (2.1)
where ei is an error term, fj is the fraction of endmember j in the pixel, c is the
possible number of endmembers in the scene, and ai,j is the pure (or characteristic)
spectrum of endmember j in band i.
With the endmember index j = 1, · · · , c, Equation 2.1 can be written compactly as:
ri = ∑j=1,...,c fj ai,j + ei (2.2)
Hence, for a multispectral image of n bands (i = 1, · · · , n), there will be n
linear equations. In addition to these n equations, there is one more, called the
sum-to-unity constraint:
f1 + f2 + · · · + fc = 1 (2.3)
It states that the component proportions of each pixel should sum to 1 (provided
that none of the fractions is negative).
We thus have a system of linear equations, which can be solved in a number of ways.
The input to the model is the observed reflectance and the pure spectra of the
components in the pixel. Substituting these known parameters into the equations, the
areal proportions of the c endmembers are determined. For a unique solution, the
number of unknowns should be less than or equal to the number of equations. An image
of n bands gives n versions of Equation 2.2, i.e. n + 1 equations in all (including
Equation 2.3). This system is therefore capable of yielding a distinct solution for
c endmembers where c = n + 1. However, if c < n + 1, it is possible to calculate the
magnitude of the error term e using the principle of least squares.
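The structure of this equation system can be sketched numerically. The following is a minimal illustration with made-up endmember spectra (not the thesis data): the n band equations (Equation 2.2) are stacked with a heavily weighted sum-to-unity row (Equation 2.3), solved by unconstrained least squares, and the residual gives the per-pixel RMS error.

```python
import numpy as np

# Hypothetical pure spectra for c = 3 endmembers in n = 4 bands
# (columns = endmembers); the numbers are illustrative only.
A = np.array([
    [0.05, 0.30, 0.20],
    [0.08, 0.28, 0.22],
    [0.40, 0.35, 0.25],
    [0.30, 0.45, 0.28],
])

# Observed pixel reflectance: a 60/30/10 areal mixture of the three.
true_f = np.array([0.6, 0.3, 0.1])
r = A @ true_f

# Append the sum-to-unity constraint (Eq. 2.3) as one more equation,
# weighted heavily so the solver honours it closely.
w = 1e3
A_aug = np.vstack([A, w * np.ones(3)])
r_aug = np.append(r, w * 1.0)

# Least-squares solution of the n + 1 equations for the c fractions.
f, *_ = np.linalg.lstsq(A_aug, r_aug, rcond=None)

# Per-pixel RMS error of the fit (the basis of the RMS images in Ch. 5).
rms = float(np.sqrt(np.mean((r - A @ f) ** 2)))
print(np.round(f, 3), rms)
```

In practice the fractions would additionally be constrained to be non-negative; fully constrained solvers exist, but this unconstrained sketch already shows the structure of the equation system.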
It is pointed out in the literature, however, that it is not so straightforward: n
is not always the number of bands, but rather the intrinsic dimensionality of the
spectral data, which can be revealed by Principal Component Analysis (PCA). PCA is a
statistical procedure for transforming a set of correlated variables into a new set
of uncorrelated variables, by rotating the original axes to new orientations that
are orthogonal to each other, so that there is no correlation. When PCA is applied
to a multi-band image, it decorrelates the data by transforming the DN distributions
around sets of new axes [15]. In the case of Landsat TM data, for example, if the
fifth and sixth PCs contain nothing but noise, then the true dimensionality of the
data is 4 and not 6 [16].
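This idea can be sketched with simulated data (not the thesis imagery). Because the fractions of c endmembers sum to one, their mixtures span only c − 1 dimensions of variance, so a PCA of a 6-band image mixed from 3 endmembers finds 2 significant components, with the remaining eigenvalues sitting at the noise floor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 6-band image whose pixels are linear mixtures of 3
# endmember spectra plus a little sensor noise (illustrative values).
n_pixels, n_bands, n_end = 2000, 6, 3
E = rng.uniform(0.0, 0.6, size=(n_end, n_bands))  # endmember spectra
F = rng.dirichlet(np.ones(n_end), size=n_pixels)  # fractions, sum to 1
X = F @ E + rng.normal(0.0, 0.002, size=(n_pixels, n_bands))

# PCA via the eigenvalues of the band-to-band covariance matrix,
# sorted from largest to smallest.
eigvals = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]

# Count components clearly above the noise floor: the intrinsic
# dimensionality, which is c - 1 = 2 here, not the 6 bands.
intrinsic_dim = int(np.sum(eigvals > 10 * eigvals[-1]))
print(intrinsic_dim)
```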
An intuitive way to understand spectral mixing is through the geometry of feature
space. Mixed pixels are visualized as points in n-dimensional spectral space, where
n is the number of bands. In two dimensions, if only two endmembers mix, the mixed
pixels fall on a line, with the pure endmembers at the two ends of the mixing line.
If three endmembers mix, the mixed pixels fall inside a
triangle (Figure 2.2). Mixtures of endmembers ”fill in” between the endmembers
in the absence of noise and with non-negative proportions [1, 16].
All mixed spectra are "interior" to the pure endmembers, inside the simplex formed
by the endmember vertices. This "convex set" of mixed pixels can be used to
determine how many endmembers are present and to determine their spectra [1].
Figure 2.2: Geometric representation of mixing model [1]
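The two-endmember case can be checked directly. In this sketch (with hypothetical two-band spectra), every non-negative mixture whose fractions sum to one lands exactly on the segment joining the two endmembers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical endmember spectra in a two-band feature space.
e1 = np.array([0.10, 0.45])
e2 = np.array([0.40, 0.20])

# 100 mixed pixels with non-negative fractions f and 1 - f.
f = rng.uniform(0.0, 1.0, size=100)
mixed = np.outer(f, e1) + np.outer(1.0 - f, e2)

# Each mixed pixel minus e2 must be parallel to (e1 - e2): the 2-D
# cross product vanishes, i.e. all points lie on the mixing line.
d = e1 - e2
cross = (mixed - e2)[:, 0] * d[1] - (mixed - e2)[:, 1] * d[0]
on_line = bool(np.allclose(cross, 0.0))
print(on_line)
```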
The Probabilistic model
Probabilistic models are based on one of several probability techniques, such as
maximum likelihood [11]. A series of probabilistic models was developed by the
Environmental Research Institute of Michigan to achieve subpixel resolution in the
analysis of crop acreage [14]. A complex maximum likelihood algorithm was developed,
based upon weighted combinations of component class mean vectors and covariance
matrices. This method proved promising for calculating proportions where the
spectral separation was high.
An approximate maximum likelihood technique, an example of the probabilistic model,
was developed by [14]. This technique was applied to 4 bands of Landsat MSS data.
Pure pixels from two different homogeneous geological components were selected, and
their reflectance values in the 4 bands were transformed into a single variable for
the analysis using linear discriminant analysis. This single transformed value
represents an oblique projection of the sample onto the linear discriminant line.
This line, in the original four-dimensional feature space, joins the multivariate
means of the two classes of homogeneous pixels. Squared distances measured along the
discriminant function line are called Mahalanobis or M-distances. The M-distance is
calculated between the means of the two homogeneous components X and Y, and between
each mixed pixel m and the means of the two pure components X and Y respectively.
These M-distances are incorporated
in the following formula to calculate the proportion of each component in a mixed
pixel:
Py = 0.5 + 0.5 · [d(m, x) − d(m, y)] / d(x, y), (2.4)
Where:
Py =proportion of component Y in the mixed pixel;
d(x, y) =M-distance between means of homogeneous components X and Y ;
d(m,x) =M-distance between mixed pixel m and mean X ;
d(m, y) =M-distance between mixed pixel m and mean Y ;
Py = 0, if the result is negative; and
Py = 1, if the result is greater than 1.0.
A major drawback of this method is that the probabilistic model is limited to
determining the proportions of at most two endmembers in a mixed pixel.
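Equation 2.4 is easy to exercise on synthetic numbers. In the sketch below (illustrative band means and a shared covariance matrix, not values from [14]), a pixel placed 70% of the way from X to Y along the mixing line recovers a proportion of 0.7:

```python
import numpy as np

def proportion_y(m, mean_x, mean_y, cov):
    """Proportion of component Y in mixed pixel m (Eq. 2.4), using
    Mahalanobis (M-) distances and clipping the result to [0, 1]."""
    inv = np.linalg.inv(cov)

    def md(a, b):  # M-distance between two points
        diff = a - b
        return float(np.sqrt(diff @ inv @ diff))

    p = 0.5 + 0.5 * (md(m, mean_x) - md(m, mean_y)) / md(mean_x, mean_y)
    return min(max(p, 0.0), 1.0)

# Illustrative 4-band means for two pure components X and Y.
mean_x = np.array([0.10, 0.15, 0.40, 0.30])
mean_y = np.array([0.30, 0.28, 0.20, 0.45])
cov = 0.01 * np.eye(4)

# A mixed pixel lying 70% of the way from X to Y.
m = 0.3 * mean_x + 0.7 * mean_y
p_y = proportion_y(m, mean_x, mean_y, cov)
print(round(p_y, 2))  # 0.7
```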
The Geometric and Stochastic-Geometric models
These are two examples of mixture models that are more complex and need more input
than the linear and probabilistic models [11]. In geometric or geometric-optical
models (when applied to forest classification), the geometry of the tree crowns,
their distribution and the direction of solar illumination are taken into account in
order to evaluate the relative proportions of crown, shadow and background in
pixels.
The stochastic-geometric model is a special case of the geometric model in which the
scene geometric parameters are treated as random variables, in order to absorb the
random variabilities in their spatial structure.
2.2.2 Fuzzy classification
Another approach to subpixel classification is fuzzy classification. It helps in
dealing with uncertainty, vagueness and complexity using the concept of fuzzy sets.
The membership concept in fuzzy sets allows us to allocate one entity to more than
one class. Hence, in the case of image classification, we can assign a pixel to more
than
one attribute or class, and this assignment is described in terms of a membership
grade.
In conventional classification procedures, a pixel is assigned to one class only,
always with a membership grade of 1. This is why it is called 'hard' classification.
In fuzzy classification, membership grades range from 0 to 1: the more similar a
pixel is to an attribute class, the closer its membership grade for this class is to
1, and vice versa. The membership grades of an entity (or pixel) over the different
classes should always sum to 1.
The similarity is often defined in terms of the distance of the unknown pixel to a
class cluster. This can be the Euclidean distance (a diagonal distance where
attributes are scaled to have equal variance) or the Mahalanobis distance, when both
variance and covariance are considered for distance scaling [5].
In fuzzy classification, class boundaries in feature space overlap each other rather
than being 'hard'. This overlap of classes is governed by a parameter q, called the
fuzzy exponent, which can be set by the user.
Fuzzy classification can be performed in the same two ways as standard image
classification, that is, supervised and unsupervised. In unsupervised classification,
features are classified merely on the basis of their spectral characteristics, which
is generally achieved by some clustering technique. In supervised classification,
some prior knowledge is used to create a training set for the different classes, and
this training set is used to define and identify the spectral characteristics of the
different classes in the scene. Fuzzy classification can be divided similarly:
Fuzzy Clustering (K-means) Conceptually similar to K-means unsupervised
classification.
Fuzzy Supervised Similar to Maximum Likelihood classification, with a few
amendments.
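A minimal sketch of distance-based fuzzy membership grades (in the style of fuzzy c-means, with hypothetical two-band class centers; not the classifier used in this study): grades fall between 0 and 1, sum to 1 per pixel, and the fuzzy exponent q controls how sharply they decay with distance.

```python
import numpy as np

def fuzzy_memberships(pixel, centers, q=2.0):
    """Membership grades of one pixel in each class, based on Euclidean
    distance to the class centers. q > 1 is the fuzzy exponent: values
    near 1 give nearly 'hard' grades, larger values give more overlap."""
    d = np.linalg.norm(centers - pixel, axis=1)
    if np.any(d == 0.0):                 # pixel coincides with a center
        hit = (d == 0.0).astype(float)
        return hit / hit.sum()
    u = d ** (-2.0 / (q - 1.0))          # fuzzy c-means style weighting
    return u / u.sum()

# Hypothetical two-band class centers for three crops.
centers = np.array([[0.10, 0.40],   # crop A
                    [0.25, 0.30],   # crop B
                    [0.40, 0.15]])  # crop C
pixel = np.array([0.20, 0.32])

u = fuzzy_memberships(pixel, centers)
print(np.round(u, 3))   # grades sum to 1; crop B gets the largest grade
```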
2.3 A selection from different models
A comparative study among different types of mixture models (including fuzzy
classification) was done by [11]. All these models are alike in the sense that the
observed pixel reflectance is taken as a function of both the spectral signatures
and the areal proportions of its component endmembers. The difference, however, lies
in the way they include the scene and imaging characteristics in this function. In
linear models, the scene variability is accounted for by means of a random residual,
while in geometric-optical and stochastic-geometric models it is based on an
analysis of the scene geometry. In the probabilistic and fuzzy models, it is done
through some statistical method, for instance the maximum likelihood technique.
Table 2.1: Applicability of mixture models to different applications [11]

Estimation of...                                 Linear  Probabilistic  Geometric-  Stochastic-  Fuzzy
                                                                        Optical     Geometric
Vegetation versus bare ground proportions
  in a dense forest                              yes     yes            no          no           yes
Vegetation versus bare ground proportions
  in a sparse forest                             yes     yes            yes         yes          yes
Proportions of different plant communities       yes     yes            no          no           yes
Average tree height, size, and density           no      no             yes         no           no
Proportions of area coverage of different crops  yes     yes            no          no           yes
Proportions of different soil or rock types      yes     yes            no          no           yes
Proportions of different minerals                yes     yes            no          no           yes
Proportions of miscellaneous land covers         yes     yes            no          no           yes
Table 2.1 gives an overview of different spectral mixture models in terms of their applicability in different fields of application [11]. It is clear from this table that for our application, which is to identify different crop types and to calculate the area covered by each crop, the linear mixture model can be used.
Four types of mixtures of materials generally found in the real world are described by [6]:
Linear Mixture The materials in the field of view are optically separated, so there is no multiple scattering between components. The combined signal is simply the sum of each component's spectrum weighted by its fractional area. This is also called areal mixture.
Intimate Mixture An intimate mixture occurs when different materials are in intimate contact in a scattering surface, such as the mineral grains in a soil or rock. Depending on the optical properties of each component, the resulting signal is a highly non-linear combination of the end-member spectra.
Coatings Coatings occur when one material coats another. Each coating is a scattering/transmitting layer whose optical thickness varies with material properties and wavelength.
Molecular Mixtures Molecular mixtures occur at the molecular level, such as two liquids, or a liquid and a solid, mixed together. An example is water adsorbed onto a mineral. The close contact of the mixture components can cause band shifts in the adsorbate, such as water in plants.
Given these definitions of the different types of mixture, it seems safe to say that a signal from different crop types can be treated as a linear mixture. This section can therefore be concluded with the following points in support of linear unmixing for this application:
1. If there is no multiple scattering and photons interact with a single material
only, which is possible in large scale areal mixing, then it can be considered
as linear mixing [17].
2. The linear model is easier to implement and does not need many complicated input parameters, as geometric models do.
3. Also, this model is not restricted, as probabilistic models are, to unmixing a pixel into only two components.
4. Most classifiers assume a Gaussian probability distribution for the spectral signature of the training data, which in practice often exhibits a non-Gaussian distribution [19].
The method followed to implement the model is described in the following section.
2.4 One approach to perform Linear unmixing
A general description of the linear mixture model is given in Section 2.2.1, where it was noted that the resulting system of linear equations can be solved in different ways. The method implemented in this research is matrix inversion. Equation 2.2 can be expressed in vector-matrix notation as:
r = Af + e (2.5)
where the observed spectrum r (a vector) is the product of the matrix of pure endmember spectra A and the endmember fraction vector f, plus an error vector e.
If n is the number of bands and c the number of cover-type components (endmembers), then A is an n × c matrix, f is a c × 1 vector, f = (f1, f2, · · · , fc)^T, and r is an n × 1 vector, r = (r1, r2, · · · , rn)^T. This can be seen in Figure 2.3:
Figure 2.3: The linear model of spectral mixing [1]
Since A is generally not square, Equation 2.5 is solved in a least-squares sense by inverting the endmember matrix through its pseudo-inverse:

\hat{f} = (A^T A)^{-1} A^T r \quad (2.6)

Hence a simple vector-matrix multiplication between the (pseudo-)inverse of the pure-spectra matrix and an observed mixed spectrum gives an estimate of the fractions (proportions) of the endmembers [1].
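Equation 2.6 can be sketched in a few lines of numpy. This is an illustrative example, not the thesis's actual implementation; the endmember matrix and the mixed spectrum below are made-up values:

```python
import numpy as np

# Endmember matrix A (n bands x c endmembers) and an observed pixel
# spectrum r (n bands); the numbers are illustrative only.
A = np.array([[0.10, 0.40],
              [0.15, 0.45],
              [0.60, 0.30],
              [0.55, 0.25]])          # n = 4 bands, c = 2 endmembers
r = 0.7 * A[:, 0] + 0.3 * A[:, 1]    # a perfectly mixed spectrum

# A is generally not square, so the "inversion" is the least-squares
# pseudo-inverse of Equation 2.6: f_hat = (A^T A)^-1 A^T r
f_hat, residuals, rank, _ = np.linalg.lstsq(A, r, rcond=None)
# f_hat recovers the fractions [0.7, 0.3] for this noise-free mixture
```

Because `r` is an exact linear combination of the columns of `A`, the least-squares solution recovers the true fractions; with real image noise the residual `e` of Equation 2.5 would be non-zero.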
ei is the residual error in band i: the difference between the measured and the modelled spectrum in that band. In Equation 2.5, r is the measured spectrum and Af (denoted r′) is the modelled spectrum. In an image of m pixels and n bands, the residuals over all bands for each pixel can be averaged to give an RMS error, portrayed as an image, which is calculated from the difference between the measured pixel spectrum rjk and the modelled pixel spectrum r′jk:
RMS = \frac{1}{m} \sum_{k=1}^{m} \sqrt{ \frac{ \sum_{j=1}^{n} (r_{jk} - r'_{jk})^2 }{n} } \quad (2.7)
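A minimal numpy sketch of Equation 2.7, using synthetic spectra and fractions (the shapes and values are assumptions for illustration, not taken from the thesis data):

```python
import numpy as np

# m pixels, n bands, c endmembers -- shapes are illustrative only.
m, n, c = 100, 6, 3
rng = np.random.default_rng(0)
A = rng.random((n, c))                 # endmember spectra (columns)
F = rng.dirichlet(np.ones(c), size=m)  # fractions summing to one
R = F @ A.T + 0.01 * rng.standard_normal((m, n))  # "measured" spectra

R_model = F @ A.T                      # modelled spectra r' = A f
# Per-pixel RMS over the bands, then the scene-average RMS of Eq. 2.7
rms_per_pixel = np.sqrt(((R - R_model) ** 2).sum(axis=1) / n)
rms = rms_per_pixel.mean()
```

In practice `rms_per_pixel` is reshaped back to the image grid and displayed as the RMS image described above.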
Linear unmixing can be implemented in two ways: unconstrained and constrained. In unconstrained unmixing, the endmember fractions can take any value required to minimize the residual error. In constrained unmixing, the fraction values for each pixel are forced to sum to one and no negative fractions are allowed. In addition to these two conditions, there is a partially constrained variant, which keeps the unity constraint but allows negative values for the endmember fractions.
Constraining the data is somewhat artificial, as it merely applies a linear correction (a scaling of the data) after the unmixing.
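One common way to sketch fully constrained unmixing is non-negative least squares with the sum-to-one condition enforced by appending a heavily weighted row of ones; this is an illustrative trick, not necessarily the procedure used in the thesis:

```python
import numpy as np
from scipy.optimize import nnls

# Fully constrained unmixing: non-negative fractions that (approximately)
# sum to one. Endmember spectra and the mixed pixel are made-up values.
A = np.array([[0.10, 0.40, 0.80],
              [0.15, 0.45, 0.75],
              [0.60, 0.30, 0.20],
              [0.55, 0.25, 0.15]])
r = 0.5 * A[:, 0] + 0.2 * A[:, 1] + 0.3 * A[:, 2]

w = 1000.0                                    # weight on the unity row
A_aug = np.vstack([A, w * np.ones(A.shape[1])])
r_aug = np.append(r, w)
f_hat, _ = nnls(A_aug, r_aug)                 # non-negative solution
```

The larger the weight `w`, the more strictly the fractions are pushed toward summing to one; unconstrained unmixing simply drops both the augmentation and the non-negativity.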
2.4.1 Endmember selection
Pure features in a mixed pixel are referred to as the endmembers of that pixel. The selection of appropriate endmembers as input to a linear model is very important. It can be achieved in two ways [20]:
1. From a spectral (field or laboratory) library
2. From the purest pixels in the image
Endmembers obtained through the first option are generally referred to as 'known', while those from the second option are called 'derived'. Derived endmembers are preferred over known ones because they are collected under the same atmospheric conditions as the rest of the image. This removes the need to correct the image atmospherically and to calibrate the data to reflectance space. It also reduces the risk of overlooking a pure endmember present in the scene.
Principal Component Analysis (PCA)
Multiband images tend to be somewhat redundant wherever bands are adjacent to each other in the multispectral range. Such bands are said to be correlated (relatively small variations in DN for some features). A statistically based technique, Principal Components Analysis, decorrelates the data by transforming the DN distributions onto a set of new, mutually orthogonal axes. For Landsat TM data, over 90% of the spectral variability is mapped into the first two PCs [20]. A scatterplot of PC1 and PC2 can thus be used to select spectrally pure endmembers.
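The PCA step can be sketched as an eigen-decomposition of the band covariance matrix, projecting every pixel onto PC1 and PC2. The image cube below is synthetic and the extreme-pixel rule is a simplified assumption (in practice the scatterplot is inspected interactively):

```python
import numpy as np

# PCA on a (rows x cols x bands) image cube; candidate pure pixels are
# the extremes of the PC1/PC2 scatterplot. Data here are synthetic.
rng = np.random.default_rng(1)
cube = rng.random((50, 50, 6))
X = cube.reshape(-1, 6).astype(float)
X -= X.mean(axis=0)                    # centre each band

cov = np.cov(X, rowvar=False)          # 6 x 6 band covariance
eigval, eigvec = np.linalg.eigh(cov)   # eigenvalues in ascending order
order = np.argsort(eigval)[::-1]       # sort components by variance
pcs = X @ eigvec[:, order[:2]]         # project onto PC1 and PC2

# Pixels at the extremes of the scatterplot are endmember candidates
extreme = np.argmax(np.abs(pcs), axis=0)
```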
Pixel Purity Index (PPI)
PPI is an algorithm for finding spectrally pure pixels in an image [4]. It utilizes the concept of convex geometry described in Section 2.2.1: the purest pixels correspond to the materials whose spectra combine linearly to produce all the spectra in the image.
The PPI is computed by projecting the n-dimensional scatterplot of the data (n being the number of bands) onto 2-D spaces and marking the extreme pixels in each projection. The output is an image (the PPI image) in which the digital number (DN) of each pixel corresponds to the number of times that pixel was recorded as extreme. Bright pixels in this image thus show the spatial location of spectral endmembers [1].
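A minimal PPI sketch, under the usual "skewer" formulation: project all pixel spectra onto random unit vectors and count how often each pixel is an extreme. The data are synthetic and this simplified version marks only the single minimum and maximum of each projection, without the DN threshold used in practice:

```python
import numpy as np

# Pixel Purity Index sketch on synthetic spectra (1000 pixels, 6 bands)
rng = np.random.default_rng(2)
spectra = rng.random((1000, 6))
n_skewers = 500

ppi = np.zeros(len(spectra), dtype=int)
for _ in range(n_skewers):
    skewer = rng.standard_normal(6)
    skewer /= np.linalg.norm(skewer)   # random unit vector
    proj = spectra @ skewer            # 1-D projection of all pixels
    ppi[np.argmin(proj)] += 1          # mark both extremes
    ppi[np.argmax(proj)] += 1

# High-count ("bright") pixels are candidate endmembers
candidates = np.argsort(ppi)[::-1][:10]
```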
From RMS image
A systematic spatial pattern in the root mean squared (RMS) error image indicates that some pure endmembers in the scene are still missing from the model input. Additional endmembers are therefore selected on the basis of clearly visible spatial patterns in the RMS image, until it no longer shows any obvious systematic spatial pattern in the error distribution [20].
Spectral Angle Mapper (SAM)
Another approach, implemented by [18] to select the most important endmembers from the bulk of endmembers present in a scene, is the Spectral Angle Mapping (SAM) technique. It was used to extract the most important components constituting the bulk of the spectral variability throughout the data set.
The Spectral Angle Mapper (SAM) compares a reference spectrum (e.g. that of a pixel) to individual spectra or a spectral library [1]. The algorithm determines the similarity between two spectra by calculating the spectral angle between them, treating them as vectors in a space with dimensionality equal to the number of bands.
A simplified explanation of this can be given by considering a reference spectrum
and an unknown spectrum from two-band data. The two different materials will
be represented in the 2-D scatter plot by a point for each given illumination, or as a
line (vector) for all possible illuminations (Figure 2.4).
Figure 2.4: Two dimensional example of the Spectral Angle Mapper [1]
If r is the reference spectrum and t an unknown spectrum (or an endmember spectrum), the SAM algorithm generalizes this geometric interpretation to an n-dimensional space (where n is the number of bands) through the equation [1]:

\alpha = \cos^{-1} \left( \frac{ \sum_{i=1}^{n} t_i r_i }{ \left[ \sum_{i=1}^{n} t_i^2 \right]^{1/2} \left[ \sum_{i=1}^{n} r_i^2 \right]^{1/2} } \right) \quad (2.8)
Hence the output of SAM for each pixel is the angular distance between the two spectra, in radians (ranging from 0 to π/2). The output of SAM is a new set of data with as many bands as the number of unknown (endmember) spectra given as input to the algorithm. In addition to this multi-layer image (called a "rule" image), there will be a classified SAM image showing the best SAM match at each pixel.
Darker pixels in the rule images represent smaller spectral angles, and thus spectra that are more similar to the endmember spectrum, and vice versa. This is therefore a qualitative estimate of the presence or absence of an endmember in a pixel.
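Equation 2.8 reduces to a few lines of numpy. The sketch below also demonstrates why SAM is insensitive to overall brightness: scaling a spectrum (a change of illumination) leaves the angle unchanged. The spectra are illustrative values:

```python
import numpy as np

def spectral_angle(t, r):
    """Spectral angle (radians) between spectra t and r, per Eq. 2.8."""
    t = np.asarray(t, dtype=float)
    r = np.asarray(r, dtype=float)
    cos_a = (t @ r) / (np.linalg.norm(t) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# A spectrum scaled by a constant illumination factor has zero angle
# to the original -- SAM compares vector directions, not magnitudes.
ref = np.array([0.1, 0.2, 0.5, 0.4])
assert spectral_angle(ref, 2.5 * ref) < 1e-7
```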
2.4.2 Output of linear unmixing
Subpixel classification does not yield a single thematic map as output, as pixel-based classification methods do. Instead, it produces a series of abundance maps, each of the same extent as the original image. These abundance maps represent the areal proportions of each of the endmembers (cover types) present in each pixel of the input image. "The result is therefore a mass of quantitative and not just thematic data" [16].
Linear unmixing also provides a root mean squared (RMS) error image in the output, along with the abundance maps. The RMS image results from the difference between the observed pixel spectrum and the spectrum reconstructed from the calculated abundances. The advantage of the RMS image is that it reveals poorly classified pixels [18], and it is a straightforward tool for evaluation.
2.5 Summary
This chapter has briefly described the factors that cause mixed pixels in an image, and then reviewed different methods and models for classification in the presence of mixed pixels. From the available methods, linear spectral unmixing is found to be adequate for this study. Mixing can be considered a linear process if: (1) no interaction between materials occurs and each photon sees only one material, (2) the scale of mixing is very large compared to the size of the materials, and (3) multiple scattering does not occur [20]. Furthermore, it has also been described how to provide parameters to this model and how to evaluate the results.
Chapter 3
Study Area and Data
3.1 Study area
The study area, Maasbree, is situated in the south of The Netherlands, in the province of Limburg, at latitude 51°23′N and longitude 5°57′E.
Figure 3.1: Study Area [9]
The municipality touches the city of Venlo on its west, through which the river Maas passes. According to the Central Bureau for Statistics (CBS) of The Netherlands, the area of the municipality is 51.18 km², of which water surface covers 3 km², and it has a population of 12,850 [12].
The main cover types in Maasbree are forest, urban and agriculture. The main crop growing season is from April to October. The area is highly heterogeneous because of the wide variety of crops, vegetables in particular. According to the information collected in the ground truth survey (Synoptics bv), the crops include potato, sugar beet, maize, strawberry, grass-seed, leek, barley, wheat, different types of cabbage, asparagus, et cetera.
3.2 Data
Data used for this study can be categorized into the following four types:
• Earth Observation data
• Field reference data
• CBS reference data
• Crop calendar
These will be described in detail below.
3.2.1 Earth Observation data
One Landsat-5 and two Landsat-7 satellite images are used for this study, all acquired during the growing season of April–October 2000. The acquisition dates are given in Table 3.1.
Table 3.1: Satellite remote sensing data
Satellite Sensor Date
Landsat-5 TM 14/05/00
Landsat-7 ETM+ 01/08/00
Landsat-7 ETM+ 26/08/00
The sensor onboard Landsat-5 is the Thematic Mapper (TM), while Landsat-7 carries the Enhanced Thematic Mapper Plus (ETM+). Both sensors have nearly identical characteristics and spectral band widths; the design of ETM+ stresses data continuity with Landsat-4 and -5. Similar orbits and repeat patterns are used, as well as the same 185 km swath width for imaging [13]. ETM+ is a passive sensor that measures solar radiation reflected or emitted by the Earth's surface. The instrument has eight bands sensitive to different wavelengths of visible, infrared, and thermal radiation (Table 3.2).
Table 3.2: Landsat ETM+ spectral channels

Band | Spectral range (µm) | Nominal spectral location | Spatial resolution (m)
-----|---------------------|---------------------------|-----------------------
1    | 0.45 – 0.52         | Blue                      | 30
2    | 0.52 – 0.60         | Green                     | 30
3    | 0.63 – 0.69         | Red                       | 30
4    | 0.76 – 0.90         | Near infrared             | 30
5    | 1.55 – 1.75         | Mid infrared              | 30
6    | 10.4 – 12.5         | Thermal infrared          | 60
7    | 2.08 – 2.35         | Mid infrared              | 30
8    | 0.52 – 0.90         | Panchromatic              | 15
Hence the differences between ETM+ and the TM sensor are the improved spatial resolution (from 120 m to 60 m) of the thermal IR band (band 6) and the addition of a panchromatic band (band 8), neither of which is used in this study. It was therefore possible to use data from both sensors together.
3.2.2 Field reference data
The ground truth survey for the collection of the reference data used in this study took place on August 10, 2000. The data were provided by Synoptics bv, The Netherlands, in the form of a polygon shape file. The polygons represent different crop types, fallow land and bare soil.
3.2.3 Data from the Central Bureau for Statistics (CBS)
The spectral resolution of the Landsat images (6 bands used in this study) is not sufficient to classify, through spectral unmixing, an area in which more than 15 crop types (endmembers) occur. Crop statistics from CBS were available through Synoptics. These data were therefore requested in order to identify the main crops of the area and hence to short-list the endmembers. The database provides acreage per crop, per municipality, in ares (one are is equivalent to 0.01 hectare). A shape file to locate the municipalities on the images was also available.
These data could also be used later to assess the area calculation results.
3.2.4 Crop calendar
A crop calendar provides knowledge of the crop development stages for a particular area [13]. This information is useful to determine whether a particular crop is likely to be visible on a particular date. It is available for the following crops:
Maize Soil preparation in April, sowing from April 25th till May 10th, full cover
from beginning of July, harvest from September 20th till November 1st.
Sugar beet Soil preparation and sowing in first week of April, full cover from end
of June/beginning of July, harvest in October/November.
Potato Soil preparation and planting in first half of April, full cover from end of
June/beginning of July, harvest in the end of August.
3.2.5 Selection criteria
Considering the availability of field reference data and CBS crop acreage data, and the cloud-free areas in all three Landsat images, the municipality of Maasbree was selected. A further subset of the area was prepared by masking out the urban and forest cover types, leaving an image of solely agricultural fields.
3.3 Data preparation
Earth observation data need to undergo some correction procedures before they can be used for processing and analysis, owing to the distortions and degradations introduced during image acquisition. These correction procedures are generally termed preprocessing and can be divided into two categories: geometric corrections and radiometric corrections.
3.3.1 Geometric corrections
Geometric corrections are needed to compensate for geometric deformations caused by variations in the altitude and velocity of the sensor platform, variations in scan speed and in the sweep of the sensor's field of view, earth curvature, relief displacement, etc. [13]. Systematic errors are normally corrected at the receiving station. Random distortion must be corrected by the analyst through the selection of a sufficient number of ground control points (GCPs) with correct coordinates, usually from a map or GPS (Global Positioning System) points, which can be located in the satellite image. A transformation function is calculated to determine the distorted image positions corresponding to the correct map positions: an undistorted output grid is defined.
After this, each cell in the new grid is assigned a grey level according to the corresponding pixel in the original image; this process is called resampling. The cell sizes of the original image and the new grid are not the same, so the DN values cannot be assigned by simply overlaying the two; instead, an interpolation method is used. Commonly used resampling algorithms are:
Nearest neighbour The pixel value is assigned the DN value of the closest pixel in
the original image.
Bilinear interpolation A distance-weighted average is calculated over the four nearest pixels in the original image and this value is assigned to the new pixel.
Cubic convolution In this scheme, a polynomial approach based on the values of
16 surrounding pixels is applied.
The images used in this study were corrected geometrically and resampled by Synoptics bv; the resampling method was reported to be nearest neighbour. This method best preserves the spectral integrity of the image pixels: bilinear interpolation and cubic convolution perform spectral averaging over neighbouring pixels, yielding pixels with spectral properties that are less likely to give optimal results.
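Why nearest neighbour preserves spectral integrity can be seen in a small sketch: every output DN is an existing input DN, never an average. The source image and grid mapping below are toy assumptions:

```python
import numpy as np

# Nearest-neighbour resampling: each output cell takes the DN of the
# closest input pixel, so original DN values are preserved exactly.
src = np.arange(16).reshape(4, 4)      # a tiny 4x4 source image

# Output grid coordinates mapped back into the source image (here a
# simple 2x upsampling; a real case would use the transform function)
rows = np.linspace(0, 3, 8)
cols = np.linspace(0, 3, 8)
rr, cc = np.meshgrid(np.rint(rows).astype(int),
                     np.rint(cols).astype(int), indexing="ij")
out = src[rr, cc]                      # every output DN exists in src
```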
3.3.2 Radiometric corrections
Radiometric corrections are required because factors such as changes in scene illumination, atmospheric conditions, viewing geometry and instrument response characteristics influence the radiance measured by any given system over a given object [13]. Haze and sun elevation corrections were applied to the data used in this study.
Haze correction
Satellite images of the earth recorded by optical instruments may contain haze and cloud, but haze up to a certain optical thickness can be removed from multispectral images. This allows an improved evaluation of satellite imagery, especially in applications that use a multitemporal dataset. Haze in an image is due to scattered path radiance, and it reduces the image contrast. The routine generally applied, and used on the dataset in this study, is called dark pixel subtraction. The pixel in a given image with the lowest brightness value is assumed to have zero ground reflectance, so that its satellite-measured radiometric value represents the path-radiance contribution of the atmosphere that needs to be subtracted from the dataset. The method requires suitable dark pixels to exist somewhere in the image. In the images of May 14 and Aug 26, water bodies were used as dark pixels for the haze correction; in the Aug 01 image, cloud shadows were used.
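Dark pixel subtraction amounts to subtracting, per band, the minimum DN in the scene (or in a chosen dark target such as water). The image cube below is synthetic; a real implementation would select the dark target explicitly rather than take the global minimum:

```python
import numpy as np

# Dark pixel subtraction: the darkest pixel per band (e.g. deep water
# or cloud shadow) is assumed to have zero surface reflectance, so its
# DN is taken as the additive path-radiance (haze) offset of that band.
rng = np.random.default_rng(3)
image = rng.integers(20, 255, size=(100, 100, 6))  # synthetic DN cube

dark = image.reshape(-1, image.shape[2]).min(axis=0)  # per-band minimum
corrected = image - dark                              # haze-corrected DNs
```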
Sun elevation correction
This correction is necessary in applications in which images taken at different times are used [13]. The sun elevation correction accounts for the seasonal position of the sun relative to the earth. Image data acquired under different solar illumination angles are normalized by calculating the pixel brightness values that would be observed with the sun at the zenith on each sensing date. The correction is applied by dividing each pixel value in a scene by the sine of the solar elevation angle for the particular time and location of imaging:
DN' = \frac{DN}{\sin \delta} \quad (3.1)

where DN′ is the DN value in the corrected image and δ is the sun elevation angle (this information is supplied along with the images). The correction was applied to the haze-corrected images.
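Equation 3.1 is a one-line operation per band; a small sketch with an illustrative elevation angle:

```python
import numpy as np

def sun_elevation_correct(dn, elevation_deg):
    """Normalize DN values to a sun-at-zenith geometry (Equation 3.1)."""
    return dn / np.sin(np.radians(elevation_deg))

# With the sun at the zenith (90 degrees) nothing changes; at 30 degrees
# elevation the DN values are doubled, since sin(30 deg) = 0.5.
assert np.allclose(sun_elevation_correct(np.array([100.0]), 90.0), 100.0)
```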
3.3.3 Refinement of the reference data
A pure spectrum representing an endmember (a crop type in this study) is an important input to the mixture model. In this study, the endmember spectra are collected from the scene by overlaying the field reference data on the images. The endmember spectrum for a crop is thus the mean spectrum of all the pixel spectra covered by the representative reference polygons of that crop.
The collection of pure spectra is treated as an iterative process, repeated until a proper output is obtained from the mixture model. (A proper output means that the fraction values for all endmembers are non-negative, that their sum for a single pixel does not exceed unity, and that the RMS image does not show high spatial variation.) When the mixture model was first run, the results obtained were not satisfactory. A likely reason is the pixels at field boundaries, which are generally mixed because of reflections from different crop types in adjacent fields. It was therefore decided to refine the reference polygons to achieve purer endmember spectra: the boundary pixels of all polygons were excluded, and any individual pixel with an odd appearance was excluded as well.
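The boundary-pixel exclusion can be sketched as a one-pixel morphological erosion of each rasterized reference polygon before averaging; this is an illustrative reconstruction with synthetic data, not the exact procedure or tool used in the thesis:

```python
import numpy as np
from scipy.ndimage import binary_erosion

# Mean endmember spectrum from a reference polygon, with boundary
# pixels eroded away to reduce mixed field-edge spectra.
rng = np.random.default_rng(4)
cube = rng.random((40, 40, 6))         # synthetic 6-band image
mask = np.zeros((40, 40), dtype=bool)
mask[10:20, 10:20] = True              # a rasterized reference polygon

inner = binary_erosion(mask)           # drop the one-pixel boundary ring
endmember = cube[inner].mean(axis=0)   # mean spectrum of interior pixels
```

Odd individual pixels could additionally be rejected, for example by thresholding their spectral angle to the polygon mean.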
Chapter 4
Temporal Analysis
4.1 Available Methods
Crop type classification in agricultural areas using remote sensing data has been employed for more than two decades, like many other earth resource applications. However, in an environment like agriculture, which is highly variable in time and space, identifying features from their spectral properties alone is complicated. The use of multitemporal data can be a solution, and a way to take advantage of the spectral differences over time.
Multitemporal data can be exploited in different ways to improve crop classification. A simple way is to merge two (or more) images from different dates during the growing season into a product for visual interpretation. An image acquired at the beginning of the season can show fields with bare soil, meaning a season crop is being sown, while the presence of a mature crop would mean that a previous crop has not yet been harvested. A crop calendar, which provides knowledge of the crop development stages in an area, can help determine the presence of a particular crop on a certain date.
For automated classification, the multidate images can be combined into a single product before classification is performed. Alternatively, principal component analysis can be used to reduce the dimensionality of the combined dataset prior to classification [13]. For example, to merge two Landsat TM or ETM+ images, one can take the first three principal components computed from each individual image and merge them to create a 6-band dataset for classification.
Another way of dealing with multitemporal data for crop classification is the multitemporal profile approach, in which classification is based on physical modelling of the time behaviour of each crop's spectral response pattern. The temporal pattern of the spectral data represents the phenological development of a crop, that is, its progress from seedling emergence to maturity [8]. Hence, by relating the observed temporal-spectral pattern to the expected phenological development pattern associated with different crops, a crop identity or label can be assigned to the field. The time behaviour of the greenness of annual crops has been found to be sigmoidal (described in more detail in [13]). This profile can be related to vegetation indices and biophysical crop parameters, briefly reviewed in the following section.
4.1.1 Vegetation Indices
A vegetation index (VI) is an indicator sensitive to chlorophyll activity and to the density of vegetation cover [2]. A VI is formulated to contrast the reflectance in a visible band with the near infrared (NIR) reflectance. A vegetated surface therefore yields high values, because of its high NIR and low visible reflectance; rock and bare soil give values near 0, due to similar reflectance in the two bands; and clouds, water and snow have larger visible than NIR reflectance, so these features yield negative index values. The simplest VI is the Simple Ratio (SR = NIR/R) and the most common is the Normalized Difference Vegetation Index (NDVI = (NIR − R)/(NIR + R)). A list of intrinsic, soil-adjusted and atmospherically corrected indices can be found in [3].
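The NDVI formula can be sketched directly; the reflectance values below are illustrative, not measured data:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Dense vegetation: high NIR, low red reflectance -> index close to +1.
# Bare soil with similar reflectance in both bands -> index near 0.
veg  = ndvi(0.50, 0.05)   # approximately 0.818
soil = ndvi(0.30, 0.28)   # approximately 0.034
```

Applied band-wise to the Landsat images (band 4 as NIR, band 3 as red), this yields the NDVI images used in the next section.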
The temporal significance of a VI is that it varies in parallel with the biomass of the vegetation cover: it grows and recedes in phase with the growth cycle of the plant. For annual crops, the evolution of the vegetation index over the course of a season can be divided into three key periods: the establishment of the vegetation (ascending phase), flowering and fruit formation (plateau), and fruit maturation, senescence and harvest (descending phase) [2]. This temporal VI curve differs between crops, depending on each crop's phenological cycle, and is hence a tool for crop classification.
There are also biophysical crop parameters that play a major role in the description of vegetation development, such as fractional vegetation cover (νc) and Leaf Area Index (LAI) [3]. A few studies cited by [3] attempt to relate νc and LAI to VIs: a high correlation of νc with SR (r² = 0.90) and with NDVI (r² = 0.79), and a linear relationship between LAI and SAVI (Soil Adjusted Vegetation Index), have been reported.
4.2 Temporal-spectral unmixing profile
In this study, the intention is to observe the multitemporal behaviour of the fraction values derived through spectral unmixing. Vegetation indices and crop parameters (some of them mentioned in Section 4.1.1), when plotted against time, have a well-recognized relationship with the crop phenological cycle. Similarly, this study examines the general behaviour of the temporal profile of an endmember's fraction values, obtained by applying spectral unmixing to multitemporal Landsat images. The multitemporal profiles acquired for different crops will be compared, expecting a relationship with each crop's phenological cycle.
This will be achieved by applying spectral unmixing to the three Landsat images acquired during the 2000 growing season; these experiments are described in Chapter 5. In this chapter, a small experiment is performed to observe the variance shown by the data over time, and hence the significance of temporal analysis.
4.2.1 Temporal-spectral profiles
Spectra for eight different crop types in the study area are plotted for the three available images. These are the mean spectra of the various fields of each crop, obtained from the field reference data (Figure 4.1).
On May 14, the endmember spectra of almost all the crops show a similar pattern (wheat, barley and grass being the exceptions), while on the other two dates they show more individuality. It can also be seen that the leek and barley curves are not as similar to each other on the other two dates as they are on Aug 01. On the other hand, the wheat and barley curves are very close to each other on Aug 26, especially in the infrared bands, while behaving quite differently on the other two dates.
4.2.2 Temporal-NDVI profiles
NDVI images were prepared for the three Landsat scenes. The mean NDVI was then extracted for 10 fields of each of maize, sugar beet, potato and wheat. These mean NDVI values are plotted for the three dates and shown in Figure 4.2
Figure 4.1: Mean field spectra for eight endmembers for three dates (a) May14 (b) Aug01 (c) Aug26
Figure 4.2: Mean NDVI on three dates at field level for (a) maize (b) wheat
Figure 4.3: Temporal-NDVI profiles for (a) potato (b) wheat (c) maize (d) sugarbeet
for maize and wheat. The NDVI for maize is quite stable from field to field, while wheat shows a high spatial variation, especially on Aug 01 and Aug 26. This may be due to multiple sowing dates, multiple crop varieties, or a varying distribution of plants in the different fields.
The same information is plotted differently in Figure 4.3. The multitemporal NDVI profiles of the four crops exhibit patterns distinct from each other. A very obvious development (descending and ascending) of the vegetation index can be seen in the symmetrical and uniform curves obtained from the wheat fields. The profiles for maize and sugar beet are also quite stable in pattern. For the potato fields, however, a very high variance is evident, as if they were not of the same crop. It can be said that the classification of potato fields is not as reliable as that of wheat, maize and sugar beet.
4.3 Summary
In this chapter, the significance of multitemporal data for crop classification has been described and the practised approaches mentioned. The literature review generally shows that an accurate crop classification is almost impossible from data acquired on a single date. A small experiment on the multitemporal data then demonstrated the variance shown by the data over time, and hence the significance of temporal analysis.
Chapter 5
Results and Discussion
Linear spectral unmixing is applied to the multitemporal Landsat images. The different steps and approaches for endmember selection, the collection of endmember spectra from the scene, and the application of linear unmixing are described in this chapter. The results of the processing are described and discussed.
5.1 Endmember Selection
The maximum number of materials that can be unmixed through linear unmixing is n − 1, where n is the number of bands in the input image; this yields a unique solution that minimizes the error of the model. The selection of pure pixels representing these materials is the first step of unmixing. Following the approaches suggested in Section 2.4.1, the Pixel Purity Index (PPI) and Principal Component Analysis (PCA) were attempted first. These are described below.
5.1.1 Pixel Purity Index (PPI)
The Pixel Purity Index (PPI) function was applied to locate the most spectrally pure, or extreme, pixels in the scene. It was applied to the Landsat image of Aug 01, as this image is closest to the field survey date (August 10). A threshold of 2 was entered, meaning that pixels within two digital numbers (DN) of an extreme pixel are also marked as extreme. The PPI procedure was tried with different numbers of iterations: a series of 5,000 iterations marked 550 pure pixels in the scene, while 10,000 iterations could not go beyond 575 pixels. A plot of iterations versus the number of pure pixels showed that the curve stabilizes after reaching about 500 extreme pixels.
The second step was to relate these pure pixels to an endmember (a crop or bare soil) in order to give them an identity. For this, the field reference polygons were overlaid on the PPI image, as shown in Figure 5.1. The pure pixels selected by PPI did not correspond with the field data, except for a few in the cabbage polygon. It was therefore not possible to use the output of PPI.
Figure 5.1: PPI image showing the purest pixels (in white) with an overlay of crop reference polygons.
5.1.2 Principal Component Analysis (PCA)
In a second attempt to locate endmembers, a PCA was calculated for the Aug01
image. Applying PCA to the stack of multidate layers was avoided, as it increases
the probability of mixing spectral and temporal information. The eigenvalue plot
showed much more noise and less information in the last four PCs compared to
PC1 and PC2 (Appendix A). Pure pixels were therefore marked from a scatterplot
of PC1 and PC2. The same problem was faced as with PPI: these pixels do not
relate to the endmembers in the reference data.
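For reference, this endmember search can be sketched as an eigendecomposition of the band covariance matrix, with candidate pure pixels sitting at the extremes of the PC1-PC2 scatterplot. A generic sketch on synthetic data, not the software used in the study:

```python
import numpy as np

def principal_components(image, n_components=2):
    """PCA on a (rows, cols, bands) image: eigendecomposition of the band
    covariance matrix; pixels are projected onto the top components."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    pixels -= pixels.mean(axis=0)                    # mean-centre each band
    cov = np.cov(pixels, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending order
    order = np.argsort(eigvals)[::-1]                # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    scores = pixels @ eigvecs[:, :n_components]
    return scores, eigvals

# Synthetic 50x50, 6-band image
image = np.random.default_rng(0).uniform(0, 255, size=(50, 50, 6))
scores, eigvals = principal_components(image)
# Candidate pure pixels sit at the corners of the PC1-PC2 scatterplot
corner = np.argmax(np.abs(scores).sum(axis=1))
```

The eigenvalue spectrum plays the role of the eigenvalue plot in Appendix A: small trailing eigenvalues indicate PCs dominated by noise.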
These two approaches did not prove successful in this research. Instead, field
reference data was used to select pure pixels from the scene, and crop statistics
from CBS together with the Spectral Angle Mapping (SAM) technique were used to
reduce the number of endmembers to input into linear spectral unmixing.
5.1.3 Crop statistics from CBS
The CBS data and SAM were used in combination to short-list the endmembers
from 15 to 3 or 4, the number that can be unmixed using a 6-band image. Soil was
introduced as one endmember, as the presence of soil in the background of
vegetation is a main reason for mixed reflection from vegetation.
According to the statistics from CBS, the following crops have comparatively more
acreage in the municipality of Maasbree than the rest of the crops:
1. Potato
2. Maize
3. Sugar beet
4. Leek/Asperges
5. Strawberry
In the outcome of SAM, these crops were given more weight.
5.1.4 Spectral Angle Mapping (SAM)
SAM is applied to the Landsat ETM+ image of Aug01. Spectra of 14 endmembers
are obtained from the field survey data. This reference data provides several
polygons for each endmember; these polygons are converted to Regions of Interest
(ROIs). A mean spectrum from the respective ROI is then obtained for each
endmember and entered into the SAM classifier. Eight of these endmember spectra
can be seen in Figure 4.1. This is the input for the SAM classifier.
One output of SAM is a classified image: a single band with coded (nominal)
values for all classes, where a code shows the best match for a pixel (for example,
potato is coded 10). A second output of SAM is a rule image: an image with 14
layers, one for each endmember, showing the angular distance in radians between
each pixel spectrum and that endmember's reference spectrum. Smaller values thus
mean more similarity and appear as darker pixels in the image, and vice versa. In
Figure 5.2, four layers from the rule image are shown. These are selected arbitrarily
to show visually the difference between endmember images that possess different
spectral similarities to the input image (Aug01).
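The spectral angle underlying SAM is simply the angle between a pixel's spectrum and a reference spectrum, treated as vectors in band space. A minimal sketch (the endmember spectra below are invented numbers, not the field spectra of this study):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle in radians between a pixel spectrum and a reference
    spectrum; smaller angles mean higher spectral similarity."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixels, endmembers, names):
    """Best-match class name per pixel, plus the 'rule' angles per endmember."""
    angles = np.array([[spectral_angle(p, e) for e in endmembers] for p in pixels])
    return [names[i] for i in angles.argmin(axis=1)], angles

# Hypothetical mean spectra for two endmembers over 6 bands
endmembers = np.array([[40, 35, 30, 90, 60, 45],
                       [50, 45, 40, 70, 80, 65]], dtype=float)
names = ["maize", "potato"]
pixels = np.array([[42, 36, 31, 88, 61, 44]], dtype=float)
labels, rule = sam_classify(pixels, endmembers, names)
# labels[0] → "maize": the pixel spectrum is nearly parallel to the maize spectrum
```

Because the angle depends only on spectral shape, not magnitude, SAM is relatively insensitive to illumination differences; the `rule` array corresponds to one layer of the rule image per endmember.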
Figure 5.2: SAM rule images for (a) Asperges (b) Maize (c) Potato (d) Leek.
Looking at these four images, it is quite obvious that potato and maize contribute
more to the scene than asperges and leek. Similarly, the strawberry image shows
less spectral similarity, with brighter pixels throughout the scene. Hence, through
a visual analysis of these SAM images, potato, maize and sugar beet emerge as
stronger candidates than asperges, strawberry and leek in the list of endmembers
based on CBS statistics.
The classified SAM image is also evaluated. This image is created by labelling each
pixel with the endmember for which the SAM classifier has found the best match.
It is shown in Figure 5.3.
Figure 5.3: A classification result of the SAM (Aug01)
Looking at the SAM classified image, it is obvious that maize, potato, sugar beet
and grass contribute most to the scene. Some leek can be seen, but only in a
particular part (east) of the scene.
However, the selection of these endmembers is not finalized by this visual analysis
alone. The SAM classification results are quantified, confirming that maize, potato,
grass and sugarbeet are the four main crops in the scene:
Crop Pixel (%)
Maize 28.57
Potato 14.94
Grass 11.98
Sugar beet 10.19
Leek 5.34
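Percentages of this kind can be obtained by counting class codes in the classified SAM image. A toy sketch with hypothetical codes (0 = unclassified, 1 = maize, 2 = potato, 3 = grass):

```python
import numpy as np

# Hypothetical classified image of integer class codes
classified = np.array([[1, 1, 2, 0],
                       [1, 3, 2, 1],
                       [1, 1, 3, 0]])
counts = np.bincount(classified.ravel(), minlength=4)   # pixels per class code
percent = 100.0 * counts / classified.size
# percent[1] → share of pixels labelled maize (50.0 here)
```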
Hence grass has also emerged as an important endmember, in addition to potato,
maize and sugarbeet.
The accuracy of the SAM classification results is assessed with a conventional
confusion matrix, using the field reference data. A summary of the matrix is given
in Table 5.1. Though the overall accuracy of the SAM classification is only 44.73%,
some individual endmembers show reasonable accuracy, for example maize,
sugarbeet and grass. In particular, the producer's accuracy for grass and sugarbeet
is 84.62% and 75.80% respectively. Potato, however, shows very little reliability
(producer's accuracy = 20.45% and user's accuracy = 34.62%). This is due to the
multiple varieties of the crop: the reference data is not sufficient to represent these
different varieties of potato. For this reason, potato is dropped from the list of
endmembers.
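Producer's and user's accuracies in Table 5.1 follow the standard definitions: correctly classified pixels divided by the reference (row) total and by the classified (column) total respectively. A sketch with a hypothetical two-class matrix:

```python
import numpy as np

def accuracies(confusion):
    """Producer's, user's and overall accuracy from a confusion matrix with
    reference classes in rows and classified classes in columns."""
    diag = np.diag(confusion).astype(float)
    producers = diag / confusion.sum(axis=1)   # correct / reference total
    users = diag / confusion.sum(axis=0)       # correct / classified total
    overall = diag.sum() / confusion.sum()
    return producers, users, overall

# Hypothetical 2-class example (rows: reference, columns: classified)
cm = np.array([[11, 2],    # grass
               [ 4, 9]])   # sugarbeet
prod, user, overall = accuracies(cm)
# prod[0] = 11/13 ≈ 0.846, user[0] = 11/15 ≈ 0.733, overall = 20/26
```

The gap between the two measures is exactly what the potato row in Table 5.1 exhibits: a class can be assigned often (inflating its column total) while missing most of its reference pixels.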
Table 5.1: Confusion matrix for SAM classification

Class name     Reference     Classified    Correctly    Producer's   User's
               total pixels  total pixels  classified   Accuracy %   Accuracy %
Asperges           183            87           53          28.96        60.92
Soil                64            38            5           7.81        13.16
Barley              14            51            9          64.29        17.65
Cabagerd            37            16           13          35.14        81.25
Cabagest            28            16            4          14.29        25.00
Cabagewt            70            48           16          22.86        33.33
Fallow              16            46            0           0.00         0.00
Grass               13            27           11          84.62        40.74
Maize              376           345          239          63.56        69.28
Potato             132            78           27          20.45        34.62
Leek               169            89           54          31.95        60.67
Sugar beet         219           277          166          75.80        59.83
Tree crops          48            44           21          43.75        47.73
Wheat               37            50           11          29.73        22.00
Unclassified         -           194            -              -            -
Total             1406          1406          629              -            -

Overall Classification Accuracy = 44.7368%
Hence the final selection of endmembers for linear unmixing is maize, grass, sugar
beet and bare soil. Soil is selected, even though it shows very low accuracy, in
order to capture the mixed effect of soil with vegetation.
5.2 Linear spectral unmixing
Linear spectral unmixing with the above endmembers is applied to the complete
dataset of multitemporal Landsat images. Unconstrained unmixing is chosen. It is
preferred over constrained unmixing because there is no use in artificially
constraining the mixing; constrained unmixing would just apply a linear correction
after having unmixed the data.
The advantage of unconstrained unmixing is that we can assess the results. If there
are negative abundances for any of the endmembers, or the abundances of all
endmembers for the same pixel sum to a quantity greater than 1, then the unmixing
does not make physical sense. One reason for this may be an incorrect selection of
endmembers. It is therefore better to run the unmixing iteratively and examine the
abundance images and the RMS error image. Ideally, the RMS image should not
show high errors, and all abundance images should be non-negative and sum to
no more than one.
This iterative method is much more accurate than trying to artificially constrain
the mixing, as in this way it is possible to detect the errors of the model.
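The iterative check described above can be sketched as follows: solve the unconstrained least-squares problem per pixel, then test whether the abundances are non-negative and sum to at most one. The endmember spectra here are invented for illustration, not the spectra of the study.

```python
import numpy as np

def unmix(pixel, endmembers):
    """Unconstrained linear unmixing: least-squares abundances `a` solving
    pixel ≈ endmembers.T @ a, plus the per-pixel RMS residual."""
    E = endmembers.T                               # (bands, endmembers)
    a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    rms = np.sqrt(np.mean((pixel - E @ a) ** 2))
    return a, rms

def physically_valid(a, tol=1e-6):
    """Abundances make physical sense if non-negative and summing to <= 1."""
    return bool(np.all(a >= -tol) and a.sum() <= 1 + tol)

# Hypothetical endmember spectra (rows) over 6 bands
E = np.array([[30, 28, 25, 80, 55, 40],            # maize
              [35, 33, 30, 60, 70, 55],            # sugar beet
              [60, 65, 70, 75, 90, 85]], dtype=float)  # bare soil
mixed = 0.5 * E[0] + 0.3 * E[1] + 0.2 * E[2]       # a perfectly mixed pixel
a, rms = unmix(mixed, E)
# a ≈ [0.5, 0.3, 0.2], rms ≈ 0 → passes the physical-sense check
```

When the endmember set is wrong or incomplete, the same code typically yields negative abundances or a large residual, which is exactly the diagnostic the iterative procedure relies on.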
5.2.1 Spectra collection
The spectra for the four endmembers were collected from the scene by overlaying
the field reference data, as described in Section 5.1.4.
The four endmember spectra for Landsat ETM+, dated Aug26, are shown in
Figure 5.4. These representative spectra are used in the unmixing process to
classify the endmembers. They were obtained after refining the reference data by
excluding boundary pixels from the polygons.
5.2.2 Unmixing results
Linear spectral unmixing with the four endmembers maize, grass, sugar beet and
bare soil is applied to the multitemporal Landsat images several times, trying
different combinations of three and four endmembers. With four endmembers, the
fraction values overflow their physical range and the RMS image shows high
spatial variability. Finally, unmixing is applied with maize, sugar beet and bare
soil. Fraction images for these three endmembers for May14 and Aug01 are shown
in Figure 5.5. Brighter pixels indicate a higher abundance of an endmember, and
vice versa.
Figure 5.4: Endmember spectra for Maize, Grass, Sugarbeet and Soil
Theoretically, the fraction values should range from 0 to 1; for example, a fraction
value of 0.2 means 20% of that endmember in that particular pixel. Hence the sum
of all endmember fractions for a pixel should not exceed 1. This could not be
achieved here. Similarly, the RMS image, which shows at each pixel the difference
between the modelled and the measured pixel spectrum, is compared for the three
dates. The RMS image for Aug26 shows a strong spatial pattern, which means
some endmembers are missing in the model. The RMS images for May14 and
Aug01 show comparatively better results. Statistics for the three fraction images of
maize, sugarbeet and soil, and for the RMS image, are shown for the three dates
in Table 5.2.
Unmixing applied on a spectral subset (bands 5432) instead of all 6 VIR Landsat
bands shows a dramatic fall in the RMS values, and some improvement in the
endmember fractions as well. This is shown in Figure 5.6. It suggests that the extra
two bands are merely a source of redundant information in the data, and their
inclusion does not provide an extra benefit in the analysis.
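The comparison between the full band set and a subset can be reproduced in outline: unmix every pixel using only the selected bands and map the per-pixel RMS residual. A sketch on synthetic data; the band indices used for "bands 5432" are an assumption about the ordering of the six VIR bands.

```python
import numpy as np

def rms_image(cube, endmembers, bands):
    """Unmix every pixel of a (rows, cols, bands) cube using only the
    selected band indices; return the RMS residual image."""
    E = endmembers[:, bands].T                        # (n_bands, n_endmembers)
    pixels = cube[..., bands].reshape(-1, len(bands)).T
    a, *_ = np.linalg.lstsq(E, pixels, rcond=None)    # all pixels at once
    resid = pixels - E @ a
    return np.sqrt((resid ** 2).mean(axis=0)).reshape(cube.shape[:2])

# Synthetic data: a tiny 4x4, 6-band cube mixed from 3 endmembers plus noise
rng = np.random.default_rng(0)
endmembers = rng.uniform(20, 90, size=(3, 6))
fractions = rng.dirichlet(np.ones(3), size=(4, 4))
cube = fractions @ endmembers + rng.normal(0, 0.5, size=(4, 4, 6))

rms_all = rms_image(cube, endmembers, bands=[0, 1, 2, 3, 4, 5])
rms_sub = rms_image(cube, endmembers, bands=[4, 3, 2, 1])   # "bands 5432" (assumed indices)
```

Comparing the statistics of such RMS images for different band subsets mirrors the comparison shown in Figure 5.6.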
Three endmember fraction maps are combined and a composite map is prepared
Table 5.2: Unmixing results showing main statistics for 3 Landsat images

May14        Minimum   Maximum    Mean   Standard deviation
Maize         -18.04     88.39   -0.96   5.55
Sugarbeet     -85.68     18.52    1.26   5.43
Soil           -1.99      6.25    0.55   0.76
RMS             0.00     86.16    3.63   2.55

Aug01
Maize         -27.04      6.17    0.28   1.96
Sugarbeet      -5.09     21.58    0.20   1.41
Soil           -0.37      7.57    0.53   0.70
RMS             0.00     89.64    3.76   3.96

Aug26
Maize         -26.00      5.53    0.63   1.61
Sugarbeet      -4.82     22.74    0.00   1.30
Soil           -0.70      4.32    0.31   0.41
RMS             0.00    132.16    4.20   6.07
for each date. These classification maps are shown in Figure 5.8. The three fraction
images obtained for maize, sugarbeet and soil show a positive relationship with
the crop calendar information. The unmixing results for May14 show a large
proportion of soil and very little of the two crops, because sugarbeet is sown only
in the first week of April and maize in the last week of April (Section 3.2.4). As
expected, the proportions of these crops increase towards August. Hence applying
linear unmixing to multitemporal images provides an additional aid to identifying
a particular crop.
The three RMS images obtained for the three dates are shown in Figure 5.7.
5.3 Discussion
Due to negative fraction values in the unmixing results, these values could not be
used for further analysis, and the results are limited to visual interpretation.
The three composite maps shown in Figure 5.8 confirm that sugar beet and maize,
which are sown at the beginning of April and the beginning of May respectively,
show little presence in the May 14th image and appear as distinct fields in the
August images. It is also obvious from the fraction images in Figure 5.5 that the
fraction image for soil has turned very dark from May to August, confirming that
most of the soil has been covered by the crops in the August images.
Figure 5.5: Fraction images for May14 [(a) maize, (b) sugarbeet, (c) soil] and Aug01 [(d) maize, (e) sugarbeet, (f) soil]
Figure 5.6: Statistics of 3 fraction maps and RMS image for Aug01 calculated with (a) 6 VIR bands (b) bands 5432
These negative and non-unity values may be explained by the high spatial and
temporal variability of the phenomenon being mapped. Differences in the sowing
dates of a crop within the scene can cause spectral variability for the same crop.
Similarly, crop plants may not be uniformly distributed across the scene; they may
be dense in one field and sparse in another, which the sensor will also receive as
highly variable signals.
Also, the selected endmembers (crops) that are unmixed at the sub-pixel level can
hardly be regarded as statistically independent of each other; their spectra are
close to linearly scaled versions of each other. In a complex system of linear
equations such as that presented by the spectral unmixing technique, such linear
scaling results in imprecise estimates of the fractions and, in the extreme case, in
singular matrices, leading to errors in the estimated fractions.
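A simple diagnostic for this near linear dependence is the condition number of the endmember matrix: spectra that are close to scaled copies of each other produce an ill-conditioned, nearly singular least-squares system. A sketch with invented spectra:

```python
import numpy as np

# Two well-separated spectra versus a near linearly-scaled pair
e1 = np.array([30.0, 28, 25, 80, 55, 40])
e2 = np.array([60.0, 65, 70, 75, 90, 85])
e3 = 1.1 * e1 + np.array([0.2, -0.1, 0.1, 0.0, 0.1, -0.2])  # almost a scaled copy of e1

well_posed = np.linalg.cond(np.column_stack([e1, e2]))
ill_posed = np.linalg.cond(np.column_stack([e1, e3]))
# ill_posed >> well_posed: near-collinear endmembers make the least-squares
# system ill-conditioned, so the estimated fractions become unstable
```

Checking the condition number of the endmember matrix before unmixing would flag exactly the situation the paragraph above describes.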
Figure 5.7: RMS images for (a) May14 (b) Aug01 (c) Aug26
Figure 5.8: Composite of 3 endmember fraction images for (a) May14 (b) Aug01 (c) Aug26
Chapter 6
Conclusion and Recommendations
6.1 Conclusions
The main objective of this study was crop classification at the sub-pixel level.
Temporal properties of the crops were to be used, in addition to their spectral
properties, to improve classification accuracy.
The assumption made in this study was that the spectral characteristics of many
crops may not be very distinctive from each other. This spectral similarity keeps
crop classification accuracy relatively low. It was presumed that the problem can
be overcome by utilizing the spectral variation each crop exhibits differently at
different instants in time. This assumption was tested (Chapter 4) and confirmed:
spectra from different crops relate differently to each other during the growing
season. For example, the temporal NDVI profiles for wheat and maize fields show
a very clear pattern; one can easily distinguish that these profiles are from two
different crop types due to their uniqueness.
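For reference, such a temporal NDVI profile amounts to the following computation (the reflectance values here are illustrative, not measurements from this study):

```python
import numpy as np

def ndvi(red, nir):
    """Normalised Difference Vegetation Index, per observation."""
    return (nir - red) / (nir + red)

# Hypothetical mean red/NIR reflectance of one field on the three dates
dates = ["May14", "Aug01", "Aug26"]
red = np.array([0.12, 0.05, 0.06])
nir = np.array([0.20, 0.45, 0.40])
profile = ndvi(red, nir)
# NDVI rising steeply and then easing off traces the crop's growth cycle
```

Plotting `profile` against `dates` for different fields gives the temporal NDVI curves whose distinct shapes allow wheat and maize to be separated.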
Linear spectral unmixing is applied to three images dated May 14, Aug 01 and
Aug 26 (2000). The study area is highly heterogeneous because of the high variety
of crops, but the maximum number of materials that can be unmixed through
linear unmixing is n − 1, where n is the number of bands in the input image. A
Pixel Purity Index (PPI) algorithm was used to find spectrally pure pixels in the
scene, but the pure pixels identified by PPI were not the desired endmembers.
Hence, the most important endmembers were selected on the basis of their spectral
similarity to the scene through the Spectral Angle Mapper (SAM). Maize, grass
and sugarbeet showed the most spectral similarity with the scene. Soil was added
as the fourth endmember. Maize, sugarbeet and grass showed satisfactory accuracy
through SAM classification.
The quantitative information from unmixing is not satisfactory. This is due to a
limitation of unmixing: it needs a well-formulated spectral mixture that completely
represents the scene. The inability to provide a complete set of characteristic
spectra may result in inaccurate fractions. In this study, because of the limited
number of Landsat bands (six per date), the unmixing classifier was forced to
unmix only four endmembers, which was not the case in reality. In addition, small
crop fields in the area (in an extreme case, a field equivalent to only 4 pixels) are
more likely to give mixed spectra.
The three fraction images obtained for maize, sugarbeet and soil show a positive
relationship with the crop calendar information. The unmixing results for May14
show a large proportion of soil and very little of the two crops, because sugarbeet
is sown only in the first week of April and maize in the last week of April
(Section 3.2.4). As expected, the proportions of these crops increase towards
August. Hence applying linear unmixing to multitemporal images provides an
additional aid to identifying a particular crop.
The extent to which the use of multitemporal data improves classification accuracy
is a function of the particular cover types involved and of both the number and
timing of the imagery dates used. Hence, in the case of crop classification, a crop
calendar can be used for planning the number and timing of image and ground
data acquisitions.
6.2 Recommendations
• Results in this study have shown that spectral-temporal profiles and fraction
values derived through unmixing have a positive relationship with the crop
growth cycle. Further work can be done to explore the behaviour of multi-
temporal fraction profiles for different crops. The expectation is that the multi-
temporal fraction profile of each endmember will show a relationship with
the respective crop's phenological cycle.
• When endmember spectra have limited uniqueness, the linear unmixing
classifier does not work properly: it will try to assign a proportion of each
endmember to every pixel across an image, which often does not reflect
reality. It is therefore recommended that a combination of pixel-based and
subpixel classification be tried, with subpixel classification applied to the
unclassified mixed pixels yielded by pixel-based classification.
Bibliography
[1] ENVI Tutorials 2000. ENVI Version 3.4, Environment for Visualising Images.
With Software, 2000.
[2] AgriQuest. Vegetation indices. http://www.agri-quest.com, 2002. Last visited
02–2002.
[3] Wim G. M. Bastiaanssen. Remote Sensing in Water Resources Management: The
State of the Art. International Water Management Institute, Colombo, first
edition, 1998.
[4] J. W. Boardmann, F. A. Kruse, and R. O. Green. Mapping target signatures via
partial unmixing of AVIRIS data. Proceedings of the Fifth JPL Airborne Earth
Science Workshop, JPL Publications 95-1, 1:23–26, 1995.
[5] Peter A. Burrough and Rachael A. McDonnell. Principles of Geographical
Information Systems, chapter Fuzzy Sets and Fuzzy Geographical Objects, pages
265–291. Oxford University Press, first edition, 1998.
[6] Roger N. Clark. Spectroscopy of rocks and minerals, and principles of
spectroscopy. http://speclab.cr.usgs.gov/PAPERS.refl-mrs/refl4.html, 1999.
Last revised 06–1999.
[7] P. Fisher. The pixel: a snare and a delusion. Int. J. Remote Sensing, 18(3):679–685,
1997.
[8] Karin Hall-Konyves. Remote Sensing of Cultivated Lands in the South of Sweden.
Doctoral dissertation, Department of Physical Geography, The Royal University
of Lund, Sweden, January 1988.
[9] Ralf Hartemink. World database. http://www.ngw.nl/indexgb.htm, 2002.
Accessed 02–2002.
[10] Robert L. Huguenin, Mark A. Karaska, Donald van Blaricom, and John R.
Jensen. Subpixel classification of bald cypress and tupelo gum trees in thematic
mapper imagery. Photogrammetric Engineering & Remote Sensing, 63(6):717–725,
June 1997.
[11] Charles Ichku and Arnon Karnieli. A review of mixture modelling techniques
for sub-pixel land cover estimation. Remote Sensing Reviews, 13:161–186, 1996.
[12] Paul Klein. The Netherlands municipalities. http://www.metatopos.org/,
2002. Last updated 01–2002.
[13] Thomas M. Lillesand and Ralph W. Kiefer. Remote Sensing and Image
Interpretation. John Wiley & Sons, New York, Chichester, Singapore, fourth
edition, 2000.
[14] Stuart E. Marsh, Paul Switzer, and Ronald J. P. Lyon. Resolving the percentage
of component terrains within single resolution elements. Photogrammetric Eng.
and Remote Sensing, 46(8):1079–1086, Aug. 1980.
[15] NASA. Remote sensing tutorial. http://rst.gsfc.nasa.gov, 2002. Accessed
01–2002.
[16] J. J. Settle and N. A. Drake. Linear mixing and the estimation of ground cover
proportions. Int. J. Remote Sensing, 14(6):1159–1177, 1993.
[17] R. B. Singer and T. B. McCord. Mars: Large scale mixing of bright and dark
surface materials and implications for analysis of spectral reflectance. In 10th
Lunar and Planetary Science Conference, pages 1,835–1,848, Houston, U.S.A.,
March 19–23, 1979.
[18] Freek van der Meer. Spectral unmixing of Landsat Thematic Mapper data. Int.
J. Remote Sensing, 16(16):3189–3194, 1995.
[19] Freek van der Meer. Spectral unmixing: A new tool in geological image
processing. In SIRDC Conference on the Applications of Remotely Sensed Data and
GIS in Environmental and Natural Resources Assessment in Africa, Harare,
Zimbabwe, pages 131–134, March 15–22, 1996.
[20] Freek van der Meer. Spatial Statistics for Remote Sensing, chapter Image
Classification through Spectral Unmixing, pages 185–193. Kluwer Academic
Publishers, first edition, 1999.
Appendix A
Endmember selection through
PCA
Figure A.1: Eigenvalue Plot
Appendix B
Statistics from linear spectral
unmixing
Table B.1: Pixel fractions constituted by Maize
DN Pixels Total Pixel Accumulated
(Pixel Fraction) # pixels % %
-27.0473 1 1 0.0026 0.0026
-26.2656 1 2 0.0026 0.0052
-24.4416 1 3 0.0026 0.0077
-24.0507 1 4 0.0026 0.0103
-23.3993 1 5 0.0026 0.0129
-22.4873 1 6 0.0026 0.0155
-21.8358 1 7 0.0026 0.0180
-21.4450 1 8 0.0026 0.0206
-21.3147 1 9 0.0026 0.0232
-21.0541 2 11 0.0052 0.0283
-20.4027 1 12 0.0026 0.0309
-20.1421 2 14 0.0052 0.0361
-20.0118 2 16 0.0052 0.0412
-19.8815 1 17 0.0026 0.0438
-19.7512 2 19 0.0052 0.0490
-19.6210 2 21 0.0052 0.0541
-19.4907 1 22 0.0026 0.0567
-19.3604 1 23 0.0026 0.0593
-19.2301 1 24 0.0026 0.0618
-19.0998 2 26 0.0052 0.0670
-18.9695 1 27 0.0026 0.0696
-18.8392 2 29 0.0052 0.0747
-18.7089 1 30 0.0026 0.0773
-18.5787 3 33 0.0077 0.0850
-18.4484 1 34 0.0026 0.0876
-18.3181 3 37 0.0077 0.0954
-18.1878 2 39 0.0052 0.1005
-18.0575 1 40 0.0026 0.1031
-17.7969 3 43 0.0077 0.1108
-17.6667 4 47 0.0103 0.1211
-17.5364 2 49 0.0052 0.1263
-17.4061 5 54 0.0129 0.1392
-17.1455 1 55 0.0026 0.1417
-16.6244 1 56 0.0026 0.1443
-16.4941 4 60 0.0103 0.1546
-16.3638 2 62 0.0052 0.1598
-16.1032 2 64 0.0052 0.1649
-15.9729 2 66 0.0052 0.1701
-15.8426 2 68 0.0052 0.1752
-15.7123 1 69 0.0026 0.1778
-15.5821 1 70 0.0026 0.1804
-15.3215 1 71 0.0026 0.1830
-15.1912 1 72 0.0026 0.1855
-15.0609 1 73 0.0026 0.1881
-14.9306 1 74 0.0026 0.1907
-14.8003 1 75 0.0026 0.1933
-14.5398 1 76 0.0026 0.1959
-14.4095 1 77 0.0026 0.1984
-14.2792 1 78 0.0026 0.2010
-14.0186 1 79 0.0026 0.2036
-13.8883 1 80 0.0026 0.2062
-13.7580 2 82 0.0052 0.2113
-13.6278 3 85 0.0077 0.2190
-13.4975 1 86 0.0026 0.2216
-13.2369 1 87 0.0026 0.2242
-13.1066 1 88 0.0026 0.2268
-12.9763 1 89 0.0026 0.2294
-12.8460 2 91 0.0052 0.2345
-12.7158 1 92 0.0026 0.2371
-12.5855 1 93 0.0026 0.2397
-12.3249 2 95 0.0052 0.2448
-12.1946 1 96 0.0026 0.2474
-12.0643 1 97 0.0026 0.2500
-11.9340 3 100 0.0077 0.2577
-11.8037 1 101 0.0026 0.2603
-11.6735 2 103 0.0052 0.2654
-11.2826 3 106 0.0077 0.2732
-11.1523 2 108 0.0052 0.2783
-11.0220 1 109 0.0026 0.2809
-10.7614 1 110 0.0026 0.2835
-10.6312 1 111 0.0026 0.2861
-10.5009 1 112 0.0026 0.2886
-10.3706 3 115 0.0077 0.2964
-10.2403 3 118 0.0077 0.3041
-10.1100 2 120 0.0052 0.3092
-9.9797 3 123 0.0077 0.3170
-9.8494 2 125 0.0052 0.3221
-9.7192 3 128 0.0077 0.3299
-9.5889 2 130 0.0052 0.3350
-9.4586 1 131 0.0026 0.3376
-9.3283 2 133 0.0052 0.3427
-9.1980 5 138 0.0129 0.3556
-9.0677 4 142 0.0103 0.3659
-8.9374 7 149 0.0180 0.3840
-8.6769 7 156 0.0180 0.4020
-8.5466 4 160 0.0103 0.4123
-8.4163 2 162 0.0052 0.4175
-8.2860 2 164 0.0052 0.4226
-8.1557 3 167 0.0077 0.4304
-8.0254 1 168 0.0026 0.4329
-7.8951 5 173 0.0129 0.4458
-7.7648 2 175 0.0052 0.4510
-7.6346 8 183 0.0206 0.4716
-7.5043 5 188 0.0129 0.4845
-7.3740 3 191 0.0077 0.4922
-7.2437 7 198 0.0180 0.5103
-7.1134 13 211 0.0335 0.5438
-6.9831 10 221 0.0258 0.5695
-6.8528 7 228 0.0180 0.5876
-6.7226 10 238 0.0258 0.6133
-6.5923 13 251 0.0335 0.6468
-6.4620 8 259 0.0206 0.6675
-6.3317 13 272 0.0335 0.7010
-6.2014 20 292 0.0515 0.7525
-6.0711 15 307 0.0387 0.7912
-5.9408 15 322 0.0387 0.8298
-5.8105 28 350 0.0722 0.9020
-5.6803 20 370 0.0515 0.9535
-5.5500 27 397 0.0696 1.0231
-5.4197 34 431 0.0876 1.1107
-5.2894 28 459 0.0722 1.1829
-5.1591 40 499 0.1031 1.2859
-5.0288 36 535 0.0928 1.3787
-4.8985 44 579 0.1134 1.4921
-4.7683 44 623 0.1134 1.6055
-4.6380 56 679 0.1443 1.7498
-4.5077 63 742 0.1624 1.9122
-4.3774 84 826 0.2165 2.1286
-4.2471 73 899 0.1881 2.3168
-4.1168 88 987 0.2268 2.5436
-3.9865 118 1105 0.3041 2.8476
-3.8562 91 1196 0.2345 3.0822
-3.7260 138 1334 0.3556 3.4378
-3.5957 134 1468 0.3453 3.7831
-3.4654 139 1607 0.3582 4.1413
-3.3351 178 1785 0.4587 4.6000
-3.2048 186 1971 0.4793 5.0794
-3.0745 230 2201 0.5927 5.6721
-2.9442 210 2411 0.5412 6.2133
-2.8139 243 2654 0.6262 6.8395
-2.6837 252 2906 0.6494 7.4889
-2.5534 271 3177 0.6984 8.1873
-2.4231 283 3460 0.7293 8.9166
-2.2928 300 3760 0.7731 9.6897
-2.1625 345 4105 0.8891 10.5788
-2.0322 378 4483 0.9741 11.5529
-1.9019 403 4886 1.0386 12.5915
-1.7717 386 5272 0.9947 13.5862
-1.6414 430 5702 1.1081 14.6944
-1.5111 415 6117 1.0695 15.7638
-1.3808 458 6575 1.1803 16.9441
-1.2505 465 7040 1.1983 18.1425
-1.1202 499 7539 1.2859 19.4284
-0.9899 525 8064 1.3530 20.7814
-0.8596 574 8638 1.4792 22.2606
-0.7294 598 9236 1.5411 23.8017
-0.5991 687 9923 1.7704 25.5721
-0.4688 763 10686 1.9663 27.5384
-0.3385 831 11517 2.1415 29.6799
-0.2082 943 12460 2.4302 32.1101
-0.0779 982 13442 2.5307 34.6408
0.0524 1158 14600 2.9842 37.6250
0.1827 1160 15760 2.9894 40.6144
0.3129 1281 17041 3.3012 43.9156
0.4432 1144 18185 2.9481 46.8637
0.5735 1250 19435 3.2213 50.0850
0.7038 1295 20730 3.3373 53.4223
0.8341 1410 22140 3.6336 57.0560
0.9644 1501 23641 3.8682 60.9241
1.0947 1635 25276 4.2135 65.1376
1.2249 1642 26918 4.2315 69.3691
1.3552 1615 28533 4.1619 73.5311
1.4855 1458 29991 3.7573 77.2884
1.6158 1505 31496 3.8785 81.1669
1.7461 1324 32820 3.4120 84.5789
1.8764 1138 33958 2.9327 87.5116
2.0067 973 34931 2.5075 90.0191
2.1370 878 35809 2.2627 92.2817
2.2672 798 36607 2.0565 94.3382
2.3975 610 37217 1.5720 95.9102
2.5278 449 37666 1.1571 97.0673
2.6581 326 37992 0.8401 97.9074
2.7884 232 38224 0.5979 98.5053
2.9187 177 38401 0.4561 98.9614
3.0490 117 38518 0.3015 99.2630
3.1792 86 38604 0.2216 99.4846
3.3095 63 38667 0.1624 99.6469
3.4398 39 38706 0.1005 99.7474
3.5701 18 38724 0.0464 99.7938
3.7004 17 38741 0.0438 99.8376
3.8307 18 38759 0.0464 99.8840
3.9610 10 38769 0.0258 99.9098
4.0913 9 38778 0.0232 99.9330
4.2215 11 38789 0.0283 99.9613
4.3518 3 38792 0.0077 99.9691
4.4821 3 38795 0.0077 99.9768
4.6124 1 38796 0.0026 99.9794
4.7427 1 38797 0.0026 99.9820
4.8730 2 38799 0.0052 99.9871
5.0033 1 38800 0.0026 99.9897
5.1336 1 38801 0.0026 99.9923
5.7850 1 38802 0.0026 99.9949
6.0456 1 38803 0.0026 99.9974
6.1758 1 38804 0.0026 100.0000