
Assessment of the Grey-Level Co-Occurrence Matrix for Land Cover Classification using Multi-spectral UAV Image

Thanh Tung Do 1, Tien Yin Chou 2

1 Master in Urban Planning and Spatial Information, Feng Chia University, 100 Wenhwa Rd., Situn Dist., Taichung 40724, Taiwan R.O.C., [email protected]
2 GIS Research Center, Feng Chia University, 100 Wenhwa Rd., Situn Dist., Taichung 40724, Taiwan R.O.C., [email protected]

ABSTRACT

Texture features based on the grey-level co-occurrence matrix method are extracted from a UAV near-infrared image using four second-order statistics, eight window sizes, and two quantization levels. The four UAV multi-spectral bands are combined with each textural band individually and with all four textural bands together. From these combinations, a supervised classification method based on the maximum-likelihood algorithm is used to classify the land cover into five classes. The classification accuracy is measured by kappa coefficients calculated from confusion matrices. The results show that adding texture features to the spectral image significantly improves the classification accuracy of each land cover type compared with the classification obtained from the spectral image alone.

2015 ICEO&SI and ICLEI Resilience Forum, June 28-30, Kaohsiung, Taiwan. PAPER No.

I. INTRODUCTION

The application of Unmanned Aerial Vehicles (UAVs) has increased considerably in recent years due to their greater availability and the miniaturization of sensors, GPS receivers, inertial measurement units, and other components [7]. The advantages of UAVs over manned aircraft systems are that UAVs can be used in high-risk situations and inaccessible areas without endangering human life, at low altitudes, and at flight profiles close to the objects where manned systems cannot be flown [1]. Furthermore, in cloudy and drizzly weather conditions, data acquisition with a UAV is still possible when the distance to the object permits flying below the clouds. Supplementary advantages are the real-time capability and fast data acquisition, with images, video, and data transmitted in real time to the ground station.

With very high spatial resolution (0.14 x 0.14 m) and multispectral bands (R-G-B-NIR), the level of detail present in a UAV image is considerably greater than in other multispectral satellite images. For visual interpretation, a finer spatial resolution permits better land cover discrimination. However, the increased amount of detail creates new problems for information extraction using automated classification techniques [3], because the finer spatial resolution increases the spectral-radiometric variation within land cover types.

There are two major approaches to tackling the problems related to this increased internal variance. The first applies a mathematical transformation to the original spectral data to remove the excess spectral information. The second treats the internal spectral variance of classes as valuable additional information for characterizing and identifying land covers.

Spectral, textural, and contextual features are the three fundamental pattern elements used in human interpretation of color photographs.
Spectral features describe the average tonal variation in various bands of the visible and/or infrared portion of the electromagnetic spectrum, whereas textural features contain information about the spatial distribution of tonal variations within a band. Contextual features contain information derived from blocks of image data surrounding the area being analyzed. When small areas from black-and-white images are processed independently by a machine, texture is the most important of the three [2].

Texture is an important characteristic for the analysis of many types of images. It represents the first level of spatial properties that can be extracted from an image, and can be defined as the relationships between grey levels in neighboring pixels that contribute to the overall appearance of the image. In statistical texture analysis, texture features are extracted from the statistical distribution of observed combinations of intensities at specified positions relative to each other in the image. According to the number of intensity points (pixels) in each combination, statistics are classified into first-order, second-order, and higher-order statistics. The Grey-Level Co-occurrence Matrix (GLCM) is one of the most popular methods for extracting second-order statistical texture features. Third- and higher-order textures consider the relationships among three or more pixels; they are theoretically possible but not commonly implemented because of computation time and interpretation difficulty.

The GLCM contains the relative frequencies with which two neighboring pixels occur in the image, one with grey level i and the other with grey level j. Several statistical measures, such as contrast, entropy, and angular second moment, can be estimated from the GLCM to describe specific textural features of the image [2].
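As a concrete illustration, a symmetric GLCM averaged over the four main angles, together with the second-order statistics named above, can be sketched in Python with NumPy. This is a minimal sketch for a small, pre-quantized single-band image, not the implementation used in the study; the function names are illustrative.

```python
import numpy as np

def glcm(image, levels, offsets=((0, 1), (1, 1), (1, 0), (1, -1))):
    """Symmetric GLCM averaged over the four main angles (0, 45, 90, 135 deg).

    `image` is assumed to be already quantized to integer grey levels in
    [0, levels).  Each offset is the distance-1 neighbour for one angle.
    """
    p = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = image.shape
    for dr, dc in offsets:
        for r in range(rows):
            for c in range(cols):
                r2, c2 = r + dr, c + dc
                if 0 <= r2 < rows and 0 <= c2 < cols:
                    # count the pair in both directions -> symmetric matrix
                    p[image[r, c], image[r2, c2]] += 1
                    p[image[r2, c2], image[r, c]] += 1
    return p / p.sum()

def haralick_stats(p):
    """Contrast, angular second moment, correlation, and entropy of a GLCM."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    px = p.sum(axis=1)                       # marginal distribution p_x
    mu = np.sum(np.arange(levels) * px)      # mu_x == mu_y for a symmetric GLCM
    sigma = np.sqrt(np.sum((np.arange(levels) - mu) ** 2 * px))
    return {
        "CON": np.sum((i - j) ** 2 * p),
        "ASM": np.sum(p ** 2),
        # assumes a non-constant window, so sigma > 0
        "COR": (np.sum(i * j * p) - mu * mu) / (sigma * sigma),
        "ENT": -np.sum(p[p > 0] * np.log(p[p > 0])),
    }
```

Because the matrix is built symmetrically and the angles are pooled, the resulting statistics are direction-independent, which matches the assumption made later for the study area.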
Each textural feature can be used to create a new texture image/band, which can then be combined with the original spectral bands for classification.

When classifying the regions of an image using the GLCM method, several factors must be considered: the spectral band, the quantization level of the image, the moving window size, the distance and angle for co-occurrence computation, and the statistics used as texture measures.

In this study, five land cover types are classified from the original multispectral UAV image combined with its textural bands, in order to evaluate the influence of GLCM-based texture features on classification accuracy. The major objectives of this study are: i) to evaluate the influence of the window size, the quantization level, and the statistics used as texture measures on classification accuracy; and ii) to measure the influence of the window size and the quantization level on the extracted texture features.

II. METHODOLOGY

The study site is an area located along the Zhuoshui River in Yunlin County, Taiwan (Fig. 1). The UAV image was acquired in July 2013. The site is a rural area in which most land cover types are related to vegetation and agricultural fields.

Fig. 1. UAV true-color image of the study site

Texture band extraction and band combinations

Based on the GLCM method, sixty-four texture bands (Fig. 2) were created from the original UAV near-infrared band at a spatial resolution of 0.14 x 0.14 m. This spectral band exhibits better contrast between land cover types than the visible spectral bands (R, G, B).

Fig. 2. Creation of texture bands and band combinations

Quantization levels of 16 and 32 were chosen for texture band creation. Eight window sizes, from 3 x 3 pixels to 41 x 41 pixels, were also chosen for testing. This selection covers a range of the spatial pattern dimensions of the land categories on the UAV image and permits assessing the influence of window size on classification accuracy.

During the computation of the co-occurrence matrix, the distance between pixels was kept constant at one. Based on the assumption that no land cover type exhibits a preferential texture directionality, the co-occurrence matrices over the four main angles (0°, 45°, 90°, and 135°) were averaged. Four second-order statistics were calculated from the co-occurrence matrix: the contrast (CON), the angular second moment (ASM), the correlation (COR), and the entropy (ENT) ((1) to (4)).

CON = \sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} (i - j)^2 \, p(i,j)    (1)

ASM = \sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} p(i,j)^2    (2)

COR = \frac{\sum_{i}\sum_{j} (i \cdot j)\, p(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y}    (3)

ENT = -\sum_{i=0}^{N_g-1} \sum_{j=0}^{N_g-1} p(i,j) \, \log p(i,j)    (4)

where p(i,j) is the (i,j)th entry in the normalized grey-level co-occurrence matrix; N_g is the number of distinct grey levels in the quantized image; \mu_x, \mu_y, \sigma_x, and \sigma_y are the means and standard deviations of the marginal distributions p_x and p_y; and p_x(i) is the ith entry of the marginal probability matrix obtained by summing the rows of p(i,j) [2]. The texture images/bands were normalized to a 256 grey-level scale using a linear transformation [6].

Classification accuracy assessment

The four UAV multispectral bands were combined with each texture image individually and with all four texture images together (Fig. 2). These combinations were classified using a supervised classification method based on the maximum-likelihood algorithm. The classifications were repeated for each window size and quantization level. A classification from the original UAV multispectral image alone was also performed, to assess the contribution of texture features to the discrimination of land cover types. The land cover classification scheme includes bare soil, dense vegetation, agriculture, grassland, and residential areas. Most of the classes follow the USGS scheme and emphasize the pattern and spatial variability of the image [8]. The sample areas (signatures collected from the image) were selected as training sites by on-screen digitizing. A total of 250 random samples (50 for each cover type) were chosen, using another RGB UAV image at a spatial resolution of 0.06 x 0.06 m as the reference image. These areas were systematically and proportionally selected throughout the whole image.

To measure the classification accuracy, the kappa coefficient was calculated from confusion matrices. This coefficient measures the agreement between the estimated land cover classification and the reference land cover, and determines whether the values contained in an error matrix represent a result significantly better than random [5]. The kappa coefficient is computed as:

\hat{K} = \frac{N \sum_{i=1}^{r} x_{ii} - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})}{N^2 - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})}    (5)

where N is the total number of sites in the matrix, r is the number of rows in the matrix, x_{ii} is the number of observations in row i and column i, x_{i+} is the total for row i, and x_{+i} is the total for column i [5]. To calculate the agreement between the classified and reference data for an individual class, the conditional kappa coefficient was calculated:

\hat{K}_i = \frac{N x_{ii} - x_{i+} x_{+i}}{N x_{i+} - x_{i+} x_{+i}}    (6)

where x_{ii} is the number of observations correctly classified for a particular category i, and N is the total number of observations in the entire error matrix.

III. RESULTS AND DISCUSSION

Classification accuracy improvement by adding texture bands

The classification accuracy calculated from the original multispectral UAV image is low for each land cover type, especially for the bare soil areas (Table I). The results show that the classification accuracy improves considerably when texture features are added to the original spectral image. The most significant improvement is in the classification of grassland (from 69 to 97%), followed by bare soil (from 47 to 73%), agriculture (from 70 to 84%), and residential areas (from 82 to 97%).

TABLE I. Classification accuracy comparison

Cover type          Original UAV    Texture feature*    UAV & texture
Bare soil           0.47            8bds-41/16          0.73
Grassland           0.69            ENT-25/32           0.97
Dense vegetation    0.78            8bds-25/32          0.87
Agriculture         0.70            ASM-13/32           0.84
Residential         0.82            8bds-33/32          0.97

* 8bds = 8 bands (4 spectral bands, 4 texture bands); the first number is the window size, the second is the quantization level.

The texture combination that provides the best classification accuracy changes greatly from one cover type to another. For the bare soil cover type, the combination of the 4 spectral bands and the 4 texture bands provides the highest classification accuracy. On the other hand, the combination of the 4 spectral bands and the single second-order statistic ENT provides the best classification accuracy for the grassland cover type.

Influence of window size and texture feature on classification accuracy

The window size is a very important factor, responsible for most of the variation in the image classification process. To evaluate the influence of window size on classification accuracy, the means (over the two quantization levels) of the five kappa coefficients obtained for each cover type were calculated for the eight window sizes (Fig. 3).
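The accuracy figures above follow directly from Equations (5) and (6) applied to a confusion matrix. As a minimal Python/NumPy sketch (illustrative, not the software used in the study; classes are indexed from 0):

```python
import numpy as np

def kappa(cm):
    """Overall kappa coefficient of a confusion matrix (Eq. 5)."""
    cm = np.asarray(cm, dtype=np.float64)
    n = cm.sum()                                  # N: total observations
    diag = np.trace(cm)                           # sum of x_ii
    chance = np.sum(cm.sum(axis=1) * cm.sum(axis=0))  # sum of x_i+ * x_+i
    return (n * diag - chance) / (n * n - chance)

def conditional_kappa(cm, i):
    """Per-class (conditional) kappa for class i (Eq. 6)."""
    cm = np.asarray(cm, dtype=np.float64)
    n = cm.sum()
    row = cm[i].sum()                             # x_i+
    col = cm[:, i].sum()                          # x_+i
    return (n * cm[i, i] - row * col) / (n * row - row * col)
```

A perfectly diagonal confusion matrix yields kappa = 1, and pure chance agreement yields kappa = 0, which is why kappa is preferred over raw overall accuracy here.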

Fig. 3. Mean kappa coefficient of each land cover type

Fig. 4. Discrimination between agriculture and bare soil at a window size of 25 x 25 pixels

Fig. 5. Discrimination between grassland and dense vegetation at a window size of 25 x 25 pixels

The results show that the classification accuracy changes from one window size to another. There appears to be a window size that maximizes the classification accuracy for each land cover type. The window size of 25 x 25 pixels can be seen as the most suitable for obtaining accurate classification results for more than one land cover type. The smaller window sizes do not yield satisfactory results; they may be too small to capture the pattern of most classes.

The improvement in the discrimination between cover types obtained by adding a texture feature to the spectral bands can be described through the statistics of the training data. The discrimination between agricultural field and bare soil, between grassland and dense vegetation, and between grassland and residential areas is shown in Figs. 4, 5, and 6. The statistical separability is very low when using the multispectral bands alone, and is significantly improved when the texture features are added to the original spectral images. The class separability increases because a unique texture pattern characterizes each class. For several cover types, the signatures still overlap; however, by using texture features at multiple window sizes, the separability is improved compared with the results obtained from the spectral bands alone. ENT and ASM provide good separability between agriculture, bare soil, and grassland, while ENT at a window size of 41 x 41 pixels provides good separability between grassland and residential areas.
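The moving-window mechanism behind these results can be illustrated with a simplified texture-band sketch. For brevity it computes only the ENT band from the 0°, distance-1 neighbour rather than averaging the four angles, and the function name and quantization details are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def entropy_texture_band(image, levels=16, window=25):
    """Sliding-window ENT texture band (simplified, single-angle sketch).

    Quantizes the band to `levels` grey levels, then computes the GLCM
    entropy inside a `window` x `window` neighbourhood centred on each
    pixel, clipped at the image borders.
    """
    # linear quantization to [0, levels)
    q = (image.astype(np.float64) - image.min()) / (np.ptp(image) + 1e-12)
    q = np.minimum((q * levels).astype(int), levels - 1)
    half = window // 2
    out = np.zeros_like(q, dtype=np.float64)
    rows, cols = q.shape
    for r in range(rows):
        for c in range(cols):
            win = q[max(0, r - half):r + half + 1,
                    max(0, c - half):c + half + 1]
            # co-occurrence counts for horizontal (0 degree) neighbours
            p = np.zeros((levels, levels))
            np.add.at(p, (win[:, :-1], win[:, 1:]), 1)
            p /= p.sum()
            out[r, c] = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return out
```

A larger `window` pools more pixel pairs, smoothing the texture band and capturing coarser spatial patterns; a window too small for a class's pattern yields a noisy band, consistent with the poor results observed at the smallest window sizes.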

Fig. 6. Discrimination between grassland and residential areas at a window size of 41 x 41 pixels

Fig. 7. Mean grey-level values of the texture images at a quantization level of 16

Fig. 8. Mean grey-level values of the texture images at a quantization level of 32

Influence of quantization level and window size on texture feature extraction

To evaluate the relationship between window size, quantization level, and the created texture features, the histograms of the four texture bands at the eight window sizes and two quantization levels were extracted (Fig. 7 and 8). The observations show that the trends of the texture features extracted at quantization levels 16 and 32 are almost identical and contain basically the same information. The ENT and CON values increase progressively with increasing window size, whereas the ASM and COR values decrease. The variations of all four texture images are significant from the window size of 3 x 3 pixels to 13 x 13 pixels, but do not vary much over the other window sizes. The ENT image has the smallest variation, while the COR image has the largest.

IV. CONCLUSION

In this study, a textural approach based on the GLCM method is used to obtain a significant improvement in land cover classification from a multispectral UAV image. The classification accuracy is influenced by all three factors: window size, statistics, and quantization level. The classification accuracy improves considerably when texture features are added to the original spectral image. Further work is required to evaluate the influence of variables directly associated with the GLCM method, such as the inter-pixel angle and inter-pixel distance, on characterizing a particular cover type from a UAV image. It is also important to extract texture features from more window sizes and second-order statistics, in order to identify the best combinations of spectral and textural images that maximize the classification accuracy.

ACKNOWLEDGMENT

We gratefully acknowledge the funding support and data support from the GIS Research Center, Feng Chia University, Taiwan.

REFERENCES

[1] A. Rango, S. Laliberte, C. Steele, E. Herrick, B. Bestelmeyer, T. Schmugge, A. Roanhorse, and V. Jenkins, "Using unmanned aerial vehicles for rangelands: Current applications and future potentials," Environmental Practice, vol. 8, pp. 159-168, 2006.
[2] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, November 1973.
[3] D. J. Marceau, P. J. Howarth, J. M. Dubois, and D. J. Gratton, "Evaluation of the Grey-Level Co-Occurrence Matrix method for land-cover classification using SPOT imagery," IEEE Transactions on Geoscience and Remote Sensing, vol. 28, no. 4, July 1990.
[4] P. Mohanaiah, P. Sathyanarayana, and L. GuruKumar, "Image texture feature extraction using GLCM approach," International Journal of Scientific and Research Publications, vol. 3, no. 5, May 2013.
[5] J. R. Jensen, Introductory Digital Image Processing, 3rd ed., Prentice Hall, 2004.
[6] R. Wang, "Advanced methods in grey level segmentation," http://fourier.eng.hmc.edu/e161/lectures/digital_image/node9.html, December 2004.
[7] A. S. Laliberte, J. E. Herrick, A. Rango, and C. Winters, "Acquisition, orthorectification, and object-based classification of Unmanned Aerial Vehicle (UAV) imagery for rangeland monitoring," Photogrammetric Engineering & Remote Sensing, vol. 76, no. 6, pp. 661-672, June 2010.
[8] USGS Land Cover Institute (LCI), http://landcover.usgs.gov/classes.php, December 2012.