Page 1: 18707_Unit 3

Spatial Resolution

Page 2: 18707_Unit 3

Radiometric resolution

Page 3: 18707_Unit 3

Multispectral Vs Hyperspectral Image

Page 4: 18707_Unit 3

Concepts of Aerial Photography

Page 5: 18707_Unit 3

• Scale of photograph:

Page 6: 18707_Unit 3

Fiducial Marks

• Small registration marks exposed on the edges of a photograph.

Page 7: 18707_Unit 3

Flight plan

Page 8: 18707_Unit 3

Overlap in flight plan

Page 9: 18707_Unit 3

Aerial Photograph mosaic

Page 10: 18707_Unit 3

Stereoscopy

• The ability to see and appreciate depth of field through the perception of parallax.

Page 11: 18707_Unit 3

Photogrammetry using Stereo pairs

Page 12: 18707_Unit 3

Modern Stereoscopy

Page 13: 18707_Unit 3

Visual Image Interpretation

• The act of examining photographic images for the purpose of identifying objects and judging their significance.

• Elements of visual image interpretation:

• Location

• Size

• Shape

• Shadow

• Tone/color

• Texture

• Pattern

• Height/depth

• Site/situation/association

Page 14: 18707_Unit 3
Page 15: 18707_Unit 3

Size

• Size of objects in an image is a function of scale.

• It is important to assess the size of a target relative to other objects in a scene, as well as the absolute size, to aid in the interpretation of that target

Page 16: 18707_Unit 3

Shape

• Shape refers to the general form, structure, or outline of individual objects.

• Shape can be a very distinctive clue for interpretation.

Page 17: 18707_Unit 3

Tone/Color

• Tone refers to the relative brightness or colour of objects in an image.

• Variations in tone also allow the elements of shape, texture, and pattern of objects to be distinguished.

Page 18: 18707_Unit 3

Tone vs Color

Page 19: 18707_Unit 3

Texture

• Texture refers to the arrangement and frequency of tonal variation in particular areas of an image.

• Rough textures would consist of a mottled tone where the grey levels change abruptly in a small area.

• Smooth textures would have very little tonal variation.

Page 20: 18707_Unit 3

Pattern

• Pattern refers to the spatial arrangement of visibly discernible objects.

• Typically an orderly repetition of similar tones and textures will produce a distinctive and ultimately recognizable pattern

Page 21: 18707_Unit 3

Shadow

• Shadow is also helpful in interpretation as it may provide an idea of the profile and relative height of a target or targets which may make identification easier.

• Shadows can also reduce or eliminate interpretation in their area of influence

Page 22: 18707_Unit 3

Site /Situation/Association

• Association takes into account the relationship between other recognizable objects or features in proximity to the target of interest

Page 23: 18707_Unit 3
Page 24: 18707_Unit 3
Page 25: 18707_Unit 3

Visual Image interpretation

Advantages

Interpreter’s knowledge is available

Excellent in spatial information extraction

Limitations

Time consuming

Individual differences between analysts

Simultaneous analysis of multiple bands/images is difficult

Serious bias may be introduced by the analyst’s own assumptions or expectations

Page 26: 18707_Unit 3

Digital Image interpretation

• Digital image processing is the application of algorithms to digital images to perform processing, analysis, and information extraction.

• Data must be recorded and available in digital form.

• Data recorded on photographic film can also be converted into digital form, but only a few of the digital processing techniques can be applied to them.

Page 27: 18707_Unit 3

Advantages of Digital Image Processing

• Short processing time

• Reproducibility

• Extraction of physical quantities

• Data commonly transmitted or converted to digital format

• Analysis of individual points (pixel)

• Analysis of multiple bands/images in a single platform

• Handling of large data volumes

• Accuracy assessments

Page 28: 18707_Unit 3

Digital Image

Page 29: 18707_Unit 3

Pixels

• The smallest two-dimensional non-divisible element of an image is called a pixel.

• Each pixel stores a digital number (DN) measured by the sensor.

• Represents individual areas scanned by the sensor.

• Smaller pixel size offers greater spatial accuracy.

Page 30: 18707_Unit 3

Process of Digital Image Processing

Page 31: 18707_Unit 3

Pre - processing

• In their raw form, remotely sensed data generally contain flaws or deficiencies. The correction of deficiencies and the removal of flaws present in the data are termed pre-processing.

• Pre-processing includes:

• Radiometric corrections

• Geometric corrections

• Miscellaneous pre-processing

Page 32: 18707_Unit 3

Radiometric Correction

• The main purpose for applying radiometric corrections is to reduce the influence of errors or inconsistencies in image brightness values.

• Radiometric errors and inconsistencies are often referred to as “noise”.

• Noise means any undesirable variation in image brightness.

Page 33: 18707_Unit 3

De-striping

• Striping occurs if a detector goes out of adjustment.

• Individual detectors appear lighter or darker than their neighboring detectors

• Correction is applied by selecting one detector as a standard and adjusting the brightness values recorded by the other detectors to match it.

• An independent reference brightness value can also be used.

Page 34: 18707_Unit 3

Removal of Missing Scan Lines

• Missing scan line occurs when a detector either completely fails to function, or becomes temporarily saturated during a scan.

• Corrected by replacing the bad line with a line of estimated data file values, which is based on the lines above and below it.
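The line-replacement step described above can be sketched in NumPy (a minimal illustration; the function name and the toy DN values are invented for the example):

```python
import numpy as np

def fill_missing_line(image, bad_row):
    """Replace a dropped scan line with the average of the lines above and below it."""
    fixed = image.astype(float).copy()
    fixed[bad_row] = (image[bad_row - 1] + image[bad_row + 1]) / 2.0
    return fixed

img = np.array([[10, 20, 30],
                [ 0,  0,  0],   # failed detector: this scan line is missing
                [14, 24, 34]])
print(fill_missing_line(img, 1)[1])  # the bad line becomes [12. 22. 32.]
```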

Page 35: 18707_Unit 3

Random Noise Removal

• Isolated pixels whose DNs differ markedly from their neighbors, and which do not follow a systematic pattern, can be considered random noise.

• Noisy pixels can be replaced by substituting an average value of neighboring DNs.

Page 36: 18707_Unit 3

Atmospheric Correction

• Atmospheric effects are not considered as errors.

• They are part of the signal received by the sensing device.

• A number of algorithms have been developed to correct atmospheric effects.

[Figure: before/after comparison of atmospheric correction. Courtesy: CCRS]

Page 37: 18707_Unit 3

Examples of atmospheric correction

Page 38: 18707_Unit 3

Geometric Correction

• Digital images often contain systematic and non-systematic geometric errors that arise from earth curvature, platform motion, relief displacement, non-linearities in scanning motion, earth rotation, etc.

• Digital images are not geographically referenced.

• Removing these errors is known as geometric correction.

Page 39: 18707_Unit 3

Systematic Correction

• Systematic errors:

• Scan skew

• Known mirror velocity variation

• Earth-rotation skew

• Platform velocity variation

• Systematic distortions are corrected by applying formulas derived by modelling the sources of distortions mathematically.

Page 40: 18707_Unit 3

Non-Systematic Errors

• Establishing the relationship between two different coordinate systems.

• Two approaches:

• Image to ground geo-correction (georeferencing)

• Image to image correction (registration)

Page 41: 18707_Unit 3
Page 42: 18707_Unit 3

Image to Ground Geo-correction

Correction is the process of giving an image a real-world coordinate system, e.g.:

116 41 52.20 W, 33 50 03.23 N (Longitude, Latitude)

Page 43: 18707_Unit 3

Image to Image Correction

Assigning the coordinate system of one image to a second image of the same area, i.e. fitting the coordinate system to another image.

Page 44: 18707_Unit 3

Subsetting

Breaking out a portion (area of interest) of a large file/image into one or more smaller files/images.

Page 45: 18707_Unit 3

Mosaicking

Combining multiple scenes to cover a larger area.

Page 46: 18707_Unit 3

Image Enhancement

• Image enhancement can be defined as the conversion of the image quality to a better and more understandable level for feature extraction or image interpretation.

• Enhancement is generally applied to single-band images or separately to individual bands.

• Principle objective is to process an image so that the result is more suitable than the original image for a specific application.

Page 47: 18707_Unit 3

Procedures

• Two types of procedures:

• Point operations

• Local operations

Point operations: change the value of each individual pixel independent of all other pixels.

Local operations: change the value of individual pixels in the context of the values of neighboring pixels.

Page 48: 18707_Unit 3

Image Reduction

2x image reduction keeps every other row and column.

Original image:

30 55 35 76 48 89 98 36 33 76
87 34 55 98 45 75 62 98 78 12
12 69 87 36 87 69 47 69 78 98
39 87 95 22 36 14 65 68 38 35
69 97 65 98 32 72 91 38 78 65
66 54 57 85 95 36 96 16 3 9
5 75 98 75 98 78 89 65 88 9
54 92 80 60 34 43 78 53 58 50

Reduced image:

30 35 48 98 33
12 87 87 47 78
66 57 95 96 3
54 80 34 78 58
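The reduction shown above can be reproduced with plain array slicing (a sketch; the small test image is invented):

```python
import numpy as np

def reduce_2x(image):
    """2x reduction: keep every other row and every other column."""
    return image[::2, ::2]

img = np.array([[30, 35, 48],
                [12, 87, 87],
                [66, 57, 95]])
print(reduce_2x(img))  # → [[30 48]
                       #    [66 95]]
```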

Page 49: 18707_Unit 3

Image Magnification

2x image magnification duplicates each pixel into a 2x2 block.

Original image:

30 35 48 98 33
12 87 87 47 78
66 57 95 96 3
54 80 34 78 58

Magnified image:

30 30 35 35 48 48 98 98 33 33
30 30 35 35 48 48 98 98 33 33
12 12 87 87 87 87 47 47 78 78
12 12 87 87 87 87 47 47 78 78
66 66 57 57 95 95 96 96 3 3
66 66 57 57 95 95 96 96 3 3
54 54 80 80 34 34 78 78 58 58
54 54 80 80 34 34 78 78 58 58
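The pixel-replication magnification above is one call in NumPy (a sketch with a tiny invented image):

```python
import numpy as np

def magnify_2x(image):
    """2x magnification: duplicate every pixel into a 2x2 block."""
    return np.repeat(np.repeat(image, 2, axis=0), 2, axis=1)

img = np.array([[30, 35],
                [12, 87]])
print(magnify_2x(img))  # each pixel becomes a 2x2 block
```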

Page 50: 18707_Unit 3

Color Compositing

• A color image can be generated by compositing three selected bands of a multi-band image using the three primary colors (RGB).

• A display contains three color guns (R, G, B), hence only 3 bands can be displayed at a time.

• Color composites:

• Additive color composites: use the primaries red, green, and blue.

• Subtractive color composites: use the complementary pigments of RGB: cyan, magenta, and yellow.

Page 51: 18707_Unit 3
Page 52: 18707_Unit 3

True Color Image vs False color image

• True color: a combination where the image captured in the blue band is displayed through the blue gun, red through the red gun, and green through the green gun.

• False color: when one of the primary bands is not present, mathematical combinations are made. E.g., if blue is not present:

RED = Red band
GREEN = 0.75 × Green band + 0.25 × NIR band
BLUE = 0.75 × Green band − 0.25 × NIR band

• Infrared color composite:

BLUE = Green band
GREEN = Red band
RED = Infrared band
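The synthetic-blue formula above is plain band arithmetic; a minimal sketch with invented DN values:

```python
import numpy as np

def synthetic_blue(green, nir):
    """Simulated blue band when no blue band was captured (formula from the slide)."""
    return 0.75 * green - 0.25 * nir

green = np.array([80.0, 120.0])
nir = np.array([40.0, 200.0])
print(synthetic_blue(green, nir))  # → [50. 40.]
```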

Page 53: 18707_Unit 3

True color vs False color

Page 54: 18707_Unit 3

Infrared color composite

Page 55: 18707_Unit 3

Transect Extraction

Pixels that lie on a transect can be measured and displayed to compare spectral or spatial differences across bands (e.g. Band-1 to Band-4).

Page 56: 18707_Unit 3

Contrast Enhancement

• The range of brightness values present on an image is referred to as contrast.

• Contrast enhancement is a process that makes the image features stand out more clearly by making optimum use of the color intensities available on the display or output device.

• Look-up table (LUT): the computer stores the new values in a LUT and uses them to display the image.

Page 57: 18707_Unit 3

Histogram

• A histogram is a graph of data frequency or distribution.

• A statistical graphic representation of the range of tones from dark to light and associated number of pixels for each tone for an image.
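For a digital image the histogram is simply a count of pixels per DN bin; e.g. (DN values invented for illustration):

```python
import numpy as np

# DN values of a tiny single-band image
dns = np.array([0, 1, 1, 2, 2, 2, 3])
counts, bin_edges = np.histogram(dns, bins=4, range=(0, 4))
print(counts)  # → [1 2 3 1]  — one 0, two 1s, three 2s, one 3
```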

Page 58: 18707_Unit 3

Image contrast and Histogram

• Contrast: The range of brightness values present on an image

Page 59: 18707_Unit 3

Image Contrast

Page 60: 18707_Unit 3

Contrast Enhancement

Contrast manipulations involve changing the range of values in an image in order to increase the contrast.

Page 61: 18707_Unit 3

Contrast Enhancement

Page 62: 18707_Unit 3

Types of contrast enhancement

• Linear contrast enhancement

  • Minimum-maximum linear contrast stretch

  • Percentage linear contrast stretch

  • Average and standard deviation stretch

  • Piecewise linear contrast stretch

• Nonlinear contrast enhancement

  • Histogram equalization

  • Histogram normalization

  • Reference stretch

  • Density slicing

  • Thresholding
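The simplest of these, the minimum-maximum linear stretch, maps the observed DN range onto the full display range. A sketch (the function name and DN values are invented):

```python
import numpy as np

def minmax_stretch(image, out_min=0.0, out_max=255.0):
    """Linearly rescale DNs so the image minimum maps to out_min
    and the image maximum maps to out_max."""
    lo, hi = float(image.min()), float(image.max())
    return (image - lo) / (hi - lo) * (out_max - out_min) + out_min

img = np.array([50.0, 100.0, 150.0])  # low-contrast DNs
print(minmax_stretch(img))            # stretches to 0, 127.5, 255
```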

Page 63: 18707_Unit 3

Minimum – Maximum Stretch

Page 64: 18707_Unit 3

Saturation Stretch

Page 65: 18707_Unit 3

Average and Standard Deviation Stretch

Page 66: 18707_Unit 3

Piecewise Stretch

Page 67: 18707_Unit 3

Different Contrast Enhancements

[Figure: the same scene with no stretch, min-max stretch, 5% tail trim, 20% tail trim, 1 × standard deviation, and 2 × standard deviation stretches]

Page 68: 18707_Unit 3

Filtering

• Filtering is a process of changing the spatial frequency.

• Purposes:

• To improve interpretability of image data

• To highlight or suppress specific features of an image based on their spatial frequency

• Methods:

• Convolution filtering

• High Pass

• Low Pass

• Edge Detection

Page 69: 18707_Unit 3

Spatial Frequency

Zero spatial frequency Low spatial frequency High spatial frequency

Page 70: 18707_Unit 3

Convolution Filtering

• Convolution means moving a window of pixels of set dimensions (3x3, 5x5, ...) over each pixel in the image, applying a mathematical calculation to the pixel values under the window, and replacing the central pixel’s value with the result.

• This window is known as the “convolution kernel”.
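A bare-bones convolution in this spirit (a sketch: edges are left unfiltered for simplicity, and the kernel and image are toy values):

```python
import numpy as np

def convolve(image, kernel):
    """Move the kernel window over each interior pixel and replace the
    central value with the weighted sum of the neighborhood."""
    k = kernel.shape[0] // 2
    out = image.astype(float).copy()
    for r in range(k, image.shape[0] - k):
        for c in range(k, image.shape[1] - k):
            window = image[r - k:r + k + 1, c - k:c + k + 1]
            out[r, c] = np.sum(window * kernel)
    return out

low_pass = np.full((3, 3), 1 / 9)    # averaging kernel (smoothing)
img = np.array([[1.0, 1.0, 1.0],
                [1.0, 10.0, 1.0],
                [1.0, 1.0, 1.0]])
print(convolve(img, low_pass)[1, 1])  # the spike is smoothed to ~2.0
```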

Page 71: 18707_Unit 3

Low Pass filtering

• Low-frequency kernels are applied to decrease the spatial frequency (provides a smoother image).

Page 72: 18707_Unit 3

Low Pass Filtering

Before filtering After filtering

Page 73: 18707_Unit 3

High Pass Filter

• It is the opposite of the low pass filter (sharpens the image).

• High-frequency kernels are applied to increase the spatial frequency (provides a sharper image).

Page 74: 18707_Unit 3
Page 75: 18707_Unit 3

High Pass Filtering

Before filtering After filtering

Page 76: 18707_Unit 3

Edge Detection Filtering

• They are used to highlight linear features like roads, field boundaries, etc.

• Zero Sum kernels are used and division is not applied.

• This generally causes the output values to be zero in areas where all input values are equal, low in areas of low spatial frequency and extreme in areas of high spatial frequency.
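The zero-sum property can be checked directly with a Laplacian kernel (the kernel is a standard textbook example, not necessarily the one used on the slide; the windows are toy values):

```python
import numpy as np

# A common zero-sum edge-detection kernel (Laplacian); its weights sum to 0.
laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]])

flat = np.ones((3, 3))                       # uniform (low-frequency) area
edge = np.array([[0.0, 0.0, 9.0],
                 [0.0, 0.0, 9.0],
                 [0.0, 0.0, 9.0]])           # sharp vertical edge

print(np.sum(flat * laplacian))  # → 0.0 (zero output where all inputs are equal)
print(np.sum(edge * laplacian))  # → -9.0 (strong response at the edge)
```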

Page 77: 18707_Unit 3

Edge Detection

Before filtering After filtering

Page 78: 18707_Unit 3

Image Transformation

• Image transformations generate ‘new’ images from two or more sources which highlight particular features or properties of interest better than the original input images.

• Common transformations:

• Image arithmetic operations

• Principal component transformation (PCT)

• Tasselled cap transformation (TCT)

• Colour space transformation (CST)

• Fourier transformation

• Image fusion

Page 79: 18707_Unit 3

Arithmetic Operations

• Addition

• Subtraction

• Multiplication

• Division

• The images may be separate spectral bands from a single multi-spectral data set or they may be individual bands from image data sets that have been collected at different dates.

Page 80: 18707_Unit 3

Image Addition (Averaging)

• Addition obtains the new DN value of a pixel in the output image by averaging the DN values of the corresponding pixels of the input images.

Page 81: 18707_Unit 3

Image Subtraction (Change Detection)

• It is the process of subtracting the DN value of each pixel of one image from that of the corresponding pixel of the other image.
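Pixel-by-pixel subtraction of two co-registered dates gives a change image, zero wherever nothing changed. A toy sketch (DN values invented):

```python
import numpy as np

before = np.array([[100, 100],
                   [ 60,  60]])
after  = np.array([[100,  40],
                   [ 60, 120]])

change = after.astype(int) - before.astype(int)   # signed DN difference
print(change)  # non-zero only where the surface changed
```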

Page 82: 18707_Unit 3

Change Detection

[Figure: 1987 and 1997 images and the detected change]

Page 83: 18707_Unit 3

Image Multiplication

• In this process pixel-by-pixel multiplication of two images is performed.

Page 84: 18707_Unit 3

Indices and Rationing

• Used to create an output image by mathematically combining the DN values of different bands.

• E.g. vegetation index = DN_NIR / DN_R

Normalized difference vegetation index (NDVI) = (DN_NIR − DN_R) / (DN_NIR + DN_R)
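The NDVI formula translates directly to band arithmetic (toy DNs; healthy vegetation has high NIR and low red reflectance):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, (NIR - R) / (NIR + R)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

nir = np.array([80.0, 30.0])   # first pixel: vegetation; second: bare soil
red = np.array([20.0, 30.0])
print(ndvi(nir, red))          # vegetation pixel ≈ 0.6, bare pixel 0.0
```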

Page 85: 18707_Unit 3

Concept of Indices

Page 86: 18707_Unit 3

Image Classification

• Image classification is sorting pixels into a finite number of individual classes, or categories of data, based on their DN values.

[Figure: a continuous image classified into a thematic image]

Page 87: 18707_Unit 3

Supervised vs Unsupervised Classification

• Supervised

• Have a set of desired classes in mind and then create the appropriate signatures from the data.

• Used when one wants to identify relatively few classes, when one has selected training sites that can be verified with ground truth data, or when one can identify distinct, homogeneous regions that represent each class.

• Unsupervised

• Classes are determined by spectral distinctions that are inherent in the data, and the classes are defined later.

• Used when one wants to define many classes easily, and then identify them.

Page 88: 18707_Unit 3

Training for Classification

• Computer system must be trained to recognize patterns in image data.

• Process of defining the criteria by which these patterns are recognized.

• Supervised training is controlled by the analyst.

• Select pixels that represent patterns and instruct the computer system to identify pixels with similar characteristics.

• More accurate but requires high skill.

• Unsupervised training is computer-automated.

• Specify the number of classes and the computer uncovers statistical classes.

• Less accurate and less skill required.

Page 89: 18707_Unit 3

Supervised Classification

Raw Data → Pre-processing → Signature Collection → Signature Evaluation → Classification

Page 90: 18707_Unit 3

Signature

A set of pixels selected to represent each primary land cover.

• Sources: knowledge, field data, personal experience, photos, previous studies.

• Example classes: lake, river, agriculture, forest.

Page 91: 18707_Unit 3

Minimum Distance Classifier
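The minimum distance classifier assigns each pixel to the class whose signature mean is nearest in spectral (feature) space. A minimal two-band sketch (the class names and mean vectors are invented for illustration):

```python
import numpy as np

def minimum_distance_classify(pixel, class_means):
    """Return the class whose mean spectral vector is closest
    (Euclidean distance) to the pixel's spectral vector."""
    distances = {name: np.linalg.norm(pixel - mean)
                 for name, mean in class_means.items()}
    return min(distances, key=distances.get)

means = {"water":  np.array([20.0, 10.0]),   # mean DNs in (band 1, band 2)
         "forest": np.array([40.0, 90.0])}
pixel = np.array([35.0, 80.0])
print(minimum_distance_classify(pixel, means))  # → forest
```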

Page 92: 18707_Unit 3

Unsupervised Classification

• Clustering algorithms are used in unsupervised classification.

• Spectral values of pixels are grouped first and then matched by the analyst to any specified class.

• Analyst specifies how many classes are required and threshold values of variation within the clusters and among the clusters.

• If a cluster needs to be broken up, or clusters need to be combined, the analyst can take the decision.

Page 93: 18707_Unit 3

Accuracy Assessment

Accuracy assessment of a remote sensing product is a feedback system for checking and evaluating the objectives and the results.

Page 94: 18707_Unit 3

Contingency Matrix

Contingency matrix for accuracy assessment.