Almost everything one needs to know about image matching systems

E. H. Conrow, The Aerospace Corporation, El Segundo, California 90245

J. A. Ratkovic, The Rand Corporation, Santa Monica, California 90406

1. Introduction

The missile map-matching problem for guidance updating or target homing is shown in Figure 1. The problem as defined here consists of locating the position of a sensor image relative to a reference map which is stored onboard the vehicle's computer. Once the match location is found the relative location between the two map centers can be used to update the vehicle's navigational position. The two important performance considerations are the avoidance of false fixes as measured by their frequency of occurrence and the accuracy with which the position fix can be made.
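As an illustrative sketch of this matching step (not the paper's specific algorithm; the function name and the use of normalized cross-correlation are our assumptions), an exhaustive search for the displacement (K, L) at which the sensor map best coincides with the reference map might look like:

```python
import numpy as np

def match_location(reference: np.ndarray, sensor: np.ndarray):
    """Slide the N-element sensor map over the M-element reference map
    and return the offset (k, l) with the highest normalized
    cross-correlation score, plus the score itself."""
    rh, rw = reference.shape
    sh, sw = sensor.shape
    s = sensor - sensor.mean()
    s_norm = np.sqrt((s * s).sum())
    best, best_kl = -np.inf, (0, 0)
    for k in range(rh - sh + 1):
        for l in range(rw - sw + 1):
            r = reference[k:k + sh, l:l + sw]
            r = r - r.mean()
            denom = np.sqrt((r * r).sum()) * s_norm
            score = (r * s).sum() / denom if denom > 0 else -np.inf
            if score > best:
                best, best_kl = score, (k, l)
    return best_kl, best
```

The recovered offset between the two map centers is then what feeds the navigation update; real systems replace this brute-force loop with faster search structures.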

This paper describes the overall design and evaluation of map-matching systems. (For additional background information, the reader is referred to references 1-10.) Figure 2 shows the layout of a typical system design. Here the system parameters associated with the reference data, sensor environment, and vehicle are integrated to determine a model of the image dynamics. This model, used in conjunction with a signature prediction model, serves to construct the reference map or set of reference maps to be stored on the vehicle.

In the map-matching problem a number of errors can develop between the sensed image onboard the missile platform and the image reference map stored in the vehicle computer. These errors can be categorized into four generic classes depending on their impact on the composition of the sensed image relative to the reference map. Global errors, which impact all elements in the sensed image, are generally accommodated by preprocessing, while all other types of errors must be accommodated by the choice of the matching algorithm. The scene selection process is important for determining that the reference map area contains sufficient information of a nonredundant nature to successfully perform the matching task. Scene selection consists of a two-part screening process. The first part consists of various mathematical tests which determine, to a first level, the amount of independent information and redundancy within the scene. The second part consists of simulation to determine the acceptability of the scene under real-world flight conditions. Finally, a system verification process is required to determine from the nature of the matching data whether a successful match has taken place and, if not, what appropriate action should be taken.
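The "mathematical tests" of the first screening stage are not specified at this point in the paper. As a hedged sketch (the function name, the particular statistics, and any thresholds are our own choices, not the authors'), simple measures such as contrast, edge density, and neighbor correlation can flag scenes that carry too little independent information:

```python
import numpy as np

def scene_screen_metrics(scene: np.ndarray) -> dict:
    """First-level screening statistics for a candidate reference scene:
    contrast (standard deviation), edge density (mean gradient magnitude),
    and lag-1 autocorrelation as a crude redundancy indicator -- highly
    correlated neighboring elements mean little independent information."""
    z = (scene - scene.mean()) / scene.std()
    gy, gx = np.gradient(z)
    edge_density = float(np.mean(np.hypot(gx, gy)))
    rho_x = float(np.mean(z[:, :-1] * z[:, 1:]))  # horizontal lag-1 autocorrelation
    rho_y = float(np.mean(z[:-1, :] * z[1:, :]))  # vertical lag-1 autocorrelation
    return {"contrast": float(scene.std()),
            "edge_density": edge_density,
            "redundancy": max(rho_x, rho_y)}
```

A candidate scene scoring high on redundancy (a smooth, featureless area) would be rejected before the more expensive simulation stage.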

This paper is divided into four additional sections. Section 2 describes the problems associated with describing a scene mathematically and with the time- and spatially-varying nature of the scene signature for various sensor types. This section describes the environmental factors affecting image dynamics, whose impact can be measured in terms of predictive errors, nonstructured errors, and contrast reversals for various sensor wavelengths. Finally, remedies are discussed which can mitigate the effects of errors due to image dynamics.

Section 3 describes the problems associated with reference map construction and discusses the scene selection process by which "good" reference scenes are progressively screened out from those that are "not so good".

Section 4 describes the compatibility of various classes of algorithms to accommodate each of the four categories of error sources. Ultimately, since the magnitude of these errors is sensor dependent, this section crosscorrelates algorithm appropriateness for each sensor wavelength.

Finally, Section 5 describes the mathematical process behind developing measures for system performance. This section is divided into two parts. In the first part the general scene/error model is discussed and a mathematical approach (through a list of assumptions and approximations) is outlined which can be used to predict the probability of false match occurrence based on a number of system parameters. In the second part of the section, one is concerned with using the statistical data from the map-matching algorithm, given a particular scene, to estimate the system performance in near real-time.

426 / SPIE Vol. 238 Image Processing for Missile Guidance (1980)

Downloaded From: http://proceedings.spiedigitallibrary.org/ on 12/10/2013 Terms of Use: http://spiedl.org/terms

[Figure 1. The map-matching problem. Given: (1) a reference map (M elements) in which the target is located; (2) a sensor map (N elements) which is contained somewhere within the boundaries of the reference map. Find: the displaced position (K, L) at which the two maps are coincident.]

[Figure 2. Overall map-matching system design. System parameters (reference data: wavelength, resolution, orientation; sensor: wavelength, resolution; environment: atmosphere, terrain physical and electrical properties; vehicle: sensor orientation, flight profile) feed the design parameters (signature prediction, image dynamics model, reference map construction). The problem areas (global errors; scene composition; regional, local, and nonstructured errors) are handled by the accommodating processes (preprocessing, scene selection process, algorithm choice, system verification).]

SPIE Vol. 238 Image Processing for Missile Guidance (1980) / 427

2. Image dynamics and its impact on comparison of sensed image and reference map

Figure 3 depicts the impact of dynamic changes in the scene signature on the map-matching system. As indicated in the figure, the sensor/image interaction is influenced by a number of environmental factors. These factors, combined with inherent time-varying material physical and electrical properties, produce an oscillation in the scene signature. Dynamic changes in the sensor scene when compared to a time-stationary reference scene can cause significant errors to exist between the two maps. These errors, if unaccounted for, are generally a major cause of failure in map-matching systems.

It is the purpose of this section of the paper to:

1. describe the scene composition,
2. discuss the impact of environmental and inherent scene factors on signature dynamics,
3. discuss and quantify the nature of the map difference errors when sensed and reference maps are compared, and
4. outline remedies for accommodating map difference errors in the system.

As the influence of the environmental and inherent scene factors is wavelength or frequency dependent, the discussion will focus on the most common active and passive sensor categories (i.e., optical/near IR, middle IR, thermal IR, and microwave).

The following section will describe the reference map selection process, including methods for choosing reference maps to reduce the map difference problem. The subsequent section will discuss the role of various types of algorithms in accommodating map difference errors and other types of system errors.

[Figure 3. The impact of image dynamics on map comparison. The sensor description (wavelength/frequency, ground resolution) and environmental influences (atmosphere, meteorology) act on the scene, described by homogeneous regions and resolution- and texture-dependent elements, to produce a time-varying scene signature shaped by inherent time-varying physical and electrical scene properties. Comparing the sensed image against the non-time-varying reference map (generated by wavelength conversion, if necessary, and geometry conversion) yields the map difference errors: contrast reversals, prediction errors, and nonstructured errors.]

2.1 Composition of the scene

The scene is the most complex component of the map-matching problem and the most difficult to model. Scenes can be described in the visual domain (the eyeball process) as being composed of a set of features. Actual sensor data, broken down by resolution elements, are described by a set of intensity values. There are regions of intensity values in the scene which can be considered analogous to features in the visual domain. These are homogeneous regions* within the scene which can be considered equivalent to features (because a feature can be defined by a single homogeneous region or set of homogeneous regions). From a physical standpoint, homogeneous regions are areas in which the signature (reflectance for visual and radar, emitted power for middle and thermal IR, and altitude for terrain contours) is expected to remain fairly constant, e.g., a grassy field in which all the elements in the region are expected to have the same mean value but not necessarily a constant value.
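A homogeneous region's first-order stationarity (a constant mean intensity level over the region) can be checked numerically. The following is a rough sketch only; the block size and tolerance are arbitrary choices of ours, not values from the paper:

```python
import numpy as np

def is_first_order_stationary(region: np.ndarray, block: int = 8,
                              tol: float = 0.5) -> bool:
    """Crude first-order stationarity test: tile the region into
    block x block cells and require every cell mean to lie within
    tol pooled standard deviations of the overall region mean."""
    mu, sigma = region.mean(), region.std()
    h, w = region.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            cell = region[i:i + block, j:j + block]
            if abs(cell.mean() - mu) > tol * sigma:
                return False
    return True
```

Under this test, the grassy field of the example passes (constant mean, fluctuating values), while a region straddling two materials with different mean signatures fails.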

There exist variations in the intensity level within a homogeneous region. Neglecting the possibility of sensor noise, this signature variation can be attributed to either scene resolution constraints or texture variation within the region. Scene resolution constraints can cause a perturbation in the signature of the

* We define a homogeneous region to be a set of spatially connected pixels or elements which possess the statistical property of at least first-order stationarity (constant mean intensity level over the region) and possibly second-order stationarity (mean and variance constant and autocorrelation independent of position).

428 / SPIE Vol. 238 Image Processing for Missile Guidance (1980)

region. For instance, one can consider the grassy field not to be uniform but instead to have a few fallen tree trunks and shrubs dispersed within it. If the ground resolution of the sensor is of the same magnitude as the size of the shrubs and tree trunks, then we would expect variations in the intensity of the grassy regions due to these objects.* It should be noted that if the resolution of the sensor were to increase to the point that dimensions of objects within the grassy field covered several sensor resolution elements, then these objects would be considered homogeneous regions in themselves. In our tree trunk example, further increase in sensor resolution would result eventually in the moss on the fallen tree trunk becoming a homogeneous region. Obviously, the process of identifying homogeneous regions could continue ad infinitum as the sensor resolution was increased.

Thus, we can further categorize a homogeneous region in the physical domain by the number of resolution elements containing objects which contribute to a signature variation, and in the statistical domain by the number of statistically independent elements which comprise the region. The "scene resolution" concept (11) provides a useful framework for analyzing the statistical variation of a region.** We shall define this scene resolution as the ratio of the average number of sensor resolution cells to that required to make up the equivalent of one independent element in the imaged map. As discussed above, sensor resolution constraints are one contributing factor to "scene resolution"; the other is texture.
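The ratio N/NI described above can be estimated from the image autocorrelation. The sketch below uses a lag-1, AR(1)-style effective-sample-size approximation per axis; this particular estimator is our assumption for illustration, not the method of reference 11:

```python
import numpy as np

def scene_resolution(image: np.ndarray) -> float:
    """Estimate the 'scene resolution' ratio N / N_I, where N is the
    total number of resolution cells and N_I the equivalent number of
    statistically independent elements.  N_I is approximated from the
    lag-1 autocorrelation in each axis via n_eff = n * (1 - rho) / (1 + rho)."""
    z = (image - image.mean()) / image.std()
    h, w = image.shape
    rho_y = float(np.clip(np.mean(z[:-1, :] * z[1:, :]), 0.0, 0.999))
    rho_x = float(np.clip(np.mean(z[:, :-1] * z[:, 1:]), 0.0, 0.999))
    n_i = (h * (1 - rho_y) / (1 + rho_y)) * (w * (1 - rho_x) / (1 + rho_x))
    return (h * w) / n_i
```

White noise gives a ratio near 1 (every cell is independent); a blurred or textured scene, in which neighboring cells are correlated, gives a ratio well above 1.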

Texture, caused by physical and electrical material variations, can exist even within purely homogeneous regions. The three primary sources of homogeneous material texture are: illuminator-target-detector geometry, which includes slope and slope azimuth; directional reflectance and absorptance described by electromagnetic theory (Fresnel's equations); and surface roughness effects. Texture produced by these processes can be virtually resolution independent in comparison to that observed within a resolution-dependent homogeneous region (i.e., see previous discussion). A more detailed presentation of homogeneous material texture is given in the appendix.
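The Fresnel mechanism cited above can be made concrete. This sketch computes the unpolarized power reflectance at a smooth dielectric interface as a function of incidence angle; the default refractive indices are arbitrary illustration values, not material data from the paper:

```python
import numpy as np

def fresnel_reflectance(theta_i_deg: float, n1: float = 1.0,
                        n2: float = 1.5) -> float:
    """Unpolarized Fresnel power reflectance at a smooth dielectric
    interface: the average of the s- and p-polarized amplitude
    coefficients squared.  theta_i_deg is the incidence angle."""
    ti = np.radians(theta_i_deg)
    sin_t = n1 * np.sin(ti) / n2
    if sin_t >= 1.0:                       # total internal reflection
        return 1.0
    tt = np.arcsin(sin_t)                  # transmission angle (Snell's law)
    rs = (n1 * np.cos(ti) - n2 * np.cos(tt)) / (n1 * np.cos(ti) + n2 * np.cos(tt))
    rp = (n1 * np.cos(tt) - n2 * np.cos(ti)) / (n1 * np.cos(tt) + n2 * np.cos(ti))
    return float(0.5 * (rs ** 2 + rp ** 2))
```

The strong angular dependence (a few percent near normal incidence, rising toward unity at grazing angles) is one reason slope and slope azimuth enter the illuminator-target-detector geometry term.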

2.2 Signature dynamics

In order to estimate the intensity magnitude and oscillations that occur in sensor imagery, it is first necessary to understand the relevant physical and electrical material properties and governing atmospheric and meteorological parameters present. A summary of the governing material properties for each sensor region is given in Table 1. Similarly, relevant atmospheric and meteorological parameters for each spectral region are given in Table 2.

Contrast reversals are of importance to the mission planner because of the potentially decorrelating effect they can have on map-matching system performance. A summary of the relevant parameters in each spectral region that can induce these effects is given in Table 3. The diurnal and seasonal impact on reference area signature characteristics is also important since it provides the mission planner with a time-frame estimate of when region level shifts, hence contrast reversals, are likely to occur. A summary of the time-cycle impact on reference area signature characteristics for each spectral region is given in Table 4.

The impact of physical and electrical material properties and atmospheric and meteorological effects on time-varying reference area signature characteristics will now be presented for each sensor region. The impact of snow/ice/water on the reference area signature will not be considered here. An estimate of the magnitude of contrast reversals it can induce within typical reference areas is given in Section 2.3.

Passive optical/near IR. The governing material and atmospheric properties in the passive optical/near IR interval (0.4µ - 1.6µ) are short wavelength reflectance, incident irradiance, atmospheric attenuation, and path radiance, respectively. Contrast reversals in this spectral region are primarily due to changes in material reflectance arising from seasonal effects of the vegetation growth cycle.

The atmospheric effects, particularly attenuation and path radiance, govern the degree of observed con-trast for a given imaged scene. The effect of atmospheric attenuation is to uniformly reduce the receivedradiance across the scene. Path radiance, however, introduces additive energy into the imaged resolutionelement via direct or indirect atmospheric scattering that originated outside of it. The net effect of thesetwo terms is to lower the observed scene signal -to -noise ratio (SNR) for a given sensor. They are usuallythe limiting operational factors in this wavelength region. Complicating operational performance pre-dictions in this interval is the fact that the values of most of the governing material and atmosphericproperties are generally a strong function of wavelength.

Passive middle IR. The governing material properties in the passive middle IR (3µ -' 5µ) region aremiddle IR reflectance (hence emittance) and thermal inertia (thermal conductivity over the square root ofthermal diffusivity). The predominant atmospheric properties are attenuation, and

Presuming, of course, that the signature of the objects was different from the grass at the wavelength ofthe sensor.

** The scene resolution is computed by determining the number of independent (NI) elements in the imageand then dividing this quantity into N, the total number of resolution elements in the scene (i. e. , N /NI)

SP /E Vol 238 /mage Processing for Missile Guidance (1980) / 429

ALMOST EVERYTHING ONE NEEDS TO KNOW ABOUT IMAGE MATCHING SYSTEMS

region. For instance, one can consider the grassy field not to be uniform but instead to have a few fallen tree trunks and shrubs dispersed within it. If the ground resolution of the sensor is of the same magnitude as the size of the shrubs and tree trunks, then we would expect variations in the intensity of the grassy regions due to these objects.* It should be noted that if the resolution of the sensor were to increase to the point that the dimensions of objects within the grassy field covered several sensor resolution elements, then these objects would be considered homogeneous regions in themselves. In our tree trunk example, further increase in sensor resolution would eventually result in the moss on the fallen tree trunk becoming a homogeneous region. Obviously, the process of identifying homogeneous regions could continue ad infinitum as the sensor resolution was increased.

Thus, we can further categorize a homogeneous region in the physical domain by the number of resolution elements containing objects which contribute to a signature variation, and in the statistical domain by the number of statistically independent elements which comprise the region. The "scene resolution" concept (11) provides a useful framework for analyzing the statistical variation of a region.** We shall define this scene resolution as the ratio of the average number of sensor resolution cells to that required to make up the equivalent of one independent element in the imaged map. As discussed above, sensor resolution constraints are one contributing factor to "scene resolution", the other being texture.
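The scene-resolution idea can be sketched numerically. The following is a minimal, illustrative estimator, not the formal procedure of reference 11: it approximates the number of statistically independent elements from a 1-D autocorrelation decorrelation length along each image axis, then reports pixels per independent element (N/N_I). The 1/e threshold and the row/column averaging are simplifying assumptions.

```python
import numpy as np

def scene_resolution(image, corr_threshold=1 / np.e):
    """Rough estimate of 'scene resolution' N / N_I for a 2-D array.

    N_I is approximated from the autocorrelation decorrelation
    length of each row and column (a simplifying assumption, not
    the formal definition of reference 11).
    """
    def decorr_length(profile):
        # Normalized autocorrelation of a mean-removed 1-D profile.
        x = profile - profile.mean()
        ac = np.correlate(x, x, mode="full")[x.size - 1:]
        ac = ac / ac[0]
        # First lag where correlation drops below the threshold.
        below = np.nonzero(ac < corr_threshold)[0]
        return below[0] if below.size else x.size

    rows = np.mean([decorr_length(r) for r in image])
    cols = np.mean([decorr_length(c) for c in image.T])
    n = image.size
    n_indep = n / (rows * cols)   # estimated independent elements
    return n / n_indep            # pixels per independent element

# White noise yields roughly one pixel per independent element;
# a block-correlated (textured) field yields more.
rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
print(scene_resolution(noise))
```

A field with real texture (here mimicked by replicating each random value over a 4x4 block) produces a noticeably larger value, which is the statistical-domain redundancy the text describes.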

Texture, caused by physical and electrical material variations, can exist even within purely homogeneous regions. The three primary sources of homogeneous material texture are: illuminator-target-detector geometry, which includes slope and slope azimuth; directional reflectance and absorptance described by electromagnetic theory (Fresnel's equations); and surface roughness effects. Texture produced by these processes can be virtually resolution independent, in comparison to that observed within a resolution-dependent homogeneous region (i.e., see previous discussion). A more detailed presentation of homogeneous material texture is given in the appendix.

2.2 Signature dynamics

In order to estimate the intensity magnitude and oscillations that occur in sensor imagery, it is first necessary to understand the relevant physical and electrical material properties and governing atmospheric and meteorological parameters present. A summary of the governing material properties for each sensor region is given in Table 1. Similarly, relevant atmospheric and meteorological parameters for each spectral region are given in Table 2.

Contrast reversals are of importance to the mission planner because of the potentially decorrelating effect they can have on map-matching system performance. A summary of the relevant parameters in each spectral region that can induce these effects is given in Table 3. The diurnal and seasonal impact on reference area signature characteristics is also important since it provides the mission planner with a time-frame estimate of when region level shifts, hence contrast reversals, are likely to occur. A summary of the time-cycle impact on reference area signature characteristics for each spectral region is given in Table 4.

The impact of physical and electrical material properties and atmospheric and meteorological effects on time-varying reference area signature characteristics will now be presented for each sensor region. The impact of snow/ice/water on the reference area signature will not be considered here. An estimate of the magnitude of contrast reversals it can induce within typical reference areas is given in Section 2.3.

Passive optical/near IR. The governing material and atmospheric properties in the passive optical/near IR interval (.4µ to 1.6µ) are short wavelength reflectance and incident irradiance, and atmospheric attenuation and path radiance, respectively. Contrast reversals in this spectral region are primarily due to seasonal changes in material reflectance driven by the vegetation growth cycle.

The atmospheric effects, particularly attenuation and path radiance, govern the degree of observed contrast for a given imaged scene. The effect of atmospheric attenuation is to uniformly reduce the received radiance across the scene. Path radiance, however, introduces additive energy into the imaged resolution element via direct or indirect atmospheric scattering that originated outside of it. The net effect of these two terms is to lower the observed scene signal-to-noise ratio (SNR) for a given sensor. They are usually the limiting operational factors in this wavelength region. Complicating operational performance predictions in this interval is the fact that the values of most of the governing material and atmospheric properties are generally a strong function of wavelength.
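The distinct effects of the two terms can be illustrated with a simple additive model, L_obs = tau * L_scene + L_path; this model and the numbers below are illustrative assumptions, not values from the paper.

```python
def observed_contrast(l_max, l_min, tau, l_path):
    """Michelson contrast of a two-level scene viewed through an
    atmosphere with transmittance tau and additive path radiance
    l_path (illustrative model: L_obs = tau * L_scene + l_path)."""
    o_max = tau * l_max + l_path
    o_min = tau * l_min + l_path
    return (o_max - o_min) / (o_max + o_min)

c0 = observed_contrast(10.0, 2.0, 1.0, 0.0)  # clear, no path radiance
c1 = observed_contrast(10.0, 2.0, 0.5, 0.0)  # attenuation only
c2 = observed_contrast(10.0, 2.0, 0.5, 3.0)  # attenuation + path radiance
```

With a ratio (Michelson) contrast, attenuation alone cancels out of the metric, although against fixed sensor noise it still lowers the SNR; path radiance reduces the ratio contrast directly, which is why the two together limit operation in this band.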

Passive middle IR. The governing material properties in the passive middle IR (3µ to 5µ) region are middle IR reflectance (hence emittance) and thermal inertia (thermal conductivity over the square root of thermal diffusivity). The predominant atmospheric properties are attenuation and

* Presuming, of course, that the signature of the objects was different from the grass at the wavelength of the sensor.

** The scene resolution is computed by determining the number of independent elements (N_I) in the image and then dividing this quantity into N, the total number of resolution elements in the scene (i.e., N/N_I).

SPIE Vol. 238 Image Processing for Missile Guidance (1980) / 429


Table 1. Governing Physical and Electrical Material Properties (Decreasing Order of Importance)

Optical/Near IR
  P: Surface Roughness and Imaging Geometry - .4µ-1.6µ Reflectance
  A: Surface Roughness and Imaging Geometry - .4µ-1.6µ Reflectance

Middle IR
  P: Thermal Inertia - 3µ-5µ Reflectance; Imaging Geometry - .4µ-1.6µ Absorptance; Surface Roughness - 3µ-5µ Emittance
  A: Surface Roughness and Imaging Geometry - 3µ-5µ Reflectance

Thermal IR
  P: Thermal Inertia - .4µ-1.6µ Absorptance; Imaging Geometry; Surface Roughness - 8µ-12µ Emittance
  A: Surface Roughness and Imaging Geometry - 8µ-12µ Reflectance

Microwave
  P: Surface Roughness - Microwave Reflectance and Emittance; Imaging Geometry; Thermal Inertia - .4µ-1.6µ Absorptance
  A: Surface Roughness and Imaging Geometry - Microwave Reflectance

* A = Active system; P = Passive system. Directional electrical properties exist in each case, which vary with surface roughness and imaging (illuminator-surface-sensor) geometry.

Table 2. Atmospheric and Meteorological Impact on Sensed Imagery*

Optical/Near IR
  P: Small to strong for path radiance and attenuation.
  A: Small to strong for path radiance and attenuation.

Middle IR
  P: Small to moderate for path radiance. Small to strong for attenuation. Small to moderate for reradiation. Small to moderate for latent and sensible heat transfer, depending on wind speed, precipitable water, and atmospheric and ground temperatures.
  A: Small to strong for attenuation.

Thermal IR
  P: Small for path radiance. Small to strong for attenuation and reradiation, depending on the species, concentration, diameter and temperature of the aerosol distribution present. Small to moderate for latent and sensible heat transfer, depending on wind speed, precipitable water content and atmospheric and ground temperatures.
  A: Small to strong for attenuation.

Microwave
  P: Small for attenuation unless rain is present. Small to moderate for oxygen or water absorption and reradiation, depending on the species concentration present. Small to moderate for latent and sensible heat transfer (impacts moisture availability, hence material emittance and thermal energy balance).
  A: Small for attenuation unless rain is present.

* Assumes operation under cloud cover with no precipitation. Atmospheric attenuation is dependent on the species, concentration and diameter of the aerosol distributions present and on atmospheric pressure (which governs molecular species concentration).


Table 3. Sources of Image Contrast Reversal*

Optical/Near IR
  P and A: Optical/Near IR Vegetation Reflectance

Middle IR
  P: Material Thermal Inertia; Diurnal 3µ-5µ Solar Irradiance Component
  A: Middle IR Vegetation Reflectance

Thermal IR
  P: Thermal Inertia
  A: Thermal IR Vegetation Reflectance

Microwave
  P: Atmospheric Reradiation; Thermal Inertia
  A: Microwave Vegetation Reflectance

* The snow/ice/water complex can produce contrast reversals in each imaging region.

Table 4. Diurnal and Seasonal Environmental Impact on Sensed Imagery

Optical/Near IR
  P, Diurnal: Small to strong (depends on the spectral and absolute level of illumination the imagery is obtained under).
  P, Seasonal: Small for spectral irradiance changes (sun's declination angle) but moderate for illumination level. Moderate to strong over the vegetation cycle.
  A, Seasonal: Moderate to strong over the vegetation cycle.

Middle IR
  P, Diurnal: Strong; short and middle wavelength irradiance drives thermal inertia and the direct reflected middle IR component.
  P, Seasonal: Moderate for spectral and absolute irradiance level (hence thermal inertia) differences from the sun's declination angle. Small to moderate over the vegetation cycle.
  A, Seasonal: Small to moderate over the vegetation cycle.

Thermal IR
  P, Diurnal: Strong; short wavelength irradiance drives thermal inertia.
  P, Seasonal: Same as passive Middle IR.
  A, Seasonal: Small to moderate over the vegetation cycle.

Microwave
  P, Diurnal: Small to moderate for high microwave emittance objects (thermal inertia can dominate). Small for low microwave emittance objects (sky temperature dominates).
  P, Seasonal: Small to moderate for high microwave emittance objects (sun's declination angle). Small for low microwave emittance objects. Small to moderate over the vegetation cycle, depending on canopy and soil moisture content.
  A, Seasonal: Moderate over the vegetation cycle, depending on the backscatter coefficient.


reradiation. The principal meteorological interaction parameters are latent and sensible heat transfer (evaporation and convection respectively). The material thermal inertia is related to the net rate of heat exchange at the air/material interface. Thermal inertia effects driven by the absorbed short wavelength incident irradiance during the daytime generally dominate the emitted power component at night. During the day, both reflected solar middle IR and thermal inertia components contribute to the observed signature. Contrast reversals in this spectral region are primarily related to material thermal inertia: the smaller the magnitude of this parameter, the greater the temperature (hence emissive power) oscillation. Contrast reversals can also be induced in this spectral region when a large solar and atmospheric 3µ-5µ flux is present, coupled with a low to moderate material surface temperature. Here, the time-varying downwelling flux passes through a cycle of small to large to small coincident with the solar zenith angle. If the 3µ-5µ solar reflected component is larger than that due to material emission, a time-varying contrast reversal can result.
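Thermal inertia as defined above (thermal conductivity over the square root of thermal diffusivity) reduces algebraically to sqrt(k * rho * c). A short sketch follows; the material property values are rough, handbook-style illustrative assumptions, not data from the paper.

```python
import math

def thermal_inertia(k, rho, c):
    """Thermal inertia P = k / sqrt(alpha), alpha = k / (rho * c),
    which reduces to sqrt(k * rho * c)  [J m^-2 K^-1 s^-1/2]."""
    alpha = k / (rho * c)          # thermal diffusivity, m^2/s
    return k / math.sqrt(alpha)

# Approximate illustrative values:
#   water:    k ~ 0.6 W/m/K, rho ~ 1000 kg/m^3, c ~ 4186 J/kg/K
#   dry soil: k ~ 0.3 W/m/K, rho ~ 1500 kg/m^3, c ~ 800  J/kg/K
p_water = thermal_inertia(0.6, 1000.0, 4186.0)
p_soil = thermal_inertia(0.3, 1500.0, 800.0)

# The lower-inertia material (dry soil) exhibits the larger diurnal
# surface-temperature, hence emissive-power, oscillation.
assert p_soil < p_water
```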

The limiting atmospheric case due to attenuation occurs when an image is obtained through an atmosphere with a moderate to large diameter aerosol distribution. In this spectral interval atmospheric attenuation from aerosols usually predominates over reradiation from precipitable water during the daytime, but at night the reverse is possible.

Sensible and latent heat transfer can impact the imaged spectral signature in this region by altering the ground temperature. This in turn impacts the image signature, particularly at night when the ground emission component predominates.

Passive thermal IR. The governing material properties in the passive thermal IR (8µ to 14µ) region are short wavelength reflectance (typically .4µ to 1.6µ), thermal IR emittance and thermal inertia. The principal atmospheric properties and meteorological interaction parameters are identical to those in the middle IR region.

In this spectral region, material thermal inertia is the sole cause of observed ground signature contrast reversals. Since a negligible amount of thermal IR energy emitted by the sun penetrates the atmosphere, short wavelength solar-irradiance-driven thermal inertia effects dominate the image over the diurnal cycle.

Atmospheric attenuation in a dry, cloud-free atmosphere is small in this spectral region. A substantial amount of reradiation (hence image contrast reduction) can occur, however, when a humid, warm atmosphere is present, due to increasing emissive power with precipitable water and atmospheric temperature. Such conditions will often form the cloud-free limiting case for sensor operation in this spectral region.

Sensible and latent heat transfer becomes important when a significant difference between atmospheric and ground temperature exists, coupled with a non-zero wind speed and relative humidity. These heat transfer components can produce a noticeable signature oscillation for a reference area imaged under widely varying meteorological conditions. Furthermore, the magnitudes of these parameters are often difficult to evaluate because the necessary ground truth data are lacking.

Passive microwave. The governing material properties in the passive microwave imaging (0.3 cm to 3.0 cm) region are passive microwave reflectance and thermal inertia. The principal atmospheric parameter here is the contribution of precipitable water to the sky brightness temperature. Sensible and latent heat transfer components tend to have little impact on the observed signatures unless high microwave emittance materials predominate.

Contrast reversals are possible in this spectral interval in only two cases. The first involves materials with low microwave reflectances. Here, the material microwave emittance (times ground temperature) component predominates and the resulting energy balance, hence imagery, behaves similarly to that in the thermal IR region. In the second and much rarer case, a reversal will occur when the sky brightness temperature is greater than the material temperature. This is generally only possible under cloud cover conditions when a substantial amount of precipitable water exists along with a low to moderate ground temperature (~273°K to ~290°K). Here, the emitted energy from the precipitable water becomes greater than that from the reference material. As a consequence, materials with a high microwave reflectance (i.e., metal and water) can have greater apparent brightness temperatures than those with a high microwave emittance (i.e., soil). This results in a reversal relative to the expected case where a dry atmosphere is present.
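The sky-brightness-temperature reversal described above can be checked with a zero-order apparent brightness temperature model, T_B = e * T_ground + (1 - e) * T_sky, for an opaque surface; the emittance and temperature values below are illustrative assumptions.

```python
def brightness_temp(emittance, t_ground, t_sky):
    """Zero-order apparent microwave brightness temperature:
    emitted term plus reflected sky term (opaque surface)."""
    return emittance * t_ground + (1.0 - emittance) * t_sky

T_GROUND = 275.0          # K, low-to-moderate ground temperature
SOIL, METAL = 0.9, 0.05   # illustrative microwave emittances

# Dry atmosphere: cold effective sky, so the highly reflective
# metal appears much colder than soil.
dry_soil = brightness_temp(SOIL, T_GROUND, 60.0)
dry_metal = brightness_temp(METAL, T_GROUND, 60.0)

# Cloud cover with high precipitable water: warm effective sky,
# so the metal now appears warmer than the soil.
wet_soil = brightness_temp(SOIL, T_GROUND, 285.0)
wet_metal = brightness_temp(METAL, T_GROUND, 285.0)

# The soil-metal contrast reverses sign between the two conditions.
assert dry_soil > dry_metal and wet_metal > wet_soil
```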

Contrary to general belief, the only materials that cannot exhibit the first type of contrast reversal discussed above in the 35 GHz and 94 GHz bands are metal and water, since most materials possess high microwave emittances in these regions at small scan angles. As a consequence, regional error shifts (and in some cases contrast reversals) can result, since many common materials (i.e., vegetation, soil, concrete and rock) exhibit microwave emittance, hence thermal inertia dominated, time-varying oscillations. Prediction of vegetation and soil signature magnitudes and their oscillations can be very difficult, however, because of the impact of moisture availability on microwave material emittance. As in the infrared regions, additional instability in the microwave signature can occur due to atmospheric reradiation effects, particularly for metal and water, which possess low and moderate microwave emittances respectively. Passive microwave signature variations for these materials are generally much larger than in the infrared for similar conditions that produce atmospheric reradiation.


Active systems. For imaging lasers and radars the governing material electrical property is reflectance (or the backscatter coefficient). Atmospheric absorption and scattering (attenuation) is often the limiting environmental factor for laser imaging systems, although it is usually small for radars. These systems are at least directly insensitive to many of the complex time-varying physical atmospheric and meteorological effects that impact passive systems (i.e., thermal inertia, solar irradiance, and latent and sensible heat transfer).

An additional class of active sensors exists that uses the spectral transmitted beam in a phase-modulated carrier or range-gated form. The advantage of these sensors is that they can be relatively insensitive to all material and meteorological properties and generally are limited only by atmospheric effects. In the first case a frequency-modulated signal is placed on an optical laser carrier beam. Very accurate target ranging and depth information is possible by detecting the phase front distortions of the returned beam induced by the object.

The second type of system is operated in a ranging form by measuring the two-way propagation time to the ground or target (down- and forward-looking respectively). A common down-looking form of this system is the radar altimeter used in Terrain Contour Matching (TERCOM). A widely used forward-looking form is the laser rangefinder used in tactical armored vehicles.
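For the range-gated (altimeter or rangefinder) case, range follows directly from the measured two-way propagation time; this one-line relation is standard physics, not specific to the paper.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(t_seconds):
    """One-way range from a measured two-way propagation time."""
    return C * t_seconds / 2.0

# A radar altimeter measuring a 2-microsecond round trip is at
# roughly 300 m above the terrain.
alt = range_from_round_trip(2e-6)
```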

For both types of systems two governing performance factors exist. First, the reflected object energy must be high enough to produce an acceptable SNR. This often limits the operational distance because of beam dispersion and atmospheric effects (lasers only). The second involves the beam pattern itself. If it is too large in diameter relative to the imaged object, phase information becomes ambiguous for the first type of system. For the altimeter or rangefinder system this also poses a problem due to increasing uncertainty in knowing which object produced the first or strongest return. Their principal disadvantages include hardware complexity and lack of maturity (phase-modulated laser) and potential inaccuracy (altimeters and rangefinders) against point targets due to reference imaging requirements and their usual one-dimensional configuration. When targeting conditions permit and operation essentially invariant to environmental conditions is necessary, these two types of sensor systems should be strongly considered.

Atmospheric and meteorological parameters. A summary of the relevant atmospheric and meteorological parameters affecting sensed terrain imagery is given in Table 2. Here, molecular absorption has not been directly considered. It is at least implied, however, since the atmospheric windows utilized for remote sensing exist in regions where these effects are small. Molecular absorption band characteristics vary with temperature and pressure for a given species. Aerosol absorption and scattering are less specific, since they also vary with the diameter distribution present.

From Table 2 it is apparent that path radiance effects caused by aerosol water decrease noticeablybeyond the optical /near and middle IR regions. This is a result of the aerosol diameter distributions typi-cally present and the small amount of solar irradiance that exists in the thermal IR and passive microwaveregions. Reradiation becomes increasingly important with wavelength, and in passive microwave imagery itis the dominant rain -free relevant atmospheric parameter. Latent and sensible heat transfer are the pre-dominant meteorological remote sensing parameters, and can have a moderate impact on the resultingenergy balance present in middle and thermal IR imaging and alter the resulting emittances of some passivemicrowave materials (particularly soil).

Time -cycle impact. Large oscillations and possibly contrast reversals in material signatures often occurduring diurnal and seasonal time -frames. A summary of the relevant phenomena for active and passivesensor systems is given in Table 4. Two factors are evident from the data given. First, the performanceof each passive sensor system can be altered by the level and spectral distribution of incident solar irradi-ance in the atmosphere and at the ground plane. Second, the spectral reflectance, thermal inertia, andmoisture availability associated with vegetation growth cycles on land can significantly impact imaged signa-tures in every spectral band on a seasonal basis. Only phase -modulated or range -gated lasers and radarsappear to be relatively immune to this problem as long as deciduous trees are absent.

Two items have been omitted from Table 4 for simplicity. The magnitude and type of atmospheric andmeteorological effects present prior to and at the moment of imaging are represented by a joint diurnal -seasonal time cycle probability distribution function. Likewise, the presence of snow /ice /water within thereference area can also be described by another joint diurnal- seasonal probability distribution function.These two distributions are very complex (perhaps presently indeterminate) and only moderately correlatedwith time. Consequently, at best it is only possible to approximate the impact of these factors on spectral,time -varying reference area signatures.

2. 3 Map difference errors

From a systems point of view one can categorize all the map differences as affecting:

1. the spatial shape of homogeneous regions,2. the relative mean intensity levels between homogeneous regions, and3. the absolute intensity level of a region.

SP /E Vol 238 /mage Processing for Missile Guidance (1980) / 433

ALMOST EVERYTHING ONE NEEDS TO KNOW ABOUT IMAGE MATCHING SYSTEMS

Active systems. For imaging lasers and radars the governing material electrical property is reflectance (or the backscatter coefficient). Atmospheric absorption and scattering (attenuation) is often the limiting environmental factor for laser imaging systems, although it is usually small for radars. These systems are at least directly insensitive to many of the complex time-varying physical atmospheric and meteorological effects that impact passive systems (i.e., thermal inertia, solar irradiance, and latent and sensible heat transfer).

An additional class of active sensors exists that uses the transmitted spectral beam in a phase-modulated carrier or range-gated form. The advantage of these sensors is that they can be relatively insensitive to all material and meteorological properties and generally are only limited by atmospheric effects. In the first case a frequency-modulated signal is placed on an optical laser carrier beam. Very accurate target ranging and depth information is possible by detecting the phase front distortions of the returned beam induced by the object.

The second type of system is operated in a ranging form by measuring the two-way propagation time to the ground or target (down and forward-looking respectively). A common down-looking form of this system is a radar altimeter used in Terrain Contour Matching (TERCOM). A widely used forward-looking form is the laser rangefinder used in tactical armored vehicles.
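
The ranging principle behind both the radar altimeter and the laser rangefinder is a simple two-way time-of-flight measurement. A minimal sketch of the conversion from round-trip time to range (our illustration, not code from the paper):

```python
# Sketch: convert a measured two-way propagation time into a one-way range,
# as a radar altimeter or laser rangefinder does.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """One-way distance for a pulse whose echo returns after t_seconds."""
    return C * t_seconds / 2.0

# A 2-microsecond round trip corresponds to roughly 300 m of altitude.
altitude_m = range_from_round_trip(2e-6)
```

In a TERCOM-style system a sequence of such altitude samples along the flight path forms the terrain profile that is matched against the stored reference.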

For both types of systems two governing performance factors exist. First, the reflected object energy must be high enough to produce an acceptable SNR. This often limits the operational distance because of beam dispersion and atmospheric effects (lasers only). The second involves the beam pattern itself. If it is too large in diameter versus the imaged object, phase information becomes ambiguous for the first type of system. For the altimeter or rangefinder system this also poses a problem due to an increasing uncertainty in knowing the object that produced the first or strongest return. Their principal disadvantages include hardware complexity and lack of maturity (phase-modulated laser) and potential inaccuracy (altimeters and rangefinders) against point targets due to reference imaging requirements and their usual one-dimensional configuration. When targeting conditions permit and operation essentially invariant to environmental conditions is necessary, these two types of sensor systems should be strongly considered.

Atmospheric and meteorological parameters. A summary of the relevant atmospheric and meteorological parameters on sensed terrain imagery is given in Table 2. Here, molecular absorption has not been directly considered. It is at least implied, however, since the atmospheric windows utilized for remote sensing exist in regions where these effects are small. Molecular absorption band characteristics vary with temperature and pressure for a given species. Aerosol absorption and scattering are less specific, since they also vary with the diameter distribution present.

From Table 2 it is apparent that path radiance effects caused by aerosol water decrease noticeably beyond the optical/near and middle IR regions. This is a result of the aerosol diameter distributions typically present and the small amount of solar irradiance that exists in the thermal IR and passive microwave regions. Reradiation becomes increasingly important with wavelength, and in passive microwave imagery it is the dominant rain-free relevant atmospheric parameter. Latent and sensible heat transfer are the predominant meteorological remote sensing parameters, and can have a moderate impact on the resulting energy balance present in middle and thermal IR imaging and alter the resulting emittances of some passive microwave materials (particularly soil).

Time-cycle impact. Large oscillations and possibly contrast reversals in material signatures often occur during diurnal and seasonal time-frames. A summary of the relevant phenomena for active and passive sensor systems is given in Table 4. Two factors are evident from the data given. First, the performance of each passive sensor system can be altered by the level and spectral distribution of incident solar irradiance in the atmosphere and at the ground plane. Second, the spectral reflectance, thermal inertia, and moisture availability associated with vegetation growth cycles on land can significantly impact imaged signatures in every spectral band on a seasonal basis. Only phase-modulated or range-gated lasers and radars appear to be relatively immune to this problem as long as deciduous trees are absent.

Two items have been omitted from Table 4 for simplicity. The magnitude and type of atmospheric and meteorological effects present prior to and at the moment of imaging are represented by a joint diurnal- seasonal time cycle probability distribution function. Likewise, the presence of snow/ice/water within the reference area can also be described by another joint diurnal-seasonal probability distribution function. These two distributions are very complex (perhaps presently indeterminate) and only moderately correlated with time. Consequently, at best it is only possible to approximate the impact of these factors on spectral, time-varying reference area signatures.
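
Such an approximation can be sketched numerically. The code below assumes a hypothetical, invented joint diurnal-seasonal probability table for snow/ice/water presence; the state names and values are illustrative only, not from the paper:

```python
# Hypothetical sketch: approximate the probability that snow/ice/water
# perturbs an imaging attempt, given an assumed joint diurnal-seasonal
# probability table (all values invented for illustration).
p_snow_ice_water = {
    ("winter", "day"): 0.60, ("winter", "night"): 0.70,
    ("spring", "day"): 0.25, ("spring", "night"): 0.30,
    ("summer", "day"): 0.05, ("summer", "night"): 0.10,
    ("fall",   "day"): 0.15, ("fall",   "night"): 0.20,
}

def expected_perturbation(mission_mix):
    """Expected fraction of perturbed attempts, where mission_mix maps
    (season, diurnal phase) states to the fraction of missions flown there."""
    return sum(w * p_snow_ice_water[state] for state, w in mission_mix.items())

# e.g. missions split evenly between winter nights and summer days:
p = expected_perturbation({("winter", "night"): 0.5, ("summer", "day"): 0.5})
```

Because the true joint distributions are, as noted, perhaps indeterminate, a table like this can only bound the impact rather than predict it.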

2.3 Map difference errors

From a systems point of view one can categorize all the map differences as affecting:

1. the spatial shape of homogeneous regions,

2. the relative mean intensity levels between homogeneous regions, and

3. the absolute intensity level of a region.

SPIE Vol. 238 Image Processing for Missile Guidance (1980) / 433



In the vernacular these effects are commonly referred to as nonstructured errors, contrast reversals, and predictive coding errors, respectively. A combination of these factors is generally present in sensor imagery and induces errors in the map-matching process due to their complex nature.

Nonstructured errors can be broken down into two categories. In the first case, the impact of the perturbations is predictable, although it may not be possible or desirable to prepare a large number of reference scenes for compensation. Errors in this class include shadows, which can lead to contrast reversals within the affected region. Their location can be calculated given the illuminator-target-vehicle geometry combined with the terrain topography. Errors of the second type are more difficult to predict, and hence to compensate for with prepared images. These errors include terrain areas obscured by clouds and snow/ice/water. Here, the joint probability space-time error distribution affecting the reference area (hence each resolution element) is virtually unknown.

The net effect of changes in the atmospheric, meteorological, and physical and electrical material properties is to produce variations in the intensity level of one homogeneous region relative to another. If the intensity level shifts are severe, contrast reversals between regions can result. An estimate of the expected range of contrast ratio reversals between representative natural materials is given in Table 5. Maximum values and the governing parameter are given in two cases for each spectral region. In the first case, contrast reversal ranges due to physical atmospheric and meteorological parameters are given. In the second case, those produced by snow/ice/water are presented.
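
A contrast ratio between two homogeneous regions, and a test for whether that contrast has reversed between the reference and sensed maps, can be sketched as follows (our illustration; the intensity values are hypothetical):

```python
import math

# Sketch: signed contrast between two region mean intensities in db,
# and a reversal test between reference-time and sensing-time pairs.
def contrast_db(i1: float, i2: float) -> float:
    """Signed contrast of region 1 relative to region 2, in db."""
    return 10.0 * math.log10(i1 / i2)

def contrast_reversed(ref_pair, sensed_pair) -> bool:
    """True if the brighter of the two regions swapped between the maps."""
    return contrast_db(*ref_pair) * contrast_db(*sensed_pair) < 0

# vegetation brighter than soil in the reference, darker when sensed:
reversed_ = contrast_reversed((1.5, 1.0), (0.8, 1.0))
```

A correlation-based matcher that assumes the reference sign of the contrast will score poorly wherever this test returns true, which is why the reversal magnitudes in Table 5 matter for algorithm choice.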

Table 5. Estimates of Contrast Reversal Magnitudes and Their Causes

Optical/Near IR, P and A:
    Normal: <4.4 db (vegetation/soil reflectance)
    Snow/ice/water induced: ~6.6 db (snow/soil reflectance)

Middle IR, P:
    Normal: <0.8 db or 1.2 X 10^-3 w/cm^2-Sr (soil thermal inertia)
    Snow/ice/water induced: ~1.2 db or 3.7 X 10^-3 w/cm^2-Sr (wet soil/soil)

Middle IR, A:
    Normal: <0.7 db (vegetation/soil reflectance)
    Snow/ice/water induced: ~0.9 db (snow/soil reflectance)

Thermal IR, P:
    Normal: <0.8 db or 1.4 X 10^-3 w/cm^2-Sr (soil thermal inertia)
    Snow/ice/water induced: ~1.6 db or 5.4 X 10^-3 w/cm^2-Sr (wet soil/soil)

Thermal IR, A:
    Normal: <0.4 db (vegetation/soil reflectance)
    Snow/ice/water induced: <0.5 db (snow/soil reflectance)

Microwave, P:
    Normal: <0.2 db or 1.6 X 10^-11 w/cm^2-Sr, Ka Band, clear or cloudy sky (soil thermal inertia)
    Snow/ice/water induced: ~2.4 db or 3.1 X 10^-10 w/cm^2-Sr, Ka Band, clear sky (wet snow/soil)

Microwave, A:
    Normal: possible but small (tree/field)
    Snow/ice/water induced: ~13 db, X Band (wet snow/soil)

(P = passive, A = active)

Strictly speaking, signature variations caused by snow/ice/water are predictive errors. The effect of this complex is to produce random space and time-varying signature boundaries, hence artificial homogeneous regions, within the reference area. As a result, contrast reversals can occur within the sensor image due to signature variations between homogeneous regions created by the snow/ice/water and those from the nominal, underlying material signature. Preprocessing techniques that emphasize homogeneous regions in the sensor scene can produce catastrophic map-matching failures when snow/ice/water are likely to exist.



For the passive optical/near IR and all active cases, the output is given in db change in material reflectance. For the passive middle and thermal IR, and microwave cases, results are presented in both watts/cm^2-steradian and db of radiance change between regions. Results are similarly given in the snow/ice/water cases except the signature of the perturbing state is compared directly to a nominal material. Results for the passive middle and thermal IR, and microwave cases were determined with the aid of a sophisticated atmospheric boundary layer model. Contrast reversal ranges were not computed for man-made materials because of the complexities introduced by geometry and internal heating (for the middle and thermal IR cases).

Contrast reversals produced by means other than snow/ice/water will first be examined. From Table 5, it is clear that the vegetation cycle can produce significant contrast reversals against soil (as well as other material) backgrounds for active and passive optical/near IR and active middle and thermal IR imaging systems. The largest reversals in the passive middle and thermal IR cases, however, are generally produced by solar irradiance driven thermal inertia differences between materials present. Contrast reversals can also occur in the passive microwave region due to the vegetation cycle, where the primary contributing factors are soil and plant moisture availability. Reversals or intensity shifts between homogeneous material regions are primarily produced in this spectral interval by moisture availability and thermal inertia effects which impact the microwave emittance and ground temperature (hence the emissive power ground component) depending on the climatic conditions present.

When contrast reversals due to predictive errors from snow/ice/water are examined, it is clear that the magnitudes produced by this cause are greater than those from the corresponding non-snow/ice/water (vegetation cycle and thermal inertia) cases in every instance. Although these values may serve as reasonable upper bounds, the mission planner should be aware that changes in the snow/ice/water state can produce substantial signature variations over a short to long time-frame due to the complicated physical and electrical properties of this material complex.

From this, it is clear that no imaging spectral region is immune from contrast reversal effects. It is possible, however, to reduce their magnitude, or in some cases eliminate them entirely, if careful nominal signature prediction is utilized together with criteria for eliminating regions where large signature oscillations will surely exist. A more detailed discussion of this problem is given in Section 3.3.

As indicated in Figure 3, a reference generation process is used to develop an image for map-matching purposes. Obviously, to ensure systems performance this processing step must have the highest degree of accuracy possible. Two types of predictive errors can arise from a less than perfect process. The first is the result of having to synthetically create imagery in a given spectral region when source data are unavailable. The second involves utilizing real or synthetic reference imagery selected or generated with one set of environmental parameters but used against another where a significant signature divergence has occurred. The mission planner should use a nominal rather than abnormal reference image when large signature perturbations are possible which cannot be accurately predicted.

When spectral band conversion is necessary the materials within the reference area must first be identified. The synthetic image signature is generated by using the known physical and electrical properties of the identified materials in conjunction with the specified illuminator-target-detector geometry.
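
This spectral-band conversion step can be sketched as a lookup from identified materials to predicted band intensities. The material table below is invented for illustration, and the sketch ignores the illuminator-target-detector geometry effects the paper includes:

```python
# Hypothetical sketch of spectral-band conversion: map a grid of identified
# materials to predicted intensities in a target band, using an assumed
# (invented) table of material reflectances in that band.
BAND_REFLECTANCE = {
    "vegetation": 0.45,
    "soil":       0.20,
    "concrete":   0.30,
    "rock":       0.25,
}

def synthesize_reference(material_map, illumination=1.0):
    """Turn a 2-D grid of material labels into predicted band intensities."""
    return [[illumination * BAND_REFLECTANCE[m] for m in row]
            for row in material_map]

ref = synthesize_reference([["soil", "vegetation"],
                            ["rock", "concrete"]])
```

Errors in the material-identification step or gaps in the property table propagate directly into the synthetic reference, which is the translation-error mechanism discussed below for Table 7.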

A compilation of factors influencing the accuracy of reference image prediction versus the actual scene signature is presented in Table 6. An estimate was made of the expected prediction errors for homogeneous regions within representative reference areas for each spectral region and is given in Table 7. Reasonable uncertainty values of perturbation components given in Tables 2, 4, and 6 were used to generate these estimated regional errors. Although these values should only be used as a guide, they can provide the mission planner with an estimate of which map-matching algorithms cannot be used with certain forms of spectral imagery. This is due to the performance breakdown of some algorithm classes with increased regional errors. The estimated regional errors in Table 7 include contributions from material identification where appropriate.

Results given in Table 7 were calculated using diurnal, seasonal, and yearly time-varying signature estimations for a hypothetical reference area composed of 45 percent vegetation, 30 percent soil, 20 percent concrete, and 5 percent rock. Snow/ice/water complex materials were excluded from this analysis. Vegetation possesses the only time-varying dielectric signature variation (excluding snow/ice/water) in the optical/near IR region. As a consequence the error bounds given in Table 7 for active and passive types in this interval should be evaluated accordingly when other vegetation proportions are present. Although not a factor for an active system, large actual versus predicted error bounds for passive optical/near IR systems can exist if diurnal operation is desired due to significant spectral irradiance variations present in day versus ambient night light.

As in the optical/near IR case, the primary source of estimated versus actual regional error bounds in active middle and thermal IR, and microwave images is due to the time-varying vegetation signature present. In these intervals, however, the general lack of source data necessitates using a material identification step in producing synthetic reference imagery. The resulting errors in this procedure coupled with the lack of a complete physical and electrical material properties catalog produce errors in the signature translation process.



Table 6. Parameter Error Impact on Intensity Estimate Accuracy (Decreasing Order of Importance)

Optical/Near IR:
    Passive: imaging weather, slope/slope azimuth, seasonal, moisture availability, surface variations, diurnal, reflectance knowledge
    Active: imaging weather, slope/slope azimuth, seasonal, surface variations, moisture availability, reflectance knowledge

Middle IR:
    Passive: imaging weather, thermal inertia, diurnal, pre-imaging weather, slope/slope azimuth, moisture availability, seasonal, subsurface variations, albedo knowledge, surface variations, emittance knowledge
    Active: imaging weather, slope/slope azimuth, surface variations, seasonal, reflectance knowledge, moisture availability

Thermal IR:
    Passive: imaging weather, thermal inertia, moisture availability, slope/slope azimuth, pre-imaging weather, seasonal, subsurface variations, albedo knowledge, surface variations, diurnal, emittance knowledge
    Active: imaging weather, slope/slope azimuth, surface variations, seasonal, moisture availability, reflectance knowledge

Microwave:
    Passive: moisture availability, thermal inertia, imaging weather, surface variations, emittance knowledge
    Active: slope/slope azimuth, surface variations, seasonal, moisture availability, reflectance knowledge

In the passive middle and thermal IR cases, the primary source of estimated versus actual regional error bounds is due to the ground emission component governed by the thermal inertia of the materials present. In addition to the large regional error present between most day/night pairs analyzed in these cases is the fact that a high degree of anticorrelation, indicative of the inherent contrast reversals, also existed. These effects are generally noted when materials with a moderate to high range of thermal inertias are present within a reference area. As in the active cases previously mentioned, material identification errors and data gaps in material property libraries also contribute to the regional errors present.

As previously discussed in this section, when materials with high microwave emittances are present within a reference area, the resulting time-varying passive microwave signature can behave similarly to that in the middle and thermal IR regions. If materials with moderate to low microwave emittances are present, the variation in the ground temperature component of the apparent brightness temperature due to material thermal inertia effects is damped, and a greater degree of regional error stability results.

In the case of range or phase-modulated sensors, the principal source of predicted versus actual regional errors is the time-varying nature of vegetation signatures (particularly deciduous trees) present within the reference area. A high degree of reference stability is possible with these sensor types if careful reference scene screening is utilized.

436 / SPIE Vol. 238 Image Processing for Missile Guidance (1980)


ALMOST EVERYTHING ONE NEEDS TO KNOW ABOUT IMAGE MATCHING SYSTEMS

Table 7. Estimated Versus Actual Regional Error Bounds

Spectral Interval                  Type   Low    High
Optical/Near IR                     P     15%    25%
                                    A     10%    20%
Middle IR                           P     20%    >100%
                                    A     15%    30%
Thermal IR                          P     15%    >100%
                                    A     10%    30%
Microwave                           P     15%    >100%***
                                    A     20%    35%
Range or Phase-Modulated Sensors    A     Small except when deciduous trees present

* Exclusive of snow/ice/water complex

** The average difference between the actual mean intensity level difference among regions and the predicted intensity level difference among regions divided by the actual intensity range period spread among regions.

*** When vegetation fraction is replaced by metal, error bound range is 15% to 30%.

2.4 Remedies

Contrast reversals, nonstructured errors, and predictive errors can cause map-matching performance degradations even when other error types (i.e., geometric distortions) are minimal. There are, however, four different remedy categories that can minimize the impact of these errors on map-matching system performance. They include: (1) accurate nominal signature prediction, (2) proper scene selection, (3) algorithm flexibility, and (4) adaptive performance prediction.

Accurate nominal signature prediction is desirable to reduce level shifts and hence minimize contrast reversals between homogeneous regions within the reference area. Errors in the signature model, in the choice of nominal atmospheric conditions, or in the material identification process (if utilized) will all contribute to reduced map-matching performance. Although preprocessing the reference and/or sensor scenes can potentially reduce the impact of global and local bias and gain changes, the results are quite sensitive to accurately predicting the correct time-varying spatial and intensity structure of the image. If applied improperly, preprocessing steps can actually reduce rather than increase system performance. An additional discussion of these factors is given in Section 3.
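The bias and gain preprocessing mentioned here can be sketched minimally: normalizing each scene to zero mean and unit variance makes a correlation score invariant to global level and gain shifts. This is a toy illustration under our own naming, not the authors' specific preprocessing chain:

```python
import numpy as np

def normalize(img, eps=1e-12):
    """Remove global bias (mean) and gain (std) so that a correlation
    score is invariant to linear intensity shifts of the whole scene."""
    img = np.asarray(img, dtype=float)
    return (img - img.mean()) / (img.std() + eps)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
shifted = 3.0 * scene + 7.0          # global gain and bias change

a, b = normalize(scene), normalize(shifted)
score = float((a * b).mean())        # ~1.0: the match survives the level shift
```

Note that this only addresses *global* shifts; as the text warns, local or region-dependent level changes require knowing the time-varying spatial structure of the image and can defeat such simple schemes.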

Proper scene selection is important for two major reasons. Areas that are prone to large signature variations in a given spectral region due to contrast reversals, nonstructured errors, or prediction errors should be identified and, if possible, eliminated in the scene selection process. As a consequence, an accurate reference scene screening procedure is desirable so that an estimate of the probability of false fix (Pff) can be determined under a variety of environmental conditions for a given area. It is necessary here to evaluate the area for intrascene redundancy under an expected operational SNR. If an unacceptably high Pff results, the candidate reference image should be rejected. A more detailed presentation of these topics is also given in Section 3.

It is desirable to utilize map-matching algorithms that offer a degree of insensitivity to environmental perturbations, geometric distortions and SNR while accurately locating the true match point. Each algorithm class (correlation, feature extraction, and hybrid) has its own advantages and disadvantages depending on the type of imagery processed and the magnitude of the distortions present. A more detailed discussion of this topic is given in Section 4.

Since map-matching algorithm performance begins to break down with increasing distortion present in sensor imagery, it is desirable to utilize a technique that provides a confidence estimate of the quality of the fix. A generally used method is to examine the surface statistics produced by the map-matching algorithm's correlation of reference and sensor scenes. However, a simple technique that does not compensate for the original scene properties or for the impact of the algorithm itself on the resulting surface distribution can be inaccurate when typical distortions are present. A more detailed discussion of adaptive performance prediction is given in Section 5.2.
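One simple (and, as the text warns, potentially naive) surface-statistic confidence measure is the ratio of the correlation peak to the largest sidelobe outside a small exclusion window. A hypothetical sketch, with the function name and window size chosen by us:

```python
import numpy as np

def fix_confidence(surface, exclude=2):
    """Crude match confidence: ratio of the correlation-surface peak to the
    largest value outside a small window around the peak. A ratio near 1
    suggests intrascene redundancy or a weak (possibly false) fix."""
    surface = np.asarray(surface, dtype=float)
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    masked = surface.copy()
    r0 = max(peak[0] - exclude, 0); r1 = peak[0] + exclude + 1
    c0 = max(peak[1] - exclude, 0); c1 = peak[1] + exclude + 1
    masked[r0:r1, c0:c1] = -np.inf          # exclude the main-lobe region
    second = masked.max()
    return surface[peak] / second if second > 0 else np.inf

# Sharp, isolated peak over a flat background -> high confidence
surf = np.full((21, 21), 0.1)
surf[10, 10] = 0.9
conf = fix_confidence(surf)
```

This is exactly the kind of uncompensated statistic the paragraph cautions about: it ignores both the scene's own autocorrelation structure and the matching algorithm's effect on the surface distribution.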


3. Reference map construction and scene selection process

Figure 4 describes the overall map construction and scene selection process. Several steps are necessary to develop a reference map from raw sensor data. In the first step of the process, it may be necessary to identify the scene material (especially if a different wavelength is to be utilized) and geometrically correct for other viewing aspects. Once this is accomplished the scene signature can be predicted and a nominal signature determined. Due to environmental factors and other time-varying changes inherent in the properties of the scene, it is also necessary to predict perturbations to the nominal that are likely to occur. Having completed the signature prediction task, it is necessary to construct the reference map and check via the scene selection process that it is adequate for the map-matching task. In constructing the reference map, in many cases it is necessary not only to predict intensity levels but (depending on the matching algorithm) also to identify homogeneous regions within the scene. Once this is accomplished the scene can be checked via mathematical techniques to ensure that it contains sufficient information for matching purposes and that the scene is sufficiently unique to avoid any major inter-scene redundancy problems. Finally, the reference scene must be tested via simulation to ensure that it is suitable under real-world conditions.

This section will briefly examine: 1) the conversion process, 2) the problems associated with signature prediction, 3) construction of the reference map, and 4) the scene selection process.

[Figure 4 block diagram: Raw Sensor Data → Conversion Process (Material I.D., Geometry) → Signature Prediction (Nominal, Perturbed; driven by environmental factors and variations in physical and electrical scene properties) → Reference Map Construction (Signature, Region Identification) → Scene Selection (Math Tests, Simulation Tests) → Reference Map]

Figure 4. Reference map construction and scene selection process

3.1 Conversion process

As discussed previously, the first phase of reference map construction generally involves conversion of the raw sensor data: 1) to the wavelength or frequency of the sensor onboard the vehicle, and 2) to the geometrical perspective from which the sensor is to view the scene. Because the raw data is generally not at the same wavelength as the sensor, it is necessary to estimate the material properties of each region of the scene. Since many materials have very similar broadband reflectance properties in the optical/near IR portion of the spectrum (from which most raw imagery originates), there may be significant misidentification errors which can create map-matching difference errors and ultimately degrade total system performance.

The other major, almost insurmountable, problem is to adjust the imagery for the geometric perspective from which the sensor is likely to view the scene. For systems which look directly down (down-looking systems) the geometry corrections are quite simple, since one can assume a flat-plane model for the ground. For other, non-down-looking systems, the geometric conversion process involves developing a 3-D target model from the original 2-D imagery and then creating a 2-D image at the anticipated perspective angle. Since the vehicle may not actually fly the nominal trajectory, non-down-looking systems are subject to geometric errors which require significant processing effort to remove.
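For the down-looking case, the flat-plane model reduces the geometric conversion to a single scale factor between ground distance and pixel offset. A minimal sketch (the function, parameter names, and values are illustrative assumptions, not from the paper):

```python
def ground_to_pixel(x, y, altitude, focal_len, pixel_pitch, cx, cy):
    """Flat-plane model for a nadir (down-looking) sensor: a ground point
    (x, y) metres from the point directly below the vehicle maps to pixel
    coordinates via one scale factor -- no 3-D terrain model needed."""
    gsd = altitude * pixel_pitch / focal_len   # ground sample distance, m/pixel
    return cx + x / gsd, cy + y / gsd

# e.g. 1000 m altitude, 50 mm lens, 10 micron pixels -> GSD = 0.2 m/pixel
u, v = ground_to_pixel(10.0, -4.0, 1000.0, 0.05, 10e-6, 256.0, 256.0)
```

The non-down-looking case cannot be collapsed to a single scale factor like this, which is why it demands the 3-D modeling and reprojection described above.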

3.2 Signature prediction

Signatures of the reference map need to be determined not only for the final reference map(s) stored in the vehicle for comparison, but also to test (via simulation) the performance of the system. Seasonal maps of the reference area may need to be developed and stored for use in at least some map-matching systems. Depending on the sensor wavelength and map-matching algorithm, it may also be necessary to store separate reference maps which account for diurnal, atmospheric, and meteorological effects on the reference scene image. The mission planner or reference scene evaluator is cautioned not to develop overly sophisticated signature models when an underspecified set of conditions will exist. Even worse is the case where poor guesses are made for certain input variable magnitudes, since in some cases this will result in a nominal reference signature with greater error than that from a simple model.


Perturbed signature variations from the nominal are required to test the performance of the system under a variety of diurnal, seasonal, atmospheric, and meteorological conditions. One should utilize this procedure to determine whether several reference maps will perform better under a variety of signature conditions than a single nominal signature prediction which is not accurate for any one scene condition but is designed to compensate for these variational effects.

3.3 Reference map construction

In the reference map construction area there are two questions which need to be addressed. First, what characteristics should the reference scene possess? Second, how should the reference scene be evaluated? In this subsection we shall briefly discuss the choice of a reference area. In the subsequent section we shall discuss the simpler question of reference scene evaluation.

Table 8 lists some of the characteristics in the ideal reference map case versus the real-world situation. If the ideal reference map characteristics shown in this table existed, then no reference screening or evaluation procedure would be necessary. Philosophically, one cannot control Mother Nature, nor can one obtain agreement (even if one could) on what scene characteristics (number of homogeneous regions, interpixel correlation length, etc.) are best for map-matching systems. The only sure thing that can be said about reference map construction is that certain signature characteristics should be avoided, and hence this is the major topic of the following discussion. Since many types of algorithms require that homogeneous regions or features be identified in the reference map, a brief discussion of automatic techniques for region extraction is included here.

Table 8. Ideal Versus Probable Reference Scene Characteristics

Ideal case:
- Error-free source data base
- No reference map preparation errors
- Reference scene contains: a single homogeneous region; no intra-scene redundancy; statistically independent scene elements; a simple statistical intensity distribution; a time- and space-invariant signature without contrast reversals

Probable case:
- Source data base has finite SNR, with environmental and geometric distortions present
- Datum plane transferral errors; imperfect material identification and signature models used in spectral translation
- Reference scene usually contains: several homogeneous regions; at least some intra-scene redundancy; interpoint scene element correlation; a complex statistical intensity distribution; a time- and space-varying signature with contrast reversals

Proper scene selection. Because of the complexity possessed by most spectral imagery and its nonlinear space- and time-varying signature characteristics, the reference scene selection process is less than an exact science. It is generally easier to make qualitative assessments as to desirable or undesirable signature physics traits. It is considerably more difficult, however, to determine exactly how good a candidate reference area is without rigorous evaluation, due to the statistical nature of expected environmental and geometric distortions, SNR effects, and intrascene redundancy.

The net effect of these degradations is to impact map-matching algorithm performance, and hence, the reliability of the fixing process itself. An examination of algorithm class sensitivity to SNR and contrast reversals is given in Section 4.3, while a review of fix performance estimation measures is presented in Section 5.2.

If a map-matching algorithm is utilized which is sensitive to contrast reversals (i.e., ordinary correlation metrics), then vegetation that exhibits strong time-varying growth variations should be omitted or minimized in update areas in every spectral interval. Similarly, it is also advisable to eliminate candidate update areas where low material thermal inertia and short wavelength reflectance in the passive middle and thermal IR, and microwave (for high microwave emittance materials) regions predominate, to avoid contrast reversal effects. From Table 5, it is clear that the snow/ice/water complex can adversely alter the reference area signature in each spectral interval. Obviously then, water bodies should only be included in


reference areas if they are unlikely to freeze, because of the moderate to large signature perturbations that can result in active and passive spectral imagery. Unless phase-modulated or range-gated systems are utilized, disastrous fixing results will often occur with ordinary correlation or feature matching algorithms when snow/ice/water is present and changes in complex state are expected.

If map-matching algorithms are used which are sensitive to SNR (primarily feature matching and, to a lesser degree, the hybrid processing approach), then regions where strong atmospheric and meteorological variations exist should be carefully evaluated. The impact of atmospheric parameters (particularly attenuation and aerosol scattering) typically decreases with increasing wavelength, but still generally forms the limiting operational case up to the thermal IR region (where reradiation becomes important). In the passive microwave region, reradiation from precipitable water can introduce small to large signature variations, particularly when materials of differing microwave emittances exist. Radars, however, are generally unaffected by all but the most severe atmospheric conditions.

Although meteorological effects (i.e., latent and sensible heat transfer) typically produce a smaller performance degradation than atmospheric ones, they directly impact the terrain signature in each spectral region when soil moisture is present by governing its rate of evaporation. For each active spectral region and the passive optical/near IR, this appears as a change in soil reflectance. In the passive middle and thermal IR, and microwave regions, soil moisture variations alter the emissive powers of the surface.

Soil moisture effects will generally impact ordinary correlation and feature matching algorithm performance the greatest, since their presence in sensor imagery is space- and time-varying and is often not equally proportioned within a homogeneous region. The impact of latent and sensible heat transfer for low soil moisture and high atmospheric precipitable water will generally be to reduce the dynamic range, and hence the contrast between homogeneous regions, in passive middle and thermal IR, and microwave imagery.

In some cases even these procedures will be inadequate to produce representative imagery for guidance updating purposes. Here, it may be necessary to select multiple reference images of the same area to compensate for diurnal and seasonal effects. From this, the mission planner can either select the most representative image in near real-time or store the set of multiple frames in the vehicle.

Diurnal variations in passive middle and thermal IR, and microwave imagery tend to be region-based. Seasonal variations, except those induced by snow/ice/water, tend to be interregional for all the candidate sensor types considered here. As a consequence, the hybrid map-matching algorithm is often desirable if an adequate SNR exists. From this, it is apparent that proper algorithm selection for a given sensor type can often simplify the task of nominal reference scene prediction. Conversely, using a sub-optimal algorithm will often place an overly stringent accuracy requirement on signature prediction, and significantly increase the time required for reference scene preparation.

Preprocessing reference and sensor images or using binary data correlation can reduce the impact of signature perturbation factors. As previously discussed, such schemes cannot be successfully utilized without a thorough understanding of the expected imaging physics, SNR, and geometric distortions present. If applied blindly, these techniques can often reduce, rather than enhance, guidance updating system performance.

Because of the inherent deficiencies in nominal signature prediction for a given sensor type, coupled with map-matching algorithm limitations, it is often desirable to employ adaptive performance prediction measures to estimate the quality of individual fixes. A discussion of possible performance prediction techniques for guidance updating applications and their limitations is given in Section 5.2.

Region extraction (12-22). Obviously, homogeneous regions or features in the scene can be found visually; however, when scenes are described digitally by large arrays of numbers, it is highly desirable to introduce some level of automation into the process. There are two different approaches for automatically extracting regions from scenes: 1) those based on edge operators, and 2) those based on the stationarity properties of the region.

Edge approaches apply gradient or Laplacian-type operators to the scene and then use threshold techniques to decide upon the existence of an edge (the boundary of a feature). The major danger in using these techniques is that noise and distortion can make it very difficult to locate edges in the sensor imagery.
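A minimal version of such an edge operator, using a 3x3 Laplacian and a fixed threshold (both choices are ours for illustration, not the paper's):

```python
import numpy as np

def laplacian_edges(img, thresh):
    """Apply a 3x3 Laplacian to interior pixels and threshold its magnitude
    to flag edge pixels (region boundaries)."""
    img = np.asarray(img, dtype=float)
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]     # up and down neighbours
           + img[1:-1, :-2] + img[1:-1, 2:])    # left and right neighbours
    return np.abs(lap) > thresh

# Two homogeneous regions separated by a vertical boundary at column 4
scene = np.zeros((8, 8)); scene[:, 4:] = 10.0
edges = laplacian_edges(scene, thresh=5.0)      # flags the boundary columns
```

On noisy imagery the fixed threshold becomes unreliable, which is exactly the danger the text points out.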

Homogeneous regions may also be located using the statistical property of stationarity (first order, con-stant mean level in the region; second -order, mean and variance constant and autocorrelation independent ofposition). In this area -based approach, one would grow regions of spatially connected pixels which wouldpossess this property. While this approach is less susceptible to problems of noise and distortions it iscomputationally more complex than the simpler edge operator techniques.

3.4 Scene selection

The scene selection process is concerned with choosing maps for which the probability of matching asensor image from within the reference map boundary is high. This process has both physical andmathematical implications. There will obviously be signature differences between the sensed image and itsreference map counterpart due to such factors as geometric, atmospheric, meteorological, diurnal, and

440 / SP/E Vol 238 Image Processing for Missile Guidance (1980)

CONROW, RATKOVIC

reference areas if they are unlikely to freeze because of the moderate to large signature perturbations that can result in active and passive spectral imagery. Unless phase-modulated or range gated systems are utilized, disastrous fixing results will often occur with ordinary correlation or feature matching algorithms when snow/ice/water is present and changes in complex state are expected.

If map-matching algorithms are used which are sensitive to SNR (primarily feature matching and to a lesser degree the hybrid processing approach), then regions where strong atmospheric and meteorological variations exist should be carefully evaluated. The impact of atmospheric parameters (particularly attenuation and aerosol scattering) typically decreases with increasing wavelength, but still generally forms the limiting operational case to the thermal IR region (where reradiation becomes important). In the passive microwave region, reradiation from precipitable water can introduce small to large signature variations, particularly when materials of differing microwave emittances exist. Radars, however, are generally unaffected by all but the most severe atmospheric conditions.

Although meteorological effects (i.e., latent and sensible heat transfer) typically produce a smaller performance degradation than atmospheric ones, they directly impact the terrain signature in each spectral region when soil moisture is present by governing its rate of evaporation. For each active spectral region and passive optical/near IR, this appears as a change in soil reflectance. In the passive middle and thermal IR, and microwave regions soil moisture variations alter the emissive powers of the surface.

Soil moisture effects will generally impact ordinary correlation and feature matching algorithm performance the greatest, since its presence in sensor imagery is space and time-varying and is often not equally proportioned within a homogeneous region. The impact of latent and sensible heat transfer for low soil moisture and high atmospheric precipitable water will generally be to reduce the dynamic range, hence contrast between homogeneous regions, in passive middle and thermal IR, and microwave imagery.

In some cases even these procedures will be inadequate to produce representative imagery for guidance updating purposes. Here, it may be necessary to select multiple reference images of the same area to compensate for diurnal and seasonal effects. From this, the mission planner can either select the most representative image in near real-time or store the set of multiple frames in the vehicle.

Diurnal variations in passive middle and thermal IR, and microwave imagery tend to be region-based. Seasonal variations except those induced by snow/ice/water tend to be interregional for all the candidate sensor types considered here. As a consequence, the hybrid map-matching algorithm is often desirable if an adequate SNR exists. From this, it is apparent that proper algorithm selection for a given sensor type can often simplify the task of nominal reference scene prediction. Conversely, using a sub-optimal algorithm will often place an overly stringent accuracy requirement on signature prediction, and significantly increase the time required for reference scene preparation.

Preprocessing references and sensor images or using binary data correlation can reduce the impact of signature perturbation factors. As previously discussed, such schemes cannot be successfully utilized without a thorough understanding of the expected imaging physics, SNR and geometric distortions present. If applied blindly, these techniques can often reduce, rather than enhance, guidance updating system performance.
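As a concrete illustration of why binary data correlation can suppress some signature perturbation factors, the sketch below (a hypothetical construction, not taken from the paper) binarizes each map at its own mean intensity; the resulting match score is invariant to a uniform gain and bias change, i.e., to a global error:

```python
import numpy as np

def binarize(image):
    """Threshold an intensity map at its own mean, yielding a +/-1 bit map."""
    return np.where(image >= image.mean(), 1, -1)

def binary_correlation(sensor, reference_patch):
    """Fraction of agreeing pixels between two binarized maps (1.0 = identical)."""
    s = binarize(sensor)
    r = binarize(reference_patch)
    return float(np.mean(s == r))

rng = np.random.default_rng(0)
scene = rng.normal(size=(16, 16))
# A gain and bias change (a global error) leaves the binarized map
# unchanged, so the binary correlation score is insensitive to it.
perturbed = 3.0 * scene + 5.0
print(binary_correlation(scene, perturbed))  # 1.0
```

Note, however, that binarization discards intensity-shape information within regions, which is exactly the trade-off the text warns about when such schemes are applied blindly.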

Because of the inherent deficiencies in nominal signature prediction for a given sensor type coupled with map-matching algorithm limitations, it is often desirable to employ adaptive performance prediction measures to estimate the quality of individual fixes. A discussion of possible performance prediction techniques for guidance updating applications and their limitations is given in Section 5.2.

Region extraction (12-22). Obviously homogeneous regions or features in the scene can be found visually; however, when scenes are described digitally by large arrays of numbers, it is highly desirable to introduce some level of automation into the process. There are two different approaches for automatically extracting regions from scenes: 1) those based on edge operators and 2) those based on the stationarity properties of the region.

Edge approaches apply gradient or Laplacian-type operators to the scene and then use threshold techniques to decide upon the existence of any edge (the boundary of a feature). The major danger in using these techniques is that noise and distortion can make it very difficult to locate edges in the sensor imagery.
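A minimal sketch of the edge-operator approach, assuming a simple central-difference gradient and a hand-picked threshold (these two choices are precisely the noise-sensitive steps the text warns about):

```python
import numpy as np

def gradient_edges(image, threshold):
    """Flag edge pixels where the gradient magnitude exceeds a threshold.

    Uses a central-difference gradient; a Laplacian operator could be
    substituted. In noisy sensor imagery the threshold choice becomes
    difficult, which is the major danger noted in the text.
    """
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

# Two homogeneous regions separated by an intensity step.
scene = np.zeros((8, 8))
scene[:, 4:] = 10.0
edges = gradient_edges(scene, threshold=1.0)
print(edges[:, 3:5].all(), edges[:, :3].any())  # True False
```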

Homogeneous regions may also be located using the statistical property of stationarity (first-order: constant mean level in the region; second-order: mean and variance constant and autocorrelation independent of position). In this area-based approach, one would grow regions of spatially connected pixels which would possess this property. While this approach is less susceptible to problems of noise and distortions, it is computationally more complex than the simpler edge operator techniques.
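A first-order stationarity test can drive a simple region-growing procedure. The sketch below is illustrative only: it grows a 4-connected region while each candidate pixel stays within a hypothetical tolerance of the running region mean; second-order tests (variance, autocorrelation) could be added at further computational cost:

```python
import numpy as np

def grow_region(image, seed, mean_tol):
    """Grow a 4-connected region from a seed pixel while first-order
    stationarity (roughly constant mean level) holds: a neighbor joins
    only if it deviates from the running region mean by at most mean_tol."""
    rows, cols = image.shape
    in_region = np.zeros_like(image, dtype=bool)
    in_region[seed] = True
    stack = [seed]
    total, count = float(image[seed]), 1
    while stack:
        r, c = stack.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not in_region[nr, nc]:
                if abs(image[nr, nc] - total / count) <= mean_tol:
                    in_region[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    stack.append((nr, nc))
    return in_region

scene = np.zeros((6, 6))
scene[:, 3:] = 10.0                       # two homogeneous regions
region = grow_region(scene, seed=(0, 0), mean_tol=1.0)
print(int(region.sum()))  # 18 (the left region only)
```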

3.4 Scene selection

The scene selection process is concerned with choosing maps for which the probability of matching a sensor image from within the reference map boundary is high. This process has both physical and mathematical implications. There will obviously be signature differences between the sensed image and its reference map counterpart due to such factors as geometric, atmospheric, meteorological, diurnal, and

440 / SPIE Vol. 238 Image Processing for Missile Guidance (1980)



ALMOST EVERYTHING ONE NEEDS TO KNOW ABOUT IMAGE MATCHING SYSTEMS


seasonal effects. These effects on system performance can be minimized in the extreme by either preparing the reference map to be a near real-time estimate of the sensor image at the moment the vehicle overflies the reference area or by developing scene-particular algorithms that are relatively invariant to the signature deviations between the sensor image and reference map. Realistically one must reach a compromise between these two extremes and develop a reference map which will reduce the signature deviations, especially in defining the boundaries of a homogeneous region, and utilize a matching algorithm that will compensate for any remaining signature differences between the two maps.

In general, successive screening techniques from simple math tests to full-blown simulations are chosen and used to evaluate the candidate reference area. Since computer processing requirements increase considerably with each screening step, it is desirable for unacceptable reference scenes to be identified and rejected before the final simulation analysis if possible.

The mathematical criteria for reference scene selection require that there be (1) sufficient information for map-matching and (2) a minimal amount of intrascene spatial redundancy within the reference map boundary. Techniques exist for measuring the information content in the scene to ensure that the sensor image (in terms of resolution elements) contains a sufficient number of independent elements. The more difficult, and as yet unresolved, issue is the determination of a measure of scene uniqueness. Equipped with such a measure it would be possible within a reference map boundary to test the ensemble of possible sensor images to determine the amount of intrascene spatial redundancy.

The locations and magnitudes of the secondary correlation peaks, determined by autocorrelating a particular sensor map over the reference map area, identify areas where there is a possible spatial redundancy problem. Two problems emerge from attempting to use this as a measure of the uniqueness problem. First, in real world imagery the magnitude of the intensity level shifts within the imagery may have a significant impact on the location of secondary peaks; thus this approach does not seem fruitful for measuring scene uniqueness. Second, this approach uses texture information within a region which may or may not be used in the matching algorithm; consequently, the results may be different when texture information is omitted.

The underlying spatial patterns in the map as designated by the size and shape of the homogeneous regions are the primary concern in dealing with the spatial redundancy problem. One method for measuring scene uniqueness would be to use the correlation surface associated with a specialized hybrid algorithm as a means for screening reference maps. Here the reference area would be broken up into homogeneous regions and each pixel within the region would be replaced by the mean intensity level of the entire region. An autocorrelation of a particular sensor map over the reference map would be performed using a hybrid correlation algorithm. High secondary correlation peaks would indicate areas where spatial scene redundancy potentially could be a problem. By pulling out a number of sensor maps from the reference map boundary and repeating this process one could determine, as a first-order measure, the uniqueness properties of the reference map.
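The screening procedure described above can be sketched as follows. The region labeling is assumed to be given (in practice it would come from a region extraction step), and the secondary-to-primary peak ratio is used here as an illustrative redundancy flag rather than the paper's exact criterion:

```python
import numpy as np

def region_mean_map(image, labels):
    """Replace each pixel by the mean intensity of its homogeneous region."""
    out = np.empty_like(image, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = image[mask].mean()
    return out

def secondary_peak_ratio(reference, patch):
    """Correlate a sensor-sized patch at every position in the reference and
    return the ratio of the second-highest to the highest normalized score;
    a ratio near 1.0 flags potential intrascene spatial redundancy."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / patch.std()
    scores = []
    for r in range(reference.shape[0] - ph + 1):
        for c in range(reference.shape[1] - pw + 1):
            w = reference[r:r + ph, c:c + pw]
            w = (w - w.mean()) / (w.std() + 1e-12)
            scores.append(float(np.mean(p * w)))
    scores.sort(reverse=True)
    return scores[1] / scores[0]

# A reference built from a repeated block is maximally redundant: the
# autocorrelation has a secondary peak as high as the primary one.
rng = np.random.default_rng(0)
block = rng.normal(size=(6, 6))
reference = np.tile(block, (1, 2))
print(round(secondary_peak_ratio(reference, block), 3))  # 1.0
```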

The most accurate evaluation procedure uses a Monte Carlo simulation to provide an estimate of the update circular error probability (CEP) and Pff for a given reference area under a variety of conditions (23). Samples are drawn from statistical distributions that represent vehicle position-velocity characteristics, and are used to specify the sensor scene location within the reference image. Samples from another distribution are used to specify the imaging environmental characteristics present (i.e., time of day). An intensity computation for the specified conditions is performed for each subscene location by using the appropriate signature model. Noise terms and geometric distortions are similarly introduced into the sensor scene.

The map-matching operation is then performed between the reference and specified synthetic sensor maps using the selected matching algorithm. The along and cross-track difference between the computed and actual (sensor scene draw) match point locations is determined, and by using a predetermined criterion, the update is catalogued as either a good or false fix.

Since each of the variables is represented by a statistical distribution, the simulation can be run a specified number of times to provide CEP and Pff estimates over the range of expected update conditions. From this, the reference scene suitability for guidance updating applications can be determined against a predetermined performance criterion.
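The simulation loop described above can be sketched as below. The toy matcher, the false-fix criterion (a fixed radial threshold), the use of the median radial miss as the CEP estimate, and the error distributions are all hypothetical stand-ins for the signature, distortion, and matching models described in the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def monte_carlo_fix_stats(n_trials, match_fn, false_fix_radius):
    """Estimate update CEP and false-fix probability (Pff) for a reference area.

    match_fn draws one synthetic update (sensor scene location, imaging
    conditions, noise, distortions) and returns the along/cross-track error
    of the computed match point relative to the true location.
    """
    radial = np.array([np.hypot(*match_fn()) for _ in range(n_trials)])
    good = radial <= false_fix_radius       # predetermined fix criterion
    cep = float(np.median(radial[good]))    # median radial miss of good fixes
    pff = float(1.0 - good.mean())          # fraction declared false fixes
    return cep, pff

# Hypothetical matcher: small Gaussian misses plus occasional gross errors.
def toy_match():
    if rng.random() < 0.05:                 # 5% gross (false) fixes
        return rng.uniform(50, 100), rng.uniform(50, 100)
    return rng.normal(0, 2), rng.normal(0, 2)

cep, pff = monte_carlo_fix_stats(10_000, toy_match, false_fix_radius=20.0)
print(cep, pff)  # Pff lands near the injected 5% gross-error rate
```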

An estimate of the spatial redundancy present within the reference scene is provided by this procedure since the location of the sensor scene is randomly selected from within the reference map boundaries. Likewise, estimates of the impact of environmental, geometric and SNR effects on reference map performance are also evaluated by this procedure.

It should be recognized, however, that the quality of the performance estimate provided by a Monte Carlo simulation for a given reference area is generally a strong function of the preprocessing and map-matching algorithms and of the characteristics of the environmental and geometric distortions and the SNR selected. The use of different preprocessing or map-matching algorithms for reference scene evaluation and guidance updating should be avoided to minimize performance degradations.

SPIE Vol. 238 Image Processing for Missile Guidance (1980) / 441




Any uncertainty in specifying the random variable distribution properties utilized in the simulation will result in a decreased confidence in the reference scene evaluation process. If large uncertainties exist in a given variable distribution, it is better to eliminate the variable from the simulation. If significant uncertainty errors are present in several distributions, the benefits of employing this form of reference scene evaluation decrease and the resulting performance estimates produced are often unreliable.

4. Algorithm selection process

Figure 5 describes the overall algorithm selection process. The sensor image is generally considered to be some subset of the reference map corrupted by errors. A matching algorithm is then used to locate the position of the sensed image in the reference map coordinate system. Based on an analysis of system performance, it is possible to determine the capability of each algorithm to accommodate various types of error. Ultimately, since for each sensor type some errors are more dominant than others, it is possible to determine the most appropriate algorithm for each sensor type.

This section will discuss the following topics: (1) a description and categorization of error sources; (2) a description and classification of matching algorithms; and (3) an analysis of the compatibility of various algorithms to accommodate different error sources.

4.1 Error sources

The problems associated with image dynamics, geometrical distortions, noise, and other error sources can be lumped into four mutually exclusive comprehensive categories. These categories are defined as:

1) Global errors -- those errors which uniformly affect the intensity level of all scene elements. This category would include geometric distortions and bias and gain changes.

2) Regional errors -- those errors where the change in intensity levels occurs uniformly only within homogeneous regions within the scene. Examples would include region-level shifts (contrast reversals) due to image dynamics and predictive coding errors from incorrect reference map construction.

3) Local errors -- errors expected to affect each pixel or grouping of pixels (contained within an interpixel correlation length) independently. The primary example of this error source is additive noise.

4) Nonstructured errors -- this is a catchall category designed to fit those errors whose effect on the scene cannot be described as being global, regional, or local (an example being a cloud cover over portions of the target area changing the signature in a nonstructured manner).
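For illustration, the four error classes can be modeled as simple transformations of a scene array. The gain, bias, shift, and noise parameters below are arbitrary examples, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def global_error(scene, gain=1.4, bias=3.0):
    """Global: a uniform gain and bias change on every scene element."""
    return gain * scene + bias

def regional_error(scene, regions, shifts):
    """Regional: a uniform intensity shift applied per homogeneous region
    (opposing shifts between regions produce a contrast reversal)."""
    out = scene.astype(float).copy()
    for label, shift in shifts.items():
        out[regions == label] += shift
    return out

def local_error(scene, sigma=1.0):
    """Local: independent additive noise on each pixel."""
    return scene + rng.normal(0.0, sigma, scene.shape)

def nonstructured_error(scene, mask, value):
    """Nonstructured: e.g. cloud cover overwriting an arbitrary patch."""
    out = scene.astype(float).copy()
    out[mask] = value
    return out

# Chain the first three classes on a simple two-region scene.
scene = np.zeros((8, 8))
scene[:, 4:] = 10.0
regions = (scene > 5).astype(int)
degraded = local_error(regional_error(global_error(scene), regions, {0: 0.0, 1: -20.0}))
```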

The advantage of this formulation of the error source is that by grouping errors into these categories it is simpler to describe the utility of each algorithm relative to a given class of error source rather than having to backtrack and deal with each error/algorithm combination on an individual basis.

[Figure 5 block diagram: a reference image and a sensed image, corrupted by global, regional, local, and nonstructured errors, are processed by the candidate algorithms (correlation, feature matching, hybrid); a system performance analysis of algorithm/error compatibility yields the appropriate algorithm/sensor combinations.]

Figure 5. Algorithm selection process

4.2 Map-Matching Algorithm Classes

There are three classes of algorithms which can be employed to perform the image matching task. These algorithms include correlation, feature matching, and hybrid classes (24).

442 / SPIE Vol. 238 Image Processing for Missile Guidance (1980)


All algorithms perform four operations: (1) transformation of the original intensity data associated with each resolution element in both sensed image and reference map; (2) establishment of a metric for comparing a portion of the reference map to the sensed image; (3) the computation of that metric for all possible positions of comparison between the reference and sensor maps; and (4) a selection rule (generally the extremum metric value) for delineating the match position based on the metric value.

Correlation types of algorithms use the intensity values associated with the resolution elements of each map (or some transformation of these intensity values, i.e., normalization) as the map data to be used in computing the metric. Correlation metrics measure either the degree of similarity (i.e., product type algorithm) or dissimilarity (i.e., difference squared) between the sensed image and the portion of the reference map it is being compared against.
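The two metric families, together with the exhaustive search over comparison positions and the extremum selection rule, can be sketched as follows (a toy implementation; a real system would use far larger maps and typically an FFT-based correlation):

```python
import numpy as np

def product_metric(sensor, window):
    """Similarity metric: normalized product correlation (maximum at a match)."""
    s = (sensor - sensor.mean()) / sensor.std()
    w = (window - window.mean()) / window.std()
    return float(np.mean(s * w))

def difference_squared_metric(sensor, window):
    """Dissimilarity metric: mean squared difference (minimum at a match)."""
    return float(np.mean((sensor - window) ** 2))

def best_match(reference, sensor):
    """Evaluate the metric at every comparison position and apply the
    selection rule (here: the maximum of the product metric)."""
    sh, sw = sensor.shape
    best_score, best_pos = -np.inf, None
    for r in range(reference.shape[0] - sh + 1):
        for c in range(reference.shape[1] - sw + 1):
            score = product_metric(sensor, reference[r:r + sh, c:c + sw])
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

rng = np.random.default_rng(3)
reference = rng.normal(size=(12, 12))
sensor = reference[4:9, 2:7].copy()   # true offset (4, 2)
print(best_match(reference, sensor))  # (4, 2)
```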

Feature matching algorithms do not utilize intensity data per se but attempt to work with only features in the scene (25). This is generally accomplished by using algorithms to locate boundaries or edges between regions. Edge or boundary information is extended to determine the position at which boundaries or edges intersect. The position of this vertex point and the direction and number of line segments emanating from the vertex point form the basis for map comparisons, with the metric being some form of a mean square distance measure between locations of vertices in the reference and sensed map. This distance measure may be weighted by the number and direction of line segments emanating from the vertex point, with multiple intersecting vertices being weighted more heavily.

The hybrid algorithm (26) uses a combination of intensity level and region identification information in determining a match location. In this class of algorithm, homogeneous regions in the reference scene are identified and all resolution elements within the region are tagged as belonging to the region. When the sensor image is compared to a portion of the reference map, an assumption is made that this position of comparison is the correct one, and the sensor image is broken up into homogeneous regions as identified by the counterpart elements of the reference map. The elements in each region are correlated separately using a correlation algorithm, and the total correlation between the two maps is found by summing the individual correlations taken over each homogeneous segment of the reference map.
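A minimal sketch of the hybrid metric described above, assuming the reference-map region labels are already available for the comparison window (the normalized product correlation used per region is one choice among several):

```python
import numpy as np

def hybrid_metric(sensor, window, region_labels):
    """Sum of per-region normalized product correlations: the sensor image is
    partitioned by the reference window's region labels, and the elements of
    each homogeneous region are correlated separately."""
    total = 0.0
    for lab in np.unique(region_labels):
        mask = region_labels == lab
        s, w = sensor[mask], window[mask]
        if s.std() == 0 or w.std() == 0:
            continue  # a flat region carries no intensity-shape information
        total += float(np.mean(((s - s.mean()) / s.std()) *
                               ((w - w.mean()) / w.std())))
    return total

rng = np.random.default_rng(4)
window = rng.normal(size=(4, 4))
labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1
# A per-region level shift (e.g. a contrast reversal) leaves each region's
# internal correlation at 1.0, so the hybrid metric is unaffected by it.
sensor = window + np.where(labels == 0, 5.0, -5.0)
print(round(hybrid_metric(sensor, window, labels), 6))  # 2.0
```

This is exactly why the hybrid class accommodates regional errors well: each region is re-normalized independently, so region-level shifts do not degrade the metric.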

4.3 Analysis of algorithm compatibility

Let us consider which class of algorithm is most appropriate for accommodating each class of error source separately. Table 9 shows a rating of each algorithm's ability to accommodate each error class. Examining the errors relative to the algorithms, all algorithms can readily accommodate global errors. Correlation and hybrid algorithms, however, probably have somewhat more difficulty in accommodating this type of error. Corrective actions for compensating for global errors include processing of sensor elements in smaller spatial groups to accommodate geometric errors (27-29), normalization of intensity levels to compensate for bias errors and gain shifts, and extending the algorithm search dimension to include searching the scene for scale and rotational effects. Correlation and hybrid algorithm corrective measures would rely most heavily on spatial grouping and intensity level compensation, while feature matching algorithms (working with less data to begin with) would primarily resort to search techniques to compensate for global errors.

Table 9. Algorithm Ability to Accommodate Each Class of Error

                              Error Class
Algorithm           Global    Regional    Local     Nonstructured
Correlation         Medium    Poor        Good      Good
Feature Matching    Good      Good        Poor      Poor
Hybrid              Medium    Good        Medium    Good

Correlation algorithms are extremely poor performers in the presence of regional errors, the possible solutions being (besides switching to one of the other algorithms) to store and search over multiple reference maps, restrict the wavelength of the imagery to spectral regions where regional errors are not likely to occur, or to locate reference maps in geographical areas in which regional errors are unlikely. Both the feature matching and hybrid classes of algorithms are good at accommodating regional errors.

Local errors such as noise can cause significant problems in the performance of feature matching algorithms, primarily due to the difficulty in extracting features or line boundaries from the sensed imagery using edge type operators. Correlation type algorithms are virtually immune to local errors, while hybrid algorithms are susceptible to this error source if there is also a scene redundancy problem, with noise making it more difficult to distinguish images with similar spatial patterns. The only corrective measure for feature matching algorithms in the presence of local errors is to switch to one of the other two classes of algorithms.

SPIE Vol. 238 Image Processing for Missile Guidance (1980) / 443

ALMOST EVERYTHING ONE NEEDS TO KNOW ABOUT IMAGE MATCHING SYSTEMS

All algorithms perform four operations: (1) transformation of the original intensity data associated with each resolution element in both sensed image and reference map; (2) establishment of a metric for com­ paring a portion of the reference map to the sensed image; (3) the computation of that metric for all possible positions of comparison between the reference and sensor maps; and (4) a selection rule (generally the extremum metric value) for delineating the match position based on the metric value.

Correlation types of algorithms use the intensity values associated with the resolution elements of each map (of some transformation of these intensity values, ue. , normalization) as the map data to be used in computing the metric. Correlation metrics measure either the degree of similarity (i. e. , product type algorithm) or dissimilarity (i.e. , difference squared) between the sensed image and the portion of the reference map it is being compared against.

Feature matching algorithms do not utilize intensity data per se but attempt to work with only features in the scene (25). This is generally accomplished by using algorithms to locate boundaries or edges between regions. Edge or boundary information is extended to determine the position at which boundaries or edges intersect. The position of this vertex point and the direction and number of line segments emanating from the vertex point form the basis for map comparisons, with the metric being some form of a mean square distance measure between locations of vertices in the reference and sensed map. This distance measure may be weighted by the number and direction of line segments emanating from the vertex point, with multiple intersecting vertices being weighted more heavily.
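One possible reading of this vertex metric can be sketched as follows; the vertex representation (position plus segment count) and nearest-vertex pairing are illustrative assumptions, not the paper's exact formulation:

```python
def vertex_distance(ref_vertices, img_vertices, offset):
    """Weighted mean-square distance between sensed-map vertices (shifted by a
    candidate offset) and their nearest reference-map vertices. Each vertex is
    ((x, y), n_segments); vertices with more intersecting line segments are
    weighted more heavily, as the text describes."""
    total, weight_sum = 0.0, 0.0
    dx, dy = offset
    for (x, y), n_seg in img_vertices:
        dists = [(x + dx - rx) ** 2 + (y + dy - ry) ** 2
                 for (rx, ry), _ in ref_vertices]
        total += n_seg * min(dists)     # squared distance to nearest vertex
        weight_sum += n_seg
    return total / weight_sum

ref = [((10, 10), 3), ((25, 14), 4)]
img = [((2, 3), 3), ((17, 7), 4)]          # same vertices, displaced by (8, 7)
print(vertex_distance(ref, img, (8, 7)))   # correct offset -> 0.0
```

The match position would be the offset minimizing this distance, taking the place of the metric extremum in the generic loop.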

The hybrid algorithm (26) uses a combination of intensity level and region identification information in determining a match location. In this class of algorithm, homogeneous regions in the reference scene are identified and all resolution elements within the region are tagged as belonging to the region. When the sensor image is compared to a portion of the reference map, an assumption is made that this position of comparison is the correct one, and the sensor image is broken up into homogeneous regions as identified by the counterpart elements of the reference map. The elements in each region are correlated separately using a correlation algorithm, and the total correlation between the two maps is found by summing the individual correlations taken over each homogeneous segment of the reference map.
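The region-wise correlation-and-sum step can be sketched as below; the label map, the choice of normalized correlation per region, and the unweighted sum are assumptions made for illustration:

```python
import numpy as np

def hybrid_score(ref_window, sensed, region_labels):
    """Hybrid metric sketch: the reference window carries a label map of its
    homogeneous regions; the sensed image is split along those same labels
    (assuming this comparison position is correct), each region is correlated
    separately, and the per-region correlations are summed."""
    total = 0.0
    for label in np.unique(region_labels):
        mask = region_labels == label
        a = ref_window[mask] - ref_window[mask].mean()
        b = sensed[mask] - sensed[mask].mean()
        denom = np.sqrt(np.sum(a**2) * np.sum(b**2))
        if denom > 0:
            total += float(np.sum(a * b) / denom)
    return total

labels = np.zeros((6, 6), dtype=int)
labels[:, 3:] = 1                          # two homogeneous regions
rng = np.random.default_rng(2)
ref_window = rng.random((6, 6))
print(round(hybrid_score(ref_window, ref_window, labels), 3))  # -> 2.0
```

With two regions and a perfectly matching sensed image, each region contributes a correlation of 1, so the score is 2.0; regional errors confined to one region degrade only that region's term.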

4.3 Analysis of algorithm compatibility

Let us consider which class of algorithm is most appropriate for accommodating each class of error source separately. Table 9 shows a rating of each algorithm's ability to accommodate each error class. Examining the errors relative to the algorithms, all algorithms can readily accommodate global errors. Correlation and hybrid algorithms, however, probably have somewhat more difficulty in accommodating this type of error. Corrective actions for compensating for global errors include processing of sensor elements in smaller spatial groups to accommodate geometric errors (27-29), normalization of intensity levels to compensate for bias errors and gain shifts, and extending the algorithm search dimension to include searching the scene for scale and rotational effects. Correlation and hybrid algorithm corrective measures would rely most heavily on spatial grouping and intensity level compensation, while feature matching algorithms (working with less data to begin with) would primarily resort to search techniques to compensate for global errors.
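The intensity-level compensation mentioned above can be sketched as a simple mean-and-variance normalization; the specific zero-mean, unit-variance form is an assumption for illustration:

```python
import numpy as np

def normalize_intensity(image):
    """Bias/gain compensation: remove the mean and scale to unit variance so
    that a global intensity shift or gain change in the sensed image does not
    affect the comparison metric."""
    z = image - image.mean()
    return z / (z.std() + 1e-12)

rng = np.random.default_rng(3)
sensed = rng.random((16, 16))
shifted = 1.7 * sensed + 0.4              # a global gain and bias error
print(np.allclose(normalize_intensity(sensed), normalize_intensity(shifted)))
# -> True: the normalized maps are identical despite the global error
```

Applying such a transform in operation (1) is what lets correlation and hybrid algorithms absorb global bias and gain errors before the metric is computed.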

Table 9. Algorithm Ability to Accommodate Each Class of Error

                                Error Class
Algorithm           Global    Regional    Local     Nonstructured
Correlation         Medium    Poor        Good      Good
Feature Matching    Good      Good        Poor      Poor
Hybrid              Medium    Good        Medium    Good

Correlation algorithms are extremely poor performers in the presence of regional errors. The possible solutions (besides switching to one of the other algorithms) are to store and search over multiple reference maps, to restrict the wavelength of the imagery to spectral regions where regional errors are not likely to occur, or to locate reference maps in geographical areas in which regional errors are unlikely. Both the feature matching and hybrid classes of algorithms are good at accommodating regional errors.

Local errors such as noise can cause significant problems in the performance of feature matching algorithms, primarily due to the difficulty in extracting features such as line boundaries from the sensed imagery using edge-type operators. Correlation type algorithms are virtually immune to local errors, while hybrid algorithms are susceptible to this error source if there is also a scene redundancy problem, with noise making it more difficult to distinguish images with similar spatial patterns. The only corrective measure for feature matching algorithms in the presence of local errors is to switch to one of the other two classes of algorithms.



Finally, since feature matching algorithms use less information in the scene than the other two types of algorithms, they are most susceptible to nonstructured errors, where portions of the sensed image may look obliterated when compared to the reference map. Correlation and hybrid algorithms can still perform quite well even in the presence of large missing areas in the sensed image.

As discussed above, each algorithm has advantages and disadvantages relative to certain types of error sources; however, real-world systems are likely to be faced with a combination of error sources to deal with.

From the discussion in Section 2, it is seen that certain sensor bands have characteristic errors, primarily regional errors (i.e., contrast reversals and predictive errors) and local errors. Based on the magnitude of these error sources (and excluding the effects of global and nonstructured errors), it is possible to determine the compatibility of sensors with matching algorithms. If regional errors dominate the process, then a feature matching algorithm is most appropriate. If local errors dominate, then correlation algorithms look most attractive. In the presence of both local and regional errors, the hybrid class of algorithms looks most appropriate.
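This selection rule is simple enough to state directly in code; the function below merely encodes the three cases described above:

```python
def choose_algorithm(regional_dominant, local_dominant):
    """Decision rule from the text: regional errors favor feature matching,
    local errors favor correlation, and both together favor the hybrid class."""
    if regional_dominant and local_dominant:
        return "hybrid"
    if regional_dominant:
        return "feature matching"
    if local_dominant:
        return "correlation"
    return "any"  # no dominant error source; other considerations decide

print(choose_algorithm(True, False))   # -> feature matching
print(choose_algorithm(False, True))   # -> correlation
print(choose_algorithm(True, True))    # -> hybrid
```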

To summarize, all error sources can be placed into one of four generic categories. Using these categories one can analyze the performance of the three types of algorithms relative to a particular error source. Some algorithms are more capable than others at accommodating certain classes of error. In the end the final algorithm choice will depend on determining the weighting of the error sources that the system is likely to encounter.

An analysis was performed to determine the optimum map-matching algorithm class for each sensor operating band based on the regional errors given in Table 7, as well as sensor characteristics and operational considerations. The results of this analysis are summarized in Table 10. Although in no sensor case is one algorithm class superior to the others, a number of caveats have been developed and presented as a guide to the mission planner to ensure optimum performance.

Table 10. Map-matching algorithm selection based on designated sensor operating region

Sensor Region       Type    Algorithm Selection*

Optical/Near IR     P & A   - Correlation when SNR low
                            - Hybrid when SNR moderate and vegetation present
                            - Feature matching when SNR high and vegetation absent

Middle IR           P       - Correlation unacceptable because of regional
                              thermal inertia effects
                            - Hybrid when SNR low to moderate and vegetation present
                            - Feature matching generally undesirable unless high
                              SNR exists and vegetation is absent
                    A       - Same as Optical/Near IR

Thermal IR          P       - Same as Passive Middle IR
                    A       - Same as Optical/Near IR

Microwave           P       - Correlation when SNR low and microwave reflectance
                              dominates
                            - Hybrid when SNR moderate and vegetation present
                            - Feature matching when SNR high and vegetation absent
                    A       - Correlation when SNR low or with one-dimensional
                              imaging systems
                            - Hybrid when SNR moderate, vegetation present, or
                              with two-dimensional imaging systems
                            - Feature matching generally unacceptable because of
                              inadequate SNR unless specialized preprocessing used

Range or Phase-     A       - Correlation only when low SNR present
Modulated Sensors           - Hybrid unnecessary since regional errors are
                              generally small
                            - Feature matching desirable when SNR high

* When moderate nonstructured errors or snow/ice/water are present, the hybrid approach must be used with all systems except range or phase-modulated sensors to ensure update reliability.



Correlation is desirable in active and passive optical/near IR cases where a low SNR is present. Sources of this type include passive low-light-level operation, low scene contrast operation, or when a high atmospheric aerosol content is present in the imaging path. When significant vegetation is present in the reference area, the hybrid approach is desirable, while feature extraction is reserved for cases when a high SNR exists and vegetation that exhibits a time-varying regional boundary shift is absent.

Correlation class algorithms provide unacceptable performance with passive middle and thermal IR and generally with passive microwave imagery (when microwave emittance predominates) due to contrast reversals or moderate to large time-varying regional errors induced by material thermal inertia effects. Traditional correlation techniques, including binary conversion preprocessing, generally provide insignificant performance improvement when applied to middle and thermal IR imagery and small to moderate improvements with passive microwave imagery. The hybrid approach is preferable in cases where less than a high SNR exists due to sensor, imaging contrast, or atmospheric reradiation considerations, or when a time-varying vegetation signature is present. Feature matching application is operationally limited, as in other spectral regions, to cases where a high SNR exists and time-varying vegetation coverage is absent.

Comments given for the optical/near IR region are generally applicable for all map-matching systems using active sensors. The principal limitation of active middle and (particularly) thermal IR systems for applications against natural materials is the low imaging contrast typically present. It is often necessary to utilize dynamic range expansion preprocessing techniques with these sensor types, which limits the use of feature matching methods in these cases unless a high data SNR exists.

Although atmospheric effects generally have a negligible impact on radar image contrast, the moderate to low SNR typically present for most proposed missile-borne systems, coupled with specular material returns from the reference area, provides other problems for operational map-matching systems. With one-dimensional radar map-matching systems, correlation algorithms are usually preferred over feature matching to minimize the number of discrete scatterers required to ensure adequate performance. Hybrid algorithms are preferable when a moderate SNR exists or when significant vegetation is present that possesses a time-varying signature. Feature matching algorithms are generally unacceptable for processing missile-borne radar data because of typically low SNRs, unless specialized preprocessing techniques are used which emphasize stable edges while deemphasizing potentially unstable ones.

For range or phase-modulated sensors, the hybrid approach is generally unwarranted (unless a significant amount of deciduous trees exists) because of the generally time-invariant nature of these forms of reference imagery. The choice between correlation and feature matching approaches here should be determined by the expected SNR, since predictive and nonstructured errors are generally small.

In addition to the caveats just presented, it should be recognized that other error types sometimes present can significantly alter map-matching performance. Correlation and feature matching algorithm class performance is sensitive to predictive (i.e., snow/ice/water complex) and nonstructured (i.e., shadowing) reference map errors. In the event that a high probability of time- and space-varying snow/ice/water or shadowing exists within the reference area, the hybrid algorithm class is preferable. The only practical exception to this, allowing adequate correlation or feature matching performance, would involve the blockage of only a small amount of the total map information content (i.e., number of independent elements) and/or total map area.

In summary, when sensor characteristics or operational considerations result in a low SNR and when the selected reference area has a relatively time-invariant signature, correlation class algorithms should be considered. When a moderate SNR exists and predictive and nonstructured errors are expected, the hybrid class is preferable. In cases where a high SNR exists and predictive and nonstructured errors are small, feature matching is desirable.

5. Performance prediction

Major performance considerations for image matching systems involve (1) the avoidance of false fixes during acquisition, and (2) the accuracy with which the position fix can be made. The major focus of this paper is on the acquisition phase of image matching, which is the more difficult and important part of the overall problem to be solved. With respect to performance measures, the acquisition system designer is concerned with (1) developing general guidelines for performance as a function of sensor and computational algorithm characteristics, and (2) obtaining real-time, scene-dependent estimates of system performance in order to determine whether or not a position fix is valid. This section is therefore divided into two parts: one dealing with the general development of performance guidelines for acquisition and the other dealing with adaptive techniques for estimating system performance in real time onboard the vehicle.

5.1 Performance guidelines for systems

As pointed out above, the performance criterion for acquisition is the avoidance of false fixes, as measured by their probability of occurrence, Pff. Developing some general theoretical guidelines in this area avoids the expenses associated with developing guidelines completely from Monte Carlo simulations. The general theoretical development of determining Pff, or Pc (= 1 - Pff), begins with examining the correlation surface shown in Figure 6. The correlation values can be broken into two groups: those associated with the match point, and the nonmatch correlation values, φ(J), which are located away from the central peak. As seen in Figure 6, these correlation values can be compactly represented by two statistical distributions, one



associated with the nonmatch values and one associated with the match value(s).* The correlation value(s) associated with the correct match point may also take on a distribution of values due to noise and other errors (such as geometric) in the system. Errors will have a tendency to spread out both the match and nonmatch correlation distributions. The computation of the performance measure involves determining the probability that a correlation value drawn from the distribution associated with the match point exceeds all correlation values drawn from the distribution associated with nonmatch values.

Figure 6. Output of correlation process (the sensor map is compared against the reference map; the correlator output, intensity vs. displacement, shows a peak at the match position and out-of-register correlation values φ(J) at nonmatch positions, together with a statistical representation of the process)

Mathematically this can be expressed as the general expression shown in Figure 7, where it is necessary to compute, for a given match correlation value, the probability that the match correlation value exceeds the nonmatch correlation value for all nonmatch positions of comparison (Q) between the sensed image and reference maps, with this expression being weighted by the distribution associated with the match values (30, 31). If the match and nonmatch correlation values are indeed independent, then, as shown in step 2 of Figure 7, the probability expression can be computed using two separate distribution functions, one for the match value and one for all the nonmatch values.

In the real world there are generally spatial patterns in the sensed imagery which are partially repeated in some position of the reference map. This scene interredundancy problem can be a major source of system failures when compounded by noise and other error sources. It also generally causes the correlation value at some nonmatch points to be highly dependent on the match correlation, thus preventing the two distributions from being separated and requiring a joint distribution expression to be used in computing the probability that a nonmatch correlation value exceeds the match correlation value. If one attempts to be mathematically correct in modeling this scene interredundancy problem, the expression involving the joint distribution function (for match and nonmatch values) forces one into a scene-specific "modus operandi" with a probability expression which is too complicated to derive general results from.

Most authors, in attempting to develop a general Pc guideline, have ignored the implications of the scene redundancy and have assumed the match and nonmatch correlation values to be independent. The implications of avoiding modeling the interscene redundancy problem are twofold. First, and foremost, the analysis

*Due to correlation in the scene elements themselves, several values around the correct match peak may be associated with the match correlation distribution.


which follows to determine the Pc guidelines should be considered a limiting case where noise and other appropriately modeled errors dominate the failure mode. For situations where the scene selection processes have done a good job in screening out the scene redundancy failure mode, the analysis could still provide useful performance guidelines. If, however, sufficient effort was not made in properly selecting reference maps to avoid scene redundancies, system performance is likely to be significantly worse than predicted by these guidelines. Second, other approximations and assumptions beyond this point take on less significance.

Figure 7. Probability expressions for correct match. The match correlation value, $\phi(0)$, is drawn from the match distribution $f_M(y)$; the nonmatch correlation values, $\phi(J)$, are drawn from the nonmatch distribution $f_{NM}(x)$.

1. General expression:
$P_C = \int_{-\infty}^{\infty} f_M(y)\, P[\phi(1) < y,\, \phi(2) < y,\, \ldots,\, \phi(Q) < y \mid \phi(0) = y]\, dy$

2. Match and nonmatch correlation values independent:
$P_C = \int_{-\infty}^{\infty} f_M(y) \left[ \int_{-\infty}^{y} \cdots \int_{-\infty}^{y} f_{NM}(x_1, x_2, \ldots, x_Q)\, dx_1\, dx_2 \cdots dx_Q \right] dy$

3. All nonmatch correlation values independent:
$P_C = \int_{-\infty}^{\infty} f_M(y) \prod_{i=1}^{Q} \left[ \int_{-\infty}^{y} f_{NM}(x_i)\, dx_i \right] dy$

4. All nonmatch correlation values identically distributed:
$P_C = \int_{-\infty}^{\infty} f_M(y) \left[ \int_{-\infty}^{y} f_{NM}(x)\, dx \right]^Q dy$

5. Correlation distributions Gaussian:
$P_C = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma(0)} \exp\!\left[ -\frac{(y - \bar{\phi}(0))^2}{2\sigma^2(0)} \right] \left[ \frac{1}{2} + \mathrm{erf}\!\left( \frac{y - \bar{\phi}(J)}{\sigma(J)} \right) \right]^Q dy$

6. Approximation:
$P_C \approx \frac{1}{2} + \mathrm{erf}\!\left[ \frac{\bar{\phi}(0) - \bar{\phi}(J) - K(Q)\,\sigma(J)}{\sqrt{\sigma^2(J) + \sigma^2(0)}} \right], \qquad K(Q) = \mathrm{erf}^{-1}\!\left[ (1/2)^{1/Q} - 1/2 \right]$

(Here erf denotes the normalized form for which $1/2 + \mathrm{erf}(z)$ is the Gaussian cumulative distribution.)

Returning to Figure 7, in order to achieve the next level of simplification from the general expression it is necessary to assume all nonmatch correlation values are independent. Two factors invalidate this assumption. First, the nature by which some computational algorithms (e.g., Mean Absolute Difference (MAD)) process the map elements leads to adjacent correlation values being correlated and hence not independent. This problem can be overcome by avoiding this type of algorithm. Another problem arises from the fact that real world scene elements are almost always correlated, which leads to their associated correlation values being correlated. This problem can be overcome by modeling the scene as composed of a number of independent elements (less than the total number of scene elements), estimating the number of equivalent independent elements in the scene, and using this number in the Pc computation process. Here not only must the scene be scaled by the correlation length factor, but equivalent scaling must be performed on the number of nonmatch comparison points.
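As a sketch of the scaling idea above (not from the paper): the number of equivalent independent elements can be estimated by measuring the scene's autocorrelation length along each axis and dividing the element count accordingly. The function names, the 1/e cutoff, and the row/column averaging are illustrative assumptions.

```python
import numpy as np

def correlation_length(profile, max_lag=None):
    """Lag at which the sample autocorrelation of a 1-D intensity
    profile first drops below 1/e (an assumed, common definition)."""
    x = profile - profile.mean()
    var = np.dot(x, x)
    n = len(x)
    max_lag = max_lag or n // 2
    for lag in range(1, max_lag):
        r = np.dot(x[:-lag], x[lag:]) / var
        if r < np.exp(-1):
            return lag
    return max_lag

def equivalent_independent_elements(scene):
    """Scale the raw element count by the mean correlation length
    measured along rows and along columns."""
    lr = np.mean([correlation_length(row) for row in scene])
    lc = np.mean([correlation_length(col) for col in scene.T])
    n_rows, n_cols = scene.shape
    return max(1, int((n_rows / lr) * (n_cols / lc)))
```

For a white-noise scene the estimate stays near the full element count; for a scene with spatially correlated (blocky) structure it drops sharply, which is the reduction the Pc computation must use for Q.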

Further simplification of the expression from step 3 to step 4 requires all the nonmatch correlation values to be identically distributed. In general the heterogeneous nature of scene structure, i.e., the scene being composed of homogeneous regions with different mean intensity values, can negate this assumption. The use of algorithms which tend to homogenize scenes (such as the hybrid algorithm) can overcome this difficulty and make this assumption more realistic.

Since correlation values involve the summing of a large number of random variables (some combination of the scene elements), the central limit theorem can be invoked to simplify the expression in step 4 further. This assumption implies that the distribution of the match and nonmatch correlation values is Gaussian. Finally, a further approximation can be applied to obtain a closed form expression for the performance guideline.
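The chain of simplifications in steps 4 through 6 can be checked numerically. Below is an illustrative sketch assuming Gaussian match and nonmatch distributions; it is written in the standard-normal convention (where Φ is the normal CDF) rather than the paper's erf convention, and `pc_step4`/`pc_step6` are hypothetical names, not from the paper.

```python
import math
from statistics import NormalDist

def pc_step4(phi0, phiJ, sigma0, sigmaJ, Q, n=20000):
    """Step 4 (with step 5's Gaussian assumption): numerically integrate
    Pc = integral of f_M(y) * [F_NM(y)]^Q dy by the midpoint rule."""
    lo, hi = phi0 - 8 * sigma0, phi0 + 8 * sigma0
    dy = (hi - lo) / n
    total = 0.0
    for i in range(n):
        y = lo + (i + 0.5) * dy
        fM = math.exp(-(y - phi0) ** 2 / (2 * sigma0 ** 2)) / (sigma0 * math.sqrt(2 * math.pi))
        F_NM = 0.5 * (1 + math.erf((y - phiJ) / (sigmaJ * math.sqrt(2))))
        total += fM * F_NM ** Q * dy
    return total

def pc_step6(phi0, phiJ, sigma0, sigmaJ, Q):
    """Step 6 closed-form approximation, with K(Q) expressed through
    the standard-normal inverse CDF."""
    nd = NormalDist()
    K = nd.inv_cdf(0.5 ** (1.0 / Q))  # standard-normal form of the paper's K(Q)
    return nd.cdf((phi0 - phiJ - K * sigmaJ) / sigma0)
```

With a well-separated match peak (e.g., five standard deviations above the nonmatch mean) the closed-form value tracks the numerical integral closely, which is the sense in which step 6 serves as a performance guideline.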



CONROW, RATKOVIC

To summarize, one cannot develop a useful general expression for computing Pc in the presence of scene interredundancy problems and other external error sources such as noise and distortions. If one can ignore the scene redundancy problem by a stringent scene selection process, it is possible, using a series of approximations and assumptions, to develop a Pc expression which yields guidelines for system performance in the presence of noise and other error sources. The remainder of this section examines adaptive techniques for estimating system performance from the correlation surface itself.

5.2 Adaptive methods

In order to ensure mission effectiveness and safe warhead arming, a criterion for deciding whether a guidance update is acceptable is desirable. One proposed technique involves voting logic over three successive update scenes: the determined fix points of two of the three correlated scenes must agree within an acceptable bound, or else the fix sequence is rejected for guidance updating. Although simple to implement and suitable for use with relatively invariant reference areas, this technique breaks down when the update area is missed altogether, or when significant variations from the expected scene signature exist that cannot be modeled a priori. Moreover, given the inherent modeling limitations of most sensor operating bands, this technique provides no indication of the uncertainty of the individual fixes.
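The 2-of-3 voting rule described above might be sketched as follows; this is an illustrative implementation, and the choice of pairing rule (take the closest agreeing pair, return its mean) is an assumption, not from the paper.

```python
import math

def vote_fix(fixes, tol):
    """2-of-3 voting on successive fix points (x, y): accept if at least
    one pair of fixes agrees within tol, returning the mean of the closest
    agreeing pair; otherwise reject the fix sequence (return None)."""
    assert len(fixes) == 3
    best = None
    for i in range(3):
        for j in range(i + 1, 3):
            d = math.dist(fixes[i], fixes[j])
            if d <= tol and (best is None or d < best[0]):
                best = (d, i, j)
    if best is None:
        return None  # no two fixes agree: reject for guidance updating
    _, i, j = best
    return ((fixes[i][0] + fixes[j][0]) / 2, (fixes[i][1] + fixes[j][1]) / 2)
```

As the text notes, when the update area is missed entirely all three fixes can agree on a wrong location, so this rule alone gives no per-fix uncertainty estimate.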

Two basic techniques exist which are capable of providing a better estimate of update quality. The first involves analysis of the distribution of the raw sensor scene data. Under conditions of cloud cover or surface snow/ice/water, the resulting standard deviation of the distribution will often approach that of the noise equivalent (spectral) power of the sensor itself, and thus will be substantially smaller than that from the unexpected update area. If a multimodal (histogram) distribution exists where the mean and standard deviation of a region are substantially different than expected, the suspect points can be labeled before further processing. When the ratio of the number of total imaged points to those in the suspect distribution region reaches a predetermined value, the image can be deleted from the update or voting process.
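This raw-scene screening test might be sketched as follows. The outlier threshold `k` and the suspect-fraction cutoff are assumed, illustrative parameters, not values from the paper.

```python
import numpy as np

def screen_image(image, expected_mean, expected_std, k=3.0, max_suspect_frac=0.3):
    """Label as suspect any pixel whose intensity lies more than k
    expected standard deviations from the expected regional mean, and
    reject the image from the update/voting process when the suspect
    fraction exceeds max_suspect_frac. Returns (accept, suspect_frac)."""
    suspect = np.abs(image - expected_mean) > k * expected_std
    frac = suspect.mean()
    return frac <= max_suspect_frac, frac
```

An image dominated by cloud or snow/ice/water, whose statistics collapse toward the sensor noise floor or shift wholesale from the expected signature, would produce a large suspect fraction and be dropped before matching.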

A more reliable technique involves computing the correlation, hybrid, or feature extraction surface, then using properties or statistics of that surface to estimate update performance. A list of these techniques in order of increasing reliability is given in Table 11.

Table 11. Statistical Match Surface Methods for Estimating Fix Quality*

1. Main peak amplitude
2. Main peak to first or highest sidelobe amplitude ratio
3. Main peak amplitude vs background statistics ratio
4. Statistics of main peak vs background
5. Above, with compensation for inter-point scene correlation and algorithm contribution
6. Above, with homogeneous region segmentation of reference scene

*Methods in increasing order of effectiveness

While all six approaches can be used with correlation or hybrid algorithms, only the first three are compatible with feature matching techniques. (As given in Table 11, the sixth and most accurate approach for fix performance estimation directly incorporates the hybrid algorithm.) Because feature matching techniques match on edge or vertex data, an analytical relationship between surface statistics and original scene properties may not exist. In either case, a decision threshold on surface properties or statistics for fix acceptability must be determined a priori from accurate simulation of the update areas. The first three cases are simple to implement, utilizing the main peak amplitude, its ratio to the first or highest sidelobe peak, or its ratio to the surface background statistics.

The first case uses the amplitude of the main peak and is generally unreliable: it is highly dependent on the imaged "information content" (i.e., NI), which can vary significantly with environmental distortions and SNR. The second utilizes the ratio of the main peak to the first or highest sidelobe peak. It is generally unreliable unless the ratio is very high or very low; for realistic intermediate cases, the ratio will oscillate considerably due to environmental distortions and SNR variations. The third, the ratio of the main peak to the background statistics, is an improvement over the preceding cases but can often be unreliable because it omits estimates of the original reference and sensor scene statistical properties, which affect the matching surface and hence this ratio.
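The first three Table 11 measures can be sketched from a match surface as follows. This is illustrative: the one-cell exclusion neighborhood used to separate the main peak from its sidelobes is an assumption, as is the specific form of the background-statistics ratio.

```python
import numpy as np

def surface_metrics(surface):
    """From a 2-D match surface, compute: (1) main-peak amplitude,
    (2) main-peak to highest-sidelobe ratio, where the sidelobe is the
    largest value outside a 1-cell neighborhood of the peak, and
    (3) the peak's separation from the background mean in units of the
    background standard deviation."""
    peak = surface.max()
    pr, pc = np.unravel_index(surface.argmax(), surface.shape)
    mask = np.ones_like(surface, dtype=bool)
    mask[max(0, pr - 1):pr + 2, max(0, pc - 1):pc + 2] = False
    background = surface[mask]
    sidelobe = background.max()
    bg_score = (peak - background.mean()) / background.std()
    return peak, peak / sidelobe, bg_score
```

A fix-acceptance rule would threshold these values, with the thresholds determined a priori from simulation of the update areas as the text describes.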

The final three approaches, with varying degrees of completeness, determine a probability of correct match (Pc) based on the separation between the main peak and background statistical distributions (30, 32, 33).



The fourth case utilizes estimates of the main peak and background statistics to determine the degree of separation between the in-register and out-of-register distributions (and hence Pc). Although an improvement over the preceding cases, it does not utilize estimates of the original scene statistics and is also biased by the matching algorithm itself. An example of this case is the Bhattacharyya distance. In the fifth case, compensation is made both for the interpoint scene correlation and for the impact of the matching algorithm on the surface statistics. If level shifts between regions in the update area exist, this approach can be very accurate.
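For the fourth case, the Bhattacharyya distance between the main-peak and background correlation-value distributions, assumed Gaussian here for a closed form, can be computed directly:

```python
import math

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians, e.g. the
    in-register (main peak) and out-of-register (background) correlation
    value distributions. Larger distance means better separation, hence
    higher confidence in the match."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * math.log((var1 + var2) / (2 * math.sqrt(var1 * var2))))
```

The distance is zero for identical distributions and grows with the mean separation, so thresholding it gives a fix-quality decision in the spirit of Table 11's fourth method.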

The last approach utilizes the methodology of the previous case, but with region segmentation for hybrid processing. Here the reference scene is segmented into homogeneous regions and matched against the unsegmented sensor image. Since diurnal or seasonal level shift variations occur in most spectral imagery, compensation at region boundaries is generally required to ensure the accuracy of fix quality estimates. The hybrid approach is generally more reliable than one which segments both reference and sensor scenes before processing, since that method tends to amplify environmental and SNR-induced region boundary distortions. It is estimated that this hybrid algorithm has considerable utility in fix quality evaluation, since it incorporates the statistical properties of both the original and the correlated reference and sensor scenes.

6.0 Summary

This paper advances seven major points. First, there are map difference errors between the reference map and sensor image which have time- and spatial-varying components. The magnitude of these errors is highly sensor wavelength or frequency dependent; however, the statistics of the map difference errors can be quantified for each sensor wavelength as a function of the material properties of the scene.

Second, an important aspect of the problem is to choose and to evaluate reference maps to avoid using areas which:

1) do not contain sufficient information,
2) have a scene redundancy problem, and
3) have materials at the wavelength under investigation with large signature oscillations.

Third, grouping errors and algorithms into the generic classes indicated in this paper simplifies the analysis and enables the problem to be structured.

Fourth, certain algorithms can accommodate certain classes of errors more readily than other types of algorithms. As certain sensor wavelengths have a class of errors which dominate, it is possible to predetermine which algorithm is most appropriate for dealing with scene data at a given sensor wavelength. This paper defines those algorithm/sensor wavelength relationships for several specific operational conditions.

Fifth, the computation of the probability of correct match is scene dependent, and hence any generalization must be considered an approximation. Since no absolute Pc measures can be determined, it is not useful to develop optimal algorithms based on mathematical approximations to the general Pc formulation. The more appropriate problem is to obtain the correct algorithm for accommodating the map difference errors which are anticipated to occur, and not to worry about which subclass of algorithm is mathematically optimal for the ideal, non-real-world case.

Sixth, it is possible to improve the process of updating missile position by using map-matching surface data to estimate in real time the performance of the system. These estimates, while approximations, have proven through experimentation useful in separating true matches from false matches and can be used in weighting the accuracy of the fix position.

Seventh, a new class of map-matching algorithm, the hybrid algorithm, was presented which incorporates many of the advantages associated with the feature matching algorithms while avoiding many of the pitfalls associated with extracting features from noisy sensor images. It was shown to have significant utility in dealing with a large number of map difference errors.

7.0 Conclusions and recommendations

The major stumbling block in analyzing map-matching systems is the "scene." Variations in the temporal and spatial characteristics of scenes mitigate the need for high-order algorithm refinement and invalidate sophisticated math modeling of the process. Such variations in scene imagery are the major problems in developing an automated system. Two major entities are required to deal with the problem: (1) the establishment of, and (2) an analysis of, a data base devoted exclusively to the image dynamics problem.

A data base should be created for each sensor wavelength enumerated in this paper. The data base should consist of (1) a statistically representative set of reference maps covering the range of expected materials, material interfaces, and target types likely to be encountered, (2) an accompanying set of sensor images (contained within the reference map boundary) which reflect the range of expected temporal signature variations, and (3) a library of the physical and wavelength dependent electrical properties of "common" scene materials.



Having developed such a data base, it is then necessary to statistically quantify the nature of the scene errors present. Based on this quantification, the most appropriate algorithm class and preprocessing choice for a particular wavelength (and possibly target type) can be determined. Finally, after evaluating system performance over the expected range of scene errors and operational constraints, it would be possible to determine which sensor wavelengths are most appropriate for the image matching tasks.

8.0 Acknowledgments

This report represents the culmination and integration of three years of joint research by the authors in the areas of signature analysis, algorithm evaluation, and overall map-matching system design. The authors wish to acknowledge the contributions and helpful suggestions of their colleagues -- specifically, Joe Dodd of The Aerospace Corporation, and Lloyd Mundie, Ted Garber, How Bailey, Hy Shulman, Marianne Lakatos, and John Clark of The Rand Corporation. The authors also wish to acknowledge the support of the secretarial staffs at both corporations throughout the course of this project.

References

1. Thomas, J., et al., Cruise Missiles: An Examination of Development Decisions and Management, Naval Postgraduate School, Monterey, California, March 1978.

2. Bailey, H. H., et al., Cruise Missile Guidance: Discussion of Scene Characteristics and Processing Techniques, The Rand Corporation, Santa Monica, California, sponsored by DARPA, Contract MDA903-78-C-0281, Rand Working Note WN-10412-ARPA, February 1979.

3. Terrain Contour Matching (TERCOM) Primer, Directorate for Systems Engineering, Aeronautical Systems Division, Air Force Systems Command, Wright-Patterson Air Force Base, Ohio, Report ASD-TR-77-61, August 1977.

4. Hinricks, P. R., "Advanced Terrain Correlation Techniques," IEEE Position Locations and Navigation Symposium, 1976, pp. 89-96.

5. Conrow, E. H., "The Feasibility of an Airborne Thermal Infrared Position Updating System," AIAA Paper 77-1477, Proceedings of the AIAA Second Digital Avionics Systems Conference, November 2-4, 1977, pp. 35-39.

6. Sagawa, S. S., Radiometric Area Correlation Guidance Captive Flight Test Program, Phase 5 - Executive Summary, Air Force Armament Laboratory, Air Force Systems Command, Report AFATL-TR-78-102, Eglin Air Force Base, Florida, September 1978.

7. Carr, J., and J. Sobek, "Digital Scene Matching Area Correlator," Proceedings of the 24th International SPIE Symposium, July 28 - August 1, 1980.

8. Reed, C., Kohn, J., and D. Mercier, "The Range Only Correlation System," Proceedings of the 24th International SPIE Symposium, July 28 - August 1, 1980.

9. Hiller, E. R., "Pulse Doppler Map-Matching," Proceedings of the 24th International SPIE Symposium, July 28 - August 1, 1980.

10. Golden, J., "Terrain Correlation (TERCOM)," Proceedings of the 24th International SPIE Symposium, July 28 - August 1, 1980.

11. Ratkovic, J. A., Structuring the Components of the Image Matching Problem, Note N-1216-AF, The Rand Corporation, Santa Monica, California, December 1979.

12. Brice, C. R., and C. L. Fennema, "Scene Analysis Using Regions," Artificial Intelligence, Vol. 1, 1970, pp. 205-226.

13. Davis, L. S., "A Survey of Edge Detection Techniques," Computer Graphics and Image Processing, Vol. 4, September 1975, pp. 248-270.

14. Farag, R. F. H., "An Information Theoretic Approach to Image Partitioning," IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-8, No. 11, November 1978.

15. Garnett, J. M. III, and S. S. Yau, "Nonparametric Estimation of the Bayes Error of Feature Extractors Using Ordered Nearest Neighbor Sets," IEEE Transactions on Computers, Vol. C-26, January 1977, pp. 46-54.

16. Gupta, J. N., and P. A. Wintz, Closed Boundary Feature Finding, Selection and Classification Approach to Multi-Image Modeling, Laboratory for Applications of Remote Sensing, Note No. 062773, Purdue University, W. Lafayette, Indiana, 1973.

17. Kettig, R. L., and D. A. Landgrebe, Classification of Multispectral Image Data by Extraction and Classification of Homogeneous Objects, Laboratory for Applications of Remote Sensing, Purdue University, W. Lafayette, Indiana, 1975.

18. Lahart, M. J., "Local Segmentation of Noisy Images," Optical Engineering, Vol. 18, No. 1, January-February 1979, pp. 76-78.

19. Pratt, W. K., Digital Image Processing, Wiley & Sons, New York, 1978.

20. Robertson, T. F., et al., Multispectral Image Partitioning, Laboratory for Applications of Remote Sensing, Purdue University, W. Lafayette, Indiana, 1973.

21. Rosenfeld, A., and A. C. Kak, Digital Picture Processing, Academic Press, New York, 1978.

22. Wacker, A. G., A Cluster Approach to Finding Spatial Boundaries in Multispectral Imagery, Laboratory for Agricultural Remote Sensing, Purdue University, W. Lafayette, Indiana, 1970.

23. Conrow, E. H., "Parametric Analysis of TERCOM False Fix Probability," National Aerospace Electronics Conference, May 16-18, 1978, pp. 1271-1277.

24. Ratkovic, J. A., Performance Considerations for Image Matching Systems, Note N-1217-AF, The Rand Corporation, Santa Monica, California, December 1979.

25. Kim, R. H., Pattern Matcher Development Study, sponsored by Defense Advanced Research Projects Agency (DoD), ARPA Order #3208, Contract DAAK40-77-C-0017, June 1978.

450 / SPIE Vol. 238 Image Processing for Missile Guidance (1980)

CONROW, RATKOVIC

Having developed such a data base, it is then necessary to statistically quantify the nature of the scene errors present. Based on this quantification, the most appropriate algorithm class and preprocessing choice for a particular wavelength (and possibly target type) can be determined. Finally, after evaluating system performance over the expected range of scene errors and operational constraints, it would be possible to determine which sensor wavelengths are most appropriate for the image matching tasks.

8.0 Acknowledgments

This report represents the culmination and integration of three years of joint research by the authors in the areas of signature analysis, algorithm evaluation, and overall map-matching system design. The authors wish to acknowledge the contributions and helpful suggestions of their colleagues -- specifically, Joe Dodd of The Aerospace Corporation, and Lloyd Mundie, Ted Garber, How Bailey, Hy Shulman, Marianne Lakatos, and John Clark of The Rand Corporation. The authors also wish to acknowledge the support of the secretarial staffs at both corporations throughout the course of this project.

References

1. Thomas, J., et al., Cruise Missiles: An Examination of Development Decisions and Management, Naval Postgraduate School, Monterey, California, March 1978.

2. Bailey, H. H., et al., Cruise Missile Guidance: Discussion of Scene Characteristics and Processing Techniques, The Rand Corporation, Santa Monica, California, sponsored by DARPA, Contract MDA903-78-C-0281, Rand Working Note WN-10412-ARPA, February 1979.

3. Terrain Contour Matching (TERCOM) Primer, Directorate for Systems Engineering, Aeronautical Systems Division, Air Force Systems Command, Wright-Patterson Air Force Base, Ohio, Report ASD-TR-77-61, August 1977.

4. Hinrichs, P. R., "Advanced Terrain Correlation Techniques," IEEE Position Location and Navigation Symposium, 1976, pp. 89-96.

5. Conrow, E. H., "The Feasibility of an Airborne Thermal Infrared Position Updating System," AIAA Paper 77-1477, Proceedings of the AIAA Second Digital Avionics Systems Conference, November 2-4, 1977, pp. 35-39.

6. Sagawa, S. S., Radiometric Area Correlation Guidance Captive Flight Test Program, Phase 5 - Executive Summary, Air Force Armament Laboratory, Air Force Systems Command, Report AFAL-TR-78-102, Eglin Air Force Base, Florida, September 1978.

7. Carr, J., and J. Sobek, "Digital Scene Matching Area Correlator," Proceedings of the 24th International SPIE Symposium, July 28 - August 1, 1980.

8. Reed, C., J. Kohn, and D. Mercier, "The Range Only Correlation System," Proceedings of the 24th International SPIE Symposium, July 28 - August 1, 1980.

9. Hiller, E. R., "Pulse Doppler Map-Matching," Proceedings of the 24th International SPIE Symposium, July 28 - August 1, 1980.

10. Golden, J., "Terrain Correlation (TERCOM)," Proceedings of the 24th International SPIE Symposium, July 28 - August 1, 1980.

11. Ratkovic, J. A., Structuring the Components of the Image Matching Problem, Note #N-1216-AF, The Rand Corporation, Santa Monica, California, December 1979.

12. Brice, C. R., and C. L. Fennema, "Scene Analysis Using Regions," Artificial Intelligence, Vol. 1, 1970, pp. 205-226.

13. Davis, L. S., "A Survey of Edge Detection Techniques," Computer Graphics and Image Processing, Vol. 4, September 1975, pp. 248-270.

14. Farag, R. F. H., "An Information Theoretic Approach to Image Partitioning," IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-8, No. 11, November 1978.

15. Garnett, J. M., III, and S. S. Yau, "Nonparametric Estimation of the Bayes Error of Feature Extractors Using Ordered Nearest Neighbor Sets," IEEE Transactions on Computers, Vol. C-26, January 1977, pp. 46-54.

16. Gupta, J. N., and P. A. Wintz, Closed Boundary Feature Finding Selection and Classification Approach to Multi-Image Modeling, Laboratory for Applications of Remote Sensing, Note No. 062773, Purdue University, W. Lafayette, Indiana, 1973.

17. Kettig, R. L., and D. A. Landgrebe, Classification of Multispectral Image Data by Extraction and Classification of Homogeneous Objects, Laboratory for Applications of Remote Sensing, Purdue University, W. Lafayette, Indiana, 1975.

18. Lahart, M. J., "Local Segmentation of Noisy Images," Optical Engineering, Vol. 18, No. 1, January-February 1979, pp. 76-78.

19. Pratt, W. K., Digital Image Processing, Wiley & Sons, New York, 1978.

20. Robertson, T. F., et al., Multispectral Image Partitioning, Laboratory for Applications of Remote Sensing, Purdue University, W. Lafayette, Indiana, 1973.

21. Rosenfeld, A., and A. C. Kak, Digital Picture Processing, Academic Press, New York, 1978.

22. Wacker, A. G., A Cluster Approach to Finding Spatial Boundaries in Multispectral Imagery, Laboratory for Agricultural Remote Sensing, Purdue University, W. Lafayette, Indiana, 1970.

23. Conrow, E. H., "Parametric Analysis of TERCOM False Fix Probability," National Aerospace Electronics Conference, May 16-18, 1978, pp. 1271-1277.

24. Ratkovic, J. A., Performance Considerations for Image Matching Systems, Note #N-1217-AF, The Rand Corporation, Santa Monica, California, December 1979.

25. Kim, R. H., Pattern Matcher Development Study, sponsored by Defense Advanced Research Projects Agency (DoD), ARPA Order #3208, Contract DAAK 40-77-C-0017, June 1978.

450 / SPIE Vol. 238 Image Processing for Missile Guidance (1980)


ALMOST EVERYTHING ONE NEEDS TO KNOW ABOUT IMAGE MATCHING SYSTEMS


26. Ratkovic, J. A., Hybrid Correlation Algorithms - A Bridge between Feature Matching and Image Correlations, Paper #6418, Rand Corporation, Santa Monica, California, November 1979.

27. Gerson, G., et al., Image Sensor Measurements Program: Vol. I, Multiple Subarea Bi-Level Correlation Scene Matching System, Hughes Research Laboratories, Malibu, California, Contract No. F-30602-77-C-0049, June 1979.

28. Smith, F. W., et al., Optimal Spatial Filters, Systems Control Inc., Palo Alto, California, DARPA Contract No. DAAK 40-77-C-0113, September 1978.

29. Merchant, J., Address Modification, Image Technology Program, Vol. 1, Overview and Theory, Honeywell Electro-Optics Center, Lexington, Massachusetts, sponsored by DARPA, Contract No. DAAK-40-78-C-0144, April 1978.

30. Ratkovic, J. A., Estimation Techniques and Other Work on Image Correlation, Report #R-2211-AF, Rand Corporation, Santa Monica, California, September 1977.

31. Johnson, M. W., "Analytical Development and Test Results of Acquisition Probability for Terrain Correlation Devices Used in Navigation Systems," AIAA Paper 72-122, presented at the Tenth Aerospace Sciences Meeting, San Diego, California, January 1972.

32. Wessely, H. W., Image Correlation: Part II, Simulation and Analysis, The Rand Corporation, R-2057/2-PR, Santa Monica, California, November 1976.

33. Bailey, H. H., et al., Image Correlation: Part I, Simulation and Analysis, The Rand Corporation, R-2057/1-PR, Santa Monica, California, November 1976.

34. Siegel, R., and J. R. Howell, Thermal Radiation Heat Transfer, McGraw-Hill Book Company, New York, 1972.

SPIE Vol. 238 Image Processing for Missile Guidance (1980) / 451



Appendix

Texture

The concept of texture from a scene composition/sensor resolution description has previously been introduced. It is of interest to explore the source of the textural spectral signature variations present on a time- and space-varying basis within sensor imagery. Texture is effectively produced by two different types of phenomena.

In the first case, a material with physical or electrical properties different from those of its neighbors exists within the scene. If it occupies one or more scene elements it will be resolved within the sensor image. It is also possible to resolve this material on a subelement basis if its area times radiance is greater than that from all other materials within the element, and if the resulting radiance is above the detector's SNR.
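The subelement resolvability condition in this paragraph can be sketched as a simple check (an illustrative model only; the function name, the (area fraction, radiance) representation, and the noise threshold are assumptions, not the paper's own formulation):

```python
def subelement_detectable(materials, noise_equiv_radiance):
    """Decide whether the strongest material in one resolution element
    can be resolved on a subelement basis.

    materials: list of (area_fraction, radiance) pairs for the materials
    sharing the element (illustrative representation).
    """
    contributions = [a * radiance for a, radiance in materials]
    dominant = max(contributions)
    rest = sum(contributions) - dominant
    # Resolved if its area-times-radiance product exceeds that of all
    # other materials combined and clears the detector noise floor.
    return dominant > rest and dominant > noise_equiv_radiance
```

For example, a bright material covering 30% of an element can dominate a dim material covering the remaining 70%, whereas two equal contributors resolve neither.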

In the second case, a truly homogeneous material may exist within a region virtually independent of resolution (e.g., dry beach sand). Texture may still be present due to three factors.

The first is the slope and slope azimuth of the material relative to the sun (or illuminator)-target-sensor geometry. It is possible for areas with moderate slopes (20°-30°) to produce substantially more or less radiance depending on their orientation between illumination source and detector; this is especially important when direct (e.g., laser or sun) rather than diffuse (e.g., skylight) irradiance exists.
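The slope/slope-azimuth effect can be illustrated with the standard cosine-of-incidence factor for a tilted facet under direct illumination (a Lambertian sketch; the function and the specific angles below are illustrative, not taken from the paper):

```python
import math

def slope_radiance_factor(sun_zenith_deg, slope_deg, azimuth_diff_deg):
    """Relative direct irradiance on a tilted facet: the cosine of the
    local incidence angle between the facet normal and the illuminator.
    azimuth_diff_deg is the angle between the slope azimuth and the
    illuminator azimuth."""
    z = math.radians(sun_zenith_deg)
    s = math.radians(slope_deg)
    da = math.radians(azimuth_diff_deg)
    cos_i = math.cos(z) * math.cos(s) + math.sin(z) * math.sin(s) * math.cos(da)
    return max(cos_i, 0.0)  # facets turned away receive no direct component
```

For a 25° slope under a sun at 60° zenith angle, the facet facing the sun receives roughly nine times the direct irradiance of the facet facing away (about 0.82 versus 0.09), consistent with the "substantially more or less radiance" noted above.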

For imaging lasers and radars, slope and slope azimuth within the illuminator-target-detector geometry significantly impact the magnitude of the returned target radiance. Here, unlike a passive system, the illuminator is usually co-located with the detector, which simplifies the governing geometry for determining the return vector of the propagated wave. Weak to moderate reflectors oriented at steep incident angles to a co-located transmitter/detector can often produce a significantly greater return than strong reflectors oriented less favorably.

For passive systems, where the illuminator (usually the sun) and the vehicle are generally not co-located, shadowing is more difficult to evaluate. Here, shadowing is often a problem due to the time-varying sun-target geometry coupled with the slope, slope azimuth, and surface roughness of the reference area. As a consequence, shadowing can significantly impact daytime optical/near IR and middle IR imagery where a strong solar component exists. Its effect in the thermal IR region is to prevent direct incident short wavelength radiation from being absorbed by the target, thus reducing the diurnal temperature oscillation by weakening the thermal inertia driving function. Because of the diffuse nature of passive microwave radiation, the effect of shadowing on apparent brightness temperature is generally not a problem for water and metal within a reasonable range of antenna depression angles, because of their moderate to high microwave reflectances, respectively. For materials with high microwave emittances, shadowing weakens the thermal inertia driving function for the diurnal temperature oscillation, and can reduce the emissive power and the resulting observed apparent brightness temperature.

As with active illuminator systems, slope and slope azimuth play an important role in many passive imaging systems. In the optical/near IR and middle IR regions they impact the returned target radiance similarly to active systems, although to a greater extent because of the varying solar-target geometry. Slope and slope azimuth also produce differential heating from absorbed short wavelength solar radiation, which can have a moderate to strong impact on nighttime middle IR and diurnal thermal IR imagery, and a small to moderate effect on diurnal passive microwave imagery when a high microwave material emittance exists.

The second parameter that can produce image texture is the material reflection coefficient, or reflectance. A variation in smooth-surface reflected energy versus incidence angle exists due to the real and imaginary components of the material's electrical properties. Because material reflectance is the dominant energy balance parameter in several spectral regions, its directional characteristics can have a significant impact on the amount of energy returned from a target. The real component is the material dielectric constant, while the imaginary one equals the electrical conductivity divided by the angular frequency times the free-space permittivity. For conductors (e.g., bare metals), the second component generally predominates. For dielectrics (most natural materials) the first term is usually dominant, since negligible electrical conductivity exists.
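The decomposition just described can be written as a complex relative permittivity, eps = eps' - j*sigma/(omega*eps0). A minimal sketch (the moist-soil-like material values in the note below are illustrative assumptions, not measurements from the paper):

```python
import math

EPS0 = 8.854e-12  # permittivity of free space, F/m

def complex_relative_permittivity(eps_real, conductivity_s_per_m, frequency_hz):
    """Real part: the material dielectric constant. Imaginary part:
    electrical conductivity divided by (angular frequency times the
    free-space permittivity), as stated in the text."""
    omega = 2.0 * math.pi * frequency_hz
    return complex(eps_real, -conductivity_s_per_m / (omega * EPS0))
```

For an assumed dielectric with eps' = 15 and sigma = 0.1 S/m at 10 GHz, the loss term is only about 0.18, so the real term dominates; for a bare metal, sigma is many orders of magnitude larger and the imaginary term predominates, as the paragraph notes.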

Given the electrical component values, the vertically or horizontally polarized directional reflectance for a smooth material can be determined from Fresnel's equations at a particular wavelength (34). The values computed by Fresnel's equations provide a measure of theoretical material reflectance versus incident and reflected angles. Surface roughness height and orientation can, however, significantly impact the amount of radiance actually reflected (or absorbed) by the material.
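A minimal sketch of the Fresnel computation referenced here, for a smooth non-magnetic surface described by a complex relative permittivity (an illustration of the standard equations, not code from reference 34):

```python
import cmath
import math

def fresnel_reflectance(eps_rel, incidence_deg):
    """Power reflectance for horizontally (perpendicular) and vertically
    (parallel) polarized radiation from Fresnel's equations, for air
    above a medium of complex relative permittivity eps_rel."""
    theta = math.radians(incidence_deg)
    cos_i = math.cos(theta)
    # Effective cosine term in the lower medium via Snell's law.
    root = cmath.sqrt(eps_rel - math.sin(theta) ** 2)
    r_h = (cos_i - root) / (cos_i + root)                      # horizontal
    r_v = (eps_rel * cos_i - root) / (eps_rel * cos_i + root)  # vertical
    return abs(r_h) ** 2, abs(r_v) ** 2
```

At normal incidence both polarizations coincide; for a water-like eps = 81 the reflectance is ((9 - 1)/(9 + 1))^2 = 0.64, one reason calm water is such a strong specular reflector at microwave frequencies.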

Consequently, the third parameter of interest is the roughness of the surface itself. For the same illumination source-detector geometry, multiple reflections will occur within the material when the roughness height to wavelength ratio is large. This results in an increase in absorptance and a consequent increase in emittance in wavelength regions where this parameter is relevant. When the ratio of roughness height to wavelength is small, multiple reflection effects diminish, and the resulting absorptance decreases to a theoretical minimum for the material.
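A common rule of thumb for the roughness-height-to-wavelength ratio discussed above is the Rayleigh criterion (a standard approximation, not taken from the paper; the example numbers below are illustrative):

```python
import math

def rayleigh_smooth(rms_height_m, wavelength_m, incidence_deg):
    """A surface is treated as smooth (specular) when its rms roughness
    height is below wavelength / (8 * cos(incidence angle))."""
    limit = wavelength_m / (8.0 * math.cos(math.radians(incidence_deg)))
    return rms_height_m < limit
```

A field with roughly 1 cm rms roughness looks rough to a 3 cm (X-band) radar at 30° incidence but smooth at a 21 cm (L-band) wavelength, which is why the same terrain can image very differently across bands.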

452 / SPIE Vol. 238 Image Processing for Missile Guidance (1980)



The directional and bidirectional reflectances of a material in a given wavelength region are dependent on the surface roughness as well as the governing electrical relationship. Unfortunately, it is difficult to characterize surface roughness. The root mean square (rms) roughness sometimes used provides no information pertaining to the statistical distribution of roughness around the rms value or the average slope of the sides of the roughness peaks, although these can significantly influence directional and bidirectional reflectance (34). As a result, the directional and bidirectional reflectance characteristics that some materials exhibit are due to a combination of complex surface structure and surface impurities (e.g., iron oxide or moss), together with the material's inherent electrical properties.

Surface roughness significantly impacts the returns from imaging lasers and radars, depending on the relevant geometry. When a shallow incidence angle exists between the illuminator and target, the net effect for rough surfaces is often to return more radiance than a smooth one because of the effective presence of material corner reflectors. This is evident in radar data when examining imagery from smooth versus rough fields or water. At steep incidence angles the reverse is true: here, a smooth surface will generally return more radiance than a rough one.

Surface roughness can also impact the returned target radiance in the optical/near IR, and to a lesser extent in the middle IR region, because of the predominance of the direct solar illumination component. The effects are similar to those discussed under active illuminator systems. In the thermal IR region, an insignificant amount of direct energy radiated by the sun reaches the surface, and generally high emittances (typically 0.85 to 0.99) exist for most natural materials. As a consequence, the net effect here is to impact the short wavelength absorptance and possibly the material thermal inertia.

In passive microwave imagery where high emittance materials with a surface roughness substantially greater than the imaging wavelength are present, effects similar to those in the thermal IR can exist. Many materials, such as metal, concrete, asphalt, and smooth water, behave specularly in the passive microwave region (particularly at frequencies below 140 GHz) because their surface roughness is small in comparison to the imaging wavelength. An interesting case of the effect of surface roughness on material reflectance in this region occurs with water. Calm water behaves as a good specular reflector of passive microwave radiation (second only to metal in this wavelength region). As surface roughness increases, the magnitude of the sky radiation times microwave material reflectance term decreases due to the multiple reflections present. As a consequence, the emittance times ground (water) temperature term predominates in rough water where capillary waves exist, and emissive power variations under clear skies can be on the order of 20% to 30% between this and the smooth water case.
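The two competing terms named here (emittance times ground temperature, and reflectance times sky radiation) can be sketched as a zero-atmosphere brightness temperature model; the emittance and temperature values below are illustrative assumptions, not measurements from the paper:

```python
def brightness_temperature_k(emittance, ground_temp_k, sky_temp_k):
    """Apparent brightness temperature = emitted term + reflected-sky
    term, with reflectance = 1 - emittance for an opaque surface."""
    return emittance * ground_temp_k + (1.0 - emittance) * sky_temp_k

# Calm water is a strong specular reflector (low emittance); capillary
# waves raise the effective emittance so the emissive term predominates.
calm = brightness_temperature_k(0.4, 285.0, 30.0)   # -> 132.0 K
rough = brightness_temperature_k(0.6, 285.0, 30.0)  # -> 183.0 K
```

Under these assumed numbers the smooth-to-rough change is roughly 28%, in the 20% to 30% range cited for clear skies.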

SPIE Vol. 238 Image Processing for Missile Guidance (1980) / 453
