
METHODOLOGIES APPLIED FOR OBTAINING DIGITAL ELEVATION MODELS (DEMs) BASED ON SATELLITE DATA

- Final -

Author: Lic. Rayner De Ruyt M.

Instructor: Marcelo Scavuzzo

February, 2012


Abstract

Most geoscientific applications use georeferenced cartographic data and therefore need good knowledge and visualization of the topography of the Earth's surface. For example, flood planning, erosion control and agriculture require a mapping of geomorphological features in order to generate a better interpretation of such information in two dimensions (2D).

This work aims to review the different methods used to extract absolute or relative elevation from satellite data and to inspect their performance using results from various research. Specifically, the methodologies discussed are: Clinometry (VIR and SAR data), Stereoscopy (VIR and SAR data), Radargrammetry, Interferometry and Altimetry.

It also presents the most important features of the Digital Elevation Models produced by the SRTM mission and the ASTER GDEM project, as complementary sources of elevation information derived from satellite data.

Keywords: Remote Sensing, Digital Elevation Models (DEMs), Clinometry, Stereoscopy, Radargrammetry, Interferometry, Altimetry


Contents

1 Introduction

2 Clinometry or Shadow and shade
2.1 Introduction
2.2 Application in VIR images
2.2.1 VIR Methodology
2.3 Application in SAR images
2.3.1 SAR Methodology

3 Principles of Stereoscopy and Radargrammetry
3.1 Stereoscopy
3.2 DEM generation with different sensors
3.2.1 SPOT
3.2.2 IRS-1C / 1D
3.2.3 ASTER
3.2.4 IKONOS
3.2.5 QUICKBIRD
3.2.6 CORONA
3.2.7 TK350
3.3 Radargrammetry
3.3.1 Image matching
3.3.2 3D stereo intersection
3.3.3 Pyramidal scheme
3.3.4 Examples with Envisat images

4 Interferometry
4.1 General Concepts
4.1.1 Basic Principles of SAR Interferometry
4.1.2 Data Processing

5 Altimetry
5.1 Radar altimeter
5.1.1 General concepts
5.1.2 System operation
5.2 Laser altimeter
5.2.1 General concepts
5.2.2 Satellite laser altimetry

6 Conclusions

7 SRTM mission and ASTER GDEM
7.1 SRTM mission
7.1.1 Process description
7.2 ASTER GDEM
7.2.1 General information


List of Figures

2.1 Geometry formed by the sensor and the surface - Clinometry
2.2 Angles formed between the trajectories of the sensor and the solar azimuth
2.3 Geometry between the sensor signals, sun energy and the elements present
2.4 Methodological steps performed with SPOT data in order to determine and measure the shadows of buildings and trees in the urban area analyzed
2.5 Comparing SPOT satellite image and simulated image from linear shape-from-shading algorithm
2.6 Specific geometry formed by the sensor and the surface - SAR Clinometry
2.7 General structure of the proposed algorithm [LHmZR05]
2.8 Results obtained for the case of Israel [LHmZR05]
2.9 Results obtained for the case of Tunisia [LHmZR05]
3.1 Stereo condition of optical images [Jac03]
3.2 Correct geo-location by means of 1 image and a DEM (orthoimage) [Jac03]
3.3 Approximate geo-location by means of 1 image and height level of reference plane; position error: dl = dh · tan v [Jac03]
3.4 Stereo-extracted DTM (60 km x 60 km; 5 m grid spacing) from HRG stereo pair. The black areas correspond to the mismatched areas due to radiometric differences between the multi-date HRG images, as a result of (A) melting snow in the mountains, (B) frozen lakes and (C) the St. Lawrence River [Tou06]
3.5 Comparison between SRTM data and CORONA DEM [GDP05]
3.6 Geometry for Radargrammetry [YLG11a]
3.7 Radargrammetry processing [YLG11a]
3.8 Schematic representation of the methodology [FMP10]
3.9 Build an image pyramid [FMP10]
3.10 Intensity SAR images and simulated image for radargrammetric processing [YLG11b]
3.11 Correlation map between reference and matching images [YLG11b]
3.12 Re-processed radargrammetry DEMs using intensity average and simulation method [YLG11b]
4.1 The observation geometry of InSAR system
4.2 InSAR processing for DEM generation
4.3 Processing result: (a) SAR amplitude image, (b) coherence map, (c) interferogram, (d) unwrapped phase map [RMLD11]
4.4 Comparison results of the registered DEM and ASTER GDEM
5.1 Geometry formed between the altimeter platform and the Earth surface
5.2 Different distances between the altimeter and the geodesic surface
5.3 Differences on the Bering Glacier system from 2000 to 2006, measured by ICESat
7.1 Platform geometry - ASTER
7.2 DEM of Mount Yatsugatake and Tsukuba in Japan and horizontal profile


List of Tables

7.1 System Parameters
7.2 Spectral Passband


CHAPTER 1

Introduction

In conceptual terms, a digital elevation model (DEM) is based on a large number of points with X, Y and Z coordinates describing the bare soil. Instead of the expression DEM, the term digital height model (DHM) is also used. The definition of a digital terrain model (DTM) is wider; it also includes information about the location of objects [Jac03].

A Digital Elevation Model (DEM) is an important source of data able to provide useful information about the terrain. This digital model is used for many applications, such as urban expansion monitoring, disaster (flood, earthquake) monitoring, agriculture, engineering works, and as basic input for other applications [YLG11a].

Since the early experiments, various analog or digital sensors in the visible or in the microwave spectrum have been flown to provide researchers and geoscientists with spatial data for extracting and interpreting 3D information of the earth's surface. Although the shape-from-shading technique can be applied to optical sensor images, stereo-viewing using space camera or digital scanner images was, and still is, the most common method used by the mapping, photogrammetry and remote sensing communities. However, side-looking synthetic aperture radar (SAR) data also gives the opportunity to extract 3D information using image-processing techniques appropriate to the nature of the data. With SAR data, three main methods have been developed: radargrammetry, clinometry and interferometry. Radargrammetry (similar to the stereo-viewing of optical data) uses two images acquired from different viewpoints to generate a stereo pair and stereo viewing. Clinometry takes advantage of the SAR shading and shadowing in the image, and interferometry uses mainly the SAR signal data instead of the image [TM00].

Digital Elevation Models play a fundamental role in mapping. The digital description of the three-dimensional surface is important for several applications. Today the most often used photogrammetric products are orthoimages generated by means of a single image and a DEM. The very high resolution space sensors mainly operate in a single-image mode; stereo pairs are not taken very often. A correct geo-referencing is only possible based on a DEM, but these DEMs have to be created. The existing, non-classified worldwide DEMs usually do not have sufficient accuracy and reliability for more precise applications, or they may be too expensive [Jac03].

The following report presents the results of the work carried out, with the goal of developing a theoretical frame of reference for the methodologies applied to the generation of Digital Elevation Models from satellite data. Specifically, this report presents the four main ways to produce a DEM, which are: Clinometry, Stereoscopy, Interferometry and Altimetry (no image data).

Finally, the appendix shows the general characteristics of the digital elevation models produced by the SRTM mission (USGS) and the ASTER GDEM project, which correspond to the models used in the research and methodologies reviewed, and which are freely available.


CHAPTER 2

Clinometry or Shadow and shade

2.1 Introduction

Clinometry, according to [TM00], is a method that calculates the slope of a given surface from the reflectivity values of an image matrix (figure 2.1). In other words, based on the shadows present in the image, it is capable of recovering object shapes and from them obtaining elevation measurements. These measurements are relative, however, so generating an absolute digital elevation model requires additional information about the analyzed surface.

By definition, the shadows present in the images correspond to areas where there is no signal at the sensor, mainly due to the geometry generated between the sensor signal and the surface topography. According to [TM00], there are considerable differences between SAR and VIR data regarding this phenomenon: in the former, shadows correspond to areas without information (due to the lateral view of the sensor), whereas in VIR data the sensor still receives diffuse energy from these zones. For this reason, the two cases have been treated differently from a methodological point of view.

2.2 Application in VIR images

For VIR images, shadow and occluded areas are different because the illumination source is the sun. Satellite VIR images are acquired from the descending path of a sun-synchronous orbit, and the local solar time is generally before or around noon (e.g., for SPOT, the local solar time of the descending node is around 10:30 am). Consequently, west-looking images will have shadow and occluded areas in the same direction, and care must be taken to separate these two effects. Since the sun elevation angle around noon will generate shadows only from steep slopes, shadow can be consistently measured only from vertical structures such as buildings or trees (in a row or isolated), or from very rough terrain [TM00].

Figure 2.1: Geometry formed by the sensor and the surface - Clinometry

2.2.1 VIR Methodology

In [SS98], the heights of buildings and trees were measured with SPOT data (Panchromatic and Infrared bands) in a remote area 25 kilometers north of Adelaide, Australia.

As illustrated in figure 2.2, the part of the tree-line shadow seen by the sensor depends upon its location with respect to the sun and the trees. The width of the sun shadow, measured along a normal to the rows of trees, depends on the azimuths of the sun, of the image scan line and of the object.

In addition, figure 2.2 shows the angles formed between the trajectories of the sensor and the solar azimuth, which constitute the reference frame for the linear measurement of the shadow on the surface in question.

From here on we will reserve the term "length of shadow" to indicate the length of shadow along the row of trees. For relating shadow width to object height, a few assumptions were made: first, the object is perpendicular to the Earth's surface, which is flat; second, the shadows are cast directly onto the ground; third, it is also assumed that the shadow starts from the tree-trunk line on the ground. Finally, it is assumed that if the sensor and the sun are on opposite sides of the object, the sensor is able to see the entire shadow (figure 2.3).


Figure 2.2: Angles formed between the trajectories of the sensor and the solar azimuth

Figure 2.3: Geometry between the sensor signals, sun energy and the elements present

Using figure 2.3, the shadow width along the sun azimuth is:

Ssu = ht / tan(θsu)   (2.1)

The shadow width obstructed along the azimuth of the sensor by the object (tree) in thesensor’s field of view is:

Ssa = ht / tan(θsa)   (2.2)

The component of the shadow of the object along the normal to the tree line is:


Ssun = Ssu · cos(Φsun)   (2.3)

The component of the shadow obstructed by the object along the normal to the tree lineis:

Ssan = Ssa · cos(Φsan)   (2.4)

Where:

Φsun = Φsu + 90° − Φt   (2.5)

and

Φsan = Φsa + 90° − Φt   (2.6)

where Φsu, Φsa and Φt are the azimuths of the sun, the image scan lines and the tree lines, respectively, as shown in figure 2.2. The shadow width that is seen by the satellite along the normal to the tree line is given by:

S = Ssun − Ssan = ht · [cos(Φsun)/tan(θsu) − cos(Φsan)/tan(θsa)]   (2.7)

therefore:

ht = S / [cos(Φsun)/tan(θsu) − cos(Φsan)/tan(θsa)]   (2.8)

If the tree (shadow) line is at an angle to the image scan line, an additional correction needs to be applied to equation 2.8: the shadow width S in equation 2.8 needs to be multiplied by sec(Φscan), where Φscan is the angle between the scan lines and the tree line (figure 2.2). The terms other than S in equation 2.8 may be combined into a function. The only variable in this function, for a given image, is the tree-line azimuth Φt. Thus, equation 2.8 can be simplified to:

ht = S · KΦt   (2.9)

where:

KΦt = sec(Φscan) / [cos(Φsun)/tan(θsu) − cos(Φsan)/tan(θsa)]   (2.10)


KΦt is a function of Φt. Note that Ssan = 0 if the satellite is on the opposite side of the object from the sun.

Firstly, an appropriate threshold to delimit shadows in the images is selected. Shadowscast by rows of trees are used to estimate the mean heights of trees. Calibration curves arethen constructed to relate the actual mean heights of trees to the estimated heights. Finally,heights of buildings are computed using their shadow lengths and the calibration curves,without any correction for the terrain slopes.
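
As a minimal numerical sketch of equations 2.5-2.10 (the angle values below are hypothetical placeholders, not measurements from [SS98]):

```python
import numpy as np

def height_from_shadow(S, phi_su, phi_sa, phi_t, phi_scan, theta_su, theta_sa):
    """Object height ht from the shadow width S (equations 2.5-2.10).

    S: shadow width along the normal to the tree line (m); phi_su, phi_sa,
    phi_t: azimuths of the sun, scan lines and tree line (deg); phi_scan:
    angle between scan lines and tree line (deg); theta_su, theta_sa:
    elevation angles of the sun and of the sensor (deg)."""
    d = np.radians
    phi_sun = d(phi_su + 90.0 - phi_t)                      # eq. 2.5
    phi_san = d(phi_sa + 90.0 - phi_t)                      # eq. 2.6
    denom = (np.cos(phi_sun) / np.tan(d(theta_su))
             - np.cos(phi_san) / np.tan(d(theta_sa)))
    K = (1.0 / np.cos(d(phi_scan))) / denom                 # eq. 2.10 (sec = 1/cos)
    return S * K                                            # eq. 2.9

# Hypothetical example: a 10 m wide shadow under an assumed acquisition geometry.
print(height_from_shadow(S=10.0, phi_su=20.0, phi_sa=75.0, phi_t=80.0,
                         phi_scan=5.0, theta_su=35.0, theta_sa=70.0))  # ~8.3 m
```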

Figure 2.4: Methodological steps performed with SPOT data in order to determine and measure the shadows of buildings and trees in the urban area analyzed

Figure 2.5 compares a SPOT satellite image and a simulated image from the linear shape-from-shading algorithm of [Liu03]: (A) real SPOT satellite image with locations of ground control points; (B) hill-shaded relief image based on the elevation model derived from the linear shape-from-shading algorithm.

2.3 Application in SAR images

In a SAR data matrix, shadows and hidden areas are mixed because, as mentioned in the introduction, these areas carry no information; consequently, in comparison with VIR images, the boundaries of the shadows are more easily defined. This situation is always determined by the incidence angle of the images.


Radarclinometry corresponds essentially to the inversion of a mathematical expression of the radar backscatter in terms of the albedo and the local incidence angle. The local slope is then computed from the pixel reflectivity value and transformed into relative elevation by integration pixel by pixel. In other words, shape-from-shading makes use of the sensitivity of the backscatter to the micro-topography, but it cannot provide absolute location; some reference elevation information is needed to derive the absolute elevation. Intrinsic radiometric and geometric ambiguities then limit the accuracy of this technique when applied to general terrain surfaces; the accuracy of the derived slopes and elevation is generally of the order of a few degrees and better than 100 m, respectively, depending on the image resolution and terrain relief [TG00].

The first radiometric ambiguity is related to the inversion of the model since it dependson two parameters. The SAR backscatter of the surface is altered if the surface propertiesvary from place to place. In this case assuming uniform reflecting properties (constantalbedo) will recover a shape (incidence angle) that is different from the actual one.

2.3.1 SAR Methodology

In [LHmZR05], this technique was applied to SAR data processing to calculate elevation data in semiarid regions, both in Israel and in Tunisia.

Figure 2.6 shows the following components: θ is the incidence angle, x the range direction, y the azimuth direction, α and β the local ground angles in the XOZ and YOZ planes, Rd and Ra the pixel dimensions in the range and azimuth directions on the flat surface, and L(α) and l(β) the dimensions of the surface element, which depend both on its orientation and on the pixel size on the flat surface.

L(α) = Rd·sinθ / sin(|θ − α|),   α ∈ [−π/2 + θ, 0) ∪ (0, +π/2]   (2.11)

l(β) ≈ Ra / cosβ,   β ∈ [−π/2, +π/2]   (2.12)

∆Hli and ∆Hco correspond to the altitude differences between two consecutive lines and between two consecutive columns of the image, respectively. Then, from this figure we have:

tanα = [Rd/∆Hco + 1/tanθ]⁻¹   (2.13)

tanβ = ∆Hli / Ra   (2.14)

The wave vector is K = (−sinθ, 0, cosθ)ᵗ and the unit surface normal vector is N = (−sinα·cosβ, sinβ, cosα·cosβ)ᵗ. Thus:

K · N = cos(θ − α)·cosβ   (2.15)

For a plane wave propagating in free space, the measured power Pr is [LHmZR05]:

Pr = Pe·Ge·Gr·λ² / ((4π)³·r⁴) · σ   (2.16)

where λ is the wavelength, Pe the emitted power, Ge and Gr the antenna gains, r the distance between the satellite and the target (surface), and σ the product of the equivalent surface and the gain in the backscattering direction; σ is called the backscattering coefficient. In previous works (from Wildey 1986 to Paquerault 1998, cited in [LHmZR05]), the observed surface was assumed to be Lambertian, mainly for simplification. Indeed, this implies that:

σ = dA·(K · N)·σ0·cosθr   (2.17)

where θr is the backscattering angle, cosθr = K · N, and dA is the resolution surface, dA = L(α)·l(β).

The ratio Q(θ ,α ,β ) between the powers backscattered by the observed surface with itsorientation angles α and β , and by the same surface but flat is:

Q(θ, α, β) = [cos(|θ − α|)]²·cosβ·sinθ / (sin(|θ − α|)·cos²θ)   (2.18)

While the Lambertian assumption may be considered a first approximation for diffuse scattering surfaces such as dense vegetation, it does not hold in most cases where backscattering is dominated by surface scattering. Therefore a more general formulation is developed. In the more general case, σ is written as follows:

σ = dA·(K · N)·Gs(θr)   (2.19)

  = [Rd·Ra·sinθ·cos(θ − α) / sin(θ − α)]·Gs(cos⁻¹[cos(θ − α)·cosβ])   (2.20)

From this we deduce the ratio Q(θ, α, β):

Q(θ, α, β) = [cos(θ − α)·sinθ / (sin(θ − α)·cosθ)] · Gs(cos⁻¹[cos(θ − α)·cosβ]) / Gs(θ)   (2.21)
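
To make the inversion concrete, the following sketch (an illustration, not the processing chain of [LHmZR05]) numerically inverts the Lambertian ratio of equation 2.18 for the range slope α, assuming β ≈ 0:

```python
import numpy as np

def Q_lambert(theta, alpha, beta=0.0):
    """Ratio of equation 2.18 between a tilted facet (alpha, beta) and flat ground."""
    return (np.cos(np.abs(theta - alpha)) ** 2 * np.cos(beta) * np.sin(theta)
            / (np.sin(np.abs(theta - alpha)) * np.cos(theta) ** 2))

def invert_alpha(Q_obs, theta, n=20000):
    """Brute-force inversion of Q_lambert for alpha (beta neglected).
    Q is monotonic in alpha on this interval, so the minimum is unique."""
    alphas = np.linspace(-np.pi / 2 + theta + 1e-3, theta - 1e-3, n)
    return alphas[np.argmin(np.abs(Q_lambert(theta, alphas) - Q_obs))]

theta = np.radians(23.0)                   # assumed ERS-like incidence angle
Q_obs = Q_lambert(theta, np.radians(8.0))  # simulated observation for an 8 deg slope
print(np.degrees(invert_alpha(Q_obs, theta)))  # ~8
```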


Elevation regularization

The signal variations as a function of the β angle are negligible relative to the variations due to the α angle. Therefore, large errors can occur in the β estimates, inducing large drifts in the elevation estimates. This produces a strong line-integration effect, visible in the first results. Different ways have been proposed to constrain the β angle and help its estimation. They are generally based on a regularization process, which forces the elevation variations to remain small, based on the assumption that relief variations are smooth. Regularization also helps to reduce the errors due to speckle [LHmZR05].

The first is a minimization of the altitude difference between neighbouring pixels. Indeed, neglecting the β angle in equation 2.21, the α angle and the elevation difference dh in the range direction can be obtained. Then, for each image line, the integration of the dh values gives the elevation up to a constant h0(l), which has to be estimated for each line. In this first approach, this constant is estimated as the value that minimizes the difference between the elevation of a pixel and those of its neighbours in the previous lines:

h0(l) is such that ∑(s ∈ line l) ∑(s′ ∈ Ns) [h(s) + h0(l) − h(s′)]² is minimum   (2.22)

Typically the neighbourhood Ns was equal to a few (3 to 10) previous lines and columns. However, the results still show a strong line effect. One reason that can be argued is that the errors occurring in the α estimation when neglecting β are still not corrected by equation 2.22.

Classification process and regularization

To compute the ratio Qobs, which approximates Q(θ, α, β) (equation 2.21), we need to know the signal backscattered by the considered area in the absence of relief. This signal is generally unknown, and it is often taken to be equal to the mean value of the backscattered intensities over the whole image. This may introduce large errors either in the case of inhomogeneous areas, i.e. areas containing different backscattering classes, or if the relief induces a non-negligible bias on the mean signal value, e.g. if the area shows an average slope significantly different from zero.

In this investigation, a small number of classes was considered, typically three, whose features can easily be estimated from selected areas in a supervised way. Ground data are not necessary since, for such a number of classes, it is not difficult to find representative and approximately flat areas in the image. In particular, in the case of semi-arid or arid regions, the bottoms of the valleys and the upper plateaux can be identified.

Next, to introduce altitude information, let us consider that the data feature vector is constituted not only by the intensity value but also by the altitude estimate. Then, based on a Gaussian assumption for the conditional distribution of the altitudes relative to their class, the following energy terms are defined:


U(k)data(S) = [I(S) − I(k)(Sclass)]² / (2·var(k)(I(Sclass)))   (2.23)

U(k)cl(S) = [h(k)(S) − h(k)(Sclass)]² / (2·var(k)(h(Sclass)))   (2.24)

where the superscript (k) denotes the current iteration (convergence is iterative), h(Sclass) is the altitude average over Sclass, and var(h(Sclass)) its variance. To achieve greater robustness, these two statistical moments are not computed on the whole image but on sub-areas (typically 100 x 100 pixels). Considering that the new data are vectorial, (I(k)(s), h(k)(s))ᵗ, with a diagonal covariance matrix, the new 'data attachment' term is the sum of U(k)data(S) and U(k)cl(S), defined respectively by equations 2.23 and 2.24.

Taking into account the altitude only for the classes for which it is relevant, such as 'bottom of the valleys' and 'upper plateaux', is done by assuming an infinite variance for all other classes. In the same spirit, it can also be useful to introduce a weighting factor Cc between the 'intensity data attachment' term and the 'altitude' one.

Finally, the global energy that should be minimized is then the weighted sum of threeterms over all the image pixels:

U(k) = ∑S [ U(k)data(S) + Cc·U(k)cl(S) + (Cn/2)·U(k)N(S) ]   (2.25)

where Cc and Cn are weight coefficients. The global minimization is iterative and, as before, it can be achieved using a simulated annealing algorithm. Figure 2.7 summarizes the proposed algorithm.
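
Schematically, the energy terms of equations 2.23-2.25 for one pixel can be written as below; the class statistics are hypothetical, the neighbourhood term U_N is passed in as a constant, and the simulated annealing loop is not shown:

```python
import numpy as np

def pixel_energy(I_s, h_s, classes, Cc=1.0, Cn=1.0, U_N=0.0):
    """Energy of equation 2.25 for one pixel, evaluated for each candidate class.
    An infinite altitude variance disables the altitude term for that class."""
    U = []
    for c in classes:
        U_data = (I_s - c["I_mean"]) ** 2 / (2.0 * c["I_var"])   # eq. 2.23
        U_cl = (h_s - c["h_mean"]) ** 2 / (2.0 * c["h_var"])     # eq. 2.24
        U.append(U_data + Cc * U_cl + 0.5 * Cn * U_N)            # eq. 2.25 (one pixel)
    return np.array(U)

# Hypothetical class statistics (intensities in dB, altitudes in m):
classes = [
    {"I_mean": -10.0, "I_var": 2.0, "h_mean": 120.0, "h_var": 50.0},    # valley bottom
    {"I_mean": -7.0,  "I_var": 2.0, "h_mean": 300.0, "h_var": 80.0},    # upper plateau
    {"I_mean": -5.0,  "I_var": 2.0, "h_mean": 0.0,   "h_var": np.inf},  # other class
]
print(np.argmin(pixel_energy(I_s=-9.2, h_s=140.0, classes=classes)))   # -> 0
```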

As can be seen in figure 2.8, the results obtained are: (a-c) α angle estimation, respectively (a) based on the assumption of a homogeneous area, (b) using minimum distance classification (MDcl), and (c) using altitude-constrained classification (ACcl); (d-e) classification results, (d) MDcl and (e) ACcl (class centres equal to -10 dB, -7 dB and -5 dB); and (f-h) DEM results, obtained respectively from (f) the homogeneous area assumption, (g) MDcl, and (h) ACcl.

Figure 2.9 shows the retrieved DEM (upper image), the SAR image (lower image), and two DEM profiles put in correspondence with the SAR images.


Figure 2.5: Comparing SPOT satellite image and simulated image from linear shape-from-shading algorithm


Figure 2.6: Specific geometry formed by the sensor and the surface - SAR Clinometry

Figure 2.7: General structure of the proposed algorithm [LHmZR05].


Figure 2.8: Results obtained for the case of Israel [LHmZR05]

Figure 2.9: Results obtained for the case of Tunisia [LHmZR05]


CHAPTER 3

Principles of Stereoscopy and Radargrammetry

3.1 Stereoscopy

In general terms, for DEM generation with optical images we need two or more images showing the same area (figure 3.1) from different directions. The projection centre has to be known in a specified object coordinate system, in addition to the view direction, for the correct determination of the ground point. If an orthoimage is to be generated, only one image is required in addition to the DEM (figure 3.2). If only an average height is available, discrepancies dl in the horizontal position are caused, depending upon the height discrepancy (figure 3.3). The same relation holds for discrepancies dh of a DEM [Jac03].

Figure 3.1: Stereo condition of optical images[Jac03].

Figure 3.2: Correct geo-location by means of 1 image and a DEM (orthoimage) [Jac03].

Figure 3.3: Approximate geo-location by means of 1 image and height level of reference plane; position error: dl = dh · tan v [Jac03].

If a DEM is also to be used for other purposes, like the generation of orthoimages, orthometric heights related to the geoid are required. Space data are often available as WGS84 ellipsoidal heights, which have to be corrected by the geoid undulation. The geoid undulations are not known with sufficient accuracy in all areas of the world, so in that case a local fit to vertical control points, defined with orthometric heights, is required.

The accuracy of a height determined by the intersection of two imaging rays depends upon the accuracy of the x-parallax Spx (px = difference of image coordinates x′ − x) and the height-to-base relation of the imaging configuration [Jac03].
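
As a rough illustration of this relation (a standard rule of thumb with assumed numbers, not figures taken from [Jac03]): the vertical accuracy can be approximated as SZ ≈ (h/b) · Spx expressed in ground units, so that a height-to-base relation h/b = 1.6 and a matching accuracy of 0.3 pixel with a 10 m ground pixel size give SZ ≈ 1.6 · 0.3 · 10 m ≈ 4.8 m.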

3.2 DEM generation with different sensors

This section presents some general features of the seven sensors discussed in [Jac03], specifically:


3.2.1 SPOT

The reached accuracy depends upon the height-to-base ratio, i.e. the angle between the intersecting rays. Because the view direction across the orbit is not fixed, the base-to-height relation can reach the value 1.0. The simple theory of a linear dependence upon the height-to-base relation is justified if the matching accuracy in the image is independent of the angle between the intersecting rays; but the matching is more precise for smaller parallax angles, where the image parts are more similar, than for larger parallax angles with larger differences between the image parts. The optimal base-to-height relation depends upon the area itself: for an open and flat area the relation 1.0 is optimal, while for more undulated areas with buildings and vegetation a factor of 1.6 may lead to better results. Figure 3.4 shows one example with this sensor.

Figure 3.4: Stereo-extracted DTM (60 km x 60 km; 5m grid spacing) from HRG stereopair. The black areas correspond to the mismatched areas due to radiometric differencesbetween the multi-date HRG images, as a result of (A) melting snow in the mountains, (B)frozen lakes and (C) the St. Lawrence River[Tou06].

3.2.2 IRS-1C / 1D

The general configuration of IRS-1C / 1D corresponds to SPOT, including the same problem of the time delay between the acquisition of corresponding images. Because of satellite energy problems, not so many stereo models have been taken. The smaller pixel size on the ground gives an accuracy advantage; on the other hand, the radiometric quality is limited by the 6-bit grey values. In the higher regions, snowfall between both imaging periods changed the object, so no image matching was possible there. Image matching also failed in the dark forest areas because of the limited grey value range of the panchromatic IRS-1C of only 6 bits: with just 64 different grey values, the radiometric range from snow to dark forest cannot be expressed.

3.2.3 ASTER

Like MOMS and SPOT HRS, Terra ASTER generates the stereo model by a nadir and a backward view within a minute. The height-to-base relation of 2.0 simplifies the automatic image matching. The negligible time delay between the imaging of both scenes also enables automatic image matching in forest areas. ASTER images are available on the Internet for just a handling fee (specific features are mentioned in the appendix).

3.2.4 IKONOS

Only a few stereo scenes are taken by IKONOS. According to Space Imaging, the generation of stereo models is a "waste of capacity"; nevertheless stereo models can be ordered. The flexible viewing of IKONOS also enables a stereo view with a base length of just 90 km, or 12 seconds time difference. The automatic matching of an IKONOS stereo model taken in the same orbit has not caused any problem. Even with the poor height-to-base relation of 7.5, an accuracy of building heights of SZ = +/-1.7 m, corresponding to an x-parallax accuracy of just Spx = +/-0.2 pixel, has been reached.

3.2.5 QUICKBIRD

QuickBird images are available as Basic imagery, with a geometry similar to SPOT level 1A, and as Standard imagery, similar to IKONOS Geo. With the appropriate software both types can be handled. Two overlapping QuickBird images taken with a two-week time difference and a height-to-base relation of 1.6 have been matched. No larger changes of the vegetation could be seen, so no problems with the automated image matching appeared. Approximately 80% of the points have been matched successfully. This is a satisfying result; the unmatched points are located mainly in uniform areas, like roads with the same grey value. The accuracy of the y-parallax of the matched points is in the range of the orientation accuracy of 1 m, dominated by the limited accuracy of the control points. A mean square Z-difference between the QuickBird data and the not error-free reference DEM of the USGS of +/-4.8 m, corresponding to Spx = 0.8 pixel, has been reached.

3.2.6 CORONA

The images taken by the former US spy satellites of the CORONA system have been released and are available for just a handling fee. Most often the KH-4B has been used. It has a combination of a forward- and a backward-looking panoramic film camera, creating a stereoscopic model with a height-to-base relation of 1.8. Of course these 30- to 40-year-old images do not show the current topographic features, but usually the ground surface does not change, so the images can be used for DEM generation. In areas of strong erosion, the old images can be used as a reference for the changes (Schneider et al. 2001).

The ground resolution of approximately 3 m can lead to a relative vertical accuracy of +/-2 m to +/-5 m. The automatic image matching usually poses no problems if the area is not too uniform, but the correct mathematical model has to be used for the solution of the panoramic images.

Figure 3.5: Comparison between SRTM data and CORONA DEM [GDP05].

3.2.7 TK350

The Russian space film camera TK350 has imaged large parts of the world. The footprint of 200 km x 300 km offers an economic generation of a DEM over a large area. In the same area as the ASTER data, a DEM has been generated from TK350 photos.

The TK350 photos have been scanned with a pixel size of 16 µm. Even with this not very small pixel size, the film grain is clearly visible. In addition, the film contains several scratches. Without scratch removal and low-pass filtering the automatic image matching failed, and even after filtering, problems occurred in the forest area. Larger parts could not be matched, and several points had to be removed because they exceeded a height difference of 150 m.


3.3 Radargrammetry

To acquire good geometry for radargrammetric pairs, the intersection angle between the reference image and the matched image should be large enough for the observed parallax, which is used to determine the terrain elevation. However, in order to have good stereo viewing, nearly identical images (a small intersection angle) are necessary in processing. Intersection angles of approximately 10-20 degrees between the two images and a look angle (i.e., the angle between nadir and the beam direction) greater than 20° are usually considered optimal configurations for medium- to high-relief areas [YLG11a].

Figure 3.6: Geometry for Radargrammetry [YLG11a]

In figure 3.6, S1 and S2 are the satellites, Bx and Bz are the horizontal and vertical baselines, and R1 and R2 are the distances between the sensors and the ground target P. The target P is seen as P1 and P2 in the two SAR images from S1 and S2. Then dp is called the 'disparity' between P1 and P2. If the ground elevation is zero (h = 0), the disparity is zero, and it increases with increasing height h. It is expressed by:

dp = √(x² + (H − h)² − H²) − √((x − Bx)² + (H + Bz − h)² − (H + Bz)²) − Bx   (3.1)

In accordance with the sensor height and the look angles θ1 and θ2:

h = dp / (cotθ2 − cotθ1)   (3.2)

Figure 3.7 shows the processing steps of radargrammetry for DEM generation. The radargrammetric DEM processing steps can be described as: (1) acquiring stereoscopic images; (2) subsetting the areas of interest; (3) despeckling to remove noise; (4) co-registration of the two subset images; (5) matching between the co-registered images; (6) height calculation; (7) geocoding and DEM generation.
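
As a minimal numerical sketch of equation 3.2 (the look angles and disparity below are assumed values, not the ENVISAT configuration of [YLG11a]):

```python
import numpy as np

def height_from_disparity(dp, look1_deg, look2_deg):
    """Terrain height from the radargrammetric disparity dp (equation 3.2)."""
    cot = lambda a: 1.0 / np.tan(np.radians(a))
    return dp / (cot(look2_deg) - cot(look1_deg))

# Assumed stereo pair: look angles of 34 and 19 degrees, 120 m of ground disparity.
print(height_from_disparity(dp=120.0, look1_deg=34.0, look2_deg=19.0))  # ~84 m
```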


Figure 3.7: Radargrammetry processing[YLG11a].

The main processing steps to generate a DEM from stereo radar images by radargrammetry are shown in figure 3.8. First, the image matching finds correspondences between the SAR images and generates the disparity map. Second, the 3D stereo intersection makes it possible to extract height from the disparity map and to generate the raw DEM. Finally, the DEM is geocoded and filtered. In addition, GCPs (Ground Control Points) are collected to refine the geometric model and to correct localization errors [FMP10].

Figure 3.8: Schematic representation of the methodology [FMP10]

The major shortcomings of radargrammetry are that it uses radar images of lower resolution compared to optical images and that the raw radar image is contaminated by speckle noise and atmospheric error. Furthermore, it generates a lower accuracy product than SAR interferometric processing, although it is less affected by atmospheric errors. The low spatial resolution of SAR images leads to a large grid size, and the contaminated raw image increases the elevation error in the DEM product. To resolve these problems, the raw radar image, which is noisy and of low spatial resolution, should be improved using images from the same track acquired with the same look angle. These images are combined to produce improved images, which are then used to generate the radargrammetric DEM [YLG11a].


3.3.1 Image matching

In radargrammetric processing, a critical step is to obtain a disparity map directly linked to the parallax of each pixel of the radar image. This disparity map is provided through the matching step applied to the stereoscopic radar image pair (called the primary and secondary images) [FMP10]. The most common image matching method is area correlation achieved by the zero-mean normalized cross-correlation (ZNCC), which gives the cross-correlation coefficient ρ:

ρ = (E[I1·I2] − E[I1]·E[I2]) / √(V(I1)·V(I2))   (3.3)

where I1 and I2 represent the amplitude values of the primary and secondary image windows, respectively. The windows have the same range and azimuth sizes; E and V denote the expectation (mean) and the variance, respectively.

Then we get a correlation surface built from the values of the coefficient ρ. The maximum of this surface gives the position of the best candidate pixel (called the homologous pixel) for the matching operation in the secondary image. The range and azimuth indexes of the homologous pixel make it possible to compute the disparity.
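
A minimal NumPy sketch of the ZNCC search of equation 3.3 along the range direction (the window size and search span are illustrative assumptions, and the pixel is assumed to lie far from the image borders):

```python
import numpy as np

def zncc(w1, w2):
    """Zero-mean normalized cross-correlation of equation 3.3."""
    a, b = w1 - w1.mean(), w2 - w2.mean()
    return (a * b).mean() / np.sqrt(w1.var() * w2.var())

def range_disparity(primary, secondary, row, col, half=7, search=20):
    """Disparity of pixel (row, col): offset of the homologous pixel in the
    secondary image that maximizes the correlation coefficient."""
    ref = primary[row - half:row + half + 1, col - half:col + half + 1]
    scores = [zncc(ref, secondary[row - half:row + half + 1,
                                  col + d - half:col + d + half + 1])
              for d in range(-search, search + 1)]
    return int(np.argmax(scores)) - search
```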

3.3.2 3D stereo intersection

For each pixel in the primary image, the disparity value gives the relation between the slant ranges r1 and r2. Moreover, in the radar image a pixel is referenced by its range and azimuth indexes. On the one hand, the range distance locates the point on a sphere whose centre is the radar position: the range sphere. On the other hand, the azimuth position of a pixel gives the Doppler cone, which is replaced by a plane in our case because of the null Doppler frequency in the direction perpendicular to the radar beam. The intersection of the range sphere and the Doppler plane provides two solutions, but only one is valid according to the direction of the radar beam. The solution (x, y, z) of the searched point satisfies the following system of equations:

(x − X1)² + (y − Y1)² + (z − Z1)² = r1²
(x − X1)·Ẋ1 + (y − Y1)·Ẏ1 + (z − Z1)·Ż1 = 0
(x − X2)² + (y − Y2)² + (z − Z2)² = r2²
(x − X2)·Ẋ2 + (y − Y2)·Ẏ2 + (z − Z2)·Ż2 = 0   (3.4)

where (Xi, Yi, Zi) and (Ẋi, Ẏi, Żi) are the position and velocity of sensor i.

The Earth ellipsoid parameters used are a = 6378137.0 m and b = 6356752.3 m. To draw the epipolar line associated with a pixel, it is necessary to make the height h vary from hmin to hmax, which are estimated by observing the study area. For a given pixel in an image, the corresponding pixel in the secondary image lies on this epipolar line. Ideally, the search area can be reduced to a thin strip of one-pixel thickness along the epipolar line [FMP10].
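
A compact sketch of solving the system of equation 3.4 by Gauss-Newton least squares; the sensor states and the starting point in the demo are synthetic, and the Earth ellipsoid constraint is left out for brevity:

```python
import numpy as np

def stereo_intersection(S1, V1, r1, S2, V2, r2, x0, iters=20):
    """Ground point from two range spheres and two zero-Doppler planes (eq. 3.4).
    S_i, V_i: sensor position and velocity vectors; r_i: slant ranges."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        d1, d2 = x - S1, x - S2
        f = np.array([d1 @ d1 - r1 ** 2, d1 @ V1,
                      d2 @ d2 - r2 ** 2, d2 @ V2])
        J = np.vstack([2 * d1, V1, 2 * d2, V2])
        x -= np.linalg.lstsq(J, f, rcond=None)[0]   # Gauss-Newton update
    return x

# Synthetic check: recover a known point P in a local frame (metres).
P = np.array([0.0, 0.0, 0.0])
S1, V1 = np.array([-3.0e5, 0.0, 7.0e5]), np.array([0.0, 7500.0, 0.0])
S2, V2 = np.array([-4.5e5, 0.0, 7.0e5]), np.array([0.0, 7500.0, 0.0])
r1, r2 = np.linalg.norm(P - S1), np.linalg.norm(P - S2)
print(stereo_intersection(S1, V1, r1, S2, V2, r2, x0=[1e4, 1e4, 1e4]))  # ~[0, 0, 0]
```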

3.3.3 Pyramidal scheme

A hierarchical strategy, the pyramidal scheme (figure 3.9), is used to reduce processing time and allows working on large images. The principle is quite simple: from the original image we build an image pyramid. At each level, the image size is reduced by a factor 2^k corresponding to the k-th iteration step. The images are reduced by averaging the pixel gray levels: in the reduced image, each pixel value is the average of four pixels of the previous image. For each iteration, the matching process establishes an approximate disparity map. Especially at the first iteration, this disparity map is not regular enough because of disparity jumps between neighbouring pixels, so a low-pass Wiener filter is used to make it more regular. We are thus able to predict the disparity offsets at the next level of the hierarchical process, reducing computation time and speckle errors. With increasing iterations, better accuracy is obtained at each level. At the final step, the last disparity map is used to produce the DEM [FMP10].
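
A small sketch of the 2 x 2 averaging that builds such a pyramid (image dimensions are simply cropped to even numbers at each level):

```python
import numpy as np

def build_pyramid(image, levels):
    """Image pyramid by 2x2 block averaging; pyramid[k] is reduced by 2**k."""
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels):
        img = pyramid[-1]
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = img[:h, :w]
        pyramid.append(0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                               + img[0::2, 1::2] + img[1::2, 1::2]))
    return pyramid
```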

Figure 3.9: Build an image pyramid [FMP10]

3.3.4 Examples with Envisat images

In the research of [YLG11b], the site selected is the Wollongong and Appin area in the state of New South Wales, Australia. This research used nine ENVISAT Advanced Synthetic Aperture Radar (ASAR) images for radargrammetric DEM generation. To compare the effect of the intersection angle between reference and match images, three different tracks of ENVISAT images were selected and processed using cross-matching pairs. Each track has three images, and the data were selected according to close acquisition dates.

Figure 3.10 illustrates the intensity SAR images and the simulated image for radargrammetric processing. The terrain shapes appear differently, for example in the distance and width differences between rivers and in the size of the city and grass areas. This phenomenon generates the stereoscopy exploited by the radargrammetric technique. Two matching images were registered for pixel-offset calculation. The main problems encountered when matching SAR images for radargrammetry are speckle noise and the difference of the two stereo pairs from one another, as the backscattered SAR signal mainly depends on the local incidence angle. In the processing step, the speckle noise and other noise were reduced to increase the image matching accuracy. This step was applied to each data processing pair.

Figure 3.10: Intensity SAR images and simulated image for radargrammetric processing [YLG11b].

Figure 3.11 shows the correlation map between the reference and matching images. The correlation number gives a measure of how strong a match exists between the reference and match points. This step was performed with approximately 20 tie points in each pair. The correlation scale ranges from 0 to 1, where the closer the result is to 1, the better the match. The higher correlations appear in the city and in low grass field regions. This is related to the ground conditions, such as the movement of grass, trees and water versus stable buildings; a stable surface condition produces a permanent backscattering signal.

Figure 3.12 illustrates the re-processed radargrammetry DEMs using the intensity average and simulation methods. The terrain realism is improved by the intensity average processing. However, the simulation method produced terrain distortion in the DEM because of the difficulty of image coregistration and data parameter matching.


Figure 3.11: Correlation map between reference and matching images[YLG11b].

Figure 3.12: Re-processed radargrammetry DEMs using intensity average and simulationmethod[YLG11b].


CHAPTER 4

Interferometry

4.1 General Concepts

Radar interferometry is an alternative to the conventional stereoscopic method for extract-ing relative or absolute elevation information. It uses the coherent property of modern SARand enjoys the advantages of radar systems and of digital image processing: all-weather,night and day operation, and automated or semi-automated processing[TG00].

Interferometry (InSAR) is one of the main applications. It is a technique that measures the interference patterns caused by the phase difference between two or more SAR images, and can thus determine relative distances in great detail (a fraction of the wavelength) and observe the angular differences of the surface necessary to detect, for example, the topography or surface deformations over a time interval [Han01].

The result of this measurement is an interferogram which, depending on the coherence (or correlation) of the data employed, shows through a series of fringes the distance, or its change, between the ground and the sensor. The resulting matrix provides a high-density data sampling (~100 pixels km⁻²), high accuracy (~1 cm) and a certain rhythm of observation (about one pass per month) [MF98].

The phase difference, which is related to the two-way path difference of the radar echoes, is only known with a 2π ambiguity. It is then necessary to "unwrap" the phase differences for the determination of the absolute interferometric phase to within a constant [MF98].

With existing satellite SAR sensors, only the repeat-pass configuration (one satellite antenna and two passes of the satellite) can generate interferometric data through the combination of complex images, since there is presently no satellite system with two antennas. Such a dual-antenna system, the Shuttle Radar Topography Mission (SRTM), was launched in early 2000 [TG00].

Figure 4.1: The observation geometry of InSAR system

However, as noted in [TM00], there are geometric and radiometric limitations in the computation of the terrain elevation, since the phase difference contains several components: topographic, atmospheric, displacement and noise. To resolve these ambiguities and to address these different components, several developments in SAR interferometric processing took place, first with airborne SAR data in the 1980s and later with spaceborne data. These include:

1. The impact of the interferometric baseline on the elevation accuracy;

2. The evaluation of different atmospheric phenomena and the way to characterizethem;

3. The differential interferometry combining multiple interferograms to estimate sub-centimeter surface displacement;

4. The time interval between the image acquisition.

4.1.1 Basic Principles of SAR Interferometry

The investigation of [RMLD11] presents a concise synthesis of the basic principles of SAR interferometry for the repeat-pass case, with acquisitions separated by a defined time interval. Because the distances between the two antennas and the target point are different, there is a phase difference between the corresponding image points, which generates the interferogram. The phase value in the interferogram is the measured phase difference of the two acquisitions. On the basis of the geometric relationship between the phase difference of the two acquisitions, the three-dimensional position of the target point and the parameters of the flight orbit, the 3D coordinates of the ground point can be determined. Then a large-scale, high-accuracy digital elevation model (DEM) is generated.


A1 and A2 are the locations of the satellite antenna for the two acquisitions of the same area. P is the target point; h is the height of A1; Z(y) the elevation of P; θ the incidence angle at A1; B the distance between the two antennas, i.e. the baseline; α the angle of B relative to the horizontal direction; γ the slant range from A1 to P; and γ + ∆γ the slant range from A2 to P.

Based on the geometry of figure 4.1, the elevation of the target point P is:

Z(y) = h− γ cosθ (4.1)

By the law of cosines, the following holds:

(γ + ∆γ)² = γ² + B² − 2γB·cos(α + π/2 − θ) = γ² + B² − 2γB·sin(θ − α)   (4.2)

Expanding equation 4.2 and solving for γ, we get:

γ = (∆γ² − B²) / (2·(B·sin(α − θ) − ∆γ))   (4.3)

According to the principles of InSAR technology, the interferometric phase ∆φ is the difference between the phases of the echoes received by the radar at A1 and A2, whose signals from P travelled the distances γ and γ + ∆γ [RMLD11].

The following relation is established:

∆φ = (4π/λ)·∆γ   (4.4)

Expression 4.5 is obtained by substituting equations 4.3 and 4.4 into equation 4.1.

Z(y) = h − { [(λ·∆φ/(4π))² − B²] / (2·[B·sin(α − θ) − λ·∆φ/(4π)]) }·cosθ   (4.5)

The above expression is the principle of obtaining ground elevation from interferomet-ric phase.
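
A compact numerical rendering of equations 4.1, 4.3 and 4.4; the geometry below is synthetic (ERS-like values are assumed), so the printed elevation simply checks the round trip:

```python
import numpy as np

def height_from_phase(dphi, wavelength, B, alpha, theta, h_sat):
    """Target elevation from the absolute (unwrapped) interferometric phase."""
    dgamma = wavelength * dphi / (4.0 * np.pi)                                     # eq. 4.4
    gamma = (dgamma ** 2 - B ** 2) / (2.0 * (B * np.sin(alpha - theta) - dgamma))  # eq. 4.3
    return h_sat - gamma * np.cos(theta)                                           # eq. 4.1

# Synthetic check with assumed values: C-band, 100 m baseline, 23 deg incidence.
lam, B = 0.056, 100.0
alpha, theta = np.radians(10.0), np.radians(23.0)
h_sat, z_true = 790e3, 500.0
gamma1 = (h_sat - z_true) / np.cos(theta)
A2 = np.array([0.0, h_sat]) + B * np.array([np.cos(alpha), np.sin(alpha)])
P = np.array([gamma1 * np.sin(theta), z_true])
dphi = 4.0 * np.pi * (np.linalg.norm(P - A2) - gamma1) / lam
print(height_from_phase(dphi, lam, B, alpha, theta, h_sat))   # ~500 m
```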

4.1.2 Data Processing

The primary steps of data processing are: the spatial coregistration and resampling of the interferometric image pair, the computation of the interferometric phase, phase unwrapping, the confirmation of the satellite orbit parameters, geometric transformation and projection transformation [RMLD11].


Figure 4.2: InSAR processing for DEM generation

By combining the phases of these two images after coregistration, an interferogram can be generated whose phase is highly correlated with the terrain topography. From the complex interferogram, the interferometric phase is only known modulo 2π. In order to be able to relate the interferometric phase to the topographic height, the correct multiple of 2π has to be added. This is done in the phase-unwrapping step, after removing the expected flat-earth phase from the interferogram.

The interferometric coherence map, generated by calculating the normalized inter-image correlation, characterizes the uncertainty with which the height can be measured. However, successful phase-to-height conversion requires accurate relative orbital parameters and the fulfillment of the basic requirements of a repeat-pass interferometric system (figure 4.3). These requirements consist of having, first, stable atmospheric conditions and terrain backscatter and, second, the same interferometric SAR geometry during both SAR acquisitions. Phase-to-height conversion is a very important step in the DEM generation procedure; therefore, the relationship between phase and height should be calculated accurately.
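
A toy NumPy/SciPy sketch of forming the interferogram and the coherence map from two already coregistered single-look complex images (the array names and window size are assumptions):

```python
import numpy as np
from scipy.signal import fftconvolve

def interferogram_and_coherence(slc1, slc2, win=5):
    """Complex interferogram and coherence map from coregistered SLC images."""
    ifg = slc1 * np.conj(slc2)                 # interferometric phase = np.angle(ifg)
    kernel = np.ones((win, win)) / win ** 2
    boxcar = lambda a: fftconvolve(a, kernel, mode="same")
    num = np.abs(boxcar(ifg))
    den = np.sqrt(np.abs(boxcar(np.abs(slc1) ** 2) * boxcar(np.abs(slc2) ** 2)))
    coherence = num / np.maximum(den, 1e-12)   # estimated over the win x win window
    return ifg, coherence
```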

Figure 4.4 shows the comparison results of the registered DEM and ASTER GDEM: (a) the registered elevation map generated from the InSAR image pair; (b) the elevation map of ASTER GDEM; (c) the profile of the generated DEM in the vertical direction; (d) the profile of the ASTER GDEM in the vertical direction; (e) the profile of the generated DEM in the horizontal direction; (f) the profile of the ASTER GDEM in the horizontal direction.


Figure 4.3: Processing result: (a) SAR amplitude image, (b) coherence map, (c) interferogram, (d) unwrapped phase map [RMLD11].

Figure 4.4: Comparison results of the registered DEM and ASTER GDEM


CHAPTER 5

Altimetry

5.1 Radar altimeter

5.1.1 General concepts

In the last decades, satellite radar altimeters have been successfully exploited to monitor theocean surface on a global scale, and by now measurements performed by radar altimetersare fundamental for understanding key climate mechanisms, for studying ice dynamics, formonitoring tides, for investigating interactions between ocean and atmosphere in climateperturbations. For example, altimeter observations are recognized by the science commu-nity as a fundamental component of the integrated observing strategy for the World OceanCirculation Experiment (WOCE)[GSN+03].

[TM00] mentions, for example, that since the 1990s the ERS-1 and Topex/Poseidon altimeters have offered improved data quality and global data coverage in comparison with the previous altimeters of GEOS-3, Seasat and GEOSAT. In combination with its other sensors, the ERS altimeter has been contributing to scientific development in the following areas of ocean dynamics research:

1. Improved description of 2-D surface waves and wave heights for wave forecasting

2. Ocean topography from the mesoscale to the global scale; and

3. Ocean surface wind field measurement at high spatial resolution


5.1.2 System operation

The principle of satellite altimetry is that the altimeter emits high-frequency microwave pulses (about 1 Hz) vertically towards the Earth's surface and receives the echo reflected by a water surface. The analysis of this echo allows a very precise measurement of the time it takes the wave to leave the satellite, be reflected by the water surface and return to its origin (figure 5.1). This time is converted into a distance by multiplying it by the speed of light. That distance, also known as the altimeter range, is then interpreted as the height separating the satellite from the reflecting water surface. Additionally, if the position of the satellite relative to a known ground reference, usually the WGS84 Earth ellipsoid (figure 5.2), is precisely known, it is possible to estimate, as the difference between this position and the altimeter range, the altitude of the reflecting water level above the reference ellipsoid [LHDD08].
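
In code form (a sketch with assumed numbers, not an actual altimeter product), the range is half the two-way travel time multiplied by the speed of light, and the water level above the ellipsoid is the satellite's ellipsoidal altitude minus that range:

```python
C = 299_792_458.0   # speed of light (m/s)

def water_level(two_way_time_s, satellite_altitude_m):
    """Water-surface height above the reference ellipsoid (figures 5.1 and 5.2)."""
    altimeter_range = C * two_way_time_s / 2.0
    return satellite_altitude_m - altimeter_range

# Assumed values, roughly a Topex/Poseidon-like orbit altitude and echo delay.
print(water_level(two_way_time_s=8.905e-3, satellite_altitude_m=1_334_860.0))  # ~34 m
```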

Figure 5.1: Geometry formed between the altimeter platform and the Earth surface

5.2 Laser altimeter

5.2.1 General concepts

In [RDM+10], Light Detection and Ranging (Lidar) is defined as the application of an airborne laser radar system which can carry out three-dimensional surveys and provides dense three-dimensional point-cloud data of ground objects. Through processing with related software, a DEM, contour maps, a Digital Ortho-photo Map and 3D building models can also be obtained. A Lidar system combines three kinds of technology, namely the laser pulse unit, the global positioning system (GPS) and the inertial navigation system (INS), and is integrated with a digital aerial survey camera. Moreover, shadows and the solar angle have no influence on the laser pulses, which greatly improves the data acquisition quality, and the flight height does not limit the precision of the elevation figures, which gives it an advantage compared to conventional photographic surveys.

The laser scanner represents an emerging technology that is making its transition from the proof-of-concept, prototype stage to a readily available and reliable commercial survey instrument. Automatic DEM generation over large areas can thus be produced with a short delivery time. Its use is mainly restricted to airborne platforms, although a proposal was made, without success, for a high-resolution satellite laser altimeter to map the polar ice sheets. The instrument would also have provided useful data on cloud-top heights and the ocean surface [TM00].

Figure 5.2: Distances between the altimeter and the geodetic reference surface

5.2.2 Satellite laser altimetry

The satellite radar altimeter measures the height of a reflecting facet scanned by the passage of the instrument overhead. It uses the echo delays from within the pulse-limited footprint to estimate the minimum radar range. The size of the pulse-limited footprint for a flat surface is determined by the pulse length. As an example, the radar altimeter of GEOSAT has a pulse-limited footprint of about 2 km in diameter, which increases to many kilometres as large-scale surface roughness increases. Other system limitations (inaccuracies in the timing, tropospheric and ionospheric delays of the radar wave propagation, and orbit determination errors) introduce errors in the range precision, which indirectly affects the spatial resolution [TM00].
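The order of magnitude quoted for GEOSAT can be checked with the common flat-surface approximation D ≈ 2√(cτh) for the pulse-limited footprint diameter, where τ is the compressed pulse length and h the orbital altitude. The sketch below uses assumed GEOSAT-like values (τ ≈ 3.125 ns, h ≈ 800 km), which are not taken from the cited text.

# Hedged sketch: flat-surface pulse-limited footprint approximation.
import math

C = 299_792_458.0  # speed of light (m/s)

def pulse_limited_diameter(pulse_length_s: float, altitude_m: float) -> float:
    """Approximate pulse-limited footprint diameter on a flat, smooth surface."""
    return 2.0 * math.sqrt(C * pulse_length_s * altitude_m)

if __name__ == "__main__":
    tau = 3.125e-9   # assumed compressed pulse length (s)
    h = 800e3        # assumed orbital altitude (m)
    print(f"{pulse_limited_diameter(tau, h) / 1e3:.2f} km")  # roughly 1.7 km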

The first global laser altimetry missions, the Vegetation Canopy Lidar (VCL) and the Ice, Cloud and Land Elevation Satellite (ICESat), are under development. Using large-footprint laser altimetry, VCL will characterize the three-dimensional structure of the Earth's surface, measuring vegetation height, vertical vegetation structure, and ground topography, including sub-canopy topography (i.e., the elevation of the Earth's surface below any overlying vegetation) to 1 m vertical accuracy over the majority of the Earth. These measurements will be acquired using an instrument with three laser ground tracks, each of which has 25 m diameter laser footprints contiguous along track, with 4 km spacing between beams across track. Over its two-year lifetime, these three transecting beams will sample approximately 3% of the Earth's surface between 67° north and south, producing about 5 billion observations of vertical structure [HRBR02].

Figure 5.3: Surface-elevation differences on the Bering Glacier system from 2000 to 2006, measured by ICESat

Figure 5.3 shows ICESat-derived surface-elevation differences on the Bering Glacier system from 2000 to 2006. Noteworthy repeat-pass footprints on Bering, track 0185, and Bagley, track 0416 (the green line is 60 m long), are shown (ASTER images of August 2003 and 2006 in the panels, respectively). Filled circle diameters are 70 m. Surface-elevation differences from 2000 to 2003/04 are relative to the Intermap DEM. Surface-elevation changes are indicated by the colored (red and blue) footprints [MLS+09].

According to [Liu], the critical applications of this sensor type are:

• Compared with traditional photogrammetric methods, LiDAR technology has lower field-operation costs and reduced post-processing time and effort.

• LiDAR can be used effectively for measuring heights of features, such as forest canopy height relative to the ground surface. The returns identified as being from vegetation above the ground are used to determine the height and density of the overlying vegetation (a minimal sketch of this separation is given after this list). In urban areas, a similar separation is done to characterize the location, shape, and height of buildings and other man-made structures.


• Laser bathymetry uses a pulsed laser source to measure subsurface topography in coastal waters. Water depth is accurately profiled by transmitting pulsed coherent laser light beneath an aircraft such that a strong reflected return is first recorded from the water surface, followed closely by a weaker return from the bottom of the water body.

• Important applications include geologic hazard assessment (surface subsidence, erosion), hydrologic modeling, riparian zone and flood plain mapping, assessment of stream quality for salmon spawning, characterization of forest habitat and timber quality, urban planning, and transportation corridor planning and mapping.
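The separation of ground and vegetation returns mentioned in the list above can be illustrated with a minimal sketch. The assumption that the last return per cell approximates the ground is a simplification made here, not the procedure of any specific LiDAR processing package.

# Hedged sketch: crude canopy height from gridded first- and last-return elevations.
import numpy as np

def canopy_height(first_return_z: np.ndarray, last_return_z: np.ndarray) -> np.ndarray:
    """Canopy height model as first-return surface minus an estimated ground.

    first_return_z : elevations of the first (top-of-canopy) returns per cell
    last_return_z  : elevations of the last returns per cell (assumed ~ground)
    """
    ground = np.minimum(first_return_z, last_return_z)  # crude ground estimate
    return first_return_z - ground

if __name__ == "__main__":
    first = np.array([312.4, 305.1, 298.7])  # illustrative elevations (m)
    last = np.array([290.2, 288.9, 298.6])
    print(canopy_height(first, last))        # ~[22.2, 16.2, 0.1] m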


CHAPTER 6

Conclusions

1. Elevation modelling from satellite data has been one of the most important thematic areas since the launch of the first civilian remote sensing satellite. Different data (space photographs, VIR scanner, SAR, altimeter) can be processed by different methods and processing types (interactive, automatic).

2. Clinometry, although it has the methodological advantage of requiring only one image (several bands), is useful only for estimating the heights of extended objects situated on flat terrain. Moreover, in the radarclinometry case, the DEMs are still not very precise, and knowledge of the radar signal is very important. Therefore, the specificities of radar imagery (speckle and geometry, for example) determine the limits of its performance.

3. Stereoscopy with VIR images shows good results; however, it should be taken into account that elevation errors usually differ as a function of land cover (bare surfaces, urban areas, forests). Typically, the worst results are associated with forest cover and the best results with bare surfaces. For this reason, post-processing and the accuracy of the complementary data are very important. In the same manner, appropriate treatments are essential in radargrammetry, because the main problems are the angle difference between the images and the occlusions caused by the side-looking geometry in high-relief areas.

4. As regards interferometry, the results are very precise and satisfactory; however, in methodological terms, several aspects should be considered for DEM generation, for example the use of a pair of images with a short temporal baseline and the consideration of the influence of temporal and spatial decorrelation.

5. On the other hand, altimetry, especially of the laser kind, appears as the alternative that presents the most accurate results in comparative terms, which is why it captures most of the scientific interest.

6. It is key to continue working on the complementarity between methods, sensors and processing chains. In [TM00], for example, a stereoscopic method based on combined VIR and SAR data is presented, where the radiometric content of the VIR image is combined with the SAR's high sensitivity to terrain relief and its "all-weather" capability to obtain the second image of the stereo pair.


CHAPTER 7

SRTM mission and ASTER GDEM

7.1 SRTM mission

In February 2000 the Shuttle Radar Topography Mission (SRTM) was carried out to map the world's landmass between ±60° latitude using radar interferometry. The radar mapping instrument consisted of modified versions of the SIR-C C-band and X-band radars flown on the Shuttle in 1994. The C-band interferometry data were collected in swaths comprised of four subswaths. ScanSAR mapping modes alternately switch between two (or more) beam positions in the cross-track direction to increase the swath width at the expense of along-track resolution [HRG00].

Constructing accurate height maps using radar interferometry requires precise knowledge of the platform position, attitude, and interferometric baseline, as well as knowledge of the radar operating parameters (table 7.1).

Table 7.1: System parameters

Baseline length              62 m
Baseline orientation angle   45°
Burst length                 60-100 pulses
Platform altitude            240 km
Platform velocity            7.5 km/s
Look angle range             30°-60°
Antenna length               12 m / 8 m
PRF range                    1330-1550 Hz
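As a rough illustration of how the parameters in table 7.1 interact, the sketch below estimates the single-pass height of ambiguity (the topographic height producing one 2π cycle of interferometric phase). It assumes a flat Earth, a C-band wavelength of about 5.66 cm (not listed in the table), a look angle equal to the baseline orientation angle so that the whole 62 m baseline is perpendicular to the line of sight, and the single-transmitter phase convention. These simplifications are assumptions made here, not values or formulas taken from [HRG00].

# Hedged, order-of-magnitude sketch using the parameters of table 7.1.
import math

def height_of_ambiguity(wavelength_m, altitude_m, look_angle_deg, b_perp_m):
    """Height change producing one 2*pi cycle of single-pass interferometric phase."""
    theta = math.radians(look_angle_deg)
    slant_range = altitude_m / math.cos(theta)  # flat-Earth slant range
    return wavelength_m * slant_range * math.sin(theta) / b_perp_m

if __name__ == "__main__":
    h_amb = height_of_ambiguity(wavelength_m=0.0566,  # assumed C-band wavelength (m)
                                altitude_m=240e3,     # platform altitude (table 7.1)
                                look_angle_deg=45.0,  # within the 30°-60° range
                                b_perp_m=62.0)        # baseline length (table 7.1)
    print(f"height of ambiguity ~ {h_amb:.0f} m")     # roughly 220 m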


7.1.1 Process description

In [HRG00] the different processing steps carried out in this mission are described; they correspond to:

1. Range compression: In the range compression module the raw signal data are decoded from the 4-bit BFPQ format to floating point. This is done via a look-up table in both cases. Range compression is done using the standard Fourier transform convolution algorithm. A calibration tone was injected into the returned echo data to track phase drift between the inboard and outboard channels; it is extracted and stored for use in computing the relative phase between the two interferometric channels. A phase ramp is applied to the reference function of the two channels to remove differential channel delay and correct for an overall phase constant.

2. Motion compensation: Motion compensation was done to a common reference path, which ensures the data are co-registered in range after image formation, applies the appropriate range spectral shift for a flat surface, and flattens the fringes.

3. Presumming and azimuth compression: For SRTM an algorithm was needed that can be used with burst-mode data and can process the large volume of data efficiently. The algorithm used for production, which incorporates the presum operation, is the modified SPECAN algorithm.

4. Interferogram formation and filtering: Interferograms are generated for each burst and combined into a larger patch interferogram. The interferogram is filtered to reduce noise using a boxcar low-pass filter (a minimal sketch of this step is given after the list).

5. Unwrapping and absolute phase determination: Absolute phase is determined by averaging the unwrapped phase to 500 m postings, adding multiples of 2π, and then comparing the associated height values to a low-resolution DEM. The multiple with the smallest RMS residual is selected.

6. Height reconstruction and regridding: Interferometric height reconstruction is the determination of a target's position vector from known platform ephemeris information, baseline information and interferometric phase.
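A minimal sketch of the interferogram formation and boxcar filtering of step 4 is given below. It is an illustration in Python/NumPy under simplified assumptions, not the SRTM production code.

# Hedged sketch: interferogram as the product of one SLC with the conjugate of the
# other, with a simple boxcar (moving-average) low-pass filter and a coherence estimate.
import numpy as np
from scipy.ndimage import uniform_filter

def form_interferogram(slc1: np.ndarray, slc2: np.ndarray, box: int = 5):
    """Return the boxcar-filtered interferogram and a coherence estimate.

    slc1, slc2 : co-registered single-look complex images (complex arrays)
    box        : boxcar window size in pixels
    """
    ifg = slc1 * np.conj(slc2)                       # raw interferogram
    # Filter real and imaginary parts separately (boxcar low-pass).
    ifg_f = uniform_filter(ifg.real, box) + 1j * uniform_filter(ifg.imag, box)
    # Coherence: magnitude of the averaged product, normalised by the local powers.
    p1 = uniform_filter(np.abs(slc1) ** 2, box)
    p2 = uniform_filter(np.abs(slc2) ** 2, box)
    coherence = np.abs(ifg_f) / np.sqrt(p1 * p2 + 1e-12)
    return ifg_f, coherence

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape = (128, 128)
    noise = lambda: rng.normal(size=shape) + 1j * rng.normal(size=shape)
    s1 = noise()
    s2 = s1 * np.exp(1j * 0.3) + 0.2 * noise()       # phase offset plus a little noise
    ifg, coh = form_interferogram(s1, s2)
    print(np.angle(ifg).mean(), coh.mean())          # phase near -0.3 rad, coherence near 1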

7.2 ASTER GDEM

7.2.1 General information

The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is an advanced multispectral imager that was launched on board the Terra spacecraft in December 1999. ASTER covers a wide spectral region with 14 bands from the visible to the thermal infrared, with high spatial, spectral, and radiometric resolution. The wide spectral region is covered by three separate telescopes. Three visible and near-infrared (VNIR) bands, six shortwave infrared (SWIR) bands, and five thermal infrared (TIR) bands have spatial resolutions of 15, 30, and 90 m, respectively. An additional telescope is used to look backward in the VNIR spectral band (band 3B) for stereoscopic capability, which is the major subject of [FBK+05]. The spectral passbands of ASTER are shown in table 7.2.

Table 7.2: Spectral passbands of ASTER

Subsystem   Band N°   Spectral range (µm)   Spatial resolution
VNIR        1         0.52-0.60             15 m
            2         0.63-0.69
            3N        0.78-0.86
            3B        0.78-0.86
SWIR        4         1.600-1.700           30 m
            5         2.145-2.185
            6         2.185-2.225
            7         2.235-2.285
            8         2.295-2.365
            9         2.360-2.430
TIR         10        8.125-8.475           90 m
            11        8.475-8.825
            12        8.925-9.275
            13        10.25-10.95
            14        10.95-11.65

The Terra spacecraft is in a circular, nearly polar orbit at an altitude of 705 km. The orbit is Sun-synchronous with a local equatorial crossing time of 10:30 A.M., returning to the same orbit every 16 days [FBK+05].

The ASTER instrument produces two types of Level-1 data, Level-1A and Level-1B. Level-1A data are formally defined as reconstructed, unprocessed instrument data at full resolution. According to this definition, the ASTER Level-1A data consist of the image data, the radiometric coefficients, the geometric coefficients, and other auxiliary data, without applying the coefficients to the image. The Level-1B data are generated by applying these coefficients for radiometric calibration and geometric resampling. The Level-1A data are used as source data to generate digital elevation model (DEM) products, because many useful instrument geometric parameters and the spacecraft information are included in them. These parameters are available to generate high-quality DEM data products without ground control point (GCP) correction for individual scenes. The instrument geometric parameters, such as the line-of-sight (LOS) vectors and the pointing-axis vectors, were precisely adjusted through the validation activity using many GCPs. The DEM data, which are processed only with these system parameters, were demonstrated to have excellent accuracy. The evaluation of the DEM data was carried out by the Japanese and U.S. science teams separately, using different DEM generation software and reference databases [FBK+05].

The VNIR subsystem has two telescopes, a nadir-looking telescope and a backward-looking telescope. Figure 7.1 shows the VNIR configuration. The relation between the base-to-height (B/H) ratio and α is B/H = tan α, where α is the angle between the nadir and the backward direction at a point on the Earth's surface. The angle α, which corresponds to a B/H ratio of 0.6, is 30.96°. By considering the curvature of the Earth's surface, the setting angle between the nadir and the backward telescope was designed to be 27.60°.

Figure 7.1: Platform geometry - ASTER
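A quick numerical check of the relation quoted above (not taken from [FBK+05]'s processing): for B/H = 0.6 the corresponding angle is α = arctan(0.6) ≈ 30.96°, in agreement with the value given in the text.

# Hedged sketch: verify that B/H = tan(alpha) gives alpha ~ 30.96 deg for B/H = 0.6.
import math

b_over_h = 0.6
alpha_deg = math.degrees(math.atan(b_over_h))
print(f"alpha = {alpha_deg:.2f} deg")  # 30.96 deg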

The pointing function is provided for global coverage in the cross-track direction, since the swath width of ASTER is 60 km and the distance between neighboring orbits is 172 km at the equator. The optical axes of the nadir and backward telescopes can be tilted simultaneously in the cross-track direction to cover a wider range.

Figure 7.2 shows the generated full elevation images for Mount Yatsugatake and Tsukuba in Japan and one example of a horizontal profile for each image. An example of the elevation error profile for each image is also shown in these figures [FBK+05].


Figure 7.2: DEMs of Mount Yatsugatake and Tsukuba in Japan and horizontal profiles


Bibliography

[FBK+05] H. Fujisada, G. B. Bailey, G. G. Kelly, S. Hara, and M. J. Abrams. ASTER DEM performance. IEEE Transactions on Geoscience and Remote Sensing, 43(12):2707–2714, 2005.

[FMP10] F. Fayard, S. Meric, and E. Pottier. Generation of DEM by radargrammetric techniques. pages 4342–4345, 2010.

[GDP05] N. Galiatsatos, D. Donoghue, and G. Philip. An evaluation of the stereoscopic capabilities of CORONA declassified spy satellite image data. In 25th EARSeL Symposium, Workshop on 3D Remote Sensing, Porto, Portugal, 2005.

[GSN+03] J. Guijarro, R. Santoleri, B. B. Nardelli, L. Borgarelli, R. Croci, R. Venturini, and G. Alberti. Innovative radar altimeter concepts. 2:1080–1082, 2003.

[Han01] R. Hanssen. Radar Interferometry: Data Interpretation and Error Analysis. Kluwer Academic Publishers, 2001.

[HRBR02] M.A. Hofton, L.E. Rocchio, J.B. Blair, and R. Dubayah. Validation of vegetation canopy lidar sub-canopy topography measurements for a dense tropical forest. Journal of Geodynamics, 34:491–502, 2002.

[HRG00] S. Hensley, P. Rosen, and E. Gurrola. The SRTM topographic mapping processor. 3:1168–1170, 2000.

[Jac03] K. Jacobsen. DEM generation from satellite data. 2003.

[LHDD08] J.G. León Hernández, E. Domínguez, and G. Duque. The most recent satellite radar altimetry applications in hydrology: the case of the Amazon basin. Revista Ingeniería e Investigación, 28:126–131, 2008.

[LHmZR05] S. Le Hégarat-Mascle, M. Zribi, and L. Ribous. Retrieval of elevation by radarclinometry in arid or semi-arid regions. International Journal of Remote Sensing, 26:2877–2899, 2005.


[Liu] Hongxing Liu. Non-image remote sensing data, note 11. Technical report, Department of Geography, University of Cincinnati.

[Liu03] Hongxing Liu. Derivation of surface topography and terrain parameters from single satellite image using shape-from-shading technique. Computers & Geosciences, 29:1229–1239, 2003.

[MF98] Didier Massonnet and Kurt L. Feigl. Radar interferometry and its application to changes in the Earth's surface. Reviews of Geophysics, 36(4):441–500, 1998.

[MLS+09] R. Muskett, C. Lingle, J. Sauber, A. Post, W. Tangborn, B. Rabus, and K. Echelmeyer. Airborne and spaceborne DEM- and laser altimetry-derived surface elevation and volume changes of the Bering Glacier system, Alaska, USA, and Yukon, Canada, 1972–2006. Journal of Glaciology, 55:316–326, 2009.

[RDM+10] Jiang Ruibo, Wang Dongmei, Yang Mingdong, Yang Fuqin, Pan Jiechen, and Xu Liang. Compilation of DEM based on LiDAR data. 9, 2010.

[RMLD11] Xiaojun Rao, Fang Miao, Rui Liu, and Xing Deng. DEM generation of typical mountain area based on InSAR technology. pages 353–358, 2011.

[SS98] V.K. Shettigara and G.M. Sumerling. Height determination of extended objects using shadows in SPOT images. Photogrammetric Engineering and Remote Sensing, 64:35–44, 1998.

[TG00] T. Toutin and L. Gray. State-of-the-art of elevation extraction from satellite SAR data. ISPRS Journal of Photogrammetry & Remote Sensing, 55:13–33, 2000.

[TM00] T.H. Toutin and R.A. Meyers. Elevation modeling from satellite data. Encyclopedia of Analytical Chemistry, 10:8543–8572, 2000.

[Tou06] T. Toutin. Generation of DSMs from SPOT-5 in-track HRS and across-track HRG stereo data using spatiotriangulation and autocalibration. ISPRS Journal of Photogrammetry & Remote Sensing, 60:170–181, 2006.

[YLG11a] Jung Hum Yu, Xiaojing Li, and Linlin Ge. Radargrammetric DEM generation using Envisat simulation image and reprocessed image. pages 2980–2983, 2011.

[YLG11b] Jung Hum Yu, Xiaojing Li, and Linlin Ge. Radargrammetric DEM generation using Envisat simulation image and reprocessed image. pages 2980–2983, 2011.
