Normalized and Differential Convolution

Methods for Interpolation and Filtering of Incomplete and Uncertain Data

Hans Knutsson Carl-Fredrik Westin

Computer Vision Laboratory

Department of Electrical Engineering

Linköping University, S-581 83 Linköping, Sweden

Fax: +46 13 138526, email: [email protected], email: [email protected]

Abstract

In this paper it is shown how false operator responses due to missing or uncertain data can be significantly reduced or eliminated. Perhaps the most well-known of such effects are the various 'edge effects' which invariably occur at the edges of the input data set. Further, it is shown how operators having a higher degree of selectivity and a higher tolerance against noise can be constructed using simple combinations of appropriately chosen convolutions. The theory is based on linear operations and is general in that it allows for both data and operators to be scalars, vectors or tensors of higher order.

Three new methods are presented: Normalized convolution, Differential convolution and Normalized Differential convolution. All three methods are examples of the power of the signal/certainty philosophy, i.e. the separation of both data and operator into a signal part and a certainty part. Missing data is simply handled by setting the certainty to zero. In the case of uncertain data, an estimate of the certainty must accompany the data. Localization or 'windowing' of operators is done using an applicability function, the operator equivalent of certainty, not by changing the actual operator coefficients. Spatially or temporally limited operators are handled by setting the applicability function to zero outside the window.

Consistent with the philosophy of this paper, all algorithms produce a certainty estimate to be used if further processing is needed. Spectrum analysis is discussed and examples of the performance of gradient, divergence and curl operators are given.

1 Introduction

Information representation is an important issue in all multi-level signal processing systems. The issue is complex and what constitutes a good information representation varies with the application. Nevertheless, we stress that an important common feature of a good representation is that it should keep statement and certainty of statement apart.

From a philosophical point of view there should be no argument that 'knowing' and 'not knowing' are different situations regardless of what can be known. Thoughts along these lines are of course by no means new ([11], [3], [15], [5]) and can, depending on point of view, be said to be related to probability theory, fuzzy set theory, quantum mechanics and evidence calculus. However, it is felt that the vision community would benefit from an increased awareness of the importance of these ideas. The present paper is intended as a contribution towards this end.

Consider the following simple example. Vectors are commonly used for representing speed and direction of speed. How should a vector having zero magnitude be interpreted? Is the speed zero, or do we have the case that no information about the velocity is available? Note that information about local image velocity is impossible to recover in flat regions. In an example below it is shown that, using a vector as the sole representation for local velocity, borders between regions of missing data and good data can induce strong erratic responses.

In practice, having, or being able to produce, additional certainty information is not unusual. For example, range data normally consists of two parts: a scalar value defining the distance and an energy measure that essentially identifies the points where the range camera has failed to estimate the distance, so-called drop-outs.

2 Notations

Before presenting the concepts of normalized and differential convolution we will define the notations that will be used in this paper.

Knutsson H, Westin CF. Normalized and differential convolution: Methods for interpolation and filtering of incomplete and uncertain data. In: Proceedings of CVPR'93, New York City, USA, 1993; 515-523.
ξ is the global spatial coordinate.

x is the local spatial coordinate.

T(ξ) is a tensor representing the input signal.

c(ξ) is a positive scalar function representing the certainty of T(ξ).

B(x) is a tensor representing the operator filter basis.

a(x) is the operator equivalent of certainty, a positive scalar function representing the applicability of B(x).

The philosophy is that data as well as operators are accompanied by a scalar component representing the appropriate 'weight' to put on the data or operator values.

Definition 1  The basic operation needed for the following presentation is a generalized form of convolution which can be written:

U(ξ) = Σ_x a(x) B(x) • c(ξ − x) T(ξ − x)    (1)

where • denotes some multilinear operation (in standard convolution this operation is scalar multiplication). As long as the basic operation is understood, explicitly indicating the dependence on the spatial coordinates ξ and x serves no purpose. Thus, for clarity, the above expression for general convolution will in the following be written:

U = {aB • cT}    (2)

where the operation symbol inside the brackets serves as a marker of the operation involved in the convolution (this is useful when more than one operation symbol appears within the brackets).

3 Normalized Convolution

Probably the most commonly used operation in image processing is convolution. The fundamental assumption underlying this fact is that the original representation of a particular neighbourhood, i.e. the values of the signal for each pixel, is not a good one, but that the neighbourhood can be better understood when expressed in terms of a set of carefully chosen new basis functions. The original basis, i.e. the set of impulses located at each pixel, is considered to be fixed, and thus the change of basis becomes trivial. However, the assumption that the data is represented in a fixed basis is often false, and neglecting to note this fact can introduce severe errors.

The most prominent example of a 'changing basis' situation is the missing samples case. A missing sample means that the impulse basis function is missing, not that the coordinate, i.e. the value at that position, is zero. In a changing basis situation proper signal analysis is possible only if the representation of the signal is complete, i.e. the representation includes not only the coordinates of the signal but also the basis in which the coordinates are given. For images a general representation of one basis impulse function would consist of its spatial position and of its strength. In the following, spatial positions are assumed to be quantized in a regular fashion so that positions can be identified by simple enumeration. The impulse strength, however, will be explicitly represented and denoted by c. In the following, c will be referred to as the certainty of the signal.

Normalized convolution is a method for performing general convolution operations on data of signal/certainty type. The definition allows for the signal to be represented by tensors (i.e. scalars, vectors or tensors of higher order).

Definition 2  Let normalized convolution of aB and cT be defined and denoted by:

U_N = {aB • cT}_N = N^-1 D    (3)

where:

D = {aB • cT}

N = {a(B • B*) · c}

The star, *, denotes complex conjugation.

The multilinear operator, •, used to produce D and N is the same, i.e. summation (if any) is performed over corresponding indices. It should be noted that N in equation (3) contains a description of the certainties associated with the new basis functions and can be used if further processing is needed.

Figure 1: Example of an applicability function with α = 0, β = 2, rmax = 8.

Figure 2: An applicability function with α = 3, β = 0, rmax = 8.

The applicability function can be said to define the localization of the convolution operator. The appropriate choice of this function depends, of course, on the application. The family of applicability functions used in our experiments is given by:

a = r^-α cos^β(πr / (2 rmax))   for r < rmax,   a = 0 otherwise    (4)

where:

r denotes the distance from the neighbourhood center.

α and β are positive integers.
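The applicability family of equation (4) is straightforward to realize on a discrete grid. The sketch below (NumPy; the helper name is our own) builds such a kernel; the value taken at r = 0 for positive α is our own regularization choice, since equation (4) leaves the center sample implicit.

```python
import numpy as np

def applicability(shape, r_max, alpha=0, beta=2):
    """Radial applicability a(r) = r^-alpha * cos^beta(pi*r / (2*r_max)),
    zero for r >= r_max. The r = 0 sample is set from the cosine factor
    alone (an assumption; equation (4) does not define r^-alpha there)."""
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    r = np.hypot(y - cy, x - cx)
    a = np.where(r < r_max, np.cos(np.pi * r / (2 * r_max)) ** beta, 0.0)
    with np.errstate(divide='ignore'):
        radial = np.where(r > 0, r ** float(-alpha), 1.0)  # guard the center
    return a * radial

# the kernel of figure 1: alpha = 0, beta = 2, r_max = 8 on a 17x17 grid
a = applicability((17, 17), r_max=8, alpha=0, beta=2)
```

With α = 0 the kernel is a pure raised-cosine bump peaking at the center; larger α concentrates the weight toward the center sample, as in figure 2.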

3.1 Mean square optimality

Normalized convolution produces a description of the neighbourhood which is optimal in the mean square sense. To see this, express a given neighbourhood, t, in a set of basis functions given by a matrix B and the coefficients u. (In this section standard matrix and vector notation will be used.)

t′ = Bu    (5)

Normally u will be of lower dimensionality than t and the neighbourhood can not be described exactly, i.e. t′ ≠ t. It is well known, however, that, for a given set of basis functions, B, the mean square error, ‖t′ − t‖, is minimized by choosing u to be:

u = [B^T B]^-1 B^T t    (6)

A weighted mean square solution can be obtained by introducing a diagonal matrix, W:

W t′ = W B u    (7)

Then the minimum of ‖W(t′ − t)‖ is obtained by choosing u to be:

u = [(WB)^T WB]^-1 (WB)^T W t    (8)

Figure 3: Top left: The famous Lena-image has been degraded to a grey-level test image only containing 10% of the original information. Top right: Interpolation using standard convolution with a normalized smoothing filter (see figure 1). Bottom left: Interpolation using normalized convolution with the same filter as applicability function. Bottom right: Normalized convolution using a more local applicability function (see figure 2).

which can be rewritten and split into two parts, N^-1 and D:

u = [B^T W^2 B]^-1 [B^T W^2 t]    (9)

where N = B^T W^2 B and D = B^T W^2 t. After some playing around with indices it can be shown that N and D are identical to the corresponding quantities used in normalized convolution, equation (3). In normalized convolution the diagonal weighting matrix is, for a neighbourhood centered on ξ0, given by:

W^2_ii(ξ0) = a(x_i) c(ξ0 − x_i)    (10)

Thus, normalized convolution can be seen as a method for obtaining a local weighted mean square error description of the input signal. The input signal is described in terms of the basis function set, B; the weights are adaptive and given by the data certainties and the operator applicability.
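The equivalence between the normalized-convolution form N^-1 D and the weighted least squares solution of equation (8) can be checked numerically for a single neighbourhood. A sketch with a random basis and random weights (all sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 15, 3                        # neighbourhood size, number of basis functions
B = rng.standard_normal((n, m))     # basis functions as columns (hypothetical basis)
t = rng.standard_normal(n)          # signal values in the neighbourhood
a = rng.random(n)                   # applicability
c = rng.random(n)                   # certainty
c[rng.random(n) < 0.3] = 0.0        # some samples missing

W2 = np.diag(a * c)                 # W^2_ii = a(x_i) c(x_i), equation (10)

# normalized-convolution form: u = N^-1 D with N = B^T W^2 B, D = B^T W^2 t
N = B.T @ W2 @ B
D = B.T @ W2 @ t
u_nc = np.linalg.solve(N, D)

# direct weighted least squares, equation (8): minimize ||W (B u - t)||
W = np.sqrt(W2)
u_wls, *_ = np.linalg.lstsq(W @ B, W @ t, rcond=None)
```

Both routes give the same coefficient vector, since (8) and (9) share the same normal equations.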

Figure 4: Left: Test image. Right: Interpolation using normalized convolution.

3.2 0:th order interpolation

An illustrative example is constituted by the use of normalized convolution to obtain an efficient interpolation algorithm in a missing sample situation. In the simplest possible case the operator filter basis consists of only one position invariant basis function, i.e. B = 1. The interpolated result can then be expressed as:

U_N = {a · cT}_N = {a · c}^-1 {a · cT}    (11)

where · denotes standard scalar multiplication.

Since B now is a constant, i.e. a 0:th order tensor, it is evident that all components of T will be subject to the same transformation, and it is consequently enough to study the case where T is a scalar g (for grey scale). Equation (11) then reduces to the following form:

u_N = {a · cg}_N = {a · c}^-1 {a · cg}    (12)

where:

cg is the certainty value times the grey-level value.

It may be worth mentioning that if all certainties are equal, equation (12) reduces to standard convolution and the standard representation of a grey-level image:

{a · cg}_N = {a · c}^-1 {a · cg} = c{a · g} / (c{a · 1}) = {a′ · g}    (13)

where a′ is the normalized applicability function. In other words, interpolation by convolution using a kernel maintaining the local DC-component, a technique widely used when resampling images to a larger size.
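In 1D, the 0:th order case of equation (12) amounts to two ordinary convolutions and a pointwise division. A minimal sketch, assuming a sinusoidal test signal with roughly 30% of the samples kept (signal, density and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
g = np.sin(np.linspace(0, 4 * np.pi, n))        # dense test signal
c = (rng.random(n) < 0.3).astype(float)         # keep roughly 30% of the samples

r_max = 8
x = np.arange(-r_max, r_max + 1)
a = np.where(np.abs(x) < r_max,
             np.cos(np.pi * x / (2 * r_max)) ** 2, 0.0)   # alpha = 0, beta = 2

num = np.convolve(c * g, a, mode='same')        # {a . cg}
den = np.convolve(c, a, mode='same')            # {a . c}
u = np.where(den > 0, num / np.where(den > 0, den, 1.0), 0.0)
```

Where no certain sample falls inside the window, the normalizer {a · c} is zero and no estimate is produced; everywhere else the division compensates for the local sample density, which is exactly what plain smoothing fails to do.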

Figure 5: A spectrum analysis experiment. The top left shows the signal. The top right shows the function that was used both as window and as applicability function. At the bottom, the result of standard windowed DFT analysis (left) and of spectrum analysis using normalized convolution (right) is shown.

Experimental results

The first example is interpolation of a sparsely, irregularly sampled test image. The image has been constructed by the use of gated white noise. The threshold was chosen so that only 10% of the data remained, see figure 3 (top left). An attempt to reconstruct this image by simple smoothing is of course doomed to fail due to the sample density variation. The result of the smoothing operation using the filter shown in figure 1 is shown in figure 3 (top right).

The result of the normalized convolution using the same filter as applicability function is shown in figure 3 (bottom left). The local normalization performed by normalized convolution compensates effectively for the sample density variations. Although the result is satisfactory, we will show that an even better result can be obtained using a more localized applicability function. There is no need for smoothing the image, since the shape of the applicability function is compensated for in the normalization. The parameter values for the applicability function used to produce the result shown in figure 3 (bottom right) were α = 3 and β = 2. Using such a filter in normalized convolution results in an adaptive smoothing of the reconstructed image. The more missing samples, the more smoothing is performed.

The second example is interpolation of a densely sampled image having large missing regions, see figure 4. The result shows that the algorithm is capable of filling in the 'holes' with 'plausible' data. The test also shows that the algorithm performs well close to the image border, since this region of an image also belongs to the densely sampled/large missing regions case.

Figure 6: The result of an experiment using the same signal as in figure 5, the difference being that 50% of the samples were removed at random. The removal of the samples has a dramatic effect on the standard DFT of the signal; the spectrum analysis performed by normalized convolution is, however, robust.

3.3 Spectrum analysis

Spectrum analysis provides a good example of how normalized convolution can be used. All spectrum analysis methods, explicitly or implicitly, involve some kind of windowing operation. In image processing the windows are typically small and the standard windowing operation is multiplication, i.e. the signal is simply multiplied with the window function prior to the analysis. The effect is that what is being analyzed is not the signal but a corrupted version thereof, the implications of which are well known and unwanted. This way of solving the locality problem is a clear violation of the signal/certainty principle.

A much preferable way of attaining local signal analysis is to let the signal be accompanied by a value stating how important the signal, at a given point, is for the analysis. In this way the signal can be left unchanged and locality introduced by letting the importance of the signal decrease with the distance from the center according to a window function. This is precisely what normalized convolution is capable of, the applicability function being the equivalent of a windowing function.

In a missing sample situation the advantages of the signal/certainty approach become even more apparent. Standard spectrum analysis has no way of coping with this situation, but the performance of normalized convolution is highly robust.

Experimental results

Figure 5 shows a spectrum analysis experiment. The top left shows the signal, which is a sum of a constant term plus two different sinusoids. The top right shows the function that was used both as window and as applicability function. The bottom of figure 5 shows the result of standard windowed DFT analysis (left) and of spectrum analysis using normalized convolution (right). The advantage of using normalized convolution is evident: the frequency domain resolution is significantly improved and the two frequency components of the sinusoids are clearly separated.

Figure 6 shows the result of an experiment using the same signal as in figure 5, the difference being that 50% of the samples were removed at random. Unsurprisingly, the removal of the samples has a dramatic effect on the standard DFT of the signal. The spectrum analysis performed by normalized convolution is, however, remarkably robust and only minor changes can be seen.
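The idea can be sketched at a single window position: instead of multiplying the incomplete signal by the window, solve u = N^-1 D with the window as applicability and a constant-plus-sin/cos basis. The basis choice, frequencies and seed below are illustrative; because this test signal lies exactly in the span of the basis, the coefficients are recovered essentially exactly even with half the samples removed, which mirrors the robustness seen in figure 6.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
t = np.arange(n)
# signal: a constant plus two sinusoids, as in the figure 5 experiment
signal = (1.0 + np.sin(2 * np.pi * 5 * t / n)
              + 0.5 * np.sin(2 * np.pi * 9 * t / n))
c = (rng.random(n) < 0.5).astype(float)      # 50% of the samples removed at random

a = np.hanning(n)                            # window function used as applicability

# basis: constant plus sin/cos pairs at two frequencies (hypothetical choice)
B = np.column_stack([np.ones(n),
                     np.cos(2 * np.pi * 5 * t / n), np.sin(2 * np.pi * 5 * t / n),
                     np.cos(2 * np.pi * 9 * t / n), np.sin(2 * np.pi * 9 * t / n)])

w = a * c                                    # combined weight, W^2_ii = a(x) c(x)
N = (B.T * w) @ B                            # N = B^T W^2 B
D = (B.T * w) @ signal                       # D = B^T W^2 t
u = np.linalg.solve(N, D)                    # local spectrum coefficients
```

The recovered coefficients are (1, 0, 1, 0, 0.5): DC level 1, a unit sine at the first frequency and a half-amplitude sine at the second, independent of which samples were dropped.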

3.4 Signal reconstruction

Another application where normalized convolution can be applied is signal reconstruction. Signal reconstruction always implies the use of a model for the signal. In the normalized convolution case the model is implicit in the chosen basis functions B. It can be shown that the signal reconstruction performed by normalized convolution is a weighted least squares solution using the available basis functions, [7]. However, a good reconstruction can only be hoped for if the chosen basis functions are likely to be able to account for a large part of the signal.

Experimental results

Figure 7 shows a signal reconstruction experiment where 32 out of 50 samples were missing. The figure shows the signal (top left), the signal certainty product (top right), the distribution over the used basis functions (bottom left) and the reconstructed signal (bottom right). The basis functions consisted of a constant and 5 sin/cos pairs having frequencies 0.5, 1.0, 1.5, 2.0 and 2.5 cycles per graph width. Although the signal was chosen not to fit any of the individual basis functions, the reconstruction of the signal is good.

3.5 Higher order interpolation

In section 3.2, interpolation using one basis function was discussed. In this section it will be shown that even better interpolation results can be achieved using a larger set of basis functions. The reconstruction of the ramp function in the previous section (figure 7) was based on inverting the normalized DFT, i.e. based on one measurement centered on the signal. In this section convolution is performed as in section 3.2, i.e. each part of the signal is reconstructed locally.

Figure 7: A signal reconstruction experiment. The figure shows the signal (top left), the signal certainty product (top right), the distribution over the used basis functions (bottom left) and the reconstructed signal (bottom right).

Experimental results

In all three examples below, seven basis functions were used: one DC function and three sin/cos pairs having one, two and three cycles per graph width.

Figure 8 shows higher order interpolation of a "random walk" signal. This signal has a spectrum decreasing as one over the frequency variable (as the spectra of many images do). Although more than half of the samples were removed at random, the reconstruction is good.

Figure 8: Interpolation example using normalized convolution. The figure shows the signal (top left), the signal certainty product (top right), the used applicability function (bottom left) and the reconstructed signal (bottom right).

Figure 9 shows higher order interpolation of a smoothly varying signal. In this case, when the used basis functions describe the signal well, it is almost completely restored although large parts are missing.

Figure 9: Interpolation example using normalized convolution. The figure shows the signal (top left), the signal certainty product (top right), the used applicability function (bottom left) and the reconstructed signal (bottom right).

Figure 10 shows higher order interpolation of a signal containing the signal presented in figure 9 plus a very rapidly varying term. In areas where the certainties are high, the reconstruction is very good. In areas where large parts are missing, a "low-pass" version of the signal is reconstructed. The signal is adaptively smoothed.

Figure 10: Interpolation example using normalized convolution. The figure shows the signal (top left), the signal certainty product (top right), the used applicability function (bottom left) and the reconstructed signal (bottom right).

4 Differential Convolution

In this section we will discuss an operation termed differential convolution. This operation can be shown to be equivalent to locally weighted sums over all operator differences acting on the corresponding data differences, hence the name. This description may, to begin with, not give a full understanding of the operation. The key words are, however, operator differences and data differences. This makes the operation insensitive to any constant term in the input signal.

Definition 3  Let differential convolution between aB and cT be defined and denoted by:

U∆ = {aB • cT}∆ = {a · c}{aB • cT} − {aB · c} • {a · cT}    (14)

In definition 3 it is seen that differential convolution is based on a nonlinear combination of different standard convolutions. The first term, {a · c}, can be regarded as the local certainty energy, and the second term, {aB • cT}, is the term that corresponds to standard convolution. When it comes to the third and fourth terms the interpretation is somewhat harder. Each operator in the filter is weighted locally with the corresponding data, producing a weighted average operator where the weights are given by the data.

{aB · c} ↔ data dependent mean-operator

For the fourth term it is vice versa. The mean-data is calculated using the operator certainty as weights.

{a · cT} ↔ operator dependent mean-data

Differential convolution should consequently be interpreted as: a standard convolution weighted with the local energy, minus the "mean" operator acting on the "mean" data. It has been shown, [6], that differential convolution performs a summation over all operator differences acting on corresponding data differences, i.e.:

U∆ = {aB • cT}∆ = 1/2 Σ_ij a_i a_j c_i c_j (B_i − B_j)(T_i − T_j)    (15)

It may be worth repeating that the double sum in equation (15) is never carried out; the result is achieved by the combination of four simple sums. Note that only point pairs having non-zero a_i, c_i, a_j and c_j will contribute to the sum. If B or T is constant, the expression will sum to zero.
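The identity between the four simple sums and the double sum of equation (15) can be verified numerically for one neighbourhood in the scalar case (random data, illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
a = rng.random(n)
c = rng.random(n)
B = rng.standard_normal(n)   # operator values in the neighbourhood
T = rng.standard_normal(n)   # data values in the neighbourhood

# four simple sums: {a.c}{aB.cT} - {aB.c}{a.cT} at one position
lhs = (np.sum(a * c) * np.sum(a * B * c * T)
       - np.sum(a * B * c) * np.sum(a * c * T))

# double sum over all point pairs, equation (15)
rhs = 0.5 * sum(a[i] * a[j] * c[i] * c[j] * (B[i] - B[j]) * (T[i] - T[j])
                for i in range(n) for j in range(n))
```

Expanding the product (B_i − B_j)(T_i − T_j) and summing over both indices reproduces the four simple sums term by term, which is why the O(n^2) pair sum never has to be evaluated in practice.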

Figure 11: The filters shown are the applicability function times the operator components used for gradient estimation. Certainty kernel (top left) as defined above, having τ = 0, β = 2, rmax = 8. Operator kernel values are a, a(x^2 + y^2), ax, ay, a(x^2 − y^2), 2axy.

5 Normalized Differential Convolution

Normalized differential convolution is, as the name indicates, a combination of the concepts presented first in this paper, normalized convolution and differential convolution. Normalized differential convolution is a method for performing general differential operations on data of signal/certainty type in cases where the DC-component of the output data is zero or uninteresting.

We begin with the definition:

Definition 4  Let normalized differential convolution between aB and cT be defined and denoted by:

U_N∆ = {aB • cT}_N∆ = N∆^-1 D∆    (16)

where:

D∆ = {a · c}{aB • cT} − {aB · c} • {a · cT}

N∆ = {a · c}{a(B • B*) · c} − {aB · c} • {aB* · c}

5.1 Gradients from plane basis functions

As mentioned, normalized differential convolution is a useful method for performing general differential operations with an operator giving a zero or uninteresting DC component output. The example we will present in this paper is gradient estimation.

The minimum number of basis functions needed for estimating the local gradient using normalized differential convolution is one for each dimension (normalized convolution requires one more, the constant DC operator). The plane basis is, in n dimensions, given by:

B = (x_1, x_2, ..., x_n)

where the x_k are the components of the spatial basis functions.

Inserting these basis functions in the definition of normalized differential convolution and using the relation derived in equation (15), writing all the indices explicitly, gives:

D∆ = {a · c}{aB ⊗ cT} − {aB · c} ⊗ {a · cT}    (18)

   = 1/2 Σ_kl a_k a_l c_k c_l (x_kn − x_ln)(T_k − T_l)    (19)

   = 1/2 Σ_kl d_kl ∆x_kln ∆T_kl    (20)

where d_kl = a_k a_l c_k c_l, ∆x_kln = x_kn − x_ln and ∆T_kl = T_k − T_l, and

N∆ = {a · c}{a(B ⊗ B*) · c} − {aB · c} ⊗ {aB* · c}    (21)

   = 1/2 Σ_kl d_kl ∆x_kln ∆x_klm    (22)

If the gradient, ∂T/∂x_m, is constant in the neighbourhood, then

∆T_kl = ∆x_klm (∂T/∂x_m)   for all k, l    (23)

Equation (20) then simplifies to:

D∆ = (∂T/∂x_m) · 1/2 Σ_kl d_kl ∆x_kln ∆x_klm    (24)

Equations (22) and (24) inserted in equation (16), the definition of normalized differential convolution, give:

N∆^-1 D∆ = ∂T/∂x_m    (25)

which shows that the true gradient is estimated for an ideal input signal.
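The exact-recovery property of equation (25) is easy to check numerically: for a linear (ideal) signal with an arbitrary DC offset, and arbitrary positive applicabilities and certainties, N∆^-1 D∆ returns the true gradient. A scalar-signal sketch with a plane basis in 2D (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
x = rng.standard_normal((n, 2))     # local coordinates x_k in 2-D
a = rng.random(n)                   # applicability
c = rng.random(n)                   # certainty
grad = np.array([0.7, -1.3])        # hypothetical true gradient
T = x @ grad + 2.5                  # linear signal with an arbitrary DC offset

B = x                               # plane basis (x1, x2)
w = a * c

# D_delta and N_delta from definition 4 (scalar T, vector B)
D = np.sum(w) * (B.T @ (w * T)) - (B.T @ w) * np.sum(w * T)
N = np.sum(w) * (B.T @ (w[:, None] * B)) - np.outer(B.T @ w, B.T @ w)

est = np.linalg.solve(N, D)         # equals grad exactly; the DC offset cancels
```

The DC offset never enters the estimate, which is the point of the differential form: only differences of operator and data values contribute.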

Experimental results

This example shows estimation of the local gradient in a sparsely, irregularly sampled two-dimensional scalar field. The equations from definition 4 take the following form:

D∆ = {a · c}{aB ⊗ cT} − {aB · c} ⊗ {a · cT}

N∆ = {a · c}{a(B ⊗ B) · c} − {aB · c} ⊗ {aB · c}

Figure 12: Top left: The Lena-image degraded to 10% of the original information. Top right: Gradient magnitude estimation using normalized differential convolution. Bottom: Estimation of the x- and y-gradients in the top left image using normalized differential convolution with a filter having τ = −3, β = 0, rmax = 8.

where ⊗ denotes tensor product (outer product). The basis functions needed are:

B = (x, y)   and   B ⊗ B = ( x^2  xy
                             xy   y^2 )    (26)

The filters are weighted with the applicability function a:

(a, ax, ay, ax^2, axy, ay^2)    (27)

For practical purposes, an equivalent set of filters was implemented:

(a, ax, ay, a(x^2 + y^2), axy, a(x^2 − y^2))    (28)

These are shown in figure 11.

D∆ = {a·c} ( {ax·cg} )  −  ( {ax·c} ) {a·cg}
            ( {ay·cg} )     ( {ay·c} )

N∆ = {a·c} ( {ax^2·c}  {axy·c} )  −  ( {ax·c}{ax·c}  {ax·c}{ay·c} )
            ( {axy·c}  {ay^2·c} )     ( {ax·c}{ay·c}  {ay·c}{ay·c} )

Figure 13: Left: One frame from the tree-sequence. Right: Estimated velocities from the tree-sequence. See Landelius [10]. In order to increase the visibility, the original 150x150 image has been resampled to 30x30 pixels.

Note that many terms in this expression are identical. The actual number of convolutions needed is the number of basis functions in equation (27), i.e. six scalar convolutions, see figure 11.

The sparsely sampled test image we used in the interpolation example in section 3 shall now be used for testing our new gradient estimation method. The image is filtered with the six filters shown in figure 11. Combining these outputs according to the normalized differential convolution procedure gives the gradient estimate, see figure 12. We can see that the algorithm has no problem coping with the large variation in the sampling density.
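Putting the six scalar convolutions together on an image can be sketched as follows (NumPy, zero-padded borders, illustrative sizes and seed). Note one convention choice: the helper pairs the operator value at offset x with the data at ξ + x (correlation style); with the convolution convention of equation (1) the odd kernels would change sign. On a globally linear test image the estimate matches the true gradient wherever the 2x2 system N∆ is well conditioned, including near the border, since pixels outside the image simply carry zero certainty.

```python
import numpy as np

R = 5
yk, xk = np.mgrid[-R:R + 1, -R:R + 1]            # local kernel coordinates
r = np.hypot(yk, xk)
a = np.where(r < R, np.cos(np.pi * r / (2 * R)) ** 2, 0.0)   # applicability

def corr2(f, k):
    """U(p) = sum_x k(x) f(p + x) with zero padding, i.e. zero certainty
    outside the border. Correlation-style pairing (see the note above)."""
    H, W = f.shape
    fp = np.zeros((H + 2 * R, W + 2 * R))
    fp[R:R + H, R:R + W] = f
    out = np.zeros((H, W))
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            kv = k[R + dy, R + dx]
            if kv != 0.0:
                out += kv * fp[R + dy:R + dy + H, R + dx:R + dx + W]
    return out

rng = np.random.default_rng(5)
H = W = 32
Y, X = np.mgrid[0:H, 0:W]
g = 0.4 * X - 0.9 * Y                            # linear image, gradient (0.4, -0.9)
c = (rng.random((H, W)) < 0.5).astype(float)     # half the samples missing

# the six scalar convolutions of equation (27), applied to c and c*g as needed
ac, acg = corr2(c, a), corr2(c * g, a)
axc, ayc = corr2(c, a * xk), corr2(c, a * yk)
axcg, aycg = corr2(c * g, a * xk), corr2(c * g, a * yk)
ax2c = corr2(c, a * xk * xk)
axyc = corr2(c, a * xk * yk)
ay2c = corr2(c, a * yk * yk)

# D_delta and N_delta written out per section 5.1, one 2x2 system per pixel
Dx, Dy = ac * axcg - axc * acg, ac * aycg - ayc * acg
Nxx = ac * ax2c - axc * axc
Nxy = ac * axyc - axc * ayc
Nyy = ac * ay2c - ayc * ayc

det = Nxx * Nyy - Nxy * Nxy
valid = det > 1e-6 * det.max()                   # guard degenerate neighbourhoods
safe_det = np.where(valid, det, 1.0)
gx = np.where(valid, (Nyy * Dx - Nxy * Dy) / safe_det, 0.0)
gy = np.where(valid, (Nxx * Dy - Nxy * Dx) / safe_det, 0.0)
```

The direct loop implementation is only for clarity; in practice each of the six kernels would be applied with a fast convolution routine.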

5.2 Divergence and curl in 2D

The gradient estimation method described in the previous section can be used for estimation of first order differential invariants of image velocity fields, i.e. curl, divergence and shear [9]. Since these invariants have received considerable attention in Computer Vision [1, 4, 12], it has been very exciting to test the power of our signal/certainty philosophy on the problem of differentiating a velocity field. The motion field induced by a moving camera is often sparse, since estimation of velocity requires moving texture or borders. The motion of all flat surfaces is unknown (not zero).

The invariants divergence, curl and shear can all be defined as combinations of partial derivatives of the image velocity field. The divergence in 2D is defined by:

∇·T = trace(∇T) = T11 + T22 = ∂T1/∂x1 + ∂T2/∂x2   (29)

Figure 14: Left: Normalized differential convolution on the velocity field in figure 13 produces a field pointing to the right. If the camera had had the reversed motion, i.e. a motion away from the scene, the output field would have pointed to the left. Pure rotation of the camera would have produced a vector field pointing up or down depending on the direction of rotation. Right: Standard convolution. We can see that missing data completely destroys many of the estimates.

The curl in 2D is a scalar defined by:

∇×T = T21 − T12 = ∂T2/∂x1 − ∂T1/∂x2   (30)
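For reference, on dense fields with full certainty, (29) and (30) are just combinations of partial derivatives; a central-difference version, assuming unit grid spacing:

```python
import numpy as np

def div_curl(T1, T2, h=1.0):
    """Divergence (29) and curl (30) of a dense 2D vector field T=(T1,T2)
    by central differences (grid spacing h is an assumption)."""
    dT1_dx2, dT1_dx1 = np.gradient(T1, h)   # rows ~ x2, columns ~ x1
    dT2_dx2, dT2_dx1 = np.gradient(T2, h)
    return dT1_dx1 + dT2_dx2, dT2_dx1 - dT1_dx2
```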

The use of complex numbers instead of 2-dimensional vectors simplifies the formalism for the divergence and the curl operators. We therefore introduce a complex gradient operator and a complex data representation:

∇ = ∂/∂x1 + i ∂/∂x2   and   T = T1 + iT2

This gives the following representation of divergence and rotation:

Div = xT1 + yT2 and Rot = xT2 − yT1
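Both combinations can be read off one complex product: with B = x + iy and T = T1 + iT2 we get B*T = (xT1 + yT2) + i(xT2 − yT1), where B* is the complex conjugate of B; the divergence part appears in the real component and the rotation part in the imaginary one. A small check of this identity (the numbers are arbitrary):

```python
def div_rot(x, y, T1, T2):
    """Divergence and rotation combinations above via one complex
    product: conj(B) * T with B = x + iy, T = T1 + iT2."""
    z = complex(x, -y) * complex(T1, T2)
    return z.real, z.imag    # (x*T1 + y*T2, x*T2 - y*T1)
```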

In polar coordinates we get Bk = bk e^(iβk), Tk = tk e^(iτk). The reference neighbourhood is the complex conjugate of the filter, Bk∗ = bk e^(−iβk). Inserting this in the definition of normalized differential convolution gives:

D∆ = {a · c}{aB · cT} − {aB · c}{a · cT}


= ½ Σ ak al ck cl (e^(iβk) − e^(iβl))(e^(iτk) − e^(iτl))

= 2 Σ dkl sin(∆βkl/2) sin(∆τkl/2) e^(i(∆βkl+∆τkl))

where:

∆τkl = τk − τl
∆βkl = βk − βl

and

N∆ = {a · c}{aBB∗ · c} − {aB · c}{aB∗ · c}

= ½ Σ ak al ck cl |e^(iβk) − e^(iβl)|²

= 2 Σ dkl sin²(∆βkl/2)

Thus

N∆⁻¹ D∆ = [ Σ dkl sin(∆βkl/2) sin(∆τkl/2) e^(i(∆βkl+∆τkl)) ] / [ Σ dkl sin²(∆βkl/2) ]   (31)
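A sketch of how the complex-filter estimator might look in practice, assuming a Gaussian applicability and zero-padded correlations in which the filter enters conjugated, so that the reference neighbourhood is B∗ as above; the names and the final scaling are our own choices.

```python
import numpy as np

def complex_ndc(T, c, radius=5, sigma=2.0):
    """Divergence/curl by normalized differential convolution with the
    complex filter B = x + iy. The Gaussian applicability, the names and
    the final scaling are illustrative assumptions. Correlations
    conjugate the filter, so the reference neighbourhood is B*."""
    k = np.arange(-radius, radius + 1)
    x, y = np.meshgrid(k, k)
    a = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    B = x + 1j * y

    def corr(data, f):
        r = f.shape[0] // 2
        padded = np.pad(data, r)
        out = np.zeros(data.shape, dtype=complex)
        for i in range(f.shape[0]):
            for j in range(f.shape[1]):
                out += np.conj(f[i, j]) * padded[i:i + data.shape[0],
                                                 j:j + data.shape[1]]
        return out

    U0 = corr(c, a)                                     # {a.c}
    UB = corr(c, a * B)                                 # {aB.c}
    D = U0 * corr(c * T, a * B) - UB * corr(c * T, a)   # D_delta
    N = U0 * corr(c, a * np.abs(B)**2) - UB * corr(c, a * np.conj(B))
    est = D / np.where(np.abs(N) < 1e-12, np.inf, N)
    # For a linear field with uniform certainty the interior estimate is
    # (div + i*curl)/2, hence the factor two.
    return 2.0 * est
```

For a linear velocity field with uniform certainty the interior estimate then carries the divergence in its real part and the curl in its imaginary part.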

Experimental results

As a final example it will be shown how local divergence and curl can be estimated in an image sequence ([2]). In this example the camera is moving towards a flat image of a tree (figure 13, left). The input data is the sparse estimated velocity field in figure 13 (right). The result of normalized differential convolution using a divergence/curl operator is shown in figure 14 (left). The result of standard convolution using the same operator is shown in figure 14 (right).

6 Acknowledgement

The support from the Swedish National Board for Technical Development is gratefully acknowledged. Part of this work has been funded by the VAP project within the ESPRIT Basic Research Action. The authors also wish to thank the members of our computer vision group, in particular Prof. Gösta Granlund, for many inspiring discussions.

References

[1] Roberto Cipolla and Andrew Blake. Surface orientation and time to contact from image divergence and deformation. In Proceedings of ECCV-92, LNCS Series Vol. 588. Springer-Verlag, 1992.

[2] David J. Fleet. Measurement of Image Velocity. Kluwer Academic Publishers, 1992. ISBN 0-7923-9198-5.

[3] G. H. Granlund and H. Knutsson. Contrast of structured and homogenous representations. In O. J. Braddick and A. C. Sleigh, editors, Physical and Biological Processing of Images, pages 282-303. Springer Verlag, Berlin, 1983.

[4] K. Kanatani. Structure and motion from optical flow under orthographic projection. Computer Vision, Graphics and Image Processing, 35:181-199, 1986.

[5] H. Knutsson. Representing local structure using tensors. In The 6th Scandinavian Conference on Image Analysis, pages 244-251, Oulu, Finland, June 1989. Report LiTH-ISY-I-1019, Computer Vision Laboratory, Linköping University, Sweden, 1989.

[6] H. Knutsson and C-F Westin. Differential convolution: A technique for filtering incomplete and uncertain data. In Accepted for 8th SCIA, Tromsø, Norway, May 1993. NOBIM.

[7] H. Knutsson and C-F Westin. Normalized and differential convolution: Methods for interpolation and filtering of incomplete and uncertain data. In Accepted for CVPR, New York City, USA, June 1993. IEEE.

[8] Hans Knutsson. Filtering and Reconstruction in Image Processing. PhD thesis, Linköping University, Sweden, 1982. Diss. No. 88.

[9] J. J. Koenderink and A. J. van Doorn. Invariant properties of the motion parallax field due to the movement of rigid bodies relative to an observer. Opt. Acta, 22:773-791, 1975.

[10] T. Landelius, L. Haglund, and H. Knutsson. Depth and velocity from orientation tensor fields. In Accepted for 8th SCIA, Tromsø, Norway, May 1993. NOBIM.

[11] Donald M. MacKay. Information, Mechanism and Meaning. M.I.T. Press, Cambridge, Massachusetts and London, England, 1969.

[12] R. C. Nelson and J. Aloimonos. Using flow field divergence for obstacle avoidance: towards qualitative vision. In Proc. 2nd Int. Conf. on Computer Vision, pages 188-196, 1988.

[13] C-F Westin. Feature extraction based on a tensor image description, September 1991. Thesis No. 288, ISBN 91-7870-815-X.

[14] C-F Westin and H. Knutsson. Extraction of local symmetries using tensor field filtering. In Proceedings of 2nd Singapore International Conference on Image Processing. IEEE Singapore Section, September 1992.

[15] R. Wilson and H. Knutsson. Uncertainty and inference in the visual system. IEEE Transactions on Systems, Man and Cybernetics, 18(2), March/April 1988.

[16] R. Wilson, H. Knutsson, and G. H. Granlund. The operational definition of the position of line and edge. In The 6th International Conference on Pattern Recognition, Munich, Germany, October 1982.