Introduction to Location Discovery, Lecture 9, September 29, 2005
EENG 460a / CPSC 436 / ENAS 960 Networked Embedded Systems & Sensor Networks
Andreas Savvides, [email protected], Office: AKW 212, Tel 432-1275
Course Website: http://www.eng.yale.edu/enalab/courses/2005f/eeng460a
Transcript
Page 1

Introduction to Location Discovery Lecture 9

September 29, 2005

EENG 460a / CPSC 436 / ENAS 960 Networked Embedded Systems &

Sensor Networks

Andreas Savvides
[email protected]

Office: AKW 212
Tel 432-1275

Course Website
http://www.eng.yale.edu/enalab/courses/2005f/eeng460a

Page 2

Lecture Outline

• Ecolocation
• Probabilistic localization methods
• Camera-based localization
• Rigidity
• Other topics mentioned in discussion
  o Robust quadrilaterals
  o Robustness and secure localization
  o Radio interferometric localization

Page 3

Radio Signal Strength: Ecolocation (Yedavalli et al., USC & Bosch)

Initiation: A node with unknown location (the unknown node) initiates the localization process by broadcasting a localization packet. Nodes at known reference locations (reference nodes) collect RSS readings and forward them to a single point.

Procedure: Determine the ordered sequence of reference nodes by ranking them on the collected RSS readings. The ordering imposes constraints on the location of the unknown node.

For each grid point in the location space, determine the relative ordering of the reference nodes and compare it with the RSS ordering to determine how many of the ordering constraints are satisfied.

Pick the location that maximizes the number of satisfied constraints. If there is more than one such location, take their centroid.
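The grid search described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation; the reference positions, grid size, and RSS values below are hypothetical example data.

```python
import math
from itertools import combinations

def ecolocation(refs, rss, grid_step=0.5, size=10.0):
    """refs: {node_id: (x, y)}; rss: {node_id: RSS reading}.
    Returns the centroid of the grid points satisfying the most constraints."""
    # Each reference-node pair gives one ordering constraint:
    # a higher RSS reading is assumed to mean the node is closer.
    constraints = []
    for i, j in combinations(refs, 2):
        constraints.append((i, j) if rss[i] >= rss[j] else (j, i))
    best, best_pts = -1, []
    steps = int(size / grid_step) + 1
    for gx in range(steps):
        for gy in range(steps):
            p = (gx * grid_step, gy * grid_step)
            # Count how many "closer to i than to j" constraints hold at p.
            sat = sum(1 for i, j in constraints
                      if math.dist(p, refs[i]) < math.dist(p, refs[j]))
            if sat > best:
                best, best_pts = sat, [p]
            elif sat == best:
                best_pts.append(p)
    # More than one maximizing grid point: take their centroid.
    n = len(best_pts)
    return (sum(x for x, _ in best_pts) / n, sum(y for _, y in best_pts) / n)

# Hypothetical example: four reference nodes, unknown node at (2, 3),
# ideal RSS readings (monotone in distance, no multipath errors).
refs = {'B': (0.0, 0.0), 'C': (10.0, 0.0), 'D': (0.0, 10.0), 'E': (10.0, 10.0)}
rss = {k: -math.dist((2.0, 3.0), p) for k, p in refs.items()}
est = ecolocation(refs, rss)
```

With ideal readings the estimate falls inside the region of the grid consistent with the full ordering, near the true location.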

Page 4

“Constraints” & “Sequences”

[Figure: four regions (1–4) around reference nodes B, C, D, E and unknown node A; the RSS-ordered sequence of the reference nodes is shown for each region.]

Reference nodes (B,C,D,E) ranked into ordered sequence by RSS readings.

The sequence of reference nodes changes with the location of the unknown node (A).

Ideal scenario: D_AB < D_AC => R_B > R_C (if A is closer to B than to C, then B's RSS reading exceeds C's).

Constraint on the location of the unknown node. RSS relationships between all reference nodes forms the constraint set.

Example constraint sets (with B:1, C:2, D:3, E:4):
• R4 < R1
• R4 < R3, R3 < R1
• R4 < R3, R3 < R2, R2 < R1

Page 5

Error Controlling Localization

The inherent redundancy in the constraint set helps withstand errors due to multipath effects, analogous to error-control coding: "Error-Controlling Localization" gives Ecolocation its name.

Constraint construction inherently holds true for random variations in RSS measurements up to a tolerance level of |Ri - Rj|.

Real World Scenario: Multipath fading introduces errors in RSS readings which in turn introduce errors in the constraint set. Location estimate accuracy depends on the percentage of erroneous constraints.

Page 6

Ecolocation Examples

[Figure: four 12 x 12 grid panels, each plotting reference nodes A1–A9, the true location P of the unknown node, and the Ecolocation estimate E. The panels show location estimates for the reference-node sequences 123456789 (no erroneous constraints), 124739586, 913276584, and 123745968, the latter three with 14%, 22%, and 47% erroneous constraints among them.]

A: Reference Node

P: True Location of unknown node

E: Ecolocation Estimated Location

Page 7

Simulations

Simulation Parameters

RF channel parameters: path loss exponent (η), standard deviation of the log-normal shadowing model (σ).

Node deployment parameters: number of reference nodes (α), reference node density (β), scanning resolution (γ), random placement of nodes.

Compared with four other localization techniques – Proximity Localization, Centroid, MLE, Approximate Point in Triangle (APIT).

Averaged over 100 random trials with 10 random seeds.

Page 8

Simulation Results

[Figure: four plots of average location error (% of Da) for Ecolocation, Centroid, APIT, MLE, and Proximity: (1) vs. path loss exponent η (1–7) with σ = 7, α = 25, β = 0.11, γ = 0.1; (2) vs. standard deviation σ (2–14) with η = 4, α = 25, β = 0.11, γ = 0.1; (3) vs. number of reference nodes α (3–25) with β = 0.11, γ = 0.1, η = 4, σ = 7; (4) vs. reference node density β (0.01–1, log scale) with α = 25, γ = 0.1, η = 4, σ = 7.]

Da: Average inter reference node distance

Page 9

Systems Implementation

Outdoors: represents a class of obstruction-free RF channels. Eleven MICA 2 motes were placed randomly on the ground in a 144 sq. m area in a parking lot. The locations of all motes are estimated and compared with the true locations. All motes are in radio range and line of sight of each other.

Indoors: represents a class of obstructive RF channels. Twelve MICA 2 motes (reference nodes) were placed randomly on the ground in a 120 sq. m area in an office building. A MICA 2 mote (the unknown node) was placed in five different locations to be estimated. All motes are in radio range, but only a subset are in line of sight of each other.

Page 10

Systems Implementation Results

[Figure: outdoor experiment — true vs. estimated locations of the 11 nodes in a 12 m x 12 m area, and average location error (% of Da) per node ID for Ecolocation, MLE, and Proximity. Indoor experiment — reference nodes, true path, and estimated path through two office rooms and a furnished conference room, and average location error (% of Da) at the five unknown-node locations for Ecolocation, MLE, and Proximity.]

Locations estimated using Ecolocation, MLE and Proximity methods.

Results suggest a hybrid localization technique.

Page 11

Conclusion and Future Work

Future Work: Exploring Hybrid Localization technique further. Making Ecolocation more efficient using greedy search, multi-resolution search algorithms. Analytical background for Ecolocation. Measuring localization costs (Time, Throughput, Energy) for various realistic system designs and protocols.

Conclusion: Ecolocation performs better than other RF based localization techniques over a range of RF channel conditions and node deployment parameters. Simulation and experimental results suggest that a Hybrid Localization technique may provide the best accuracy.

Page 12

Bayesian Filtering for Location Estimation (Fox et al. [Fox02])

State estimators probabilistically estimate a dynamic system's state from noisy observations.
• In system theory, the information that the dynamic system model gives us about the system is called the system state
• A system model is a set of equations that describe the system state
• The variables in the system model are the state variables

Page 13

Bayesian Filters

In localization, the state is the location of an entity.
• The state representation is based on noisy sensor measurements
• In simple cases, the state can be just a position in 2D
• In more complex cases, the state can be a complex vector including position in 3D, linear and rotational velocities, pitch, roll, yaw, etc.

Page 14

Bayesian Filters

The state (location) at time t is represented by a random variable x_t. At each time step, the Bayesian filter represents a probability distribution over x_t called the belief. Given a sequence of time-indexed sensor observations z_1, ..., z_t, the belief is

  Bel(x_t) = p(x_t | z_1, ..., z_t)

This is the probability distribution over all possible locations (states) x at time t, conditioned on all sensor data available at time t (earlier and present measurements).

Page 15

Bayesian Filters: Markov Assumption

Without further assumptions, the complexity of the probability function grows with every additional sensor measurement. Bayesian filters therefore assume that the dynamic system is a Markov system:
• The state at time t depends only on the state at time t-1

Page 16

Implementing Bayesian Filter

Under the Markov assumption, implementing a Bayesian filter requires the following specifications:

1. Representation of the belief Bel(x_t)
2. Perceptual model p(z_t | x_t)
   – Probability that state x_t produces observation z_t
3. System dynamics p(x_t | x_{t-1})
   – Probability that state x_t follows state x_{t-1}
4. Initialization of the belief Bel(x_0)
   – Initialized based on prior knowledge, if available
   – Typically a uniform distribution, if no prior knowledge exists

Page 17

Implementing Bayesian Filter

Based on these specifications, the Bayesian filter acts in two steps:

1. Prediction. Based on the system state at time t-1, the filter computes a prediction (a priori estimate) of the system state at time t:

   Bel^-(x_t) = ∫ p(x_t | x_{t-1}) Bel(x_{t-1}) dx_{t-1}

2. Correction. When new sensor information for time t is received, the filter uses it to compute a corrected belief (a posteriori estimate) of the system state at time t:

   Bel(x_t) = α p(z_t | x_t) Bel^-(x_t)

(In the correction equation, α is simply a normalizing constant ensuring that the posterior over the entire state space sums to 1.)
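The two steps can be sketched as a minimal histogram (grid-based) Bayes filter over a five-cell 1-D corridor. The motion and sensor models below are hypothetical example numbers, not from the slides.

```python
def predict(belief, move_kernel):
    """Prediction: Bel^-(x_t) = sum over x_{t-1} of p(x_t|x_{t-1}) Bel(x_{t-1})."""
    n = len(belief)
    prior = [0.0] * n
    for x_prev, b in enumerate(belief):
        for offset, p in move_kernel.items():
            prior[(x_prev + offset) % n] += p * b   # wrap-around corridor
    return prior

def correct(prior, likelihood):
    """Correction: Bel(x_t) = alpha * p(z_t|x_t) * Bel^-(x_t)."""
    unnorm = [l * b for l, b in zip(likelihood, prior)]
    alpha = 1.0 / sum(unnorm)                       # normalizing constant
    return [alpha * u for u in unnorm]

belief = [0.2] * 5                 # uniform initial belief (no prior knowledge)
kernel = {1: 0.8, 0: 0.2}          # intended 1-cell move succeeds with prob 0.8
p_door = [0.1, 0.6, 0.1, 0.6, 0.1] # p("door seen" | cell); doors at cells 1, 3

belief = correct(predict(belief, kernel), p_door)
```

After one predict/correct cycle the belief concentrates on the door cells, while remaining a proper distribution.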

Page 18

Bayesian Filter Example

(a) A person carries a camera that can observe doors, but cannot distinguish different doors. Initialization is a uniform distribution.

(b) The sensor sends a "door found" signal. The resulting belief places high probability at locations next to doors and low probability elsewhere. Because of sensor uncertainty (noise), non-door locations also retain a small but nonzero probability.

(c) Motion's effect on the belief. The Bayes filter shifts the belief (a priori estimate) in the direction of sensed motion, but also smooths it because of the uncertainty in motion estimates.

(d) The sensor sends a "door found" signal. Based on that observation, the filter corrects the previous a priori belief estimate into an a posteriori belief estimate.

(e) Motion's effect on the belief, as in (c). Compared to case (c), the belief estimate is converging to one peak that is clearly higher than the others. One can say that the filter is converging, or learning.

Picture and example from Fox, D., Hightower, J., Liao, L., Schulz, D., Borriello, G., "Bayesian Filtering for Location Estimation", IEEE Pervasive Computing 2003.

Page 19

Different Types of Bayesian Filters

Kalman Filter
• The most widely used variant of the Bayesian filter
• The optimal estimator, assuming that the initial uncertainty is Gaussian and that the observation model and system dynamics are linear functions of the state
• In nonlinear systems, Extended Kalman Filters, which linearize the system using a first-order Taylor series, are typically applied
• Best if the uncertainty of the state is not too high, which limits them to location tracking using either accurate sensors or sensors with high update rates

Multihypothesis tracking (MHT)
• MHT overcomes the Kalman Filter's limitation to unimodal distributions by representing the belief as a mixture of Gaussians
• Each Gaussian hypothesis is tracked using a Kalman Filter
• Still relies on the linearity assumptions of Kalman Filters
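As a concrete instance of the Gaussian case, a 1-D Kalman filter can be sketched with the belief reduced to just a mean and a variance; the same predict/correct structure applies. The noise values and measurements below are hypothetical.

```python
def kf_predict(mean, var, motion, motion_var):
    # Linear dynamics: the state shifts by `motion`, uncertainty grows.
    return mean + motion, var + motion_var

def kf_correct(mean, var, z, sensor_var):
    # The Kalman gain weighs the prediction against the new measurement.
    k = var / (var + sensor_var)
    return mean + k * (z - mean), (1 - k) * var

mean, var = 0.0, 1000.0            # vague initial belief
for z in [5.1, 6.0, 6.9, 8.1]:     # noisy position measurements
    mean, var = kf_predict(mean, var, motion=1.0, motion_var=0.1)
    mean, var = kf_correct(mean, var, z, sensor_var=0.5)
```

After a few updates the variance shrinks well below the sensor variance and the mean tracks the (roughly 1 m/step) motion.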

Page 20

Other Types of Bayesian Filters

Grid-based approaches
• Discrete, piecewise-constant representations of the belief
• Update equations otherwise identical to the general Bayesian filter update equations, but summation replaces integration
• Can represent arbitrary distributions over the discrete state space
• Disadvantage: computational and space complexity

Topological approaches
• Topological implementations of Bayesian filters, where a graph represents the environment
• The motion model can be trained to represent typical motion patterns of individual persons moving through the environment
• Main disadvantage: location estimates are not fine-grained

Page 21

Different Bayesian Filters

Particle Filters
• Bayesian filter updates are performed according to a sampling procedure often called sequential importance sampling with resampling
• Can represent arbitrary probability densities and converge to the true position even in non-Gaussian, nonlinear dynamic systems
• Efficient, because they automatically focus their resources (particles) on the regions of state space with high probability
• One must be careful when applying particle filters to high-dimensional estimation problems, because the worst-case complexity grows exponentially in the dimensionality of the state space

Page 22

Other Types of Bayesian Filters

Particle Filters
• Beliefs are represented by sets of weighted samples called particles:

  Bel(x_t) ≈ S_t = { <x_t^(i), w_t^(i)> | i = 1, ..., n }

In this equation, each x_t^(i) is a state and the w_t^(i) are nonnegative weights called importance factors, which sum to one.

For a more detailed treatment of particle filters, see [Schultz03].
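These weighted-sample updates can be sketched as a toy 1-D particle filter for the corridor-with-doors scenario used in these slides. The door positions, noise model, motion, and particle count are hypothetical example choices.

```python
import math
import random

random.seed(0)
DOORS = [2.0, 7.0]   # hypothetical door positions along a 10 m corridor

def door_likelihood(x):
    # p(z = "door found" | x): peaked near each door, with a noise floor.
    return 0.05 + 0.9 * max(math.exp(-((x - d) ** 2) / 0.5) for d in DOORS)

# Unknown initial position: a uniform particle set over the corridor.
particles = [random.uniform(0.0, 10.0) for _ in range(2000)]

# Observation "door found": weight particles by the likelihood, resample
# in proportion to the importance factors.
weights = [door_likelihood(x) for x in particles]
particles = random.choices(particles, weights=weights, k=len(particles))

# Motion: the person walks about 5 m; predict each particle with noise.
particles = [x + random.gauss(5.0, 0.3) for x in particles]

# Second "door found" observation: only particles whose history is
# consistent with both sightings keep significant weight.
weights = [door_likelihood(x) for x in particles]
total = sum(weights)
estimate = sum(w * x for w, x in zip(weights, particles)) / total
```

After the second observation the weighted mass concentrates near the door 5 m downstream of the first one, so the weighted-mean estimate lands near x = 7.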

Page 23

Particle Filter Example

(a) A person carries a camera that can observe doors, but cannot distinguish different doors. A uniformly distributed sample set represents the initially unknown position.

(b) The sensor sends a "door found" signal. The particle filter incorporates the measurement by adjusting and normalizing each sample's importance factor, leading to a new sample set whose importance factors are proportional to the observation likelihood p(z|x).

(c) When the person moves, the particle filter randomly draws samples from the current sample set with probability given by the importance factors. The filter then uses the motion model to predict the location of each new particle.

(d) The sensor detects a door. By weighting the importance factors in proportion to the observation likelihood p(z|x), an updated sample set is obtained.

(e) After the prediction, most of the probability mass is consistent with the person's true location.

Picture and example from Fox, D., Hightower, J., Liao, L., Schulz, D., Borriello, G., "Bayesian Filtering for Location Estimation", IEEE Pervasive Computing 2003.

Page 24

Bayesian Filters - Conclusions

• Deal with uncertainty: starting from an initial estimate, the system converges over time to more accurate estimates
• Can exploit several types of sensor measurements and other available quantitative knowledge of the sensing environment (initial estimates, digital maps, ...)
• The suitable Bayesian filter type depends on the sensor type (what information is available), the sensing environment (indoor, outdoor, noise level, ...), and the system model (linear, nonlinear, continuous time, discrete time, ...)
• In addition to localization, several other application fields exist in pervasive computing: movement recognition, data processing

Page 25

Camera Assisted Localization

What can cameras measure?
• Assuming they can identify an object in a scene, they can measure the relative angle between two objects

With known rotation and translation of a camera, you also have directional information.

Still need to bypass the correspondence problem between camera views.

Page 26

Some Camera Background

[Figure: world coordinate origin, camera coordinate origin, and image coordinates (u, v); a world point w(x,y,z) projects onto the image plane.]

Each camera is characterized by a 3 x 3 rotation matrix R and a 3 x 1 translation matrix T.

Page 27

Background: Camera Attributes

Each camera is characterized by:
1. Its 3-D coordinates (x, y, z)
2. A 3 x 3 rotation matrix R
3. A 3 x 1 translation matrix T

World coordinates w and camera coordinates w' are related by

  w' = R w + T

This also applies to transformations between camera coordinate systems.

Page 28

Background: Camera Errors and Constraints

[Figure: camera center O, axes X, Y, Z, focal length f, image coordinates (u, v), world point w.]

Basic world-to-image equations: with camera coordinates w' = R w + T and focal length f, a world point w maps to image coordinates

  u = f (R_x · w + T_x) / (R_z · w + T_z)
  v = f (R_y · w + T_y) / (R_z · w + T_z)

where R_x, R_y, R_z are the rows of R and T = (T_x, T_y, T_z).
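The world-to-image mapping can be sketched in a few lines: rotate and translate a world point into camera coordinates, then divide by depth and scale by the focal length. The R, T, and f values below are hypothetical examples.

```python
def project(w, R, T, f):
    """Project world point w = [x, y, z] to image coordinates (u, v)."""
    # Camera coordinates: w' = R w + T
    wc = [sum(R[i][j] * w[j] for j in range(3)) + T[i] for i in range(3)]
    # Perspective division onto the image plane at focal length f.
    u = f * wc[0] / wc[2]
    v = f * wc[1] / wc[2]
    return u, v

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # camera aligned with world axes
u, v = project([2.0, 1.0, 10.0], I3, [0.0, 0.0, 0.0], f=500.0)
# A point 10 units in front of the camera maps to (100.0, 50.0).
```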

Page 29

Background: Errors and Constraints

[Figure: camera center O, axes X, Y, Z, focal length f, image coordinates (u, v).]

Camera measurement precision is a function of pixel resolution and viewing angle:

  Error = viewing angle / pixels

Each node observation is a vector. Each pair of vectors forms a constraint.
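As a quick worked instance of this formula, with hypothetical camera numbers (a 60-degree field of view imaged onto 640 pixels):

```python
# Error = viewing angle / pixels: angular precision per pixel.
fov_deg, pixels = 60.0, 640
error_deg = fov_deg / pixels   # roughly 0.094 degrees per pixel
```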

Page 30

Problem Statement

[Figure legend: camera node, ultrasound distance node, radioless tag.]

Given N sensor nodes t1, t2, t3, ..., tN:
• A subset of m < N nodes, t1, t2, ..., tm, have cameras
• A subset of inter-node distances is known

Goal:
• Compute 3-D coordinates for all nodes
• Compute the rotation and translation matrices R and T for all camera nodes

Page 31

Camera as a Sensing Modality

[Figure: camera center O, axes X, Y, Z, focal length f, image coordinates (u, v), world point w(x,y,z).]

The 3-D location w of each node is mapped to a 2-D location (u, v) on the image plane.

Each node observation is a unit vector originating at the camera's location and pointing towards the node's 3-D location w.

Each pair of unit vectors forms a constraint.

Camera measurement precision is a function of pixel resolution and viewing angle: Error = viewing angle / pixels.

Page 32

Camera Basics

[Figure: world coordinate origin, camera coordinate origin, image coordinates (u, v), world point w(x,y,z).]

Each camera is characterized by:
• Its 3-D coordinates (x, y, z)
• A 3 x 3 rotation matrix R
• A 3 x 1 translation matrix T

World to camera coordinates: w' = R w + T

Page 33

Need something lightweight with two cameras

If you could localize nodes using a pair of overlapping camera views, then you could use that to create a 3-D coordinate system.

If the relative R and T are known:
• Can transform among coordinate systems
  o Can form a chain of cameras and consider multihop measurements

So what can you really do with two cameras?
• Measured Epipoles (ME)
• Estimated Epipoles (EE)

Page 34

Camera Epipoles

Epipoles: the points where the straight line between the two camera centers intersects each image plane.

[Figure: camera centers C and C', a world point x, the epipolar plane, and the epipoles e and e'.]

Page 35

Camera Background

[Figure: cameras A, B, C with unit vectors v_ab, v_ba, v_ac, v_bc, normals n_a, n_b, and inter-camera distances l_ab, l_bc, l_ac. The accompanying relations express the unit vectors and normals of one camera in another camera's frame via the inter-camera rotations R_ab, R_ac, R_bc.]

From C. Taylor

The points where the unit vectors Vab and Vba intersect with the image planes of cameras A and B respectively are called epipoles

Page 36

Camera Background (Taylor's Algorithm)

[Figure: cameras A, B, C with unit vectors v_ab, v_ba, v_ac, v_bc and distances l_ab, l_bc, l_ac.]

The triangle of sightings closes, giving the constraint

  l_ab v_ab + l_bc (R_ab v_bc) - l_ac v_ac = 0

Given R_ab, all the distances can be computed up to a scale. Given a single Euclidean distance, all Euclidean distances can be computed.

Page 37

Estimating the Epipoles

What if the two cameras cannot see each other? Assuming there are at least 8 points in the common field of view of the two cameras, the epipoles of both cameras can be estimated using the fundamental matrix (the 8-point algorithm).

The fundamental matrix F relates camera A's image coordinates x to camera B's image coordinates x' as follows:

  x'^T F x = 0

This produces an over-constrained linear system of equations. The epipoles e and e' of the two cameras satisfy

  F e = 0,  F^T e' = 0

Knowing F, we can compute estimates of e and e'. Using the estimated epipoles and the formulation proposed by Taylor, we can compute the rotation matrix between the two cameras and all node-to-camera distances up to a scale.

How good are the estimates of the epipoles?
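Since a fundamental matrix has rank 2, its null vector (the epipole satisfying F e = 0) can be read off as the cross product of two independent rows. The F below is a hypothetical example constructed so that its null vector is known, namely t = (1, 2, 1); this is a sketch of the epipole-extraction step, not of the full 8-point algorithm.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def epipole(F):
    """Null vector of a rank-2 matrix F, i.e. e with F e = 0."""
    e = cross(F[0], F[1])
    if all(abs(c) < 1e-12 for c in e):   # rows 0 and 1 were parallel
        e = cross(F[0], F[2])
    return e

# Hypothetical skew-symmetric F whose null vector is t = (1, 2, 1).
F = [[0, -1, 2], [1, 0, -1], [-2, 1, 0]]
e = epipole(F)   # comes out proportional to (1, 2, 1)
```

The epipole e' of the other camera is obtained the same way from the rows of F transposed.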

Page 38

Experimental Results (Indoors)

Estimated epipoles produce inaccurate results. Note that the distances overestimated by camera A are underestimated by camera B, and vice versa! When the two cameras can view each other, the results are extremely accurate.

The camera as a measurement modality is very accurate!

Page 39

Refining Estimated Epipoles

Stratified reconstruction (the traditional approach in vision) is too complex for small devices.

Alternative formulation:

[Figure legend: camera node, ultrasound distance node, radioless tag.]

Given N sensor nodes t1, t2, t3, ..., tN:
• A subset of m < N nodes, t1, t2, ..., tm, have cameras
• A subset of inter-node distances is known

Goal:
• Compute 3-D coordinates for all nodes
• Compute the rotation and translation matrices R and T for all camera nodes

Page 40

Refining the Estimated Epipoles

Taylor's algorithm can be applied in exactly the same way. The computed distances can then be refined by minimizing, over the distances (l_ai, l_aj), the objective

  L = Σ_{ij : l_ij known} ( l_ij - || l_ai v_ai - l_aj v_aj || )^2

Can we always minimize this set of equations? No! Minimization is possible only when there are n known edges among n different nodes and each of the n nodes appears in at least 2 different known edges.

What is the minimum number of known edges for which L can be minimized? Three; in this case the nodes form a triangle:
• 3 nodes, 3 known edges (the edges of the triangle formed by the nodes)
• Each node appears in at least 2 different edges

All the distances from the camera nodes to the nodes forming the triangle can now be refined!

Page 41

Experimental Results

Indoors

Outdoors

Page 42

Some Rigidity Issues (slides contributed by Brian Goldenberg)

Physically:
• A network of n regular nodes and m beacon nodes exists in space at locations {x1, ..., xm, xm+1, ..., xn}
• A set of some pairwise distance measurements is available
  o Usually between proximal nodes (d < r)

Abstraction:
• Given: graph Gn, {x1, ..., xm}, edge weight function δ
• Find: a realization of the graph

[Figure: a five-node example; given beacon positions {x1, x2, x3} and distances {d14, d24, d25, d35, d45}, find {x4, x5}.]

Page 43

Localization problem “rephrasing”

[Figure: a seven-node graph (nodes 0–6), node 0 connected to all others.]

Given (measured distances; "?" unknown):

    0    d01  d02  d03  d04  d05  d06
    d10  0    d12  ?    ?    ?    d16
    d20  d21  0    d23  ?    ?    ?
    d30  ?    d32  0    d34  ?    ?
    d40  ?    ?    d43  0    d45  ?
    d50  ?    ?    ?    d54  0    d56
    d60  d61  ?    ?    ?    d65  0

Find (the complete distance matrix):

    0    d01  d02  d03  d04  d05  d06
    d10  0    d12  d13  d14  d15  d16
    d20  d21  0    d23  d24  d25  d26
    d30  d31  d32  0    d34  d35  d36
    d40  d41  d42  d43  0    d45  d46
    d50  d51  d52  d53  d54  0    d56
    d60  d61  d62  d63  d64  d65  0

Page 44

Given (the same network with edge (1,6) also unknown):

    0    d01  d02  d03  d04  d05  d06
    d10  0    d12  ?    ?    ?    ?
    d20  d21  0    d23  ?    ?    ?
    d30  ?    d32  0    d34  ?    ?
    d40  ?    ?    d43  0    d45  ?
    d50  ?    ?    ?    d54  0    d56
    d60  ?    ?    ?    ?    d65  0

Cannot find!

[Figure: two of the incompatible configurations consistent with the remaining distances — ...24 possibilities.]

Remove one edge, and the problem becomes unsolvable.

Page 45

When can we solve the problem?

Given: a set of n points in the plane, and distances between m pairs of points.
Find: the positions of all n points... subject to rotations and translations.

[Figure: example configurations 1, 2a, 2b, 3a, 3 on points a, b, c, d.]

Page 46

Discontinuous deformation

[Figure: a configuration of points a–f that admits a "flip" and other discontinuous rearrangements with the same edge lengths.]

Discontinuous non-uniqueness: points cannot be moved from one configuration to the others while respecting the constraints.

Page 47

Continuous deformation

Continuous non-uniqueness:
• Points can be moved from one configuration to another while respecting the constraints
• Excess degrees of freedom are present in the configuration

Page 48

Partial Intuition, Laman’s Condition

Total degrees of freedom: 2n.

How many distance constraints are necessary to limit a formation to only trivial deformations? Equivalently: how many edges are necessary for a graph to be rigid?

Page 49

Each edge can remove a single degree of freedom.

How many edges are necessary? Rotations and translations will always be possible, so at least 2n-3 edges are necessary for a graph to be rigid.

Page 50

Are 2n-3 edges sufficient?

• n = 3, 2n-3 = 3: yes
• n = 4, 2n-3 = 5: yes
• n = 5, 2n-3 = 7: no

Need at least 2n-3 “well-distributed” edges.

If a subgraph has more edges than necessary, some edges are redundant.

Page 51

Condition for rigidity

A purely combinatorial characterization of generic minimal rigidity in the plane: 2n-3 edges are necessary for rigidity, and

Laman's condition: a graph G with 2n-3 edges is rigid in two dimensions if and only if no subgraph G' has more than 2n'-3 edges.*

Laman's condition states that any rigid graph with n vertices must have a set of 2n-3 well-distributed edges. Analogues are necessary for rigidity in any dimension.

G. Laman '70. * n' is the number of nodes in the subgraph G'.
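Laman's condition can be checked by brute force on small graphs (a sketch; enumerating all vertex subsets is exponential, so this is only practical for a handful of vertices):

```python
from itertools import combinations

def laman(n, edges):
    """True iff the graph on vertices 0..n-1 satisfies Laman's condition:
    exactly 2n-3 edges, and no k-vertex subset spans more than 2k-3 edges."""
    if len(edges) != 2 * n - 3:
        return False
    for k in range(2, n + 1):
        for sub in combinations(range(n), k):
            s = set(sub)
            spanned = sum(1 for u, v in edges if u in s and v in s)
            if spanned > 2 * k - 3:
                return False
    return True

# A triangle (n = 3, 3 edges) is rigid; K4 plus a pendant fifth node has
# 7 = 2*5-3 edges but fails Laman's condition (K4 spans 6 > 2*4-3 edges).
```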

Page 52

Illustration of Laman’s condition

n = 5, m = 10 > 2n-3 = 7

There must be at least 3 redundant edges.

[Figure: removing 3 edges one way leaves too many edges in the red subgraph; removing 3 edges another way leaves 2n-3 well-distributed edges.]

Page 53

Unique Graph realizability

[Figure: two realizations of a graph on nodes a–f with the same edge lengths.]

Solution: for unique realizability,
• G must be rigid
• G must be redundantly rigid: it must remain rigid upon removal of any single edge
• G must be 3-connected

Page 54

Global Rigidity

B. Hendrickson ’95, A. Berg and T. Jordan ‘02

A graph has a unique realization iff it is redundantly rigid and 3-connected

Page 55

Network Localization Problem (cf. Graph Realization)

Given: a set of n points in the plane, the positions of k of them, and distances between m pairs of points.
Find: the positions of all n points.

[Figure legend: node with known position (beacon); node with unknown position; distance measurement.]

Page 56

Is the problem solvable?

Problem: by looking only at the graph structure, we neglect our a priori knowledge of the beacon positions.

Solution: the distances between beacons are implicitly known!
• By adding all edges between beacons to the network graph, we get the grounded graph, whose properties determine generic solvability
• By augmenting the graph structure in this way, we fully capture all the constraint information available in the graph itself

Is this localizable? If it is, then I can use {x1, x2, x3} and δ to get the answer, {x4, x5}.

Page 57

Conditions for localizability

Instead of the original graph, the grounded graph is the relevant one!

A network is localizable if its grounded graph is globally rigid.

Page 58

Degenerate cases fool abstraction

[Figure: beacons 1, 2, 3 with known positions {x1, x2, x3} and measured distances {d14, d24, d34} to an unknown node 4, drawn in a generic (probability-1) case and in a degenerate (probability-0) case where the beacons are collinear.]

In general, this network is uniquely localizable and {x4} can be found. In the degenerate case it is not: the constraints are redundant.

Page 59

Algorithms for global rigidity

Triconnectivity: well studied.

Rigidity testing:
• 1985: first polynomial-time algorithm
• 1988: matroid sums, O(n^2)

Redundant rigidity testing:
• 1995: bipartite matching, O(n^2)

Redundantly rigid component discovery:
• 1995: pebble game, O(n^2)

Page 60

Discovering Localizable Nodes

Nodes in redundantly rigid triconnected components (RRTs) containing 3 beacons are uniquely localizable.

To identify RRT components, first extract the triconnected subgraphs; then, on those subgraphs, discover the redundantly rigid components using an algorithm from computational physics called "the pebble game" (details in the paper).

Page 61

Why is RRT important?

Nonlocalizable nodes are mislocalized if included as input to most localization algorithms.

The figure compares using MDS over the entire network with using MDS on the localizable portions plus rough estimation for the nonlocalizable nodes. Mislocalization errors are large under network-wide MDS.

Page 62

Localization

[Figure: the decision problem feeds the search problem. Rigidity theory applied to the grounded graph answers the decision problem "Does this have a unique realization?" (yes/no). Given {x1, x2, x3} and {d14, d24, d25, d35, d45}, the graph has a unique realization; finding it, {x4, x5}, is the search problem.]

The search problem is in general NP-hard.

Page 63

Robust Quadrilaterals (D. Moore et al., SenSys 2004)

A more robust method for enforcing rigidity conditions.

A real implementation based on MIT's Cricket nodes.

Page 64

More things not covered here

• More probabilistic methods for localization
• Localization using distance reconstruction
• Localization using angles
• Secure localization
• More recent localization technologies: ultra-wideband systems, radio interferometry

Page 65

Localization Conclusions

Localization is still a very challenging problem. Lots of good theoretical work has been contributed:
• The majority of the solutions rely heavily on some underlying assumption about the technology
• Most of this technology is still not in place

The majority of pressing issues are related to technology:
• How do you measure distances and angles reliably in the presence of obstacles, interference, and adversaries?
• How can you do that on a small energy and hardware budget?

Algorithmic problems and theoretical issues still exist:
• Many systems will do OK with existing algorithms; the physical layer is the most pressing missing component

The process of securing localization still has a long way to go...
• Better to focus on a specific application domain first; a one-size-fits-all solution will be harder to come up with