
1

Sensor Node Energy Roadmap

[Roadmap chart: average power (mW, log scale from 0.1 to 10,000) versus year, 2000-2004.]

• Deployed (5 W)
• PAC/C baseline (0.5 W)
• 50 mW: rehosting to low-power COTS (10x)
• 1 mW: system-on-chip and advanced power-management algorithms (50x)

Source: ISI & DARPA PAC/C Program

2

Communication/Computation Technology Projection

Assume: 10 kbit/sec radio, 10 m range.

The large cost of communication relative to computation continues.

Communication energy:
• 1999 (Bluetooth technology): 150 nJ/bit → 1.5 mW*
• 2004 (projection): 5 nJ/bit → 50 µW

Computation: ~190 MOPS at 5 pJ/OP

Source: ISI & DARPA PAC/C Program
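This is just arithmetic on the per-bit and per-operation energies above (no new data), and it shows the scale of the communication/computation gap:

```latex
\frac{150\ \text{nJ/bit}}{5\ \text{pJ/op}} = 3\times 10^{4}\ \text{ops per bit (1999)},
\qquad
\frac{5\ \text{nJ/bit}}{5\ \text{pJ/op}} = 10^{3}\ \text{ops per bit (2004)}
```

which is consistent with the 10^3 to 10^6 operations-per-bit range cited on the next slide.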

3

Design Issues

• Unattended, long life: communication is more energy-expensive than computation; 10^3 to 10^6 operations can be performed for the energy needed to transmit one bit over 10-100 meters.

• Self-organizing, ad hoc: unpredictable and always changing.

• Scalable: must scale to the size of the network; requires distributed control.

4

Sample Layered Architecture

Layers, top to bottom:
• User queries, external database
• In-network: application processing, data aggregation, query processing
• Data dissemination, storage, caching; congestion control
• Adaptive topology control, routing
• MAC, time, location
• Phy: communication, sensing, actuation, signal processing

Source: Kurose’s slide

Today’s lecture: data aggregation and congestion control.

5

Impact of Data Aggregation in Wireless Sensor Networks

Slides adapted from the slides of the authors:

B. Krishnamachari, D. Estrin, and S. Wicker

6

Aggregation in Sensor Networks

Redundant Data/events

Some services are amenable to in-network computation. “The network is the sensor.”

Communication can be more expensive than computation.

By performing “computation” on data en route to the sink, we can reduce the amount of data traffic in the network.

Increases energy efficiency as well as scalability: the bigger the network, the more computational resources it contains.

7

Data Aggregation

[Diagram: source 1 and source 2 each report a temperature reading toward the sink, which asks “Give me the average temperature?”; an intermediate node combines the two readings and forwards a single “source 1 & 2” packet.]

The intermediate node aggregates the data before routing it.

In this example, the average would be carried as the partial aggregate <sum, count>.
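A minimal sketch (not the authors' code) of how an average can be computed in-network by forwarding <sum, count> partial aggregates:

```python
# Illustrative only: carrying AVG as a <sum, count> partial aggregate so that
# intermediate nodes can merge readings en route to the sink.

def make_partial(reading):
    """A single sensor reading becomes the partial aggregate (sum, count)."""
    return (reading, 1)

def merge(a, b):
    """An intermediate node combines two partial aggregates into one packet."""
    return (a[0] + b[0], a[1] + b[1])

def finalize(partial):
    """The sink turns the merged partial aggregate into the average."""
    s, c = partial
    return s / c

# Example: source 1 reads 20.0, source 2 reads 24.0; the relay forwards one
# packet <44.0, 2> instead of two packets, and the sink computes 22.0.
print(finalize(merge(make_partial(20.0), make_partial(24.0))))
```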

8

Transmission Modes: AC vs. DC

[Diagrams: source 1 and source 2 route through intermediate nodes A and B to the sink.]

a) Address-Centric (AC) routing (no aggregation): each source's packet (1 and 2) travels to the sink separately along its own shortest path.

b) Data-Centric (DC) routing (in-network aggregation): an intermediate node aggregates packets 1 and 2 into a single packet (“1+2”) and forwards only that packet to the sink.

9

Theoretical Results on Aggregation

Let there be k sources located within a diameter X, each a distance d_i from the sink, and let d = min(d_i). Let N_A and N_D be the number of transmissions required with the AC and the optimal DC protocols, respectively.

1. The following are bounds on N_D: (k - 1) + d <= N_D <= (k - 1)X + d.

2. Asymptotically, for fixed k and X, as d is increased, N_D/N_A approaches 1/k, i.e., the optimal DC protocol saves close to a factor of k in transmissions.

10

Theoretical results (DC)

Upper bound on N_D: route the other k - 1 sources to the source nearest the sink (each at most X hops away), then forward the aggregate over min(d_i) hops: N_D <= (k - 1)X + min(d_i).

Lower bound on N_D: N_D >= (k - 1) + min(d_i); the bound is met when X = 1, i.e., when all sources are within one hop of the source nearest the sink.

For AC: N_A >= k min(d_i).
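Purely for illustration (these particular numbers are not from the paper), plugging small values into the bounds above, say k = 5 sources within X = 2 hops of one another and the nearest source d = min(d_i) = 10 hops from the sink:

```latex
N_A \;\ge\; k\,d = 50, \qquad (k-1) + d = 14 \;\le\; N_D \;\le\; (k-1)X + d = 18
```

so the optimal DC tree needs roughly a third of the AC transmissions here, and the advantage approaches the factor-of-k limit as d grows.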

11

Optimal Aggregation Tree: Steiner Trees

*A minimum-weight tree connecting a designated set of vertices, called terminals, in a weighted graph or points in a space. The tree may include non-terminals, which are called Steiner vertices or Steiner points.

[Figure: a weighted graph on nodes a-h with edge weights, and beside it the minimum-weight Steiner tree connecting the terminal nodes, which passes through non-terminal (Steiner) vertices.]

*Definition taken from the NIST site: http://www.nist.gov/dads/HTML/steinertree.html

12

Aggregation Techniques

Center at Nearest Source (CNSDC): All sources send the information first to the source nearest to the sink, which acts as the aggregator.

Shortest Path Tree (SPTDC): Opportunistically merge the shortest paths from each source wherever they overlap.

Greedy Incremental Tree (GITDC): Start with the path from the sink to the nearest source. Successively add the next-nearest source to the existing tree.
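A minimal Python sketch of the GIT heuristic described above (an illustrative approximation, not the authors' implementation; `adj` is an assumed adjacency map with edge weights):

```python
# Greedy Incremental Tree (GIT) sketch: grow the aggregation tree from the
# sink, repeatedly attaching the source that is cheapest to connect to the
# tree built so far via its shortest path.
import heapq

def shortest_paths(adj, start):
    """Dijkstra: returns (distance, predecessor) maps from `start`."""
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def greedy_incremental_tree(adj, sink, sources):
    """Return the set of nodes forming the GIT aggregation tree."""
    tree, remaining = {sink}, set(sources)
    while remaining:
        # Pick the (source, tree-node) pair with the cheapest connecting path.
        best = None
        for s in remaining:
            dist, prev = shortest_paths(adj, s)
            node = min(tree, key=lambda t: dist.get(t, float("inf")))
            cost = dist.get(node, float("inf"))
            if best is None or cost < best[0]:
                best = (cost, s, node, prev)
        _, s, node, prev = best
        # Splice the chosen source's shortest path into the existing tree.
        while node != s:
            tree.add(node)
            node = prev.get(node, s)
        tree.add(s)
        remaining.discard(s)
    return tree

# Tiny example on a line topology sink - a - b with sources at a and b.
adj = {"sink": {"a": 1}, "a": {"sink": 1, "b": 1}, "b": {"a": 1}}
print(greedy_incremental_tree(adj, "sink", ["a", "b"]))  # {'sink', 'a', 'b'}
```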

13

Aggregation Techniques

[Diagrams: (a) clustering-based CNS, where all sources send to a cluster head near the sink; (b) Shortest Path Tree, where each source's shortest path to the sink is merged with the others wherever the paths overlap; (c) Greedy Incremental Tree, grown from the shortest path between the sink and the nearest source.]

14

Source Placement Models I: Event Radius (ER)

15

Source Placement Models II: Random Sources (RS)

16

Energy Costs in Event-Radius Model

As R increases, the number of hops to the sink increases.

CNS approaches the optimal when R is large.

17

Energy Costs in Event-Radius Model

More savings with more sources.

18

Energy Costs in Random Sources Model

GIT does not achieve the optimum.

19

Energy Costs in Random Sources Model

20

Aggregation Delay in Event-Radius Model

In AC protocols, there is no aggregation delay; data can start arriving with a latency proportional to the distance of the nearest source from the sink. In DC protocols, the worst-case delay is proportional to the distance of the farthest source from the sink.

21

Aggregation Delay in Random Sources Model

Although DC aggregation yields bigger energy savings, it incurs more latency.

22

Conclusions

Data aggregation can result in significant energy savings for a wide range of operational scenarios

Although finding the optimal aggregation tree is NP-hard in general, polynomial heuristics such as the opportunistic SPTDC and the greedy GITDC are near-optimal in general and can provide optimal solutions in useful special cases.

The gains from aggregation are paid for with potentially higher delay.

23

Congestion Control in Wireless Sensor Networks

Adapted from the slides from:
1. Kurose and Ross, Computer Networking: A Top-Down Approach.
2. Mitigating Congestion in Wireless Sensor Networks, B. Hull et al.

24

Principles of Congestion Control

Congestion, informally: “too many sources sending too much data too fast for the network to handle.”
Different from flow control!
Manifestations: lost packets (buffer overflow at routers) and long delays (queueing in router buffers).

a top-10 problem!

25

Causes/costs of congestion: scenario 1

Two senders, two receivers; one router with infinite buffers; no retransmission.

[Figure: Host A offers λ_in (original data) into a router with unlimited shared output-link buffers; Host B receives λ_out.]

Results: large delays when congested; there is a maximum achievable throughput.

26

Causes/costs of congestion: scenario 2

One router, finite buffers; the sender retransmits lost packets.

[Figure: Host A offers λ_in (original data) and λ'_in (original data plus retransmitted data) into a router with finite shared output-link buffers; Host B receives λ_out.]

27

Causes/costs of congestion: scenario 2 (continued)

Always: λ_in = λ_out (goodput).
With “perfect” retransmission (only when a packet is actually lost): λ'_in > λ_out.
Retransmission of delayed (not lost) packets makes λ'_in larger than in the perfect case for the same λ_out.

“Costs” of congestion: more work (retransmissions) for a given goodput; unneeded retransmissions mean the link carries multiple copies of a packet.

[Figure: three plots (a, b, c) of λ_out versus offered load; ideal goodput saturates at R/2 and drops toward R/3 and R/4 as retransmissions consume capacity.]

28

Causes/costs of congestion: scenario 3

Four senders; multihop paths; timeout/retransmit.

Q: what happens as λ_in and λ'_in increase?

[Figure: Host A offers λ_in (original data) and λ'_in (original data plus retransmitted data) over multihop paths with finite shared output-link buffers; Host B receives λ_out.]

29

Causes/costs of congestion: scenario 3 (continued)

Another “cost” of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted.

[Figure: λ_out collapses toward zero as offered load grows: congestion collapse.]

30

Goals of congestion control

• Efficient use of network resources: try to keep the input rate as close to the output rate as possible while keeping network utilization high.

• Fairness: many flows compete for resources and need to share them; no starvation. There are many fairness definitions; equitable use is not necessarily fair.

31

Congestion is a problem in wireless networks

It is difficult to provision bandwidth in wireless networks:
• Unpredictable, time-varying channel
• A channel (i.e., the air) shared by multiple neighboring nodes
• Variable network size and density
• Diverse traffic patterns

But if unmanaged, congestion leads to congestion collapse.

32

Outline

• Quantify the problem in a sensor-network testbed
• Examine techniques to detect and react to congestion
• Evaluate the techniques, individually and in concert
• Explain which ones work and why

33

Investigating congestion

• 55-node Mica2 sensor network
• Multiple hops
• Traffic pattern: all nodes route to one sink
• B-MAC [Polastre], a CSMA MAC layer

[Testbed floor plan: roughly 100 ft across, 16,076 sq. ft.]

34

Congestion dramatically degrades channel quality

35

Why does channel quality degrade?

• Wireless is a shared medium
• Hidden-terminal collisions
• Many far-away transmissions corrupt packets

[Diagram: a sender and a receiver whose packets are corrupted by other nodes' transmissions.]

36

Per-node throughput distribution

37

Per-node throughput distribution

38

Per-node throughput distribution

39

Per-node throughput distribution

40

Hop-by-hop flow control

Queue-occupancy-based congestion detection:
• Each node has an output packet queue
• Monitor the instantaneous output queue occupancy
• If queue occupancy exceeds α, indicate local congestion
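A minimal sketch of this detection rule (not the Fusion implementation; the queue capacity and the value of α here are illustrative assumptions):

```python
# Queue-occupancy-based congestion detection sketch.
from collections import deque

class OutputQueue:
    def __init__(self, capacity=16, alpha=0.75):
        self.q = deque()
        self.capacity = capacity   # illustrative buffer size
        self.alpha = alpha         # illustrative occupancy threshold

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            return False           # buffer drop
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

    def congested(self):
        # Local congestion when instantaneous occupancy exceeds alpha.
        return len(self.q) > self.alpha * self.capacity
```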

41

Hop-by-hop congestion control

Hop-by-hop backpressure:
• Every packet header has a congestion bit
• If locally congested, set the congestion bit
• Snoop the downstream traffic of your parent

Congestion-aware MAC: give priority to congested nodes.

[Diagram: a packet whose header carries the one-bit congestion flag (0/1).]
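A minimal sketch of the backpressure mechanism (illustrative only; the packet format and class names are invented for the example, not taken from the Fusion code):

```python
# Hop-by-hop backpressure sketch: a node stamps a congestion bit into every
# packet it sends while locally congested; children snoop their parent's
# transmissions and hold their own traffic until the parent's bit clears.
from dataclasses import dataclass

@dataclass
class Packet:
    payload: bytes
    congestion_bit: bool = False

class BackpressureNode:
    def __init__(self):
        self.locally_congested = False   # set by queue-occupancy detection
        self.parent_congested = False    # learned by snooping the parent

    def stamp(self, payload):
        # Outgoing packets advertise our current congestion state.
        return Packet(payload, congestion_bit=self.locally_congested)

    def snoop_parent(self, overheard):
        # Overhearing the parent's traffic tells us whether to back off.
        self.parent_congested = overheard.congestion_bit

    def may_forward(self):
        # Exert backpressure: do not transmit while the parent is congested.
        return not self.parent_congested
```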

42

Rate limiting

Source rate limiting:
• Count your parent's number of sourcing descendants (N)
• Send one packet (per source) only after the parent sends N
• Limit your sourced traffic rate even if hop-by-hop flow control is not exerting backpressure
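A minimal sketch of this rate-limiting rule (illustrative, not the Fusion code):

```python
# Source rate limiting sketch: a node whose parent has N sourcing descendants
# adds one locally sourced packet only after overhearing the parent transmit N
# packets.
class SourceRateLimiter:
    def __init__(self, parent_descendants):
        self.n = parent_descendants  # N: sourcing descendants of the parent
        self.heard = 0               # parent transmissions overheard so far

    def on_parent_transmission(self):
        self.heard += 1

    def may_source_packet(self):
        # Allow one sourced packet per N overheard parent transmissions.
        if self.heard >= self.n:
            self.heard -= self.n
            return True
        return False
```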

43

Related work

• Hop-by-hop congestion control: Wan et al., SenSys 2003; ATM and switched Ethernet networks
• Rate limiting: Ee and Bajcsy, SenSys 2004; Wan et al., SenSys 2003; Woo and Culler, MobiCom 2001
• Prioritized MAC: Aad and Castelluccia, INFOCOM 2001

44

Congestion control strategies

• No congestion control: nodes send at will

• Occupancy-based hop-by-hop CC: detects congestion with queue length and exerts hop-by-hop backpressure

• Source rate limiting: limits the rate of sourced traffic at each node

• Fusion: combines occupancy-based hop-by-hop flow control with source rate limiting

45

Evaluation setup

• Periodic workload
• Three link-level retransmits
• All nodes route to one sink using ETX
• Average of five hops to the sink
• -10 dBm transmit power
• 10 neighbors on average

[Testbed floor plan: roughly 100 ft across, 16,076 sq. ft.]

46

Metric: network efficiency

Penalizes dropped packets (buffer drops, channel losses) and wasted retransmissions.

Interpretation: the fraction of transmissions that contribute to data delivery.
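Restating that interpretation as a formula (this reflects the slide's notion of efficiency, not necessarily the paper's exact per-node definition); the two worked examples below apply it directly:

```latex
\eta \;=\; \frac{\text{transmissions that contribute to packets delivered at the sink}}{\text{total transmissions}}
```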

Example: 2 packets from the bottom node, no channel loss, 1 buffer drop, 1 received: η = 2/(1+2) = 2/3.

Example: 1 packet, 3 transmits, 1 received: η = 1/3.

47

Hop-by-hop CC improves efficiency

48

Hop-by-hop CC conserves packets

[Side-by-side: no congestion control vs. hop-by-hop CC.]

49

Metric: imbalance

ζ = 1: the node delivers all the data it receives; larger ζ means more received data goes undelivered.

Interpretation: a per-node measure (defined for each node i) of how well a node can deliver received packets to its parent.

50

Periodic workload: imbalance

51

Rate limiting decreases sink contention

[Side-by-side: no congestion control vs. rate limiting only.]

52

Rate limiting provides fairness

53

Hop-by-hop flow control prevents starvation

54

Fusion provides fairness and prevents starvation

55

Synergy between rate limiting and hop-by-hop flow control

56

Alternatives for congestion detection

• Queue occupancy
• Packet loss rate: TCP uses loss to infer congestion; keep link statistics and stop sending when the drop rate increases
• Channel sampling [Wan03]: carrier-sense the channel periodically; declare congestion when the carrier is sensed busy more than a fraction of the time
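A minimal sketch of channel-sampling detection (illustrative; the window size and busy-fraction threshold are assumptions, not values from [Wan03]):

```python
# Channel-sampling congestion detection sketch: periodically carrier-sense the
# channel and declare congestion when the fraction of "busy" samples exceeds a
# threshold.
from collections import deque

class ChannelSampler:
    def __init__(self, window=32, busy_fraction=0.5):
        self.samples = deque(maxlen=window)   # recent carrier-sense results
        self.busy_fraction = busy_fraction    # illustrative threshold

    def record_sample(self, channel_busy):
        self.samples.append(1 if channel_busy else 0)

    def congested(self):
        if not self.samples:
            return False
        return sum(self.samples) / len(self.samples) > self.busy_fraction
```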

57

Comparing congestion detection methods

58

Correlated-event workload

Goal: evaluate congestion under an impulse of traffic.

• Generate events seen by all nodes at the same time
• At the event time, each node sends B back-to-back packets (the “event size”)
• Wait long enough for the network to drain

59

Small amounts of event-driven traffic cause congestion

60

Software architecture

Fusion is implemented as a congestion-aware queue above the MAC; applications need not be aware of the congestion-control implementation.

Stack (top to bottom): Application → Routing → Fusion Queue → MAC → CC1000 Radio

61

Summary

• Congestion is a problem in wireless sensor networks
• Fusion's techniques mitigate congestion: queue occupancy detects congestion, hop-by-hop flow control improves efficiency, source rate limiting improves fairness
• Fusion improves efficiency by 3× and eliminates starvation

http://nms.csail.mit.edu/fusion
