A Delay Mitigation Scheme for WSN-based Smart Grid Substation Monitoring

Irfan Al-Anbagi, Melike Erol-Kantarci, Hussein T. Mouftah
School of Electrical Engineering and Computer Science
University of Ottawa, Ottawa, ON, Canada
[email protected], [email protected], [email protected]

Abstract— Quality of Service (QoS) in smart grid communications, especially in the monitoring of smart grid assets, is becoming increasingly important for emerging smart grid applications. Wireless Sensor Networks (WSNs) are expected to be widely utilized in a broad range of smart grid applications due to their numerous advantages and their successful adoption in critical areas such as military and health applications. However, WSN protocols were not designed to provide QoS for monitoring applications. Thus, the use of WSNs to transmit delay-critical data from smart grid assets calls for data prioritization and delay-mitigation schemes. In this paper, we propose a delay-responsive, cross-layer scheme with a linear back-off mechanism (LDRX) to address the delay and service requirements of smart grid monitoring applications. The LDRX scheme is designed to operate in a cluster-tree WSN topology, which is suitable for monitoring wide areas such as electrical substations or large installations. We show that LDRX reduces delay more than previously proposed WSN delay-reduction schemes.

Index Terms— QoS, medium access control, smart grid, wireless sensor networks, cluster-tree topology, monitoring applications.

I. INTRODUCTION

Wireless sensor networks (WSNs) are considered potential tools for monitoring and controlling the smart grid. A WSN comprises a large number of low-power, low-cost, small-size sensor nodes that communicate wirelessly over short distances. One of the major applications of sensor nodes is to collect different types of data, e.g., voltage, temperature, and vibration. WSNs are favored for monitoring applications because of their ability to operate in harsh environmental conditions, their fault tolerance, their extremely low power consumption, and their self-configuration capability. In environments where high voltages are in use, WSNs can also provide the necessary insulation.

Despite these advantages, WSNs have not been utilized extensively for monitoring smart grid assets. This is mostly due to their inherent limitations in real-time data delivery, which stem from the use of low-power communication links at high node density. These challenges raise reliability concerns in the smart grid. In fact, reliable data delivery has been widely studied in the WSN literature, where the term "reliable" generally refers to ensuring that data is delivered from source to destination or sink.

In the smart grid, asset monitoring and control applications vary in importance and criticality. For instance, monitoring the temperature of an oil-filled transformer is neither delay-critical nor does its packet delivery ratio need to be 100% at all times, because temperature monitoring is a continuous process and temperature does not vary quickly with time. On the other hand, transformer partial discharge (PD) monitoring is a highly delay-critical monitoring application, and its data needs to be transmitted in near real time with the highest reliability. The need for near real-time transformer PD monitoring arises in situations where the operator needs to analyze all the PD peaks as they happen, without loss of any important data. The significance of predictable reliability, timeliness, and Quality of Service (QoS) in smart grid communications has also been outlined in recent studies [1]. In addition, it is well known that protocols designed in an application-specific manner improve the performance of the WSN [2-3]. For this reason, we focus on the use of WSNs in the smart grid domain and aim to improve their performance in terms of delay and QoS.

In this paper, we propose a delay-responsive, cross-layer scheme with a linear back-off mechanism (LDRX) to address the delay and service requirements of smart grid monitoring applications. The LDRX scheme is designed to operate in a cluster-tree WSN topology, which is suitable for monitoring large smart grid assets such as electrical substations or other large installations. We propose to implement the LDRX scheme in a WSN with a cluster-tree topology to monitor an electrical substation, as shown in Fig. 1 [4]. The proposed scheme can easily be extended to cluster-tree topologies of any size and depth. We show that LDRX reduces delay more than previously proposed WSN delay-reduction schemes.

Fig. 1. WSN-based substation monitoring.



The rest of the paper is organized as follows. In Section II, we present the related work. In Section III, we present a brief overview of the IEEE 802.15.4 MAC operation. In Section IV, we introduce the LDRX scheme together with the delay estimation model it utilizes, and we discuss the results in Section V. Finally, Section VI concludes the paper.

II. RELATED WORK

In the literature, there are many studies that consider the use of WSNs in condition monitoring applications [5-10].

In [5] and [6], we introduced medium access schemes, namely the Delay-Responsive Cross layer (DRX) [5] and the Fair and Delay-Aware Cross layer (FDRX) [6] data transmission schemes, which aim to address the delay and service differentiation requirements of the smart grid. The DRX and FDRX schemes are based on delay-estimation and data prioritization procedures performed at the application layer, through which the Medium Access Control (MAC) layer responds to the delay requirements of the smart grid application and to the network condition. In this paper, we also use delay estimation and data prioritization; however, we implement entirely different techniques, including a linear back-off mechanism. We show that LDRX outperforms both DRX and FDRX. Furthermore, we implement the LDRX scheme in a cluster-tree topology, which uses a different delay estimation scheme.

In [7], the authors proposed to implement a QoS scheme in low-cost protocols by providing differentiated service to traffic of different priorities. They propose an adaptive mechanism based on back-off exponent management to reduce packet collisions. In this paper, we implement our scheme by adaptively and simultaneously using a linear back-off and a reduced clear channel assessment (CCA) duration in a cluster-tree topology.

In [8], the authors presented a QoS support mechanism for the beacon-enabled mode using the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) back-off time. Their algorithm is compatible with the IEEE 802.15.4 standard. Our scheme is adaptive and uses cross-layer interaction to optimize the performance of the network and provide QoS. In [5], we showed that DRX outperforms the scheme of [8] in delay reduction.

In [9], the authors presented a priority-based scheme to guarantee time-bounded delivery of high-priority packets in event-monitoring networks. They proposed to reduce the number of CCAs performed by high-priority nodes from two to one and applied frame tailoring to avoid packet collisions. Their scheme must be accompanied by a careful frame-tailoring procedure, which makes it less adaptive to traffic changes. In this paper, we do not reduce the number of CCAs, and thus we do not perform frame tailoring. Furthermore, we implement our scheme in a multi-hop cluster-tree topology.

In [10], the authors used a Markov model to analyze the characteristics of unsaturated, unacknowledged IEEE 802.15.4-based WSNs in terms of packet delay, energy consumption, and throughput. They proposed to model the unsaturated state, which depends on the traffic condition, and to use a linear back-off period. The authors did not use their model to provide service differentiation for delay-critical sensor nodes, and it does not exhibit the adaptive and cross-layer features presented in this paper. Additionally, our scheme is tailored to provide QoS in a cluster-tree network topology.

III. OVERVIEW OF IEEE 802.15.4 MAC OPERATION

The IEEE 802.15.4 standard defines the MAC and physical layers, including the CSMA/CA process [11]. CSMA/CA is used with a slotted binary exponential back-off (BO) scheme to reduce collisions. Two channel access techniques are defined in the IEEE 802.15.4 standard: the beacon-enabled mode, which employs slotted CSMA/CA with exponential back-off, and a basic unslotted CSMA/CA without beacons. The MAC sub-layer uses four variables to regulate channel access: the Number of Back-Offs (NBO), the Contention Window (CW), the back-off exponent (BE), and the Retransmission Times (RT). Prior to a transmission in slotted CSMA/CA, the MAC sub-layer initializes these four variables as follows: NBO = 0, CW = 2, BE = MinBE, and RT = 0. In the next step, the MAC sub-layer delays for a random number of back-off periods ranging from 0 to (2^BE - 1). When the back-off delay expires, the node performs the first CCA for a certain amount of time. If two successive CCAs find the channel idle, the node is allowed to start packet transmission. On the other hand, if either CCA fails due to a busy channel, the MAC sub-layer increases the values of both NBO and BE by one (with BE capped at the maximum back-off exponent, MaxBE). This process is repeated until NBO exceeds the maximum number of back-offs (MaxBackoffs), at which point the packet is dropped and a channel access failure is declared. If channel access is successful, the node initiates the transmission of the packet. If the acknowledgement (ACK) mechanism is activated, the node waits for an ACK, which indicates successful packet transmission. If the transmitting node does not receive the ACK within a specified duration, RT is increased by one, up to a value equal to MaxFrameRetries. The MAC sub-layer re-initializes NBO to 0 and BE to MinBE and repeats the above process as long as the value of RT is less than MaxFrameRetries; otherwise, the packet is discarded due to exceeding the retry limit. The default MAC parameters of the IEEE 802.15.4 standard are MinBE = 3, MaxBE = 5, MaxBackoffs = 4, and MaxFrameRetries = 3. The values of other parameters, such as the Inter-Frame Spacing (IFS) and the ACK wait duration, are specified in [11].
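For illustration only, the channel access procedure above can be summarized in a short Python sketch. The constants follow the defaults quoted in the text, while channel_is_idle() and send_and_wait_ack() are hypothetical callbacks standing in for the PHY and ACK machinery, not standard APIs; the back-off wait itself is elided.

import random

MIN_BE, MAX_BE = 3, 5          # default back-off exponent bounds
MAX_BACKOFFS = 4               # maximum number of back-offs
MAX_FRAME_RETRIES = 3          # maximum retransmissions

def slotted_csma_ca(channel_is_idle, send_and_wait_ack):
    rt = 0                                         # retransmission count (RT)
    while rt <= MAX_FRAME_RETRIES:
        nbo, be = 0, MIN_BE                        # fresh NBO/BE per access attempt
        while True:
            _slots = random.randint(0, 2**be - 1)  # random back-off delay (wait elided)
            cw = 2                                 # two successive idle CCAs required
            while cw > 0 and channel_is_idle():
                cw -= 1
            if cw == 0:                            # channel idle twice: transmit
                if send_and_wait_ack():
                    return "success"
                rt += 1                            # no ACK: restart channel access
                break
            nbo += 1                               # a CCA found the channel busy
            be = min(be + 1, MAX_BE)
            if nbo > MAX_BACKOFFS:
                return "channel_access_failure"
    return "retry_limit_exceeded"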

IV. THE PROPOSED SYSTEM

A. WSN for Smart Grid Condition Monitoring

In this paper, we address the QoS concerns of WSNs in delay-critical applications. We implement LDRX in a smart grid monitoring application as an example of a delay-critical application. The LDRX scheme can be implemented (without any modifications) in many other delay-critical applications, such as patient monitoring, military monitoring, and industrial monitoring. WSNs are favored in many applications for their low cost, ubiquity, and flexibility. On the other hand, WSNs may incur high latency when sensor nodes try to access the communication medium simultaneously. Furthermore, data from different smart grid applications will naturally have different delay requirements, calling for prioritization. For instance, metering may tolerate delay, while transformer monitoring will have low-latency requirements, particularly during peak load hours, when operators may need near real-time data to take appropriate control actions.

The medium access scheme of IEEE 802.15.4 regulates medium access in WSNs by using either a random access mechanism, i.e., CSMA/CA, or by granting a minimum service guarantee all along the path through which the data is relayed (i.e., using Guaranteed Time Slots (GTSs)). CSMA/CA is not tailored for delay-critical smart grid applications, since it has neither prioritized access nor delay-responsiveness properties. In the smart grid, asset monitoring and control applications require priority- and delay-aware solutions. To provide priority and delay responsiveness for WSNs in delay-critical smart grid monitoring and control applications, we present the LDRX scheme, which reduces the end-to-end delay of delay-critical data.

The LDRX scheme modifies the IEEE 802.15.4 MAC and operates as follows: the application layer checks the measured data and, if the measured data triggers an alarm, it marks the data frames with a sequence of bits indicating high priority; Fig. 2 shows a flowchart of this process. Upon the arrival of the frames at the MAC sub-layer, and at the edge of the BO period boundary, the MAC sub-layer checks the data priority, as shown in Fig. 2. If the data is marked as high priority, it initiates the LDRX scheme; otherwise, it uses the default IEEE 802.15.4 MAC. LDRX implements a random delay drawn from the linear period 0 to (2·BE - 1) instead of the exponential period 0 to (2^BE - 1). The linear back-off period is in any case shorter than the exponential back-off period. Based on this, the node with high-priority data (the tagged node) exits the back-off period before the other nodes in the Personal Area Network (PAN) and starts sensing the channel. The tagged node thus senses the channel before the other nodes, and it does so for a shorter duration (i.e., the CCA duration is divided by two for the high-priority node). If the tagged node finds the channel idle, it decrements CW until it reaches zero and then transmits its packets; otherwise, it repeats the BO process until it reaches the maximum number of back-offs, at which point it declares failure and drops its packets. It is essential to mention here that allowing all nodes to use the linear BO period (as in [10]) would deteriorate the network performance, especially when the number of nodes is high. This degradation is due to the limited linear BO interval, which leads to more nodes selecting the same BO interval and colliding during channel sensing.
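As an illustration, the difference between the two back-off windows and the halved sensing time can be sketched in a few lines of Python; the function names and the base CCA duration are hypothetical, and this is a sketch of the rule stated above rather than a full MAC implementation.

import random

def backoff_slots(be, high_priority):
    # High-priority frames draw from the linear window [0, 2*BE - 1] (LDRX);
    # all other frames use the default exponential window [0, 2^BE - 1].
    if high_priority:
        return random.randint(0, 2 * be - 1)
    return random.randint(0, 2**be - 1)

def cca_duration(base_cca_s, high_priority):
    # The tagged (high-priority) node senses for half the nominal CCA time.
    return base_cca_s / 2 if high_priority else base_cca_s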

In our proposed scheme, we follow the general analytical model of the slotted CSMA/CA mechanism of the beacon-enabled mode of IEEE 802.15.4 presented in [12] to estimate the end-to-end delay E{D}. The model described in [12] was proposed for a star topology; in this paper, we modify the delay estimation model to make it suitable for cluster-tree-based WSNs. Due to space limitations, we do not include details of the Markov-chain-based delay estimation model in this paper; the interested reader can refer to [12].

The main idea behind the delay estimation model of [12] is that sensor nodes can estimate the busy channel probabilities, depending on the probability of finding the channel busy during the first and second CCAs. Using the delay estimation equations derived in [12], the average estimated delay is given by the following equation:

(1)

(2)

where T_c and T_s are the durations of collided and successful data packet transmissions, respectively; N is the number of nodes in a single PAN; P_t is the probability that at least one of the other (N-1) nodes transmits in the same time slot; m = MaxBackoffs and n = MaxFrameRetries are defined in [11]; and a is a variable defined in [12].

Fig. 2. Application layer process.

B. The Proposed Scenario

As an example, we investigate the performance of our LDRX scheme in a smart grid environment. We deploy a cluster-tree WSN to monitor a number of transformers in a substation and to transmit the measured data to a sink. We assume that this sink is connected to a high-speed Ethernet Passive Optical Network (EPON) to provide near real-time monitoring and control access from remote locations.

A cluster-tree WSN topology is commonly used in situations where an extension of the communication range is needed. In a cluster-tree topology, the traffic generated at the sources (end nodes, or leaves) flows towards the sink (root) through a series of intermediate nodes called cluster-heads (CHs) or relays. In particular, each CH receives packets coming from a specific cluster of end nodes. For example, we consider the scenario in Fig. 1, where a substation with a number of transformers is to be monitored with a WSN. The use of a star WSN topology in this substation layout is not convenient, since the transmission range of sensor nodes in a single-hop WSN is limited to 10 to 20 m. As shown in Fig. 1, we use multi-hop tree routing to transport data from the source node to the sink node. The function of the CHs is to collect data from end nodes and forward it to the next-level CH in the tree until the sink is reached. Furthermore, these CHs also forward traffic from other CHs in the direction of the sink. In a cluster-tree topology, the communication between CHs is based either on contention or on scheduling. The latter is based on granting a minimum service guarantee all along the path through which the data is relayed, where the CHs utilize the GTSs of the Contention-Free Period (CFP) in the super-frame [11]. In this scenario, we adhere to the following assumptions:

- CHs communicate with each other using GTS durations; the time slot allocation is controlled by the CHs to avoid beacon frame collisions [14].
- Not all CHs in the network can hear each other (because they are placed far apart); therefore, the use of contention between CHs would lead to collisions due to the hidden terminal problem.
- All end devices generate packets at the same rate, and these devices use the CSMA/CA scheme to transmit to their CHs (i.e., end devices use CSMA/CA in intra-cluster communication).
- The traffic received by a CH at an upper level is equal to the aggregate of the traffic from the CHs at lower levels.
- All CHs have M/G/1/L queues; the difference between CHs is in the packet arrival rate.
- Each cluster is modeled with the same Markov model described in [12].
- Every node in the network knows its location and how many hops it is away from the sink.
- The tagged packet leaves its own CH during the same super-frame.

To extend the delay estimation model of (1) to the cluster-tree topology, we follow the above-mentioned assumptions. Equation (1) is used to estimate the delay from any end node to its cluster-head (CH). Since the CHs are assumed to communicate with each other using the GTS, and all end nodes know their locations, the delay between CHs is known to all the nodes in the network. Therefore, the total end-to-end delay of any end node in the cluster-tree topology is assumed to be equal to the sum of the end-to-end delays along the path to the sink node. The total estimated end-to-end delay depends on the number of nodes and the packet generation rate in each level, and its value is given by the following equation:

E{D_tot} = Σ_{l=1}^{Λ} E{D_l}      (3)

where D_l is the end-to-end delay at each network level (i.e., an end node to a CH, a CH to a CH, or a CH to the sink) and Λ is the number of levels from the end node, for which the delay is to be calculated, to the sink. The delay between the CHs, as well as between a CH and the sink node (D_CH), is given by the following equation:

D_CH = T_p + SD      (4)

where T_p is the propagation delay between CHs, whose value depends on the channel bit rate and the packet length (L), and SD is the super-frame duration, included to account for the delay of packets at the intermediate CHs when they miss the current super-frame due to packet aggregation from lower-level CHs.
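The per-level composition of the delay can be illustrated with a brief Python sketch of (3) and (4); the argument names are ours, and the end-node-to-CH term is assumed to come from the CSMA/CA delay model of (1).

def ch_hop_delay(t_p_s, sd_s, misses_superframe=True):
    # Eq. (4): a GTS hop costs the propagation/transmission time T_p plus
    # one super-frame duration SD when the packet misses the current
    # super-frame due to aggregation at the intermediate CH.
    return t_p_s + (sd_s if misses_superframe else 0.0)

def total_end_to_end_delay(d_end_to_ch_s, ch_hop_delays_s):
    # Eq. (3): sum of the per-level delays along the path to the sink.
    return d_end_to_ch_s + sum(ch_hop_delays_s)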

Due to space limitations, we do not include details of the analytical model for evaluating the reliability and the power consumption (the interested reader is referred to [12] for details). In the analytical model, we consider the end-to-end delay E{D} to result from the time spent during back-off (T_bo), the time wasted due to experienced collisions (T_c), and the time needed to successfully transmit a packet (T_s) [12]:

E{D} = T_bo + T_c + T_s      (5)

For simplicity, we assume that T_c = T_s. The total end-to-end delay to transmit a packet in the cluster-tree topology is assumed to be equal to the sum of the end-to-end delays along the path from the source node to the sink node. The total end-to-end delay (E{D_tot}) depends on the number of nodes and the packet generation rate in each level l, and its value is given by (3).

We compute the average total power consumed in a node (P_tot) by summing the average power consumed during back-off (P_bo), channel sensing (P_cs), packet transmission (P_tx), the idle state (P_i), buffering (P_buf), and wake-up (P_wu) [12]:

P_tot = P_bo + P_cs + P_tx + P_i + P_buf + P_wu      (6)

We calculate each of the terms in (6) by knowing the probability of being in a certain state (i.e., back-off, channel sensing, packet transmission, idle state, buffering, and wake-up) and the amount of average power consumed in that state (which depends on the hardware of the sensor node). In the cluster-tree topology, where there is synchronization between CHs, we assume that no power is consumed in back-off, channel sensing, and retransmissions. For simplicity, we assume that P_buf and P_bo are equal to P_i. The total power consumed is:

P_total = Σ_{l=1}^{Λ} P_tot,l      (7)

where Λ is the total number of levels of the cluster-tree network.

Reliability (R) is defined as the probability of successful packet reception between an end node and its CH and is given by [12]:

(8)

where N is the number of nodes in the cluster, m is macMaxCSMABackoffs (defined in [11]), and the remaining terms are variables related to the Markov chain model derived in [12]. In the cluster-tree topology, we assume that no packets are lost in the transmission between CHs, due to the employed synchronization and beacon collision avoidance mechanisms. Hence, the reliability from the lower-level CHs to the parent CH is 100% (i.e., the total reliability of the cluster-tree network is equal to R).
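The per-state power accounting in (6) and the per-level sum in (7) can be sketched as follows; the state names, probabilities, and power figures are illustrative placeholders, not MicaZ values.

STATES = ("backoff", "sensing", "tx", "idle", "buffering", "wakeup")

def node_power_w(state_prob, state_power_w):
    # Eq. (6): average node power = sum over states of
    # P(state) * average power drawn in that state (hardware dependent).
    return sum(state_prob[s] * state_power_w[s] for s in STATES)

def network_power_w(per_level_node_powers_w):
    # Eq. (7): total power summed over the Λ levels of the cluster tree.
    return sum(per_level_node_powers_w)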

V. SIMULATION RESULTS AND ANALYSIS

We use the QualNet network simulator [13] to simulate the scenario proposed in Section IV and to support the analytical model of the LDRX scheme. We initially assume that there are 10 nodes in each cluster and that there are 4 clusters with 4 CHs and a single sink node at the control room. The clusters can be two hops or one hop away from the sink (refer to Fig. 1). All sensor devices operate in the 2.4 GHz band with a data rate of 250 kbps. For increased accuracy, we run simulations for 300 seconds and average over 10 simulation runs. All nodes within a cluster use CSMA/CA to gain access to the medium. We assume that all nodes within a cluster transmit with sufficient power, which means that all nodes in a single cluster can hear each other. We also assume that the noise factor is constant over the entire network. CHs communicate during the GTS period of the super-frame. To avoid beacon frame collisions, we use the beacon frame collision avoidance approach described in [14]. In this scheme, we allocate the time so that the beacon frames and the super-frame duration of any CH are scheduled during the inactive period of its neighbor CH. We implement this approach by carefully selecting the duty cycle of each CH in the network, i.e., by selecting a specific Beacon Order (BeO) and Superframe Order (SO). The acknowledgement mechanism is activated to improve the reliability of the system. We assume that the power consumed during the buffering state, as well as the back-off state, is equal to the power consumed during the idle state. We set the transmission power to 3.5 dBm, the noise factor to 10 dB, and the packet size to 120 B. The rest of the parameters are taken from the IEEE 802.15.4 standard document [11] and the actual specification document of the MicaZ platform [15].
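For reference, the simulation settings above can be collected in a small, hypothetical configuration dictionary (plain Python, not QualNet input syntax):

sim_config = {
    "nodes_per_cluster": 10,     # initial cluster size (varied later)
    "clusters": 4,               # one CH each, plus a single sink
    "band_ghz": 2.4,
    "data_rate_kbps": 250,
    "sim_time_s": 300,
    "runs_averaged": 10,
    "tx_power_dbm": 3.5,
    "noise_factor_db": 10,
    "packet_size_bytes": 120,
}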

Fig. 3 shows the analytical and simulation results for the end-to-end delay as a function of the traffic generation rate for the default IEEE 802.15.4 MAC and the LDRX scheme. We vary the number of nodes in a single cluster to investigate the performance of our scheme for different cluster sizes. We can see that there is a significant reduction in the end-to-end delay when the LDRX scheme is used, for all traffic and network conditions. The simulation and analytical results of the LDRX scheme agree for all packet generation rates.

Fig. 4 shows the analytical and simulation results for the end-to-end reliability (defined as the packet delivery ratio from any end node in a cluster to the sink through multiple CHs) as a function of the traffic generation rate, for different numbers of nodes in a single cluster, for the default IEEE 802.15.4 MAC and the LDRX scheme. We can see that the reliability drops from 100% (for 30 nodes) as the traffic rate increases. This happens because more nodes within a single cluster contend to transmit their data, leading to more collisions. In addition, there is no significant difference in reliability between the two schemes (i.e., the default IEEE 802.15.4 MAC and LDRX). The simulation and analytical results of the LDRX scheme agree for all packet generation rates. This is a good indication that our proposed scheme succeeds in reducing the delay without affecting the system performance.

Fig. 5 shows the analytical and simulation results for the total power consumed by a single node as a function of the traffic generation rate, for different numbers of nodes in a single cluster. As the packet generation rate increases, the power consumed by a single node increases. Furthermore, a node implementing the LDRX scheme shows no difference in consumed power compared to a node implementing the default IEEE 802.15.4 MAC settings. This shows that LDRX does not increase the power consumption while reducing the end-to-end delay. The simulation and analytical results of LDRX agree for all traffic conditions.

Fig. 6 shows the simulation results for the end-to-end delay against the percentage of nodes generating high-priority traffic in a single cluster, for different cluster sizes. As the percentage of nodes generating high-priority traffic increases, the end-to-end delay increases in a linear fashion for all cluster sizes. This increase in the end-to-end delay is expected, since a larger number of nodes utilize the LDRX scheme as they generate high-priority traffic, which leads to a higher number of collisions and thus increases the delay. In the worst-case scenario, when all the nodes (100% of the nodes) generate high-priority traffic, the end-to-end delay reaches its maximum value, which is the end-to-end delay of the default IEEE 802.15.4 MAC protocol.

Fig. 7 shows the simulation results for the end-to-end delay of the LDRX, DRX [5], and FDRX [6] schemes for different traffic generation rates. LDRX outperforms DRX and FDRX for all traffic rates, which shows that LDRX is more suitable than previously proposed WSN QoS schemes for reducing the end-to-end delay and providing service differentiation to high-priority traffic.

Fig. 3. End-to-end delay.

Fig. 4. End-to-end reliability.


Fig. 5. Total power consumed.

Fig. 6. Nodes generating high-priority traffic.

Fig. 7. LDRX vs. DRX and FDRX.

VI. CONCLUSION

In this paper, we presented a delay mitigation scheme for general WSN-based monitoring applications and used smart grid monitoring as an example. The proposed scheme, namely LDRX, is tailored for a WSN with a cluster-tree topology, which is suitable for monitoring assets distributed over widespread locations, such as a number of transformers in a substation. The cluster-tree topology is the topology best suited for monitoring large areas with metal structures, multiple buildings, or obstacles, given the transmission range limitations and path loss factors of ZigBee-based WSNs.

Our simulation and analytical results showed that the LDRX scheme adaptively reduces the end-to-end delay of high-priority data packets while maintaining constant reliability and power consumption compared to the default IEEE 802.15.4 MAC protocol. The results also showed that in the worst-case scenario (i.e., when all the nodes in the cluster generate high-priority traffic), LDRX performs as well as the default IEEE 802.15.4 MAC protocol. Finally, we showed that the LDRX scheme outperforms the previously proposed DRX and FDRX schemes.

As future work, we plan to perform an experimental evaluation of the LDRX scheme in a smart grid test bed.

REFERENCES

[1] Y. Zhang, R. Yu, M. Nekovee, Y. Liu, S. Xie, and S. Gjessing, "Cognitive Machine-to-Machine Communications: Visions and Potentials for the Smart Grid," IEEE Network, vol. 26, no. 3, pp. 6-13, June 2012.
[2] W. Heinzelman, A. Chandrakasan, and H. Balakrishnan, "An Application-Specific Protocol Architecture for Wireless Microsensor Networks," IEEE Transactions on Wireless Communications, vol. 1, no. 4, pp. 660-670, October 2002.
[3] C. Wang, J. Liu, J. Kuang, A. S. Malik, and H. Xiang, "An Improved LEACH Protocol for Application-Specific Wireless Sensor Networks," in Proc. of the 5th International Conference on Wireless Communications, Networking and Mobile Computing (WiCom '09), 24-26 Sept. 2009, Beijing, China.
[4] Turbosquid 3D models. [Online]. Available: http://www.turbosquid.com/3d-models/substation-3d-model/634956.
[5] I. Al-Anbagi, M. Erol-Kantarci, and H. T. Mouftah, "A Low Latency Data Transmission Scheme for Smart Grid Condition Monitoring Applications," in IEEE Electrical Power and Energy Conference (EPEC 2012), 10-12 Oct. 2012, London, Ontario, Canada.
[6] I. Al-Anbagi, M. Erol-Kantarci, and H. T. Mouftah, "Fairness in Delay-Aware Cross Layer Data Transmission Scheme for Wireless Sensor Networks," in Proc. of the 26th Biennial Symposium on Communications (QBSC 2012), 28-29 May 2012, Kingston, Ontario, Canada.
[7] W. Sun, X. Yuan, J. Wang, D. Han, and C. Zhang, "Quality of Service Networking for Smart Grid Distribution Monitoring," in First IEEE International Conference on Smart Grid Communications, pp. 373-378, 4-6 Oct. 2010, Gaithersburg, MD.
[8] M. Youn, Y. Oh, J. Lee, and Y. Kim, "IEEE 802.15.4 Based QoS Support Slotted CSMA/CA MAC for Wireless Sensor Networks," in International Conference on Sensor Technologies and Applications (SENSORCOMM 2007), pp. 113-117, 14-20 Oct. 2007, Valencia, Spain.
[9] T. H. Kim and S. Choi, "Priority-Based Delay Mitigation for Event-Monitoring IEEE 802.15.4 LR-WPANs," IEEE Communications Letters, vol. 10, no. 3, pp. 213-215, March 2006.
[10] J. Zhu, Z. Tao, and C. Lv, "Performance Evaluation of IEEE 802.15.4 CSMA/CA Scheme Adopting a Modified LIB Model," Wireless Personal Communications, vol. 65, no. 1, pp. 25-51, January 2011.
[11] IEEE Std 802.15.4-2006, "Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate Wireless Personal Area Networks (WPANs)," IEEE, 2006. [Online]. Available: http://www.ieee802.org/15.
[12] P. Park, P. Di Marco, P. Soldati, C. Fischione, and K. H. Johansson, "A Generalized Markov Chain Model for Effective Analysis of Slotted IEEE 802.15.4," in The 6th IEEE International Conference on Mobile Adhoc and Sensor Systems (MASS '09), pp. 130-139, 12-15 Oct. 2009, Macau.
[13] QualNet network simulator. [Online]. Available: http://www.scalable-networks.com/content/.
[14] A. Koubaa, A. Cunha, and M. Alves, "A Time Division Beacon Scheduling Mechanism for IEEE 802.15.4/ZigBee Cluster-Tree Wireless Sensor Networks," in Proc. of the 19th Euromicro Conference on Real-Time Systems (ECRTS '07), 4-6 July 2007, pp. 125-135, Pisa, Italy.
[15] MicaZ Wireless Measurement System. [Online]. Available: http://www.openautomation.net/uploadsproductos/micaz_datasheet.pdf.