COLLEGE OF ENGINEERING Multi-Channel Deficit Round Robin Scheduling for Hybrid TDM/WDM Optical Networks A dissertation submitted to Swansea University in partial fulfillment of the requirements for the degree of Master of Research (MRes) by MITHILEYSH SATHIYANARAYANAN 611702 Course Title: MRes in Communication Systems Supervisors: Dr. Kyeong Soo Kim Prof. Thomas Chen College of Engineering, Swansea University October 2012
AUTHOR'S DECLARATION I hereby declare that I am the sole author of this research dissertation entitled “Multi-Channel Deficit Round Robin Scheduling for Hybrid TDM/WDM Optical Networks” and that the entire work embodied in it has been carried out and investigated by me, under the guidance of my supervisor Dr. Kim, except where cited in the references. No part of it has been submitted previously for any degree at any institution. The dissertation was carried out in accordance with the regulations of Swansea University. I authorize Swansea University to provide my thesis to other institutions or individuals for the purpose of scholarly research. Finally, I understand that my thesis will be submitted electronically to Turnitin to check its originality, and may be made electronically available to the public. Signature: NAME: PLACE: DATE:
ABSTRACT This research project proposes and investigates the performance of a multi-channel
scheduling algorithm based on the well-known deficit round robin (DRR), called multi-channel
DRR (MCDRR). The original DRR is extended to the case of multiple channels with tunable
transmitters and fixed receivers to provide efficient fair queueing in hybrid time division
multiplexing (TDM)/wavelength division multiplexing (WDM) systems. The availability of
channels and tunable transmitters is taken into account in extending the DRR, and 'rounds'
are allowed to overlap in scheduling so that channels and tunable transmitters are utilized
efficiently. Whenever a tunable transmitter becomes available, it triggers the scheduling
process. At the start of the first round, the round-robin pointer starts from the first flow;
if a packet is no larger than the deficit counter and its dedicated channel is available at
that instant, the packet is served on that channel. A packet that cannot be served, either
because it is larger than the deficit counter or because its channel is unavailable at that
instant, is skipped, as are empty flows. The pointer advances each time the tunable
transmitter triggers the scheduling process, continuing from the flow after the one just
visited, whether or not its packets were served. Once the pointer has moved through all the
flows, we define this as the completion of one round. Simulation results show that the
proposed MCDRR can provide nearly perfect fairness even in the presence of ill-behaved flows,
for different sets of interframe times and frame sizes, in hybrid TDM/WDM optical networks
with tunable transmitters and fixed receivers.
Index Terms – Multi-channel scheduling, fair queuing, tunable transmitters, hybrid
TDM/WDM, Quality of Service (QoS).
DEDICATION
I would like to dedicate this thesis to my lovely parents Bhuvaneshwari Loganathan and Sathiyanarayananan BK.
ACKNOWLEDGEMENTS
I would like to take this opportunity to express a deep sense of gratitude to my supervisor Dr. Kim for providing excellent guidance and unbelievable support during my work on this thesis, and for making me realize the potential I have within.
It is due to Dr. Kim that I was introduced to the subject of Network Protocols and Architecture and I thank him for guiding me through the research process. My experience of working as a student of Dr. Kim has been an invaluable asset to me, which will benefit my whole life.
Special thanks to my second supervisor Prof. Thomas Chen for his worldly advice and guidance.
It is because of their constant and genuine interest and assistance that this project has been successful.
My sincere gratitude to Swansea University for making the International Scholarship Bursary possible.
I would also like to thank my family members for their precious advice and sincere support at all times. They have been a source of encouragement and inspiration throughout this project.
Finally, my thanks go to Sharanya for her everlasting love and care. There has been more than one occasion where she motivated me when I used to get mentally exhausted.
Figure 2 : Block diagram of a hybrid TDM/WDM link based on tunable transmitters and fixed receivers
Figure 3 : General model of FIFO
Figure 4 : General model of Multi Queuing
Figure 5 : General model of Round Robin Scheduling
Figure 6 : General model of DRR Scheduling
Figure 19 : DRR transmission diagram after each round
Figure 20 : Block diagram of a hybrid TDM/WDM link based on tunable transmitters and fixed receivers
Table 7 : Simulation Parameters for Scenario 1
Table 8 : Simulation Parameters for Scenario 2
Table 9 : Simulation Parameters for Scenario 2
Table 10 : Simulation Parameters for Scenario 3
Table 11 : Simulation Parameters for Scenario 3
Glossary of Terms
BPS – Bits Per Second
DC – Deficit Counter
DRR – Deficit Round Robin
FQ – Fair Queuing
GPS – Generalized Processor Sharing
LR – Latency rate
MDRR – Modified Deficit Round Robin
MCDRR – Multi-channel Deficit Round Robin
OLT – Optical Line Terminal
ONU – Optical Network Unit
PON – Passive Optical Network
PS – Packet Size
PQ – Priority Queuing
QoS – Quality of Service
QS – Quantum Size
RF – Relative Fairness
RFB – Relative Fairness Bound
RR – Round Robin
SFQ – Stochastic Fair Queuing
TDM – Time Division Multiplexing
WDM – Wavelength Division Multiplexing
WFQ – Weighted Fair Queuing
WRR – Weighted Round Robin
Chapter 1
Introduction
In today's network environment, new and better scheduling techniques are continually
sought to manage the complexity of large volumes of data packets and to provide the better
fairness, higher throughput, and lower latency demanded of many network
resources.
1.1 Basic Introduction to the Scheduling algorithms
Scheduling is a method of harmonizing the access to system resources among competing data
flows [1]. It is achieved by specifying the order and the allotted time period for packets from
each flow. Scheduling is an important part of networking systems because it not only enables
the sharing of the bandwidth but also guarantees the quality of service (QoS). Well-designed
scheduling algorithms could provide higher throughput, lower latency, and better fairness
with lower complexity in serving packets. As such, scheduling plays an important role
in achieving high performance in networking systems. The utility of a network mainly
depends on the performance of its scheduling scheme.
1.2 Purpose of the Multi-Channel Scheduling
The scheduling of packets in switches and routers has become an important topic in
networking and communication [1]. Due to its importance in the networking and
communication, the scheduling has been extensively studied but mainly in the context of
single-channel communication. The advent of wavelength division multiplexing (WDM)
technology, however, demands the extension of this packet scheduling problem to the case of
multi-channel communication, especially with tunable transmitters in hybrid time division
multiplexing (TDM)/WDM systems. The main objective
of the multi-channel scheduling is to schedule the transmissions of the data over multiple
channels to the users. The important measures in choosing a scheduling algorithm are
throughput, latency, fairness, and complexity. The major focus of existing work is mostly on
the throughput and delay performance of the scheduling algorithm in SUCCESS-HPON [2],
but there is hardly any support for fairness and Quality of Service (QoS) guarantee. The main
objective of this thesis is to study the multi-channel scheduling in hybrid TDM/WDM optical
networks with tunable transmitters and fixed receivers providing fairness in throughput. In
this thesis we propose and investigate the performance of a multi-channel deficit round-robin
(MCDRR) scheduling algorithm which can provide throughput fairness among flows with
different size packets with O(1) processing per packet, i.e., in constant time. The time
complexity of an algorithm is the amount of time it takes to run as a function of the size
of its input. Some common running-time complexities are as follows:
Running Time    Function
O(1)            Whatever the input, this returns in a fixed, finite time (constant time).
O(log N)        Looking up a value in a sorted input array by bisection.
O(N log N)      Sorting an input array with a good comparison-based algorithm.
Table 1 : Time complexity and its function
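To make these complexity classes concrete, here is a small illustrative Python sketch (the array and the values looked up are arbitrary examples, not from the thesis):

```python
import bisect

data = list(range(1, 1001))   # a sorted input array of 1000 values

# O(1): a direct index access returns in fixed time whatever the input size
first = data[0]

# O(log N): looking up a value in a sorted array by bisection
idx = bisect.bisect_left(data, 500)

# O(N log N): sorting an input array with a good comparison-based algorithm
ordered = sorted(data[::-1])

print(first, data[idx], ordered == data)
```

An O(1)-per-packet scheduler performs only a constant number of such fixed-cost operations for each packet it serves, regardless of how many flows are backlogged.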
1.3 Background of the Multi-Channel Scheduling and Scheduling Algorithms in brief
Initially, the First Come First Served (FCFS) scheme [16], also known as First In First
Out (FIFO), was used for packet scheduling. Multi-queuing was introduced later, followed
by priority queuing.
The packet schedulers are classified into two categories [17]:
• Timestamp-based schedulers
• Frame-based schedulers
Some of the timestamp-based schedulers are Fair Queuing (FQ), Weighted Fair Queuing
(WFQ), Virtual Clock (VC) and Self-Clocked Fair Queuing (SCFQ) etc. and some of the
frame-based schedulers are Round Robin (RR), Deficit Round Robin (DRR), Weighted
Round Robin (WRR), etc. [17]. All of the above schedulers are explained in detail in
Chapter Two.
The broad spread of packet data networks in communications has created a driving force
towards improved Quality of Service (QoS), and packet schedulers are a primary building
block of QoS. In this thesis, we introduce a new frame-based scheduling technique called
Multi-channel Deficit Round Robin (MCDRR), designed to provide lower latency bounds,
better fairness, and maximum throughput. The MCDRR is computationally very efficient,
with an O(1) per-packet work complexity.
The work in this project is based on the deficit round-robin (DRR) scheduling algorithm
which extends the simple round-robin with deficit counters [3]. In the basic DRR scheme,
stochastic fair queuing (SFQ) is used to assign flows to queues [4]. DRR builds on
round-robin scheduling, which services the queues with a quantum of service assigned to
each flow. The DRR scheduler, in constant rotation, selects packets from each flow to send
out to their destinations, so packets from all backlogged flows get a fair chance to be
transmitted. DRR maintains an active list (also called a service list), which keeps a
record of the non-empty queues in a round so that empty queues need not be examined. In
the DRR mechanism, the scheduler visits each non-empty queue in the active list and serves
packets according to the quantum size. It differs from traditional round-robin in that, if
a packet in the previous round was too large compared to the quantum size, the remainder
of the previous quantum is added to the quantum for the present round, so that the packet
can be served in the present round; this carries over to subsequent rounds as well. Queues
that are not serviced completely in a round are thus compensated in the next round. For
one particular flow to be serviced again, it must wait for the other N-1 flows to be
serviced, irrespective of its weight.
During each round, a flow can transmit as many packets at once as the available quantum
allows. For each flow, two variables, the quantum and the deficit counter, are maintained.
The quantum is the amount of credit, in bytes, assigned to each flow within the period of
one round. Choosing the quantum size carefully is important, as a poor choice can cause
fairness issues. In general, if we expect O(1) work per packet from the DRR, the quantum
for a flow should be larger than the maximum packet size of the flow, so that at least one
packet per backlogged flow can be served in a round [3]. For various reasons, the DRR has
poor delay and burstiness properties.
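The DRR mechanics described above can be sketched in Python. This is an illustrative simplification, not the thesis's simulation code: packets are modeled simply as byte sizes, and the quantum and queue contents below are arbitrary example values.

```python
from collections import deque

def drr_round(queues, deficit, quantum):
    """Serve one DRR round: each backlogged flow has `quantum` bytes of credit
    added to its deficit counter, then sends head-of-line packets while the
    counter covers the packet size. Empty flows are skipped and lose credit."""
    served = []
    for i, q in enumerate(queues):
        if not q:
            deficit[i] = 0            # an idle flow does not accumulate credit
            continue
        deficit[i] += quantum
        while q and q[0] <= deficit[i]:
            pkt = q.popleft()
            deficit[i] -= pkt
            served.append((i, pkt))   # (flow index, packet size in bytes)
    return served

# Three flows holding packets of different sizes (bytes), quantum = 1000
queues = [deque([500, 700]), deque([1500]), deque([300, 300])]
deficit = [0, 0, 0]
round1 = drr_round(queues, deficit, 1000)
round2 = drr_round(queues, deficit, 1000)
print(round1)   # flow 1's 1500-byte packet exceeds its credit and must wait
print(round2)   # its deficit carried over, so the packet is served in round 2
```

The trace shows the compensation property: flow 1 cannot send its 1500-byte packet in round 1 on a 1000-byte quantum, but the unused credit carries over, so the packet goes out in round 2.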
1.4 Motivation
1.4.1 Deficit Round-Robin
DRR is also called Deficit Weighted Round Robin (DWRR). It is a modified weighted
round-robin scheduling discipline, and it has become one of the popular fair scheduling
schemes that can handle variable packet sizes. As described in Section 1.3, DRR extends
round-robin scheduling with deficit counters [3], with stochastic fair queuing (SFQ) used
to assign flows to queues in the basic scheme [4]. For each flow, two variables are
maintained: the quantum and the deficit counter. The deficit counter is the amount of
credit, in bytes, that is continually compared against the packet sizes in the flow; it
starts at zero and is incremented by the configured quantum. The quantum is the amount of
credit, in bytes, assigned to each flow within the period of one round, where one round is
the round-robin pointer moving once through the set of flows defined by the deficit
counters. Choosing the quantum size carefully is important, as a poor choice can cause
fairness issues. In general, if we expect O(1) work per packet from the DRR, the quantum
for a flow should be larger than the maximum packet size of the flow, so that at least one
packet per backlogged flow can be served in a round [3].
1.4.2 Multi-Channel Scheduling
The “Design and Performance analysis of scheduling algorithms for WDM-PON under
SUCCESS-HPON architecture'' [2] gives an in-depth idea of using scheduling algorithms for
the multi-channel case. We consider only tunable transmitters in the multi-channel case.
The multi-channel scheduling and its applications rely on the ability of the whole system to
provide some sort of quality of service guarantees. There are several measures that are to be
considered when choosing a scheduling algorithm. The most important are: fairness, latency
and complexity [5]. In multi-channel scheduling, the scheduling algorithm is the key to
achieving high performance. The major focus of existing work is on the delay and
throughput performance of the whole system; there is hardly any support for fairness and
Quality of Service (QoS) guarantees [5]. Most of the work on FQ has been done for the
single-channel case; there are only a few results for the multi-channel case, and none
using the efficient DRR technique. The DRR is a simple scheduling algorithm for the
single-channel case; we extend it to the multi-channel case as MCDRR.
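As a rough illustration of this extension, the sketch below gives one plausible reading of the mechanism described in the abstract, with simplified single-visit semantics; the flow structure, channel names, and top-up rule are assumptions for illustration, not the thesis's algorithm. Whenever the tunable transmitter becomes available, the round-robin pointer looks for the next flow whose dedicated channel is free and whose deficit counter, topped up by the quantum, covers its head-of-line packet.

```python
from collections import deque

def mcdrr_serve_next(flows, deficit, quantum, busy_channels, pointer):
    """Triggered when the tunable transmitter becomes available: starting at
    `pointer`, visit flows round-robin and serve one head-of-line packet.
    A flow is skipped if it is empty or its dedicated channel is busy; a
    packet larger than the topped-up deficit counter is also passed over."""
    n = len(flows)
    for step in range(n):
        i = (pointer + step) % n
        channel, queue = flows[i]
        if not queue or channel in busy_channels:
            continue
        deficit[i] += quantum
        if queue[0] <= deficit[i]:
            pkt = queue.popleft()
            deficit[i] -= pkt
            return (i, channel, pkt)   # flow index, channel used, bytes sent
    return None                        # nothing servable at this instant

# Each flow is (dedicated channel, packet queue); channel "ch1" is busy
flows = [("ch0", deque([400])), ("ch1", deque([1200])), ("ch0", deque([600]))]
deficit = [0, 0, 0]
result = mcdrr_serve_next(flows, deficit, 1000, busy_channels={"ch1"}, pointer=0)
print(result)   # flow 0 is served on its free dedicated channel
```

The key difference from single-channel DRR is the extra skip condition: a flow whose dedicated wavelength is occupied is passed over at this scheduling opportunity rather than blocking the transmitter.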
1.5 Review of the Existing Work
As for multi-channel scheduling with tunable transceivers in hybrid TDM/WDM optical
networks, the work for the SUCCESS-HPON architecture in [2] provides a detailed
investigation of several multi-channel scheduling algorithms, such as batching earliest
departure first (BEDF) and sequential scheduling with schedule-time framing (S3F), under
realistic environments, and is one of the bases for the work in this thesis. Through
extensive simulations using tunable transmitters and receivers, it has been demonstrated
that both BEDF and S3F improve the throughput and delay performance. Note that we consider
the case of tunable transmitters and fixed receivers in this thesis, while the
SUCCESS-HPON architecture is based on both tunable transmitters and tunable receivers.
The proposed MCDRR is based on the deficit round-robin (DRR) scheduling algorithm, which
extends the simple round-robin with deficit counters [3]. The DRR provides good fairness,
lower complexity, and lower implementation cost, which makes it an ideal candidate for
high-speed gateways and routers. Its mechanism, its quantum and deficit counter variables,
and the choice of quantum size required for O(1) work per packet are described in Sections
1.3 and 1.4.1. For various reasons, however, the DRR has poor delay and burstiness
properties.
Note that multi-channel scheduling has been studied by others in contexts slightly
different from ours. For instance, optimal wavelength scheduling for hybrid WDM/TDM
Passive Optical Networks (PONs) [5] examines upstream wavelength scheduling in hybrid
wavelength division multiplexing and time division multiplexing passive optical networks
(WDM/TDM PONs), where the minimum resource allocation unit is a time slot on a wavelength.
They use three optimal wavelength scheduling algorithms for the three kinds of hybrid
WDM/TDM PONs [5].
• Type-I WDM/TDM PONs: Each optical network unit (ONU) has a single optical
transmitter with a tunable wavelength.
• Type-II WDM/TDM PONs: Each ONU still has a single transmitter, but some are
fixed to transmit at a certain wavelength.
• Type-III WDM/TDM PONs: Each ONU has one or more transmitters and all
transmitters can tune their wavelengths.
They proposed algorithms based on the round-robin scheduling (RRS) and shortest channel
first (SCF) concept to calculate the optimal schedule length and achieve the best wavelength
scheduling with the shortest schedule length and the maximum channel utilization.
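The shortest-channel-first idea can be illustrated with a small greedy sketch. The request lengths and channel count below are hypothetical, and this is a simplified version of the concept rather than the optimal algorithms of [5]: each transmission request is placed on the wavelength whose accumulated schedule is currently shortest.

```python
def shortest_channel_first(requests, num_wavelengths):
    """Greedily assign each request (a transmission length in time slots) to
    the wavelength whose accumulated schedule is currently the shortest."""
    loads = [0] * num_wavelengths
    assignment = []
    for length in requests:
        w = loads.index(min(loads))   # pick the shortest channel so far
        assignment.append(w)
        loads[w] += length
    return assignment, max(loads)     # schedule length = busiest channel

requests = [5, 3, 4, 2, 6]            # example ONU transmission lengths, in slots
assignment, schedule_len = shortest_channel_first(requests, 2)
print(assignment, schedule_len)
```

Balancing the per-wavelength loads in this way is what drives the schedule length down and the channel utilization up, which is the stated goal of the algorithms in [5].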
Also, to provide a fairness guarantee with multiple channels, the extension of fair queueing
(FQ) has been studied in [6], but they are for fixed transceivers as in static WDM systems.
The closest to our work in this thesis is the study of multi-server round-robin scheduling
in [7]. Unlike the hybrid TDM/WDM optical network, where a specific wavelength is dedicated
to a specific destination, however, they assume that flows can use any of multiple channels.
1.6 Problem Description
The thesis is mainly based on tunable transmitters and fixed receivers in a multi-channel system.
We aim to extend the fair queuing (FQ) framework to the case of multiple channels with tunable
transmitters and fixed receivers by designing multi-channel schedulers for hybrid TDM/WDM PONs.
This requires investigating the performance of a multi-channel scheduling algorithm that can reduce
packet delay and provide fairness (in terms of throughput) for flows with different size packets at
O(1) processing cost per packet, while also reducing end-to-end delay. We need to present an
extensive set of performance and validation test results obtained from simulations conducted using
network models built for each of the scheduling algorithms under consideration. Later,
we discuss the results in detail and explain what needs to be carried out in the future.
If time permits, we will provide mathematical bounds for the results obtained.
1.7 Aims and Objectives of the Thesis
1.7.1 The aim of the thesis [8]:
In this research program, we propose new multi-channel scheduling algorithms with tunable
transmitters and fixed receivers which provide fairness and QoS guarantees in hybrid
TDM/WDM optical networks (next-generation networks) for flows with different size packets.
1.7.2 The objectives of the thesis are [8]:
• To extend fair queuing (FQ) framework to the case of multiple channels with tunable
transmitters and fixed receivers.
• To design multichannel schedulers using tunable transmitters for hybrid TDM/WDM
PONs.
• To design and evaluate the performance of schedulers providing fairness and QoS
guarantee.
• To reduce packet delay and improve the throughput of the system (providing fairness).
• To implement detailed simulation models for the designed multi-channel scheduler
for hybrid TDM/WDM PON and evaluate under realistic network and service
environment.
• Finally, to provide mathematical bounds for the simulation models.
1.8 Thesis Layout
Chapter One gives an introduction to the concept of multi-channel scheduling and the
importance of quality of service. Furthermore, the motivation behind this topic and the
aim of the thesis are identified, along with a description of the general layout of the
forthcoming chapters.
Chapter Two gives an overview of multi-channel scheduling and scheduling algorithms in
detail. The review of the existing work is also described in detail.
Chapter Three describes the Deficit Round Robin Scheduling in detail with an algorithm and
an example, its advantages and disadvantages.
Chapter Four is the core of the thesis: it discusses the Multi-Channel Deficit Round
Robin (MCDRR) scheduling algorithm in detail. The algorithm and an example are explained
in this chapter.
Chapter Five presents the simulations and discusses the outcomes. We present an extensive
set of performance and validation test results obtained from simulations conducted using
network models built for each one of the scheduling algorithms under consideration.
Finally, Chapter Six concludes the work done, and future work is discussed
in detail.
Chapter 2
Overview
2.1 Optical Access
A passive optical network (PON) [9] is a point-to-multipoint service network which has an
optical line terminal (OLT) at the service provider's central office and a number of optical
network units (ONUs) at the end users as shown in figure 1. The basic building blocks of a
PON are OLT, ONUs and the fibers and splitters between them. The OLT provides the
interface between the backbone network and the PON. The ONU terminates the PON and
provides service to the users. PONs are deployed for the provision of Fiber-To-The-Home
(FTTH) services by utilizing their ability to share the fiber bandwidth among users in an
economical way. A PON is called passive because there are no active elements within the
access network other than at the central office, although this is not strictly true in all
deployments. A PON enables major services such as voice, video, and data for its users.
A PON provides users with the following services [9]:
• Digital Entertainment:
1) IPTV
2) Video on Demand
3) Video Telephony
4) Audio on Demand etc
• Broadband Data services:
1) High Speed Internet access.
2) Data (Ethernet)
3) Telephone services
4) VoIP (Voice over IP) Telephony
The different types of PON configurations are [10]:
• Time Division Multiplexing (TDM) PONs:
1) Asynchronous Transfer Mode (ATM) PON
2) Broadband PON (BPON)
3) Gigabit PON (GPON)
4) Ethernet PON (EPON)
• Wavelength Division Multiplexing (WDM) PONs
• Hybrid TDM/WDM PONs (HPONs)
In this thesis, we concentrate on Hybrid TDM/WDM PONs (HPONs), often described as
next-generation networks. Combining TDM and WDM gives good transmission properties,
favorable economics, and easy upgradability. A detailed explanation of HPONs follows:
Figure 1 : Hybrid Passive Optical Network
Hybrid Passive Optical Network (HPON) is a combination of Time Division Multiplexing
(TDM) and Wavelength Division Multiplexing (WDM) technologies. To obtain the best topology
properties for data transmission, we use both TDM and WDM, each of which has its pros and
cons; broadly, the disadvantages of TDM are the advantages of WDM [8].
In TDM-PON, capacity is good, cost is low, and reliability is best, but upgradability is
difficult; in WDM-PON, capacity is better, upgradability is easy, and reliability is
better, but the cost is higher [8].
Characteristics of TDM-PON and WDM-PON [11]
Characteristics TDM-PON WDM-PON
Capacity Good Better
Cost Low Higher
Upgradability Difficult Easy
Reliability Best Better
Table 2 : Characteristics of TDM-PON and WDM-PON
Traffic between the OLT and ONT is carried as a WDM multiplex, which an arrayed waveguide
grating (AWG) demultiplexer divides into individual wavelengths that are then sent onward
through the topology. The part of the network behind the WDM demultiplexer uses TDM
multiplexing, with the TDM demultiplexer located close to the target area.
The WDM-PON OLT and ONU under the Stanford University aCCESS Hybrid PON
(SUCCESS-HPON) architecture [2] form the target system for this thesis. SUCCESS-HPON is
a research initiative for a next-generation hybrid TDM/WDM optical access architecture.
The SUCCESS-HPON architecture is based on a ring-plus-distribution-tree topology, fast
centralized tunable components, and novel scheduling algorithms which exploit the benefits
of flexible, dynamically reconfigurable optical access networks. It guarantees a
smooth transition from current TDM-PONs to future WDM-based optical access. The
upgrade path from pure TDM-PON to WDM-PON under the SUCCESS-HPON initiative is designed
to be economical.
The SUCCESS-HPON [2] research initiative has addressed several key questions:
• Backward compatibility, which guarantees the coexistence of current-generation
TDM-PONs and next-generation WDM-PONs in the same network.
• Easy upgradeability, which means providing smooth migration paths from TDM-
PON to WDM-PON.
• Protection/restoration capability, which means to support both residential and
business users on the same access infrastructure.
The SUCCESS-HPON architecture [2] includes both TDM-PONs and WDM-PONs as its
subsystems to get the benefits of both. The architecture consists of a collector ring with
stars connecting the central office (CO) and optical network units (ONUs). The ONUs are
attached to remote nodes (RNs). An optical line terminal (OLT) uses tunable transmitters
and receivers that are shared by all ONUs; the tunable transmitters at the OLT are used
for both upstream and downstream transmissions. To handle this sharing of tunable
transmitters and receivers at the OLT, and the use of tunable transmitters for both
upstream and downstream transmissions, two new scheduling algorithms are used, namely
batching earliest departure first (BEDF) and sequential scheduling with schedule-time
framing (S3F), to provide good transmission efficiency and a fairness guarantee between
upstream and downstream traffic.
Comparison of HPON versus 10G PON [12]
Characteristics         HPON                      10G PON
Feeder Fiber Capacity   40 Gbps ~ 64 Gbps         10 Gbps
Line Bit Rate           Dn: 1.25 ~ 2.5 Gbps       Dn: 10 Gbps
                        Up: 1.25 ~ 2.5 Gbps       Up: 1.25 ~ 2.5 Gbps
Reach                   ~40 km                    ~20 km
Splitting Ratio         ~512                      ~32
Av. BW/Subscriber       Symmetric                 Asymmetric
                        Dn: 78 ~ 125 Mbps         Dn: 312 Mbps
                        Up: 78 ~ 125 Mbps         Up: 31.2 ~ 312 Mbps
Table 3 : Comparison of HPON and 10G PON
In HPON, the feeder fiber capacity averages around 50 Gbps, with a reach of almost 40 km and a splitting ratio of 1:512, which is quite significant for this project. The detailed explanation is as follows:
Figure 2 : Block diagram of a hybrid TDM/WDM link based on tunable transmitters and fixed receivers
Multi-channel scheduling with tunable transmitters and receivers in WDM-PONs has been
demonstrated by simulations and test-bed experiments through projects under the SUCCESS
initiative [5, 6]. In SUCCESS-HPON, the upstream flows are scheduled together with the
downstream flows in an integrated way, as the two are interdependent.
In this thesis, we consider a hybrid TDM/WDM link using tunable transmitters at the transmitting side and fixed receivers at the receiving ends. This hybrid TDM/WDM link model, shown in Figure 2, is general enough to be applied to downstream flows in a hybrid TDM/WDM-PON, and its performance needs to be investigated.
2.2 Quality of Service
Quality of Service (QoS) [13] in the field of communications can be defined as the set of specific performance requirements a network must meet in order to deliver a required service to its users. Equivalently, QoS is a set of methods for establishing reliable, predictable network performance, for example by guaranteeing a certain bandwidth (throughput). The aspects of network performance governed by QoS typically include metrics such as bandwidth, delay, and packet loss.
In frame-based schedulers, packets are served in rounds or frames. During each round, a flow receives at least one opportunity to transmit its packets. These schedulers are known for their simplicity and low work complexity. Examples of frame-based schedulers include Round Robin (RR), Weighted Round Robin (WRR), and Deficit Round Robin (DRR) [17]. We now review some of the well-known schedulers.
2.3.2 Fair Queuing (FQ)
Fair queuing is a scheduling algorithm that allows packets from multiple flows to share the link capacity fairly. It is a timestamp-based scheduling scheme and can be interpreted as a packet-by-packet approximation of generalized processor sharing (GPS). Fair queuing was first proposed by John Nagle in 1985 [18].

Fair queuing achieves max-min fairness: first, the minimum data rate achieved by any flow is maximized; then the second-lowest data rate is maximized, and so on. This fairness can come at the cost of lower overall throughput.
Compared to round-robin scheduling, fair queuing takes packet sizes into account to ensure that each flow gets an equal opportunity to transmit an equal amount of data, even though packets are actually transmitted in sequence. Fair queuing selects the transmission order by computing, for each packet, the finish time it would have under a modeled bit-by-bit round robin; the packet with the earliest modeled finish time is selected for transmission next [19].
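As an illustration, the earliest-finish-time selection described above can be sketched in a few lines of Python. This is a simplified model under the assumption that all flows are continuously backlogged (so the system virtual time can be ignored); the function name `fq_order` and the flow data are illustrative, not from the thesis.

```python
def fq_order(flows):
    """flows: dict mapping flow name -> FIFO list of packet sizes (bytes)."""
    finish = {f: 0 for f in flows}   # running virtual finish time per flow
    queues = {f: list(p) for f, p in flows.items()}
    order = []
    while any(queues.values()):
        # modeled finish time of the head packet of each non-empty flow
        cand = {f: finish[f] + q[0] for f, q in queues.items() if q}
        f = min(cand, key=cand.get)  # earliest modeled finish time wins
        size = queues[f].pop(0)
        finish[f] = cand[f]
        order.append((f, size))
    return order

# A single large packet on flow A does not delay flow B's small packets:
print(fq_order({"A": [1000], "B": [100, 100, 100]}))
# -> [('B', 100), ('B', 100), ('B', 100), ('A', 1000)]
```

Note how the large packet is transmitted last even though it was queued first; this is exactly the bitwise round-robin ordering that the finish times approximate.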
2.3.3 Weighted Fair Queuing (WFQ)
Weighted fair queuing (WFQ) is a scheduling scheme that assigns different scheduling priorities to statistically multiplexed data flows [20]. WFQ is a generalization of fair queuing (FQ) and emulates Generalized Processor Sharing (GPS), a theoretical scheduler that cannot be implemented exactly. Like FQ, it is a timestamp-based scheduling scheme. WFQ supports flows with different bandwidth requirements by giving each queue a weight that assigns it a different percentage of the output port bandwidth. It supports variable-length packets, so that flows with larger packets are not allocated more bandwidth than flows with smaller packets. Supporting variable-length packets, however, considerably increases the computational complexity of this discipline, since the finish time of each packet must be calculated. The finish time is the number assigned to each packet to express the order in which packets should be transmitted on the output port; WFQ was proposed by Demers, Keshav, and Shenker in 1989 [19].
Advantages:
• Protection for each service class.
• Performs well with variable-size packets.
• Gives each service class a weighted, fair share of the output port bandwidth with a bounded delay.
Limitations:
• It can be implemented only in software.
• Ill-behaved flows are not protected.
• Scalability problems.
• High complexity, which limits its use on high-speed interfaces.
• There may be large delays, even though delay bounds are guaranteed.
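The weighted finish-time computation described above can be sketched similarly. The following is a toy model, again assuming all flows stay backlogged so the system virtual time can be ignored; each flow's finish time advances by packet_size / weight, so a flow with twice the weight receives roughly twice the bandwidth. Names and numbers are illustrative.

```python
def wfq_order(flows, weights):
    """flows: dict flow -> FIFO list of packet sizes; weights: dict flow -> weight."""
    finish = {f: 0.0 for f in flows}
    queues = {f: list(p) for f, p in flows.items()}
    order = []
    while any(queues.values()):
        # weighted finish time of each head packet: lower weight -> later finish
        cand = {f: finish[f] + q[0] / weights[f] for f, q in queues.items() if q}
        f = min(cand, key=cand.get)
        order.append((f, queues[f].pop(0)))
        finish[f] = cand[f]
    return order

# Flow A (weight 2) is served about twice as often as flow B (weight 1):
print(wfq_order({"A": [100, 100, 100], "B": [120, 120]}, {"A": 2, "B": 1}))
# -> [('A', 100), ('A', 100), ('B', 120), ('A', 100), ('B', 120)]
```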
2.3.4 Virtual Clock (VC)
Virtual Clock [21] is derived from a Time Division Multiplexing (TDM) system. It allocates a guaranteed throughput to each flow according to its expected packet arrival rate. It is a timestamp-based scheduling scheme. The Virtual Clock (VC) scheduling algorithm provides the same end-to-end delay bound as Weighted Fair Queuing (WFQ) with a simpler timestamp computation. The disadvantage of this algorithm is that backlogged packets can be starved for a long time [19].
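The simple timestamp computation mentioned above can be illustrated as follows. In Virtual Clock, each flow i reserves a rate r_i, and a packet of length L arriving at time t is stamped VC_i = max(t, VC_i) + L / r_i; packets are then transmitted in stamp order. The sketch below is a toy example with assumed arrival times and a reserved rate of 1000 bytes per time unit.

```python
def vc_stamp(vc, arrival_time, length, rate):
    """Advance a flow's virtual clock for one packet and return its stamp."""
    return max(arrival_time, vc) + length / rate

vc = 0.0
for t, length in [(0.0, 500), (0.0, 500), (5.0, 500)]:
    vc = vc_stamp(vc, t, length, rate=1000.0)
    print(vc)   # 0.5, then 1.0, then 5.5 (the clock resyncs to the late arrival)
```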
2.3.5 Round Robin Scheduling (RR Scheduling)
Round-robin (RR) scheduling is one of the earliest and simplest frame-based scheduling techniques used in networking, and the least complex, with a per-packet work complexity of O(1) [22]. The RR algorithm services the backlogged queues in round-robin fashion: each time the scheduler pointer reaches a queue, one packet is dequeued from that queue (unless it is empty) and the pointer then moves on to the next queue. This is shown in Figure 5.

Round-robin scheduling does not provide good fairness in systems with variable packet lengths, since it tends to serve flows with longer packets for a disproportionately long time [19].
Figure 5 : General model of Round Robin Scheduling
Advantages:
• It is simple.
• O(1) per-packet work complexity.
Limitations:
• Poor fairness
• Poor delay
2.3.6 Weighted Round Robin Scheduling (WRR Scheduling)
A Weighted Round Robin (WRR) scheduler [23] is the simplest approximation of generalized processor sharing (GPS); it serves multiple packets from a flow according to the flow's normalized weight. WRR operates on the same basis as round-robin scheduling but, unlike plain round robin, assigns a weight to each queue, defined as that flow's relative share of the total link bandwidth (the available system bandwidth). WRR was designed to differentiate flows or queues so as to enable different service rates: the number of packets dequeued from a queue varies according to the weight assigned to that queue. To overcome unfairness, the mean packet size of each flow must be known prior to scheduling [19].
Advantages:
• It is simple
• It has O(1) computational complexity per packet.
Limitations:
• Poor fairness
• Poor delay
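One WRR round can be sketched as follows: each queue is visited once per round and up to weight[i] packets are dequeued from it. The example also shows the fairness problem with variable packet sizes: a low-weight queue holding one large packet still sends more bytes than a higher-weight queue of small packets. Function and variable names are illustrative.

```python
def wrr_round(queues, weights):
    """Serve one WRR round: up to weights[i] packets from each queue."""
    served = []
    for i, q in enumerate(queues):
        for _ in range(weights[i]):
            if not q:
                break               # queue exhausted before its weight is used up
            served.append((i, q.pop(0)))
    return served

# Queue 0 (weight 2) sends 200 bytes, but queue 1 (weight 1) sends 1500:
print(wrr_round([[100, 100, 100], [1500], [200, 200]], [2, 1, 1]))
# -> [(0, 100), (0, 100), (1, 1500), (2, 200)]
```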
2.3.7 Deficit Round Robin Scheduling (DRR Scheduling)
The Deficit Round Robin (DRR) [17] scheduling algorithm was the first frame-based scheduling algorithm to overcome the unfairness caused by variable packet sizes across different flows, a weakness of both the round-robin and weighted round-robin algorithms. DRR retains the O(1) per-packet complexity of round-robin scheduling.

In DRR scheduling, every flow is allotted a deficit counter, which initially starts at zero and is incremented by the quantum in each round. The deficit counter holds the flow's current credit in bytes, against which the sizes of the packets in the flow are compared. A quantum is the amount of credit, in bytes, assigned to each flow within the period of one round; it represents the number of bytes a queue may use to serve the packets in its flow. Choosing the quantum size carefully is important, since a poor choice can cause fairness issues.
The deficit counter is incremented by the quantum in every round of the scheduler and reduced by the size of each packet served on every visit. A packet is served only if its size is not greater than the deficit counter; otherwise the packet remains backlogged and is checked against the deficit counter in the next round, by which time the quantum has been added to the counter again so the packet can be served. Once packets are served, the deficit counter is reduced by the number of bytes served. If there are no packets left to serve in a flow, its deficit counter is reset to zero.
If the quantum assigned to a flow is significantly smaller than the maximum size of the packets arriving at the system, this causes delay and burstiness. Conversely, if the quantum is significantly larger than the maximum packet size, this causes short-term unfairness, leading to a higher latency bound that may grow gradually. The DRR algorithm also requires knowledge of the packet size prior to scheduling; this information is available in the header of IP packets.
Figure 6 : General Model of DRR Scheduling
Figure 7 : DRR Tx Diagram
The DRR Tx diagram plots packets on the y-axis against time on the x-axis. It shows that Round 2 starts only after the completion of Round 1.
Advantages:
• It is simple
• Cost-effective
• It has a complexity of O(1).
Limitations:
• Its latency bound is a problem for real-time applications.
• It does not provide end-to-end delay guarantees.
• It is not as accurate as other schedulers.
On the other hand, DRR is one of the most popular and widely deployed schemes. DRR is Cisco's favorite scheduling algorithm due to its simplicity and low implementation cost. Compared with a Fair Queuing (FQ) scheduler, which has a complexity of O(log N) (where N is the number of active flows), the complexity of DRR is O(1).
Taking into account the drawbacks of the aforementioned schedulers, we choose the well-known Deficit Round Robin (DRR) scheduler as the queueing and packet-scheduling scheme for our framework. The scheduler we design is conceptually based on the DRR approach [3]. We propose an enhancement of DRR called Multi-Channel Deficit Round Robin (MCDRR). MCDRR builds on the single-channel DRR concept, extending the original DRR to the case of multiple channels with tunable transmitters and fixed receivers to provide efficient fair queueing in hybrid time division multiplexing (TDM)/wavelength division multiplexing (WDM) optical networks. In extending DRR we take into account the availability of channels and tunable transmitters, and we allow 'rounds' to overlap in scheduling so as to utilize channels and tunable transmitters efficiently. Simulation results show that the proposed MCDRR can provide nearly perfect fairness, even with ill-behaved flows, under different sets of conditions for interframe times and frame sizes in hybrid TDM/WDM optical networks with tunable transmitters and fixed receivers.
Scheduling algorithms are the potential solution of main concern here, and the well-known algorithms have been briefly described above. Among them, and with the aim of achieving efficiency in multi-channel scheduling, the attributes, properties, and architecture of the Multi-Channel Deficit Round Robin (MCDRR) scheduling algorithm have been studied in detail.
2.4 Summary
This chapter presented a brief review of the two major categories of packet schedulers, frame-based and timestamp-based, describing several scheduling algorithms along with their advantages and disadvantages. As mentioned before, this thesis is mainly concerned with frame-based schedulers because of their efficiency and simplicity, and among them we choose DRR, one of the most popular and widely deployed schemes and Cisco's favorite scheduling algorithm due to its simplicity and low implementation cost. Compared with a Fair Queuing (FQ) scheduler, which has a complexity of O(log N) (where N is the number of active flows), the complexity of DRR is O(1). The next chapter explains in detail the importance of using DRR in this thesis.
Chapter 3
Deficit Round Robin Scheduling
The Deficit Round Robin (DRR) [17] scheduling algorithm was the first frame-based scheduling algorithm to overcome the unfairness caused by variable packet sizes across different flows, a weakness of both the round-robin and weighted round-robin algorithms. DRR retains the O(1) per-packet complexity of round-robin scheduling. DRR is also known as Deficit Weighted Round Robin (DWRR); it is a modified weighted round-robin scheduling discipline and has become one of the most popular fair scheduling schemes that can handle variable packet sizes. DRR is a round-robin scheduling algorithm extended by deficit counters [3]. In DRR scheduling, every flow is allotted a deficit counter, which starts at zero and is incremented by the quantum in each round; it holds the flow's current credit in bytes, against which packet sizes are compared. A quantum is the amount of credit, in bytes, assigned to each flow within the period of one round; it represents the number of bytes a queue may use to serve its packets, and its size must be chosen carefully to avoid fairness issues.
In the basic DRR scheme, stochastic fair queuing (SFQ) is used to assign flows to queues. DRR uses the principle of round-robin scheduling: the queues are serviced in round-robin order, with a quantum of service assigned to each flow. The DRR scheduler, in constant rotation, selects packets from each flow to send out to their destinations, so the queued packets of all flows get a fair chance of transmission. DRR maintains an active list (also called a service list) that records the non-empty queues in a round, so that empty queues need not be examined. In the DRR mechanism, the scheduler visits each non-empty queue in the active list and serves packets according to the quantum size. It differs from traditional round robin in that, if a packet in the previous round was too large compared to the quantum, the unused remainder of the previous quantum is added to the quantum for the present round so that the packet can then be served; this carrying-over can continue across subsequent rounds, compensating queues that were not completely serviced. Before a particular flow is serviced again, the other N-1 flows must be serviced, irrespective of its weight. During each round, a flow can transmit as many packets at once as its available quantum allows. For each flow, two variables are maintained: the quantum and the deficit counter. The quantum is the amount of credit, in bytes, assigned to each flow within the period of one round, and its size must be chosen carefully to avoid fairness issues. In general, if we expect O(1) per-packet work complexity from DRR, the quantum for a flow should be larger than the maximum packet size of that flow, so that at least one packet per backlogged flow can be served in each round [3]. For various reasons, DRR has relatively poor delay and burstiness properties.
Figure 8 : General Model of the DRR
Figure 9 : DRR Tx Diagram
The DRR Tx diagram plots packets on the y-axis against time on the x-axis. It shows that Round 2 starts only after the completion of Round 1.
To recap the mechanism: the deficit counter of each flow is incremented by the quantum in every round of the scheduler and reduced by the size of each packet served on every visit. A packet is served only if its size is not greater than the deficit counter; otherwise it remains backlogged and is checked against the counter in the next round, by which time the quantum has been added again so the packet can be served. If there are no packets left to serve in a flow, its deficit counter is reset to zero. If the quantum assigned to a flow is significantly smaller than the maximum size of the packets arriving at the system, this causes delay and burstiness; conversely, if the quantum is significantly larger than the maximum packet size, this causes short-term unfairness, leading to a higher latency bound that may grow gradually. The DRR algorithm also requires knowledge of the packet size prior to scheduling; this information is available in the header of IP packets.
Advantages of the DRR:
• It is simple
• Cost-effective
• It has a complexity of O(1).
Limitations of the DRR:
• Its latency bound is a problem for real-time applications.
• It does not provide end-to-end delay guarantees.
• It is not as accurate as other schedulers.
Due to its major advantages of a simple mechanism, cost-effectiveness, and O(1) complexity, we choose the single-channel DRR scheduling technique and extend it to the multi-channel case.
3.1 DRR Algorithm
In the DRR algorithm, every flow is assigned a deficit counter. During initialization, the deficit counter of each of the 'n' flows is set to zero, and the round-robin pointer moves through all 'n' flows sequentially, from the first flow to the nth. After initialization, the enqueuing module runs: when a packet arrives at a flow, it is queued, and the deficit counter of that flow is set to its quantum. In the dequeuing module, the deficit counter is incremented by the quantum in every round of the scheduler and reduced by the size of each packet served on every visit. A packet is served only if its size is not greater than the deficit counter; otherwise it remains backlogged and is checked against the counter in the next round, by which time the quantum has been added again so the packet can be served. If there are no packets left to serve in a flow, its deficit counter is reset to zero. The DRR algorithm also requires knowledge of the packet size prior to scheduling; this information is available in the header of IP packets. The table below shows the DRR algorithm.
Table 4 : Pseudocode of the DRR algorithm

Initialization:
    for (i = 0; i < n; i = i + 1)
        DC_i = 0;

Enqueuing module:
    i = PacketFlow();              // flow of the arriving packet
    queue_i.insert(packet);
    DC_i = Q_i;

Dequeuing module:
    if (number of packets in queue i > 0) {
        DC_i = Q_i + DC_i;
        while (DC_i > 0) {
            if (PS_i <= DC_i) {
                serve packet;
                DC_i = DC_i - PS_i;
            } else {               // PS_i > DC_i: the packet stays backlogged and
                break;             // the remaining deficit carries over to the next round
            }
        }
    }
    if (number of packets in queue i == 0)
        DC_i = 0;                  // reset the counter of an empty queue
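The pseudocode above can be turned into a runnable sketch. The following Python version implements the dequeuing rule as described in the prose (an oversized head packet keeps the flow's deficit for the next round); the quantum and packet sizes in the demo are illustrative.

```python
from collections import deque

def drr(flows, quantum):
    """flows: list of deques of packet sizes (bytes); returns service order."""
    dc = [0] * len(flows)                 # deficit counter per flow
    order = []
    while any(flows):                     # loop while any queue is non-empty
        for i, q in enumerate(flows):
            if not q:
                dc[i] = 0                 # empty flow: reset its counter
                continue
            dc[i] += quantum              # add the quantum once per round
            while q and q[0] <= dc[i]:    # serve head packets that fit the deficit
                size = q.popleft()
                dc[i] -= size
                order.append((i, size))
    return order

# The 700-byte packet must wait for a second quantum before it fits:
print(drr([deque([110, 150]), deque([250]), deque([700])], quantum=500))
# -> [(0, 110), (0, 150), (1, 250), (2, 700)]
```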
35
3.2 DRR Example
Figure 10 : DRR: Start of Round 1
Once packets arrive at the flows, the scheduling process starts. At the start of the first round, the round-robin pointer starts from the first initialized flow, and the deficit counter becomes equal to the quantum size. If the packet size is not greater than the deficit counter and the channel is available at that instant, the packet is served; if the packet size is greater than the deficit counter, the pointer moves to the next flow and the backlogged packet is transmitted in a later round.
In the example, the quantum size is taken to be 500 credits. The pointer starts from Flow 1, where the packet of 110 bytes is served, since it is smaller than the deficit counter of 500 credits and the channel is available at that instant; after serving, the deficit counter is updated to 390 credits. Once the channel is available again, scheduling is triggered and the pointer moves to Flow 2, where the packet of 250 bytes is served for the same reason, and its DC is updated. The scheduling process continues: at Flow 3, the packet size is greater than the deficit counter, so the flow is skipped and its DC remains the same. The packet of 500 bytes in Flow 4 is served successfully and its DC is updated. Since the pointer has now moved through all the flows, we call this the completion of one round; in this case, it is the end of Round 1.
Figure 11 : DRR: End of Round 1
Figure 12 : DRR Transmission Diagram after Round 1
Figure 13 : DRR: Start of Round 2
Once the channel is available, it triggers the scheduling process and the next round, the second round, starts. The deficit counter is updated with the quantum size again, i.e., the quantum is added to the deficit counter of each flow: DC = DC(prev) + quantum size. The pointer starts from Flow 1 again, whose DC becomes 390 credits + 500 credits. In this second round, two packets have arrived in Flow 1. As described above, multiple packets can be served from each flow, irrespective of packet size, as long as they satisfy the dequeuing criteria; so the packets of 150 bytes and 200 bytes are served successfully, with the channel available at that instant. After the service, the DC is updated again.
After these packets are transmitted successfully and the channel becomes available, the pointer moves to the second flow, where the packet of 100 bytes in Flow 2 is served successfully, as it is smaller than the deficit counter. The packet in Flow 3 that was not served in the previous round is served in this round using TX2, because its DC is now 1000 credits; after transmission, the DC is updated. Once the channel is available, the scheduling process is triggered and the packet of 150 bytes in Flow 4 is served successfully. This is the end of the round, Round 2.
Figure 14 : DRR: End of Round 2
Figure 15 : DRR Transmission Diagram after Round 2
Figure 16 : DRR: Start of Round 3
Again, when the channel is available, it triggers the scheduling process and the next round, the third round, starts. The DC is updated with the quantum size again. The pointer starts from Flow 1, where the packet of 100 bytes is served, and then moves to Flow 2, where the packet of 50 bytes is easily served. The packet of 200 bytes in Flow 3 is served at this instant once the channel is available. The pointer then moves to the next flow, where no packets have arrived in this round; the flow is empty, so its DC is reset to zero for fairness reasons. Once the channel is available, the next round starts from Flow 1, and the process continues sequentially until all flows become completely empty.
This example covers all the cases: a packet smaller than the deficit counter with the channel available or unavailable at some instant, a packet greater than the deficit counter with the channel available or unavailable, and a flow being empty in a particular round.
Figure 17 : DRR: End of Round 3
Figure 18 : DRR Transmission Diagram after Round 3
Figure 19 : DRR Transmission Diagram after each Round
The arrows in the diagrams above mark the end of each packet's transmission from each flow; when the channel becomes available, the scheduling process is triggered again. Once a flow's packets are served, the scheduling pointer moves to the next flow in sequence. The transmission diagram after each round shows that the rounds do not overlap: in DRR, the next round starts only after the previous round completes successfully. This means that delay cannot be avoided in DRR, even when packets satisfy all the criteria.
3.3 Summary
DRR is a frame-based scheduler used mainly because of its efficiency and simplicity. This chapter described the DRR algorithm and an example illustrating the DRR scheduling process. The next chapter explains the importance of using MCDRR in this thesis, along with its algorithm and an example.
Chapter 4
MCDRR
MCDRR stands for Multi-Channel Deficit Round Robin, a frame-based scheduling algorithm designed to overcome the unfairness caused by variable packet sizes across different flows in the multi-channel case. The scheduling of packets in switches and routers has been studied mainly in the context of single-channel communication with fixed transceivers; we extend the packet scheduling problem to the case of multi-channel communication with tunable transmitters and fixed receivers, as shown in the figure. We presented the paper "Multi-Channel Deficit Round Robin Scheduling for Hybrid TDM/WDM Optical Networks" at FOAN 2012 in Russia, and a journal version is to be submitted to the IEEE soon. The paper describes the MCDRR algorithm, an example, and the simulation results [24].
Figure 20 : Block diagram of a hybrid TDM/WDM link based on tunable transmitters and fixed receivers
The proposed MCDRR, the multi-channel extension of DRR, takes into account the availability of channels and tunable transmitters and overlaps 'rounds' in scheduling to utilize them efficiently. To service the queues (i.e., virtual output queues (VOQs)), we use the simple round-robin algorithm with a quantum of service assigned to each queue, as in DRR. MCDRR can handle variable packet sizes and retains the O(1) per-packet complexity of round-robin scheduling and DRR. As in DRR, every flow is allotted a deficit counter, which starts at zero and is incremented by the quantum in each round, and against which packet sizes are compared; the quantum is the amount of credit, in bytes, assigned to each flow within the period of one round, and its size must be chosen carefully to avoid fairness issues.
MCDRR is a round-robin scheduling algorithm extended by deficit counters for multiple channels. Since DRR uses the round-robin principle, MCDRR does as well: the queues are serviced in round-robin order with a quantum of service assigned to each flow. The MCDRR scheduler, in constant rotation, selects packets from each flow whenever a tunable transmitter triggers, and sends them out to their destinations, so the queued packets of all flows get a fair chance of transmission. Like DRR, MCDRR maintains an active list (also called a service list) that records the non-empty queues in a round, so that empty queues need not be examined. In the MCDRR mechanism, the scheduling process starts once a tunable transmitter triggers: the scheduler visits each non-empty queue in the active list and serves packets according to the quantum size. As in DRR, if a packet in the previous round was too large compared to the quantum, the unused remainder of the previous quantum is added to the quantum for the present round so that the packet can then be served, and this can continue across subsequent rounds; queues that are not completely serviced in a round are compensated in the next. Before a particular flow is serviced again, the other N-1 flows must be serviced, irrespective of its weight. During each round, a flow can transmit as many packets at once as its available quantum allows. For each flow, two variables are maintained: the quantum and the deficit counter, as defined above. In general, if we expect O(1) per-packet work complexity from MCDRR, the quantum for a flow should be larger than the maximum packet size of that flow, so that at least one packet per backlogged flow can be served in a round.

Here we can observe the 'overlapping of rounds' that MCDRR uses to avoid delay, which is not the case in DRR. In DRR, one round is the round-robin pointer's pass through the set of flows defined by the deficit counters, and rounds are sequential, without any overlap. In MCDRR, since tunable transmitters are used, rounds run in parallel: rounds always overlap, which reduces the delay.
The MCDRR scheduling algorithm overcomes the unfairness of both the round-robin and weighted round-robin algorithms, and it retains the O(1) per-packet complexity of deficit round-robin scheduling.
Figure 21 : MCDRR Scheduler
In MCDRR scheduling, every flow is assigned a deficit counter, which is initially set to zero, and a quantum. A quantum is the amount of credit, in bytes, allocated to a flow within one round; it represents the number of bytes a queue may consume in serving the packets of the flow. Choosing the quantum size carefully is very important, as a poor choice can cause fairness problems. The deficit counter is incremented by the quantum in every round of the scheduler and decremented by the number of bytes served on every visit by the scheduler. A packet is served only if its size does not exceed the deficit counter. If a packet is larger than the deficit counter, it is backlogged and inspects the deficit counter again in the next round; since the quantum is added to the deficit counter at the start of that round, the packet can then be served successfully. Once packets are served, the deficit counter is reduced by the number of bytes served. If a flow has no packets to serve, its deficit counter is reset to zero. If the quantum assigned to a flow is significantly smaller than the maximum size of the packets arriving at the system, packets experience extra delay and burstiness; if it is significantly larger, the system suffers short-term unfairness, leading to a higher latency bound that may grow gradually. The MCDRR algorithm also requires knowledge of the packet size before scheduling it; this information is available in the header of IP packets.
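The per-round bookkeeping described above can be sketched for a single flow as follows. This is an illustrative Python fragment, not code from the thesis; the function name and signature are invented for the example, and packet sizes are plain integers (bytes).

```python
def serve_round(queue, deficit, quantum):
    """One scheduler visit to a flow: credit the quantum, serve head-of-line
    packets whose size fits within the deficit counter, and reset the counter
    to zero when the flow empties."""
    deficit += quantum                # counter incremented once per round
    served = []
    while queue and queue[0] <= deficit:
        deficit -= queue[0]           # counter drops by the bytes served
        served.append(queue.pop(0))
    if not queue:                     # empty flow: counter resets to zero
        deficit = 0
    return served, deficit
```

For example, with a quantum of 500 bytes, a 700-byte head-of-line packet is backlogged in the first round (deficit becomes 500) and served in the second (deficit 500 + 500 covers it; the counter then resets because the flow is empty).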
4.1 MCDRR Algorithm
In the MCDRR algorithm, the deficit counter of each of the 'n' flows is initialized to zero. The round-robin pointer moves through all 'n' flows sequentially, from the first flow to the nth flow. After initialization, the enqueuing module takes over: arriving packets are queued in their flows, and each flow is configured with its quantum. In the dequeuing module, the deficit counter is incremented by the quantum in every round of the scheduler and decremented by the number of bytes served on every visit. A packet is served only if its size does not exceed the deficit counter; otherwise it is backlogged and inspects the deficit counter again in the next round, where the newly added quantum allows it to be served successfully. Once packets are served, the deficit counter is reduced by the bytes served, and if a flow has no packets left to serve, its deficit counter is reset to zero. The MCDRR algorithm also requires knowledge of the packet size before scheduling it; this information is available in the header of IP packets. The table below shows the MCDRR algorithm.
Initialization:
    for i ← 0 to W-1 do
        DC[i] ← 0;
    end

Arrival (on the arrival of a packet p from channel i):
    if Enqueue(i, p) is successful then
        if a transmitter is available then
            (ptr, ch) ← Dequeue();
            if ptr ≠ NULL then
                Send(*ptr, ch);
                if VOQ[i] is empty then
                    DC[i] ← 0;
                end
            end
        end
    end

Dequeue:
    startQueueIndex ← (currentQueueIndex + 1) % W;
    for i ← 0 to W-1 do
        idx ← (i + startQueueIndex) % W;
        if VOQ[idx] is not empty then
            DC[idx] ← DC[idx] + Q[idx];
            if numPktsScheduled[idx] == 0 then
                currentQueueIndex ← idx;
                pos ← 0;
                ptr ← &packet(VOQ[idx], pos);
                repeat
                    DC[idx] ← DC[idx] - length(*ptr);
                    numPktsScheduled[idx]++;
                    pos++;
                    ptr ← &packet(VOQ[idx], pos);
                    if ptr is NULL then
                        exit the loop;
                until DC[idx] < length(*ptr);
                return (&packet(VOQ[idx], 0), currentQueueIndex);
            end
        end
    end
    return NULL;
Table 5 : Pseudocode of the MCDRR algorithm
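As a cross-check, the Dequeue procedure of Table 5 can be rendered roughly in Python. This is a simplified sketch under our own naming (voq, dc, quantum, num_scheduled are illustrative): it returns the index of the chosen VOQ rather than a packet pointer, it serves a packet only when the deficit fully covers it (the pseudocode assumes the quantum exceeds the head packet), and it omits the separate departure event that actually removes packets from the queues.

```python
W = 4                                  # number of VOQs (flows)
voq = [[] for _ in range(W)]           # virtual output queues (packet lengths in bytes)
dc = [0] * W                           # deficit counters
quantum = [500] * W                    # per-flow quantum in bytes
num_scheduled = [0] * W                # packets already scheduled per VOQ
current = -1                           # round-robin pointer

def dequeue():
    """Scan the VOQs round-robin; credit each backlogged VOQ with its quantum
    and schedule as many head-of-line packets as the deficit allows from the
    first eligible one. Returns the chosen VOQ index, or None."""
    global current
    start = (current + 1) % W
    for i in range(W):
        idx = (start + i) % W
        if voq[idx]:
            dc[idx] += quantum[idx]        # credit the VOQ with its quantum
            if num_scheduled[idx] == 0:    # not already being transmitted
                current = idx
                pos = 0
                while pos < len(voq[idx]) and dc[idx] >= voq[idx][pos]:
                    dc[idx] -= voq[idx][pos]
                    num_scheduled[idx] += 1
                    pos += 1
                if num_scheduled[idx] > 0:
                    return idx
    return None
```

For instance, with a 110-byte packet in VOQ 0 and an 800-byte packet in VOQ 2, the first call serves VOQ 0 (its counter drops to 390); the next call only credits VOQ 2 (500 < 800, nothing served), and the call after that serves the 800-byte packet once the counter reaches 1000.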
This new scheduling algorithm, MCDRR, is designed for real-time multi-channel systems.
In a real system, the MCDRR scheduler, rotating continuously, selects packets from each flow whenever a tunable transmitter triggers and sends them out, so that the queued packets of all flows get a fair chance of transmission to their particular destinations. The MCDRR maintains an active list, also called a service list, which records the non-empty queues in a round so that empty queues need not be examined. Once a tunable transmitter triggers, the scheduling process starts: the scheduler visits each non-empty queue in the active list and serves packets according to the quantum size. MCDRR differs from traditional round-robin in that, if a packet in the previous round was too large for the quantum, the remainder of the previous quantum is added to the quantum of the present round so that the packet can be served in the present round; this can continue in later rounds as well, and queues that are not completely serviced in one round are compensated in the next. Before one particular flow is serviced again, it must wait for the other N-1 flows to be serviced, irrespective of its weight. During each round, a flow can transmit as many packets at once as its quantum allows. For each flow, two variables are maintained: the quantum and the deficit counter. It is very important to choose the quantum size carefully, as a poor choice can cause fairness problems. In general, if we expect O(1) per-packet work complexity from the MCDRR, the quantum of a flow should be larger than the maximum packet size of the flow, so that at least one packet per backlogged flow can be served in a round.

The departure handler, which completes the pseudocode of Table 5:

Departure (at the end of transmission on channel i):
    numPktsScheduled[i]--;
    if numPktsScheduled[i] > 0 then
        ptr ← &packet(VOQ[i], 0);
        Send(*ptr, i);
        if VOQ[i] is empty then
            DC[i] ← 0;
        end
    else
        (ptr, ch) ← Dequeue();
        if ptr ≠ NULL then
            Send(*ptr, ch);
            if VOQ[ch] is empty then
                DC[ch] ← 0;
            end
        end
    end
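The quantum-size condition in this section (a quantum at least as large as the maximum packet size gives one packet per backlogged flow per round) can be checked numerically. The numbers below are illustrative, not from the thesis.

```python
def rounds_to_serve(pkt_len, quantum):
    """Number of scheduler rounds before a head-of-line packet of pkt_len
    bytes is served, starting from an empty deficit counter."""
    deficit, rounds = 0, 0
    while deficit < pkt_len:
        deficit += quantum           # one quantum credit per round
        rounds += 1
    return rounds
```

With a 1500-byte packet, a quantum of 1500 serves it in the very first round, while a 500-byte quantum makes it wait three rounds, illustrating the delay and burstiness caused by an undersized quantum.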
4.2 MCDRR Example
Figure 22 : MCDRR: Start of Round 1
Once packets arrive in the flows, the scheduling process starts. At the start of the first round, an available tunable transmitter triggers the scheduling process. The round-robin pointer starts from the first flow, and the deficit counter becomes equal to the quantum size. If the packet size does not exceed the deficit counter and a channel is available at that instant of time, the packet is served. If no channel is available, the pointer moves to the next flow, and the packet is transmitted in a later round once a channel becomes available.
In the example, the quantum size is taken to be 500 credits and both tunable transmitters are initially available. The pointer starts from Flow 1; its packet of 110 bytes is served, since it is smaller than the deficit counter of 500 credits and a channel is available at that instant of time. By default, tunable transmitter 1 (TX1) is chosen. After the service, the deficit counter is updated: DC becomes 390 credits. Since tunable transmitter 2 (TX2) is also available, the pointer moves to Flow 2, whose packet of 250 bytes is served for the same reasons, and its DC is updated. When TX1 becomes available again and triggers the scheduling process, the pointer moves to Flow 3; its packet is larger than the deficit counter, so the flow is skipped and its DC remains the same. TX1 is still available, so the 500-byte packet of Flow 4 is served successfully. Since the pointer has now moved through all the given flows, we call this the "completion of one round".
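Round 1 above can be reproduced numerically. The quantum is 500 credits and the head-packet sizes are taken from the example, except for Flow 3, whose exact size is not stated in the text; any value above 500 is skipped, and 750 below is an assumption made for illustration.

```python
quantum = 500
head_packet = {1: 110, 2: 250, 3: 750, 4: 500}  # flow -> bytes; flow 3's size assumed
dc = {}
served = []
for flow, size in head_packet.items():
    dc[flow] = quantum            # counter credited at the start of the round
    if size <= dc[flow]:          # served only if it fits (and a channel is free)
        dc[flow] -= size
        served.append(flow)
```

At the end of the round the counters read DC1 = 390, DC2 = 250, DC3 = 500 (skipped) and DC4 = 0, matching the walk-through above.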
Figure 23 : MCDRR: End of Round 1
Figure 24 : MCDRR Transmission Diagram after Round 1
Figure 25 : MCDRR: Start of Round 2
Now TX2 becomes available and triggers the scheduling process, which starts the next round, the second round. The deficit counters are updated with the quantum size again, i.e., the quantum is added to the deficit counter of every flow: DC = DC(prev) + quantum. The pointer starts from Flow 1 again, whose DC becomes 390 credits + 500 credits. In this second round, two packets have arrived in Flow 1. As described above, multiple packets can be served from a flow, irrespective of packet size, as long as they satisfy the dequeuing criteria, so the packets of 150 bytes and 200 bytes are served successfully with a channel available at that instant of time. After the service, the DC is updated again. At some instant, both tunable transmitters TX1 and TX2 may be available; in that case, TX1 is chosen by default. Here TX2 triggers again, and the 100-byte packet of Flow 2 is served successfully. The packet of Flow 3 that was not served in the previous round is served in this round using TX2, because its DC is now 1000 credits. Then TX1 becomes available and the 150-byte packet of Flow 4 is served, which is the end of the round.
Figure 26 : MCDRR: End of Round 2
Figure 27 : MCDRR Transmission Diagram after Round 2
Figure 28 : MCDRR: Start of Round 3
TX1 becomes available and triggers the scheduling process, which starts the next round, the third round. The DC is updated with the quantum size again, and the pointer starts from Flow 1. The 200-byte packet of Flow 1 is served, and after some instant TX1 becomes available again and the packet of Flow 2 is served. Since the packet sizes are small, TX1 becomes available earlier than TX2. The 200-byte packet of Flow 3 cannot be served at this instant even though a tunable transmitter is available, because its channel is not available: a packet from the previous round is still being transmitted on it. So the pointer moves to the next flow, where no packets have arrived to be served in this third round, i.e., the flow is empty; in that case its DC is reset to zero to avoid fairness problems. That is the end of this round. Since TX1 is still available, the next round starts from Flow 1, and the process continues sequentially until all the flows become completely empty.
This example covers all the cases: a packet smaller than the deficit counter with the channel available or unavailable at some instant of time, a packet larger than the deficit counter with the channel available or unavailable, and a flow being empty in a particular round.
Figure 29 : MCDRR: End of Round 3
Figure 30 : MCDRR Transmission Diagram after Round 3
Figure 31 : MCDRR Transmission Diagram after the Rounds
Figure 32 : MCDRR Transmission Phase Diagram
The transmission phase diagram shows the scheduling results of each round: the packets that were scheduled and transmitted, and the packets that were not scheduled.
The tunable transmitter selection diagram visually describes the selection process for the data packets travelling from the flows to the destinations through the channels, where 'F' denotes flows, 'T' tunable transmitters and 'C' channels.
4.3 Summary
The DRR, a frame-based scheduler for the single-channel case, was chosen in this thesis for its efficiency and simplicity, and extended into the MCDRR. This chapter described the MCDRR algorithm and gave an example to illustrate the MCDRR scheduling process. MCDRR is likewise a frame-based scheduling algorithm, designed to overcome the unfairness caused by the variable packet sizes of different flows in the multi-channel case. The scheduling of packets in switches and routers has mainly been studied in the context of single-channel communication with fixed transceivers; we extended the packet scheduling problem to multi-channel communication with tunable transmitters and fixed receivers. The next chapter presents the simulation model and the results obtained.
Chapter 5
SIMULATION
5.1 Methods to Evaluate a Scheduling Algorithm
According to [25], there are three main methods to evaluate a scheduling algorithm.
Method 1: Deterministic modelling:
The different algorithms are run against a predetermined workload, and the resulting performance figures are compared.
Method 2: Stochastic modelling/queueing models:
The different algorithms are evaluated analytically, in a mathematical way.
Method 3: Implementation/simulation:
This is the most versatile method, testing scheduling algorithms under realistic data and conditions.
In this thesis the third evaluation method is used, with OMNeT++.
5.2 OMNeT++ Modeller
OMNeT++ (Objective Modular Network Testbed in C++) is an object-oriented, modular, discrete event simulator [26]. OMNeT++ is a general-purpose tool, not specifically designed for network simulations; it provides a development environment for modelling communication networks and supports tools for all phases of a study, including model design, data collection, simulation and data analysis.
Components for network simulations are provided by frameworks; modules for optical network simulations are provided by the INET framework.
An OMNeT++ simulation model consists of modules, which communicate by exchanging messages; the reception of a message is an event. Modules are C++ objects that implement application-specific behaviour. Connections between modules are defined using the NED (network topology description) language, and gates serve as well-defined interfaces.
The OMNeT++ simulator can be used for modelling:
• Communication protocols
• Computer networks and traffic modelling
• Multi-processor and distributed systems
• Administrative systems
• Other systems where the discrete event approach is suitable
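The discrete event approach itself can be sketched in a few lines. The toy simulator below is illustrative only, written in Python rather than the C++ that OMNeT++ uses, and at nothing like its scale: events are (time, action) pairs kept in a heap and processed in timestamp order.

```python
import heapq

class Simulator:
    """Minimal discrete event loop: schedule actions at future times and
    execute them in timestamp order, advancing the simulation clock."""
    def __init__(self):
        self.now = 0.0
        self._queue = []               # event heap ordered by timestamp
        self._seq = 0                  # tie-breaker for equal timestamps

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

# usage: two "modules" exchanging messages, logged in event order
log = []
sim = Simulator()
sim.schedule(2.0, lambda: log.append(("rx", sim.now)))
sim.schedule(1.0, lambda: log.append(("tx", sim.now)))
sim.run()
```

Although the events were scheduled out of order, they fire in time order: the "tx" event at t = 1.0, then "rx" at t = 2.0, which is exactly the property a discrete event kernel provides.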
5.2.1 The Simulation System Components
Figure 34 : Architecture of OMNeT++ Simulation Programs
The simulation system consists of:
• Simulation kernel
• Library
• User interface
The simulation kernel contains the code that manages the simulation and its execution. The library is written in C++. The user interface is used for simulation execution, debugging, etc.
5.2.2 General Procedure to Implement a Model
The general procedure to implement a model is:
1. Define the modules and network topology (.ned)
2. Define the messages (.msg)
3. Implement the behaviour of the simple modules (.cc)
4. Define the parameters for the simulation (omnetpp.ini)
5. Define the metrics to be observed during the simulation
6. Compile the project
7. Run the project; the results are recorded as scalar statistics in the finish() method (→ .sca file), in a vector file, or both.
Figure 35 : Hierarchy of the executable files
5.2.3 GUI Features
Tkenv is a graphical runtime environment for executing simulations. It allows us to understand and follow exactly what is happening inside the network, gives a detailed picture of the execution of the model, and supports both simulation execution and debugging.
The most important features are:
• Animation of the message flow
• Scheduled messages can be monitored as the simulation progresses
• Event-by-event execution is displayed
• Output scalars and vectors can be viewed graphically during simulation execution
• Results can be displayed as histograms and time-series diagrams
• Simulations can be restarted
5.3 Simulated Scenario
5.3.1 System Model
We build on [27], "Integration of OMNeT++ Hybrid TDM/WDM-PON Models into INET Framework". The models were originally developed for the study of the Stanford University aCCESS Hybrid PON (SUCCESS-HPON) architecture within the INET framework, and we use them here for general hybrid TDM/WDM-PON models.
5.3.2 Simulation Set-up
Traffic between the OLT and the ONTs is carried as a WDM multiplex and is divided by an arrayed waveguide grating (AWG) demultiplexer into individual wavelengths, which are then distributed over the topology.
Figure 36 : MCDRR Graphical NED Editor
The hosts, ONUs, AWG, OLT and server are designed in the graphical NED editor; the modules and network topology are defined in a ".ned" file.
Figure 37 : MCDRR Top Level Network
In the top-level network we can see the networks, modules and submodules; a node can be inspected by double-clicking on it.
From scenario 3, the throughput is highest when the frame sizes are fixed to 1000 bytes under condition 1 and 500 bytes under condition 2. When the inter-frame time is set to 16 µs, 64 µs, 80 µs and 128 µs, the throughput decreases as the inter-frame time grows. Thus, in our experiments, the throughput is highest when different frame sizes are considered. The results obtained cannot be compared directly with the related literature, because the algorithms in the literature address the single-channel case, while the results obtained in this project are for the multi-channel case.
The simulation results show that the proposed multi-channel deficit round-robin scheduler, for the multi-channel case with tunable transmitters and fixed receivers in hybrid TDM/WDM optical networks, performs well in terms of throughput, fairness and delay.
5.4 Summary
The simulation results for throughput, fairness and delay were examined. From the scenario 1 results, we found that the proposed MCDRR scheduling algorithm provides nearly perfect fairness, even with ill-behaved flows, for different combinations of inter-frame times and frame sizes: the fairness index is close to 1, which means that all users receive a fair share of the system resources. From scenario 2, the end-to-end mean delay varies only with the inter-frame time while the frame sizes are uniformly distributed between 64 and 1518 bytes; for the given throughput and fairness, the delay is negligible. From scenario 3, the throughput is highest when the frame sizes are fixed to 1000 bytes under condition 1 and 500 bytes under condition 2, and when the inter-frame time is set to 16 µs, 64 µs, 80 µs and 128 µs, the throughput decreases as the inter-frame time grows. Thus, in our experiments, the throughput is highest when different frame sizes are considered. Overall, the simulation results show that the proposed multi-channel deficit round-robin scheduler, for the multi-channel case with tunable transmitters and fixed receivers in hybrid TDM/WDM optical networks, performs well in terms of throughput, fairness and delay.
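The fairness index cited in this summary is Jain's index [28]: for per-flow throughputs x_i it equals (Σx_i)² / (n·Σx_i²), ranging from 1/n in the worst case to 1 for a perfectly fair allocation. A minimal implementation (the data values below are illustrative, not simulation results):

```python
def jain_index(throughputs):
    """Jain's fairness index for a list of per-flow throughputs."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(x * x for x in throughputs))
```

Equal shares give exactly 1.0, while a single flow monopolizing all four shares drives the index down to 1/4, which is why an index near 1 indicates that every user receives a fair share.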
Chapter 6
Conclusion and Future Research
6.1 Conclusion
Scheduling algorithms are an active research topic. Designing a scheduler that is less complex, more efficient and provides superior quality of service is of great importance to next-generation optical access.
In this thesis, the deficit round-robin scheduling algorithm was described in depth, and we then set out to enhance the throughput of the system in the multi-channel setting. Considering system complexity, the DRR was chosen for extension mainly because it provides O(1) per-packet complexity, which ensures that an implementation of the scheduler need not be complex or expensive.
In this thesis we have proposed and investigated the performance of the MCDRR scheduling algorithm for a multi-channel link with tunable transmitters and fixed receivers, based on the DRR, the well-known single-channel scheduling algorithm. In extending the DRR to multi-channel scheduling, we efficiently utilize the network resources (i.e., channels and tunable transmitters) by overlapping rounds, while maintaining the DRR's low, O(1), complexity. The nearly perfect fairness provided by the MCDRR has been demonstrated through simulation experiments, and the results also show good throughput and delay for the proposed scheduler in hybrid TDM/WDM optical networks.
The low complexity of the scheduler allows a simple and cheap hardware implementation. With all the advantages mentioned above, the MCDRR scheduling framework is a practical solution for next-generation optical access. However, further simulations must be carried out to verify that the scheduler can also meet QoS constraints with a larger number of ONUs.
Unfortunately, we were not able to simulate a large number of ONUs in this thesis, mainly because an ordinary personal computer and laptop were used for the simulations; for a large number of ONUs, a compute cluster would be needed to obtain reliable results. We did attempt to simulate 30 ONUs, but the run exceeded six hours and was aborted to protect the machine.
6.2 Future Work
A detailed study of the end-to-end performance of a network of MCDRR schedulers should be carried out. One could try to achieve 100% throughput by establishing tight bounds. The deficit counter could also be allowed to go negative, borrowing credits when a channel is available and a tunable transmitter triggers the scheduling process; this could reduce the delay bounds, whereas in the basic DRR and in the MCDRR of this thesis the deficit counter never becomes negative. Establishing mathematical bounds for the fairness and latency of the MCDRR is an important piece of future work, as is a comparison with other multi-channel scheduling algorithms.
Bibliography
[1] D. C. Stephens, J. C. R. Bennett and H. Zhang, "Implementing scheduling algorithms in high-speed networks," IEEE Journal on Selected Areas in Communications, 1999. [Online]. Available: http://www.cs.cmu.edu/~hzhang/papers/JSAC99.pdf
[2] K. S. Kim, D. Gutierrez, F.-T. An and L. G. Kazovsky, "Design and performance analysis of scheduling algorithms for WDM-PON under SUCCESS-HPON architecture," vol. 23, no. 11, Nov. 2005.
[3] M. Shreedhar and G. Varghese, "Efficient fair queueing using deficit round robin," vol. 25, no. 4, 1995.
[4] P. E. McKenney, "Stochastic fairness queueing," vol. 2, Jan 1991.
[5] C. Wang, W. Wei, W. Zhang, H. Jiang, C. Qiao and T. Wang, "Optimal wavelength scheduling for hybrid WDM/TDM passive optical networks," vol. 3, no. 6, Jun 2011.
[6] B. Özden and J. M. Blanquer, "Fair queuing for aggregated multiple links," Aug. 2001.
[7] Y.Jiang and H. Xiao, "Analysis of multi-server round robin scheduling disciplines," Vols. E87-B, no. 12, Dec-2004.
[8] K.S.Kim, "iat-hnrl.swan.ac.uk," [Online].
[9] A. Banerjee, G. Kramer, Y. Ye, S. Dixit and B. Mukherjee, "Advances in Passive Optical Networks (PONS)," Emerging Optical Network Technologies, pp. 51-73, 2005.
[10] D. Ahamed, "A NOVEL VIEW ON PASSIVE OPTICAL NETWORK STRATEGIES IN THE COMPUTER COMMUNICATION," International Journal of Engineering Science and Technology, vol. 2, pp. 3597-3602, 2010.
[11] J. CHEN, "Design, Analysis and Simulation of Optical Access and Wide-area Networks," Sweden, 2009.
[12] S. J. Park, Hybrid WDM/TDM-PON Presentation, 2008.
[13] "www.ing-steen.se," [Online].
[14] l. n. R.Jain, A Survey of Scheduling and queue management, University of Ohio, 2007.
[15] l. n. Mark Handley, Scheduling and Queue Management, University of London, 2006.
[16] J. A. Flores, "Contention Period Management and a Price-Based Scheduling Framework for IEEE 802.16 Networks".
[17] T. J. Al-Khasib, "Mini Round Robin: An Enhanced Frame-Based Scheduling Algorithm for Multimedia Networks".
[18] J.Nagle, "On packet Switches with infinite storage," 1985.
[19] S. Baban, "Design and Implementation of a Scheduling Algorithm for the IEEE 802.16e (Mobile WiMAX) Network".
[21] L. Zhang, "VirtualClock: A New Traffic Control Algorithm for Packet Switching Networks".
[22] A. Demers, S. Keshav and S. Shenker, "Analysis and Simulation of a fair queuing algorithm".
[23] M. Katevenis, S. Sidiropoulos and C. Courcoubetis, "Weighted round-robin cell multiplexing in a general-purpose ATM switch chip," pp. 1265-1279, 1991.
[24] M. Sathiyanarayanan and K. S. Kim, "http://iat-hnrl.swan.ac.uk/~kks/publications/mcdrr_foan2012.pdf," 2012. [Online].
[25] J. Kubiatowicz, "http://www.cs.berkeley.edu/~kubitron/courses/cs162-F06/Lectures/lec10-scheduling.pdf," 2006. [Online].
[26] "www.omnetpp.org," OMNet++ User Guide. [Online].
[27] K.S.Kim, "Integration of OMNeT++ Hybrid TDM/WDM-PON Models into INET Framework".
[28] R. Jain, D. Chiu and W. Hawe, "A quantitative measure of fairness and discrimination for resource allocation in shared computer systems," DEC-TR-301, Sep. 1984.
[29] J. A. Cobb and M. Lin, "A theory of multi-channel schedulers for quality of service," vol. 12, no. 1-2, 2002.
[30] J. S. V. a. M. D. Logothetis, "Passive Optical Network Configurations-Performance