9/30/2003 UTD FF-1
Practical Priority Contention Resolution for Slotted Optical Burst Switching Networks
Farid Farahmand
The University of Texas at Dallas
9/30/2003 UTD FF-2
Overview
- OBS overview
- Major issues in OBS
- Switch node architecture of the …
Each Destination Queue must be sized for the worst case
[Figure: control architecture block diagram — BHPs enter Receiver Blocks (0 … P-1), pass through Destination Queues (Des. Q 0 … P-1) to Schedulers (0 … P-1) and Switch Control blocks (0 … P-1), which drive the switch fabric]
9/30/2003 UTD FF-10
Scheduling Mechanisms in the Scheduler Block
- Scheduling mechanism: Latest Available Unscheduled
- Contention resolution techniques:
  - Latest Drop Policy (LDP), with offset-time-based QoS
  - Shortest Drop Policy (SDP): supports unlimited service differentiation; performs better than the Latest Drop Policy
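The two drop policies can be sketched as follows. This is a minimal illustration, not the paper's implementation: the burst attributes, the lowest-class-first victim selection, and the function names are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Burst:
    arrival: int    # slot at which the burst's BHP arrived
    length: int     # burst duration in slots
    priority: int   # service class (lower value = higher priority; assumed)

def lowest_class(contending):
    """Restrict victim selection to the lowest service class present."""
    worst = max(b.priority for b in contending)
    return [b for b in contending if b.priority == worst]

def pick_victim_ldp(contending):
    """Latest Drop Policy: among the lowest-class contending bursts,
    drop the one whose request arrived latest."""
    return max(lowest_class(contending), key=lambda b: b.arrival)

def pick_victim_sdp(contending):
    """Shortest Drop Policy: among the lowest-class contending bursts,
    drop the shortest one, so the least data is lost."""
    return min(lowest_class(contending), key=lambda b: b.length)

bursts = [Burst(10, 3, 1), Burst(12, 8, 1), Burst(11, 15, 0)]
print(pick_victim_ldp(bursts).arrival)  # 12
print(pick_victim_sdp(bursts).length)   # 3
```

Note how the two policies can pick different victims from the same contention set: LDP sacrifices the newest reservation, while SDP minimizes the amount of dropped traffic.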
9/30/2003 UTD FF-11
[Figure (a): burst loss rate (log scale, 1E-5 … 1E+0) vs. utilization G (0 … 0.9), comparing LDP, SDP, and SEG]
Comparing SDP and LDP Performance
- Single switch with 4 edge nodes
- Each port has 4 channels
- Full utilization of wavelength converters
- Max data burst duration is 20 slots, exponentially distributed
- 3 levels of service differentiation
- Performance metric: Burst Loss Rate (BLR)
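A rough sketch of how the BLR metric is estimated in a simulation of this kind (the Poisson traffic model, the assumed mean burst duration, and the function name are simplifications, not the paper's simulator):

```python
import random

def burst_loss_rate(load, channels=4, max_len=20, n_bursts=50_000, seed=1):
    """Estimate BLR on one output port: bursts arrive as a Poisson
    process, durations are exponential truncated at max_len slots,
    and a burst is lost when no wavelength channel is free at its
    start time (full wavelength conversion assumed)."""
    rng = random.Random(seed)
    free_at = [0.0] * channels          # time each channel becomes free
    mean_len = max_len / 4              # assumed mean burst duration
    t, lost = 0.0, 0
    for _ in range(n_bursts):
        t += rng.expovariate(load * channels / mean_len)  # inter-arrival
        length = min(rng.expovariate(1 / mean_len), max_len)
        ch = min(range(channels), key=lambda c: free_at[c])
        if free_at[ch] > t:
            lost += 1                   # contention: burst is dropped
        else:
            free_at[ch] = t + length    # reserve the channel
    return lost / n_bursts

print(f"BLR at G=0.5: {burst_loss_rate(0.5):.4f}")
```

Sweeping `load` over the x-axis range of the plots reproduces the qualitative shape of the curves: loss rises steeply with utilization.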
[Figure (b): burst loss rate (log scale, 1E-6 … 1E+0) vs. utilization G (0.05 … 0.85), comparing Unslotted/Var, Slotted/Var, and Slotted/Fix]
9/30/2003 UTD FF-12
Hardware Prototyping of the Control Packet Processor
- Basic assumptions: slotted transmission of BHPs; Shortest Drop Policy (SDP); parallel scheduling
- Receiver Block: all BHPs are verified for correct parity and framing; each request is reformatted, time-stamped, and passed on to the proper Destination Queue
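The receiver's per-BHP checks might look like this in outline. The frame format, the start byte, and the XOR-parity convention are invented for illustration; the actual BHP framing and parity scheme are not specified on the slide.

```python
def verify_bhp(frame: bytes) -> bool:
    """Check framing and parity on a received BHP.
    Assumed format: 1 start byte (0x7E), payload bytes, then one
    parity byte equal to the XOR of all payload bytes."""
    if len(frame) < 3 or frame[0] != 0x7E:
        return False                      # framing error: reject
    parity = 0
    for byte in frame[1:-1]:
        parity ^= byte                    # accumulate XOR parity
    return parity == frame[-1]            # mismatch => parity error

good = verify_bhp(bytes([0x7E, 0x12, 0x34, 0x12 ^ 0x34]))
bad = verify_bhp(bytes([0x7E, 0x12, 0x34, 0x00]))
print(good, bad)  # True False
```

Only requests that pass both checks would be reformatted, time-stamped, and enqueued.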
[Figure: Control Packet Processor (CPP) — O/E converters feed the BHP Processor-Regenerator (Destination Queues, Scheduler, BHP regeneration), followed by E/O converters]
9/30/2003 UTD FF-13
Hardware Prototyping of the Scheduler
- Arbiter
- Scheduler Core Section: Request Processor, Channel Manager, Update/Switch Setup
- Statistics Accumulator
[Figure: Scheduler internals — requests from Destination Queues 0 … P-1 enter an Arbiter and Request Processor; the Scheduler Core Section holds Channel Queues 0 … N-1, a Search Engine, Update and Switch-Setup logic, and a Statistics Accumulator; the CS_CNT counter comes from the Receiver; outputs go to the Switch Control block and the BHP regenerator]
9/30/2003 UTD FF-14
Hardware Prototyping of the Scheduler
- Arbiter: P inputs along with the counter signal
- Request Processor: flow control; QoS control
- Search Engine: checks start and end times; reserves requests
- Channel Queues: one per channel
- If the reservation was successful, the BHP is regenerated
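The "checks start and end times" step amounts to an interval-overlap test against each channel's existing reservations. The sketch below uses assumed data structures and names; the real search engine is parallel hardware, not a sequential loop.

```python
def fits(reservations, start, end):
    """True if slot range [start, end) overlaps no existing
    reservation on this channel; reservations is a list of
    (start, end) slot pairs."""
    return all(end <= s or start >= e for s, e in reservations)

def reserve(channels, start, end):
    """Try each wavelength channel in turn; reserve on the first
    one where the requested range is free, else report failure."""
    for ch, res in enumerate(channels):
        if fits(res, start, end):
            res.append((start, end))
            return ch        # success: the BHP can be regenerated
    return None              # contention: the request is dropped

channels = [[(12, 16)], [(14, 20)]]   # two channels, one burst each
print(reserve(channels, 16, 19))      # fits channel 0 -> 0
print(reserve(channels, 13, 15))      # overlaps both -> None
```

A successful return corresponds to the path where the BHP is regenerated and the switch setup is queued; `None` corresponds to invoking the drop policy.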
9/30/2003 UTD FF-15
Illustration of the Scheduler Operation
Three channels; all channel queues are assumed empty initially.
[Figure: timeline with CS_CNT slots 12 … 22 — bursts B1 … B5 are placed on Chan 0 … Chan 2 as the head-of-queue (HoQ) entries of channel queues CQ0 … CQ2 are served at times i, i+1, i+6, and i+7]
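One way to read the figure: each channel queue holds pending switch setups in start-time order, and when the slot counter CS_CNT reaches a head-of-queue (HoQ) start time, that setup is popped and issued to the switch control block. This is a software sketch of that assumed behavior; the queue contents are illustrative, not the figure's exact values.

```python
from collections import deque

def run_switch_control(chan_queues, last_slot):
    """Advance CS_CNT one slot at a time; whenever a channel's HoQ
    entry matches the current slot, pop it and record the issued
    setup as (slot, channel, end_slot)."""
    issued = []
    for cs_cnt in range(last_slot + 1):
        for ch, q in enumerate(chan_queues):
            if q and q[0][0] == cs_cnt:      # HoQ start == counter
                start, end = q.popleft()
                issued.append((cs_cnt, ch, end))
    return issued

# CQ0..CQ2 with head-of-queue (start, end) entries
cqs = [deque([(12, 16)]), deque([(13, 15)]), deque([(13, 19)])]
print(run_switch_control(cqs, 22))
# [(12, 0, 16), (13, 1, 15), (13, 2, 19)]
```

Because each queue is served strictly from its head, a single comparison per channel against CS_CNT suffices each slot, which is what makes the structure attractive for hardware.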
9/30/2003 UTD FF-16
Scheduler Prototype
- Implemented on an Altera EP20K400E FPGA: 2.5 million gates; maximum clock rate of 840 MHz
- Core section modeled with the Celoxica DK design suite: initially modeled in the C language, modified into Handel-C, then compiled and translated into gate-level VHDL code
- Other blocks were designed directly in VHDL; tested, verified, and synthesized
Number of clock cycles required to process packets in the Destination Queue