A 2.5Tb/s LCS Switch Core
Nick McKeown, Costas Calamvokis, Shang-tse Chuang
PMC-Sierra, "Accelerating The Broadband Revolution"
August 20th, 2001

Transcript
Page 1

A 2.5Tb/s LCS Switch Core

Nick McKeown, Costas Calamvokis, Shang-tse Chuang

Accelerating The Broadband Revolution

PMC-Sierra

Page 2

Outline

1. LCS: Linecard to Switch Protocol. What is it, and why use it?
2. Overview of the 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high-performance, centralized crossbar scheduler.

Page 3

Next-Generation Carrier-Class Switches/Routers

[Figure: linecards numbered 1-32, spread across multiple racks, connect to a central switch core over links up to 1000ft long.]

Page 4

Benefits of LCS Protocol

1. Large number of ports. Separation enables a large number of ports in multiple racks, and distributes system power.

2. Protection of end-user investment. Future-proof linecards.

3. In-service upgrades. Replace the switch or linecards without service interruption.

4. Enables differentiation/intelligence on the linecard. The switch core can be bufferless and lossless; QoS, discard, etc. are performed on the linecard.

5. Redundancy and fault-tolerance. Full redundancy between switches to eliminate downtime.

Page 5

Main LCS Characteristics

1. Credit-based flow control. Enables separation; enables a bufferless switch core.

2. Label-based multicast. Enables scaling to larger switch cores.

3. Protection. CRC protection; tolerant to loss of requests and data.

4. Operates over different media: optical fiber, coaxial cable, and backplane traces.

5. Adapts to different fiber, cable, or trace lengths.
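The protection point can be illustrated with a small sketch. The actual CRC polynomial and frame layout used by LCS are not given in the slides; the sketch below uses Python's standard CRC-32 purely as a stand-in, to show the check-and-drop behavior that lets the core tolerate corrupted requests and data.

```python
import binascii

def protect(cell: bytes) -> bytes:
    """Append a CRC-32 (a stand-in for the real LCS CRC) to a cell."""
    return cell + binascii.crc32(cell).to_bytes(4, "big")

def receive(frame: bytes):
    """Return the payload if the CRC checks out, else None (drop).

    Dropping a corrupted request or cell is safe because the
    credit/sequence-number machinery lets the sender detect the
    loss and recover."""
    payload, crc = frame[:-4], frame[-4:]
    if binascii.crc32(payload).to_bytes(4, "big") != crc:
        return None  # corrupted: drop and rely on flow control to recover
    return payload
```

A frame that arrives intact yields its payload; flipping any bit makes `receive` drop it.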

Page 6

LCS Ingress Flow Control

[Figure: a linecard's LCS interface talks to a switch port containing the switch scheduler and switch fabric. 1: the linecard sends a request. 2: the scheduler returns a grant/credit carrying a sequence number. 3: the linecard sends the cell data into the switch fabric.]
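The request/grant/data handshake can be sketched in a few lines. This is a hedged toy model, not the real protocol: the actual LCS credit accounting, sequence-number format, and per-queue state are not specified on the slide.

```python
class IngressQueue:
    """Toy model of LCS ingress flow control: request, grant/credit, data.

    The switch core holds essentially no cell buffering; a cell is
    only transmitted once the scheduler has granted a credit for it,
    which is what lets the core stay bufferless and lossless."""

    def __init__(self):
        self.pending = []   # cells waiting for a grant
        self.credits = []   # sequence numbers granted by the scheduler

    def request(self, cell):
        self.pending.append(cell)
        return "REQ"        # step 1: request sent to the scheduler

    def grant(self, seq_num):
        self.credits.append(seq_num)   # step 2: grant/credit arrives

    def send(self):
        # step 3: data is transmitted only when a credit is in hand
        if self.pending and self.credits:
            return self.credits.pop(0), self.pending.pop(0)
        return None
```

A cell queued without a credit stays put; once a grant arrives, `send` releases it tagged with the granted sequence number.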

Page 7

LCS Adapting to Different Cable Lengths

[Figure: three linecards at different distances from the switch core each run LCS to the core's switch scheduler and switch fabric, with the protocol adapting to each cable length.]

Page 8

LCS Over Optical Fiber: 10Gb/s Linecards

[Figure: a 10Gb/s linecard's LCS interface connects to a 10Gb/s switch port (switch scheduler plus switch fabric) over two bundles of 12 multimode fibers, driven at 2.5Gb/s LVDS through a GENET quad serdes.]

Page 9

Example of OC192c LCS Port

[Figure: LCS protocol to an OC192 linecard over 12 serdes channels.]

Page 10

Outline

1. LCS: Linecard to Switch Protocol. What is it, and why use it?
2. Overview of the 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high-performance, centralized crossbar scheduler.

Page 11

Main Features of Switch Core

2.5Tb/s single-stage crossbar switch core with centralized arbitration and an external LCS interface.

1. Number of linecards: 256 at 10G/OC192c, 1024 at 2.5G/OC48c, or 64 at 40G/OC768c.
2. LCS (Linecard to Switch Protocol): distance from linecard to switch 0-1000ft; payload size 76+8B; payload duration 36ns; optical physical layer 12 x 2.5Gb/s.
3. Service classes: 4 best-effort + TDM.
4. Unicast: true maximal-size matching.
5. Multicast: highly efficient fanout splitting.
6. Internal redundancy: 1:N.
7. Chip-to-chip communication: integrated serdes.
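The headline numbers are easy to sanity-check. Treating the 76+8B payload as one 84-byte cell is my assumption; the slides do not spell out the framing.

```python
# 256 linecards at 10Gb/s each gives the advertised aggregate.
ports = 256
port_rate_gbps = 10
print(ports * port_rate_gbps / 1000)      # 2.56 (Tb/s)

# An 84-byte cell (76B payload + 8B header, assumed) every 36ns
# implies a per-port cell rate of roughly 18.7Gb/s, comfortably
# above the 10Gb/s line rate, presumably to cover header overhead
# and internal speedup.
cell_bytes = 76 + 8
cell_time_ns = 36
link_gbps = cell_bytes * 8 / cell_time_ns
print(round(link_gbps, 2))                # 18.67
```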

Page 12

2.56Tb/s IP Router

[Figure: linecards at ports #1 through #256 connect over LCS, at distances up to 1000ft/300m, to the 2.56Tb/s switch core.]

Page 13

Switch Core Architecture

[Figure: each of ports #1 through #256 has a port processor speaking the LCS protocol over optics. Port processors send requests to a central scheduler and receive grants/credits; cell data flows through the crossbar.]

Page 14

Outline

1. LCS: Linecard to Switch Protocol. What is it, and why use it?
2. Overview of the 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high-performance, centralized crossbar scheduler.

Page 15

How to build a scalable crossbar

1. Increasing the data rate per port: use bit-slicing (e.g. Tiny Tera).

2. Increasing the number of ports. Conventional wisdom says N² crosspoints per chip is the problem; in practice, today's crossbar chip capacity is limited by I/Os, and it is not easy to build a crossbar from multiple chips.
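Bit-slicing, mentioned in point 1, can be sketched directly: each cell is striped across k identical crossbar slices that all follow the same configuration, so each slice carries 1/k of the per-port data rate. The byte-interleaved striping below is an illustrative choice, not necessarily what Tiny Tera used.

```python
def stripe(cell: bytes, k: int) -> list:
    """Split a cell into k interleaved slices, one per crossbar chip.

    All k chips are set to the same input-output configuration, so
    the slices of one cell travel in lockstep."""
    return [cell[i::k] for i in range(k)]

def merge(slices: list) -> bytes:
    """Reassemble a cell from its k slices at the output port."""
    k = len(slices)
    out = bytearray(sum(len(s) for s in slices))
    for i, s in enumerate(slices):
        out[i::k] = s
    return bytes(out)
```

Striping an 84-byte cell across 4 or 12 slices and merging the slices back recovers the original cell exactly.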

Page 16

Scaling: Trying to Build a Crossbar from Multiple Chips

Building block: a 16x16 crossbar switch.

[Figure: a 4-input, 4-output example assembled from smaller crossbar chips; eight inputs and eight outputs per chip are required.]

Page 17

Scaling Using "Interchanging": 4x4 Example

[Figure: a 4x4 crossbar built from two 2x4 slices (2 I/Os each) with interchangers (INT) in front. The crossbar slices reconfigure every cell time, while the interchangers reconfigure every half cell time, so half-cells A and B share each slice's I/Os within one cell time.]

Page 18

2.56Tb/s Crossbar Operation

[Figure: 128 inputs and 128 outputs. Half-cells A and B pass through a fixed 2x2 "TDM" stage and an interchanger, are switched by two 128x256 crossbar slices (crossbar A and crossbar B), and are re-interleaved by an interchanger at the output.]

Page 19

Outline

1. LCS: Linecard to Switch Protocol. What is it, and why use it?
2. Overview of the 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high-performance, centralized crossbar scheduler.

Page 20

How to build a centralized scheduler with true maximal matching?

Usual approaches:

1. Use sub-maximal matching algorithms (e.g. iSLIP). Problem: reduced throughput.

2. Increase arbitration time with load-balancing. Problem: imbalance between layers leads to blocking and reduced throughput.

3. Increase arbitration time with a deeper pipeline. Problem: usually involves out-of-date queue occupancy information, hence reduced throughput.

Page 21

How to build a centralized scheduler with true maximal matching?

Our approach is to maintain high throughput by:

1. Using a true maximal matching algorithm.
2. Using a single centralized scheduler to avoid the blocking caused by load-balancing.
3. Using a deep, strict-priority pipeline with up-to-date information.
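A maximal match is one to which no further request can be added without conflicting on an input or output; it can be found greedily. The sketch below is a generic illustration of the property, not the chip's actual hardware algorithm, whose details the slides do not give.

```python
def maximal_match(requests):
    """Greedy maximal matching of inputs to outputs.

    requests[i] is the set of outputs input i has cells for. The
    result is maximal: every unmatched request conflicts with an
    already-matched input or output."""
    match, used_outputs = {}, set()
    for i, wanted in enumerate(requests):
        for out in sorted(wanted):
            if out not in used_outputs:
                match[i] = out
                used_outputs.add(out)
                break
    return match
```

For requests [{0, 1}, {0}, {0, 2}] this returns {0: 0, 2: 2}: maximal (input 1's only request conflicts) but not maximum, since {0: 1, 1: 0, 2: 2} would match all three; maximal matching guarantees no request is left schedulable, which is the throughput property the slide is after.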

Page 22

Strict Priority Scheduler Pipeline

[Figure: each port processor (optics, LCS protocol) feeds four scheduler planes, for priorities p=0 through p=3. Each plane is a scheduler for 2.56Tb/s, one priority, unicast and multicast.]

Page 23

Strict Priority Scheduler Pipeline

[Figure: pipeline timing over cell times 0-6, for unicast and multicast. Requests flow through the p=0, p=1, p=2, and p=3 scheduler planes in successive stages, so several cell times are in flight through the pipeline at once.]
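Behaviorally, the plane-per-priority structure amounts to this: the p=0 plane matches first, and each lower-priority plane may only use the inputs and outputs the higher planes left unmatched. The sketch below models that sequentially; it is a behavioral illustration, not the chip's pipelined implementation.

```python
def schedule(requests_by_prio):
    """Strict-priority scheduling across planes p=0..3.

    requests_by_prio[p][i] is the set of outputs input i requests
    at priority p (lower p = higher priority). Lower-priority planes
    see only the ports left unmatched by higher priorities."""
    match = {}
    used_in, used_out = set(), set()
    for prio_requests in requests_by_prio:       # p = 0, 1, 2, 3
        for i, wanted in enumerate(prio_requests):
            if i in used_in:
                continue
            for out in sorted(wanted):
                if out not in used_out:
                    match[i] = out
                    used_in.add(i)
                    used_out.add(out)
                    break
    return match
```

If input 0 wins output 0 at p=0, a p=1 request from input 1 for outputs {0, 1} falls through to output 1: the leftovers cascade down the priority planes.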

Page 24

Strict Priority Scheduler Pipeline

Why implement strict priorities in the switch core when the router needs to support services such as WRR or WFQ?

1. Providing these services is a Traffic Management (TM) function.
2. A TM can provide these services using a technique called priority modulation together with a strict-priority switch core.
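The slide names priority modulation but does not define it. The sketch below is one plausible reading, with all names and details mine and hypothetical: the TM approximates weighted sharing by rotating which flow's cell gets the top priority tag each slot, deficit-style, so the strict-priority core ends up serving flows in proportion to their weights.

```python
def modulate(weights, num_slots):
    """Hypothetical priority modulation: pick which flow is tagged
    p=0 in each cell slot so that, over time, the share of p=0 tags
    tracks the configured weights (everyone else goes best-effort).

    A strict-priority core then serves the p=0 cell first each slot,
    giving WRR-like long-run service without weights in the core."""
    credits = [0.0] * len(weights)
    total = sum(weights)
    tags = []
    for _ in range(num_slots):
        for f, w in enumerate(weights):
            credits[f] += w / total       # accrue weighted credit
        winner = max(range(len(weights)), key=lambda f: credits[f])
        credits[winner] -= 1.0            # spend one slot of credit
        tags.append(winner)               # this flow's cell is sent at p=0
    return tags
```

With weights 2:1, flow 0 receives the p=0 tag in about two of every three slots, which is the smoothed weighted-round-robin behavior a TM would want from the priority tags.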

Page 25

Outline

1. LCS: Linecard to Switch Protocol. What is it, and why use it?
2. Overview of the 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high-performance, centralized crossbar scheduler.