TCP Pacing in Data Center Networks

Post on 09-Dec-2016



1

Monia Ghobadi, Yashar Ganjali

Department of Computer Science, University of Toronto
{monia, yganjali}@cs.toronto.edu


TCP, Oh TCP!

๏ TCP congestion control

๏ Focus on the evolution of cwnd over the RTT.

๏ Damages

๏ TCP pacing

2
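To make the contrast concrete, here is a minimal sketch (mine, not the authors') of the scheduling difference: a non-paced sender emits its window back-to-back at line rate, while a pacer spaces packets at rtt/cwnd intervals. Units are milliseconds; `cwnd`, `rtt`, and `wire_time` are illustrative parameters.

```python
def send_times(cwnd, rtt, wire_time, paced):
    """Departure time (ms) of each packet in one window.

    Non-paced TCP emits the whole cwnd back-to-back at line rate
    (one packet per wire_time); a paced sender spreads the same
    cwnd evenly across the round-trip time.
    """
    gap = rtt / cwnd if paced else wire_time
    return [i * gap for i in range(cwnd)]

# cwnd = 4 packets, RTT = 100 ms, 1 ms serialization time per packet:
send_times(4, 100, 1, paced=False)  # burst: [0, 1, 2, 3]
send_times(4, 100, 1, paced=True)   # paced: [0.0, 25.0, 50.0, 75.0]
```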


The Tortoise and the Hare: Why Bother Pacing?

๏ Renewed interest in pacing for the data center environment

๏ Small buffer switches

๏ Small round-trip times

๏ Disparity between the total capacity of the network and the capacity of individual queues

๏ Focus on tail latency caused by short-term unfairness in TCP

3


TCP Pacing’s Potential

๏ Better link utilization on small switch buffers

๏ Better short-term fairness among flows of similar RTTs:

๏ Improves worst-flow latency

๏ Allows slow-start to be circumvented

๏ Saving many round-trip times

๏ May allow a much larger initial congestion window to be used safely

4
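On the slow-start point: a quick back-of-the-envelope (my illustration, not from the talk) of how many round trips classic slow-start needs to grow an initial window up to a target, since pacing is argued to make skipping this ramp-up safe:

```python
import math

def slow_start_rtts(init_cwnd, target_cwnd):
    """Round trips for slow-start to grow init_cwnd to target_cwnd,
    assuming the classic doubling of cwnd once per RTT."""
    return math.ceil(math.log2(target_cwnd / init_cwnd))

# Growing a 10-packet initial window to a 1250-packet BDP (a setting
# cited later in this deck) costs 7 round trips of ramp-up:
slow_start_rtts(10, 1250)  # 7
```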


Contributions

๏ Effectiveness of TCP pacing in data centers.

๏ Benefits of using paced TCP diminish as we increase the number of concurrent connections beyond a certain threshold (Point of Inflection).

๏ Inconclusive results in previous works.

๏ Inter-flow bursts.

๏ Test-bed experiments.

5


Inter-flow Bursts

๏ C: bottleneck link capacity

๏ Bmax: buffer size

๏ N: long-lived flows

๏ W: packets sent in every RTT, in a paced or non-paced manner

๏ X: inter-flow burst

6

[Figure: packet arrival timeline over 0, RTT, 2RTT, 3RTT]
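Using the slide's notation (C, N, W), the intuition can be checked with a toy arrival/drain simulation. This is my sketch, not the authors' model; it contrasts a worst-case synchronized burst against perfectly interleaved pacing within one RTT:

```python
def max_queue(N, W, C, rtt, paced):
    """Peak queue occupancy (packets) at a bottleneck of capacity C
    (packets/sec) during one RTT, given N flows of W packets each.

    Non-paced worst case: every flow dumps its whole window at once,
    and the bursts are synchronized.  Paced: each flow spreads its W
    packets evenly over the RTT, with flows offset from one another.
    """
    if paced:
        arrivals = sorted(f * rtt / (N * W) + i * rtt / W
                          for f in range(N) for i in range(W))
    else:
        arrivals = [0.0] * (N * W)  # one synchronized burst at t = 0
    queue = peak = last = 0.0
    for t in arrivals:
        queue = max(0.0, queue - C * (t - last))  # drain since last arrival
        queue += 1
        peak = max(peak, queue)
        last = t
    return peak

# N = 10 flows, W = 10 packets, C = 1000 pkt/s, RTT = 100 ms
# (offered load exactly fills the pipe: N*W = C*RTT = 100 packets):
max_queue(10, 10, 1000, 0.1, paced=False)  # 100.0 -- needs a huge buffer
max_queue(10, 10, 1000, 0.1, paced=True)   # ~1 -- queue barely builds
```

The gap between these two extremes is what the buffer (Bmax) has to absorb; real traffic sits somewhere in between.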


Modeling

7

[Figure: best case of non-paced vs. worst case of paced, each over one RTT]


Experimental Studies

๏ Flows of sizes 1, 2, 3 MB between servers and clients

๏ Bottleneck bandwidth: 1, 2, 3 Gbps

๏ RTT: 1 to 100 ms

๏ Metrics: bottleneck utilization, drop rate, average and tail FCT

8

Base-Case Experiment: One flow vs Two flows, 64KB of buffering, Utilization/Drop/FCT

9

[Figures: bottleneck link utilization (Mbps) vs. time (sec) under “No congestion” and “Congestion”, and CDFs of flow completion time (sec), paced vs. non-paced; annotations: 38%, 1 RTT, 2 RTTs]

Multiple flows: Link Utilization/Drop/Latency
Buffer size 1.7% of BDP, varying number of flows

10

[Figures: bottleneck link utilization (Mbps), bottleneck link drop (%), average FCT (sec), and 99th-percentile FCT (sec) vs. number of flows, paced vs. non-paced, with the point of inflection N* (PoI) marked]

• As the number of concurrent connections increases beyond a certain point, the benefits of pacing diminish.
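The point of inflection N* flagged in these plots could be located mechanically from measurements. The helper below is hypothetical (the function name, the 2% margin, and the sample numbers are mine, not measured data):

```python
def point_of_inflection(flows, paced_util, nonpaced_util, margin=0.02):
    """Smallest flow count at which paced utilization no longer beats
    non-paced by more than `margin` (as a fraction of link capacity).
    `flows` is sorted ascending; returns None if pacing always wins."""
    for n, p, np_ in zip(flows, paced_util, nonpaced_util):
        if p - np_ <= margin:
            return n
    return None

# Illustrative (made-up) utilization curves, as fractions of capacity:
flows    = [2, 4, 8, 16, 32]
paced    = [0.95, 0.96, 0.96, 0.93, 0.90]
nonpaced = [0.60, 0.75, 0.88, 0.92, 0.91]
point_of_inflection(flows, paced, nonpaced)  # 16
```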

Multiple flows: Link Utilization/Drop/Latency
Buffer size 3.4% of BDP, varying number of flows

11

[Figures: bottleneck link utilization (Mbps), bottleneck link drop (%), average FCT (sec), and 99th-percentile FCT (sec) vs. number of flows, paced vs. non-paced, with N* (PoI) marked]

• Aggarwal et al.: Don’t pace!

• 50 flows, BDP 1250 packets, and buffer size 312 packets.

• N* = 8 flows.

• Kulik et al.: Pace!

• 1 flow, BDP 91 packets, buffer size 10 packets.

• N* = 9 flows.

N* vs. Buffer

12

[Figures: bottleneck link utilization (Mbps), bottleneck link drop (%), average FCT (sec), and 99th-percentile FCT (sec) vs. buffer size (KB), paced vs. non-paced]


Clustering Effect: The probability of packets from a flow being followed by packets from other flows

Non-paced: Packets of each flow are clustered together.

Paced: Packets of different flows are multiplexed.

13
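This clustering can be quantified directly from a packet trace. A minimal sketch (my formulation of the slide's probability, not the authors' code):

```python
def interleaving_probability(trace):
    """Fraction of adjacent packet pairs belonging to different flows.
    `trace` is the sequence of flow IDs in arrival order; values near 0
    mean clustered (bursty) traffic, values near 1 mean well-mixed."""
    pairs = list(zip(trace, trace[1:]))
    return sum(a != b for a, b in pairs) / len(pairs)

interleaving_probability([1, 1, 1, 2, 2, 2])  # clustered -> 0.2
interleaving_probability([1, 2, 1, 2, 1, 2])  # multiplexed -> 1.0
```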

Drop Synchronization: Number of Flows Affected by Drop Event

14

[Figures: CDFs of the number of flows affected by a drop event, paced vs. non-paced, for N = 48, 96, and 384 flows]

NetFPGA router used to count the number of flows affected by drop events.
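The NetFPGA measurement above amounts to grouping drops into events and counting the distinct flows each event touches. A simplified sketch of that bookkeeping (the 1 ms event window and the trace format are my assumptions):

```python
def flows_per_drop_event(drops, window=0.001):
    """Group packet drops into events (drops within `window` seconds of
    the previous drop belong to the same event) and return the number
    of distinct flows affected by each event.

    `drops` is a time-sorted list of (timestamp, flow_id) pairs.
    """
    events, current, last_t = [], set(), None
    for t, flow in drops:
        if last_t is not None and t - last_t > window:
            events.append(len(current))   # close the previous event
            current = set()
        current.add(flow)
        last_t = t
    if current:
        events.append(len(current))
    return events

drops = [(0.000, 1), (0.0002, 2), (0.0003, 1),  # one event, flows {1, 2}
         (0.500, 3)]                            # a later event, flow {3}
flows_per_drop_event(drops)  # [2, 1]
```

With synchronized (non-paced) losses the per-event counts skew high; pacing desynchronizes the drops so each event touches fewer flows.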

Future Trends for Pacing: per-egress pacing

15

[Figures: bottleneck link utilization (Mbps), bottleneck link drop (%), average RCT (sec), and 99th-percentile RCT (sec) vs. number of flows (6 to 192), comparing per-flow paced, non-paced, and per-host + per-flow paced, with N* (PoI) marked]

Conclusions and Future work

๏ Re-examine TCP pacing’s effectiveness:

๏ Demonstrate when TCP pacing brings benefits in such environments.

๏ Inter-flow burstiness

๏ Burst-pacing vs. packet-pacing.

๏ Per-egress pacing.

16

Renewed Interest

17

Traffic Burstiness Survey

๏ ‘Bursty’ is a word with no agreed meaning. How do you define bursty traffic?

๏ If you are involved with a data center, is your data center traffic bursty?

๏ If yes, do you think it would be useful to suppress the burstiness in your traffic?

๏ If no, are you already suppressing the burstiness? How? Would you anticipate the traffic becoming burstier in the future?

18

monia@cs.toronto.edu

19

Base-Case Experiment: One RPC vs Two RPCs, 64KB of buffering, Latency

20

Multiple flows: Link Utilization/Drop/Latency
Buffer size: 6% of BDP, varying number of flows

21

Base-Case Experiment: One RPC vs Two RPCs, 64KB of buffering, Latency / Queue Occupancy

22


Functional test

23


RPC vs. Streaming

24

[Figure: topology with 1GE and 10GE links, RTT = 10 ms; labels: “Paced by ack clocking”, “Bursty”]

Zooming in more on the paced flow

25

Multiple flows: Link Utilization/Drop/Latency
Buffer size 6.8% of BDP, varying number of flows

26

[Figures: bottleneck link utilization (Mbps), bottleneck link drop (%), average FCT (sec), and 99th-percentile FCT (sec) vs. number of flows, paced vs. non-paced, with N* (PoI) marked]
