May 14, 2003 1
TCP Veno: TCP Enhancement for Transmission Over Wireless Access Networks
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 21, NO. 2, FEBRUARY 2003
Speaker: Chen YoChuan
May 14, 2003 2
Outline
• Introduction – TCP Reno, TCP Vegas
• TCP Veno algorithm
• Performance evaluation of TCP Veno
• Concluding remarks and future work
May 14, 2003 3
Introduction
• Wireless communication has been making significant progress
  – It usually relies on wired backbone networks
  – Many applications on these networks run TCP/IP
• Transmission Control Protocol (TCP)
  – A reliable, connection-oriented protocol
  – Implements flow control (sliding window)
• TCP Reno is the most popular variant in real networks
  – Includes the slow start, congestion avoidance, fast retransmit, and related algorithms
May 14, 2003 4
Introduction
• Reno treats the occurrence of packet loss as a manifestation of network congestion
• In wireless environments, however, packet loss is often induced by noise, link errors, etc.
• To tackle this problem, the solution has two parts:
  – How to distinguish between random loss and congestion loss?
  – How to use that information to refine the congestion-window adjustment process in Reno?
• TCP Veno (a modification combining Vegas and Reno)
May 14, 2003 5
Related work
• TCP Reno
• TCP Vegas
May 14, 2003 6
TCP Reno
• Sliding windows
• Slow start and congestion avoidance algorithms
• Fast retransmit and fast recovery
May 14, 2003 7
TCP Reno
• AIMD (Additive Increase, Multiplicative Decrease) algorithm
• Two adjustment phases:
  – Traffic load is steady or decreasing
    • The injected traffic increases linearly
  – A congestion signal is received
    • When 3 duplicate ACKs arrive or the retransmit timer times out
    • The algorithm decreases the window size by half
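The two phases above can be sketched as a minimal AIMD update rule (an illustrative simplification in segment units, not the full Reno state machine):

```python
def aimd_update(cwnd, congestion_signal):
    """One AIMD step: linear growth, or halving on a congestion signal."""
    if congestion_signal:
        # Multiplicative decrease: 3 dup ACKs or a timeout halves the window.
        return max(cwnd / 2.0, 1.0)
    # Additive increase: roughly one more segment per round-trip time.
    return cwnd + 1.0

cwnd = 10.0
cwnd = aimd_update(cwnd, congestion_signal=False)  # grows to 11.0
cwnd = aimd_update(cwnd, congestion_signal=True)   # halves to 5.5
```

The `max(..., 1.0)` floor is an assumption of the sketch: the window never drops below one segment.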
May 14, 2003 8
TCP Reno
• The rough evolution of TCP Reno (figure)
May 14, 2003 9
TCP Reno
• Slow start
  – Initiated at the start of a TCP connection
  – Initiated after a retransmission timeout
• Slow start threshold (ssthresh)
  – If cwnd is larger than ssthresh, cwnd growth becomes linear
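A minimal sketch (illustrative names, segment-counted window) of how ssthresh separates the two growth modes:

```python
def on_ack(cwnd, ssthresh):
    """Per-ACK growth: exponential below ssthresh, linear above it."""
    if cwnd < ssthresh:
        # Slow start: +1 segment per ACK, i.e. cwnd doubles every RTT.
        return cwnd + 1.0
    # Congestion avoidance: +1/cwnd per ACK, i.e. ~+1 segment every RTT.
    return cwnd + 1.0 / cwnd
```

For example, with ssthresh = 8, an ACK at cwnd = 4 grows the window to 5, while an ACK at cwnd = 8 grows it only to 8.125.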
May 14, 2003 10
TCP Reno
• Fast retransmit & fast recovery
May 14, 2003 11
TCP Reno – end
• Round-trip time (RTT)
• Retransmission timeout (RTO)
  – Too small: TCP will retransmit segments unnecessarily
  – Too large: actual throughput will be low

Reno fast recovery:
1) Retransmit the missing packet; set ssthresh = cwnd/2; set cwnd = ssthresh + 3
2) Each time another duplicate ACK arrives, increment cwnd by one packet
3) When the next ACK acknowledging new data arrives, set cwnd to ssthresh (the value set in step 1)
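The three fast-recovery steps above can be written as a small sketch (simplified, in segment units; real stacks track considerably more state):

```python
def enter_fast_recovery(cwnd):
    """Step 1: on 3 dup ACKs, halve ssthresh and inflate cwnd by 3."""
    ssthresh = cwnd / 2.0
    return ssthresh, ssthresh + 3.0   # new (ssthresh, cwnd)

def on_dup_ack(cwnd):
    """Step 2: each further dup ACK inflates cwnd by one packet."""
    return cwnd + 1.0

def on_new_ack(ssthresh):
    """Step 3: a new-data ACK deflates cwnd back to ssthresh."""
    return ssthresh
```

Starting from cwnd = 20, step 1 yields ssthresh = 10 and cwnd = 13; each extra duplicate ACK adds one, and the new-data ACK drops cwnd back to 10.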
Veno modifies step (1) of Reno:

if (DIFF * BaseRTT < β)      // random loss due to bit errors is most likely to have occurred
    ssthresh = cwnd_loss * (4/5);
else                         // congestive loss is most likely to have occurred
    ssthresh = cwnd_loss / 2;

• The reduction factor lies between ½ and 1; more than ¾ is better
May 14, 2003 25
TCP Veno algorithm
• Veno reduces the window by 1/5 (details later)
May 14, 2003 26
TCP Veno algorithm
• Reno reduces the window by 1/2 (details later)
May 14, 2003 27
TCP Veno algorithm – end
• In summary, Veno only refines the additive increase / multiplicative decrease (AIMD) mechanism
• All other parts of Reno remain intact
  – Including slow start, fast retransmit, fast recovery, and the retransmission-timeout computation
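Putting the pieces together, Veno's refinement of AIMD can be sketched as below. This is an illustrative reading of the slides, not the kernel implementation: DIFF is the Vegas-style rate difference, N = DIFF × BaseRTT is the estimated backlog, and β = 3 packets is an assumed threshold value.

```python
BETA = 3.0  # assumed threshold (packets) for the backlog test N = DIFF * BaseRTT

def backlog(cwnd, base_rtt, rtt):
    """Vegas-style estimate of packets queued in the network."""
    diff = cwnd / base_rtt - cwnd / rtt   # expected rate minus actual rate
    return diff * base_rtt

def veno_ssthresh_on_loss(cwnd_loss, n):
    """Veno's modified multiplicative decrease (step 1 of fast retransmit)."""
    if n < BETA:
        # Random loss likely: cut the window by only 1/5.
        return cwnd_loss * 4.0 / 5.0
    # Congestive loss likely: fall back to Reno's halving.
    return cwnd_loss / 2.0

def veno_ai_increment(cwnd, n):
    """Refined additive increase: grow more slowly when the pipe looks full."""
    if n < BETA:
        return cwnd + 1.0 / cwnd      # bandwidth available: +1 segment per RTT
    return cwnd + 0.5 / cwnd          # fully utilized: +1 segment every other RTT

n = backlog(cwnd=20.0, base_rtt=0.1, rtt=0.2)        # 10 packets queued
veno_ssthresh_on_loss(20.0, n)                       # congestive case: 10.0
veno_ssthresh_on_loss(20.0, backlog(20.0, 0.1, 0.1)) # random case: 16.0
```

With RTT equal to BaseRTT the backlog is zero, so a loss is treated as random and only one fifth of the window is given up; once the measured RTT inflates enough that N ≥ β, the sketch falls back to Reno's behavior.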
May 14, 2003 28
Performance evaluation of TCP Veno
Outline
1. Verification of the packet-loss distinguishing scheme in Veno
   a. Experimental network setup
   b. Verification of the packet-loss distinguishing scheme
2. Single-connection experiments
3. Experiments involving multiple co-existing connections
4. Live Internet measurements
May 14, 2003 29
Performance evaluation of TCP Veno
Network setup
• Two experiments:
  1. Packet loss artificially induced
  2. On a real wireless network where random packet loss actually occurs
May 14, 2003 30
Performance evaluation of TCP Veno
Network setup (random loss is artificially induced in the router)
Src1, Src2: TCP senders that run TCP Reno or Veno over NetBSD 1.1
Dst1, Dst2: TCP receivers with Red Hat Linux installed
Router: drop-tail router with FIFO queues, set up using FreeBSD 4.2
Bf: forward buffer, Br: backward buffer
μf: forward bandwidth, μr: reverse bandwidth
τf: forward propagation delay, τr: reverse propagation delay
Different random packet loss rates can also be artificially induced using the embedded system command in FreeBSD 4.2
May 14, 2003 31
Performance evaluation of TCP Veno
Network setup (random loss actually occurs in the wireless link)
WC1: wireless client
Src2: Veno source
TCP connection: Src2 → WC1
UDP connection: Src1 → Dst1 (provides the background traffic)
TCPsuit: tool developed by the authors to infer
  1. packet loss due to buffer overflow
  2. packet loss due to the wireless link
May 14, 2003 32
Performance evaluation of TCP
Verification of the packet-loss distinguishing scheme
1. Single TCP Veno connection without bac…
May 14, 2003 33
Performance evaluation of TCP
Verification of the packet-loss distinguishing scheme
• First experiment run
  – UDP sending rate is 500 kb/s
  – 240 fast retransmits
    • 84 in the non-congestive state
    • 156 in the congestive state
  – Correct rate in the non-congestive state: 83/84 = 99%
  – Correct rate in the congestive state: 10/156 = 6.4%; 146 losses are misinterpreted as congestion losses
    • Veno then uses Reno's algorithm (the worst case)
    • The misdiagnosis does not bring any throughput improvement
May 14, 2003 34
Performance evaluation of TCP
Verification of the packet-loss distinguishing scheme
(Table: effectiveness of Veno's packet-loss distinguishing scheme under different UDP sending rates)
• Veno estimates in the non-congestive state
• Veno estimates in the congestive state
May 14, 2003 35
Performance evaluation of TCP
Verification of the packet-loss distinguishing scheme
• Window-halving reductions corresponding to these kinds of random loss may be the right actions
  – Because the available bandwidth has been fully utilized in these situations
• Really?
May 14, 2003 36
Performance evaluation of TCP
Single-connection experiments
• Only one of the source-destination pairs is turned on
• The figure below shows the evolution of the average sending rate
  – For loss probabilities ranging from 10^-4 to 10^-1
  – The buffer size at the router is set to 12
  – Link speed μf = μr = 1.6 MB/s
  – RTT = 120 ms
  – Maximum segment size is 1460 bytes
  – Each data point of the sending rate is computed over a 160-ms interval
May 14, 2003 37
TCP Reno & Veno: average sending rate for loss probabilities ranging from 10^-4 to 10^-1 (figure)
May 14, 2003 38
Performance evaluation of TCP
Single-connection experiments
• When the packet loss rate is relatively small
  – The evolutions of Veno and Reno are similar
• But when the loss rate is close to 10^-2
  – Veno shows large improvements over Reno
May 14, 2003 39
Performance evaluation of TCP Veno
Single-connection experiments
• One 32-MB file is transferred in each run
• Other parameters are the same as in the experiment setting above
• The throughput of TCP Veno (146.5 kB/s) is 80% higher than that of TCP Reno (81.9 kB/s)
  – Thanks to the multiplicative decrease algorithm, which performs intelligent window adjustment based on the estimated connection state
• Veno and Reno operate similarly when falling back to the slow-start algorithm
May 14, 2003 40
Performance evaluation of TCP Veno
Single-connection experiments
• RFC 3002 shows that
  – a wireless channel with IS-95 CDMA-based data service has an error rate of 1%–2%
May 14, 2003 41
Performance evaluation of TCP Veno
Multiple co-existing connections
• 4 connections consisting of a mixture of Veno and Reno share a common bottleneck link of 4 Mb/s
• Maximum segment size is 1460 bytes
• Bf = Br = 28
• Round-trip propagation time of 120 ms
Performance evaluation of TCP Veno
Live Internet measurements
• Performance over WLAN and wide-area network (WAN)
May 14, 2003 47
Performance evaluation of TCP Veno
Live Internet measurements – in WLAN
• The base station and the laptop are in different rooms; the distance between them is about 8 m
• The laptop is used as a client to download an 8-MB file from the data server
• 10/100 Ethernet switch, RTT = 20 ms
May 14, 2003 48
Performance evaluation of TCP Veno
Live Internet measurements – in WLAN
• 10 throughput measurements over a one-day duration
• Tested over a period of five days
• The Veno connection achieves about 336 kb/s, a 21% improvement over a Reno connection (about 255 kb/s)
• In our latest experiments on 802.11, Veno gets a throughput improvement of up to 35%
May 14, 2003 49
Performance evaluation of TCP Veno
Live Internet measurements – in WAN
• Veno & Reno tested over metropolitan WANs and cross-country WANs
May 14, 2003 50
Performance evaluation of TCP Veno
Live Internet measurements
• RTT = 45 ms with 10-ms variation over different time slots
• The general trends of the results among the five days are similar
• Veno obtains only slightly higher throughput than Reno
• Based on the reduced numbers of timeouts, fast retransmits, and retransmitted packets
  – Veno does not "steal" bandwidth from Reno and can coexist with it
May 14, 2003 51
Performance evaluation of TCP Veno
Live Internet measurements
• RTT = 45 ms with 10-ms variation over different time slots
• Veno obtains throughput improvements of up to 41% over Reno in different time slots
• The numbers of timeouts, fast retransmits triggered, and retransmitted packets are much lower in Veno
May 14, 2003 52
Performance evaluation of TCP Veno
Live Internet measurements
• RTT = 290 ms with 30-ms variation among different tests
• Over cross-country WANs
• Generally, Veno achieves much higher throughput than Reno, particularly in metropolitan WANs
• But the improvement is not consistent throughout the day
  – perhaps due to the variability of other factors in the live Internet itself
May 14, 2003 53
Concluding Remarks and Future Work
• TCP Veno can achieve significant improvement
  – without adversely affecting other concurrent TCP connections in the same network
• Veno is desirable from the following 3 standpoints:
  1. Deployability
  2. Compatibility
  3. Flexibility
• This paper has not addressed the issue of bursty packet loss
  – The SACK option has been proposed and shown to be effective in dealing with multiple packet losses
  – Veno and SACK can be combined easily
May 14, 2003 54
Concluding Remarks and Future Work
• Generally speaking, TCP Veno
  – borrows the congestion detection scheme of Vegas
  – intelligently integrates it into Reno's additive increase phase
• Future work
  – How to refine the additive increase at the next step
  – How much the window should be reduced once fast retransmit is triggered
  – Other predictive congestion detection schemes could also be used
May 14, 2003 55
Ending…

Author
Cheng Peng Fu
Ph.D. degree in information engineering, Chinese University of Hong Kong