CSE331: Introduction to Networks and Security Lecture 13 Fall 2002
Page 1: CSE331: Introduction to Networks and Security Lecture 13 Fall 2002.

CSE331: Introduction to Networks and Security

Lecture 13

Fall 2002

Page 2

Announcements

• Reminder:
  – Project 1 due on Monday, Oct. 7th
  – In-class midterm Wednesday, Oct. 9th

• Monday’s Class
  – Further Topics in Networking
  – Review / Question & Answer

Page 3

Recap

• Application-Level Protocols
  – SMTP
  – HTTP
  – SNMP

Today

• Congestion control
• Resource Management
• Quality of Service

Page 4

Sharing Resources

• How do we effectively & fairly share resources on the net?
  – Bandwidth of the links
  – Buffer space in routers and switches

• Many competing users
  – What does “fairly” mean?

Page 5

Contention and Congestion

• Packets contend at a router for use of a link
  – Multiple packets are enqueued at the router

• Congestion occurs when packets are dropped because the queue is full
  – Wasted resources
  – Can lead to timeouts/retransmission

• Problem is resource allocation

Page 6

Congestion In Packet-switched Networks

[Diagram: multiple sources send packets through a router toward one destination, illustrating congestion in a packet-switched network.]

Page 7

Network Resource Allocation

• Challenges
  – Distributed resources are hard to coordinate
  – Only way to coordinate is through the network itself!
  – Not isolated to a single level of the protocol hierarchy
  – Not always possible to “route around” congestion
  – Bottleneck not always visible from the source

• Resource allocation
  – Attempt to meet competing demands of applications
  – Not always possible!

Page 8

Flows

• A flow is a sequence of packets sent along the same route between a source and destination.

• Connectionless Flows
  – No per-flow state at the routers
  – Example: pure datagram model

• Connection-Oriented Flows
  – Necessary per-flow state at the routers
  – Explicitly created/removed by signalling
  – Example: virtual circuit switching
  – Potentially does not scale

• Soft-state Flows
  – Some (not strictly necessary) per-flow state
  – Example: routing information in learning bridges
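Soft state as described above can be sketched in a few lines: per-flow entries are created or refreshed whenever traffic is observed, and silently expire if not refreshed. This is an illustrative Python sketch, not code from the lecture; the class name, TTL value, and table layout are all made up for the example.

```python
import time

class SoftStateFlowTable:
    """Per-flow state that expires unless refreshed (soft state)."""

    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self.table = {}  # (src, dst) -> (route, last_seen timestamp)

    def observe(self, src, dst, route, now=None):
        """Create or refresh the entry for this flow."""
        now = time.time() if now is None else now
        self.table[(src, dst)] = (route, now)

    def lookup(self, src, dst, now=None):
        """Return the cached route, or None if the entry has expired."""
        now = time.time() if now is None else now
        entry = self.table.get((src, dst))
        if entry is None:
            return None
        route, last_seen = entry
        if now - last_seen > self.ttl:
            del self.table[(src, dst)]  # stale: silently forget it
            return None
        return route
```

Because correctness never depends on the state being present (a miss just means relearning, as in a learning bridge), losing an entry costs only performance.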

Page 9

Multiple flows

[Diagram: three sources (Source 1–3) and two destinations (Dest. 1–2) connected through three routers, with multiple flows sharing the links.]

Page 10

Router- vs. Host-Centric

• Router-centric
  – Each router selects packets to forward & packets to drop
  – Routers inform hosts about network conditions

• Host-centric
  – Hosts observe network behavior by watching ACKs, timeouts, ICMP messages, etc.
  – Adjust behavior accordingly

• Not mutually exclusive approaches

Page 11

Reservation vs. Feedback

• Reservation
  – End hosts ask the network for a certain amount of capacity
  – If the request can’t be satisfied, the router rejects the flow
  – Examples: measure MTU or link capacities
  – Router-centric approach

• Feedback
  – End hosts send data without reserving capacity
  – Adjust behavior based on feedback
  – Explicit feedback: TCP flow control
  – Implicit feedback: packet losses

Page 12

Throughput, Delay and Load

• Network load is a measure of total link utilization

• Ideally we would
  – Maximize throughput
  – Minimize delay

• Increasing #packets in network lengthens queues, which increases delay.

• Power = Throughput/Delay

Page 13

Power vs. Load

• Ratio of Throughput/Delay as a function of network load
• Difficult to control load in fine-grained ways
• Need stable mechanism: avoid thrashing

[Plot: Throughput/Delay (power) as a function of Load, rising to a peak at the optimal load and falling off beyond it.]
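The Power = Throughput/Delay curve can be made concrete with a simple queueing model. The sketch below assumes an M/M/1 queue (an assumption, not something the slides specify), where average delay is 1 / (capacity − load); under that model power is load × (capacity − load), a parabola that peaks at half the link capacity.

```python
def power(load, capacity=1.0):
    """Power = Throughput / Delay, evaluated with an M/M/1 delay
    model (an assumption for illustration): throughput equals the
    offered load, average delay is 1 / (capacity - load), so
    power = load * (capacity - load)."""
    if not 0 <= load < capacity:
        raise ValueError("load must be in [0, capacity)")
    throughput = load
    delay = 1.0 / (capacity - load)
    return throughput / delay

# Power peaks at load = capacity / 2 and falls off on either side,
# matching the shape of the curve on the slide.
samples = [round(power(l), 2) for l in (0.1, 0.3, 0.5, 0.7, 0.9)]
# samples == [0.09, 0.21, 0.25, 0.21, 0.09]
```

The fall-off past the peak is the thrashing regime the slide warns about: adding load lengthens queues faster than it adds throughput.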

Page 14

Fair Resource Allocation

• What does “Fair” mean?
  – Equal share of resources for all flows?
  – Proportional to how much you pay for service?
  – Should we take route length into account?

[Diagram: a chain of four routers, illustrating flows with different route lengths sharing links.]

Page 15

FIFO Queuing

• First-In First-Out
  – Scheduling discipline: determines order

• Tail Drop
  – If the queue is full, the most recent packet to arrive is dropped
  – Drop policy: which packets are dropped

• Most widely used in Internet routers
  – Pushes congestion control & resource allocation to end hosts (TCP)
  – Does not discriminate between flows
  – Trusts end hosts to “share” – but no one is forced to use TCP, for example.
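The FIFO-with-tail-drop discipline can be sketched directly: one queue, newest arrival discarded when full. Illustrative Python only; the class and names are invented for the example.

```python
from collections import deque

class FifoTailDropQueue:
    """FIFO scheduling with a tail-drop policy: when the queue is
    full, the most recent packet to arrive is the one dropped."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1  # tail drop: discard the new arrival
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        """Serve packets strictly in arrival order."""
        return self.queue.popleft() if self.queue else None
```

Note that the drop policy (tail drop) and the scheduling discipline (FIFO) are separate decisions, as the slide points out; the same FIFO queue could drop from the head instead.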

Page 16

Priority Queuing

• Simple variant on FIFO
  – Use the IP Type of Service header field as a priority
  – Send all higher-priority packets in the queue before sending lower-priority packets

• Problems
  – Starvation of low-priority flows
  – Who sets priorities? (Not the end user!)
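Priority queuing can be sketched with Python's heapq; here plain integers stand in for the IP Type of Service field (lower number = higher priority in this sketch), and a counter tie-breaker preserves FIFO order within a priority level.

```python
import heapq
import itertools

class PriorityQueueScheduler:
    """Serve higher-priority packets first; FIFO within a priority.
    Integers stand in for the ToS field (lower = higher priority)."""

    def __init__(self):
        self.heap = []
        self.counter = itertools.count()  # tie-break: FIFO within a class

    def enqueue(self, priority, packet):
        heapq.heappush(self.heap, (priority, next(self.counter), packet))

    def dequeue(self):
        return heapq.heappop(self.heap)[2] if self.heap else None
```

The starvation problem is visible in the structure: as long as packets with a smaller priority value keep arriving, entries with larger values never reach the top of the heap.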

Page 17

Fair Queuing

• Strategy
  – Maintain a separate queue for each flow being handled by the router
  – Individual queues are treated FIFO with tail-drop
  – Queues are handled round-robin

[Diagram: three per-flow queues (Flow 1, Flow 2, Flow 3) served round-robin onto the output link.]
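The per-flow round-robin service can be sketched as follows (illustrative Python; the class layout is invented, and for simplicity empty flows are skipped rather than removed):

```python
from collections import deque

class RoundRobinFairQueue:
    """One FIFO queue per flow; the output link is served round-robin."""

    def __init__(self):
        self.flows = {}       # flow id -> deque of packets
        self.order = deque()  # round-robin visiting order of flows

    def enqueue(self, flow_id, packet):
        if flow_id not in self.flows:
            self.flows[flow_id] = deque()
            self.order.append(flow_id)
        self.flows[flow_id].append(packet)

    def dequeue(self):
        """Visit flows in turn, skipping any with an empty queue."""
        for _ in range(len(self.order)):
            flow_id = self.order[0]
            self.order.rotate(-1)  # next call starts at the next flow
            if self.flows[flow_id]:
                return self.flows[flow_id].popleft()
        return None
```

A flow that floods its own queue only loses its own packets to tail drop; the other flows still get their turns, which is why badly-behaved hosts only hurt themselves.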

Page 18

Fair Queuing Continued

• Designed to be used with end-to-end congestion control
  – Doesn’t restrict transmission rates of end hosts
  – Badly-behaved end hosts only hurt themselves

• Details
  – Different packet sizes complicate “fairness”
  – Link is never idle (as long as there is data to send)
  – If N flows are transmitting, each gets a maximum of 1/N of the bandwidth
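The packet-size complication mentioned above is classically handled by simulating bit-by-bit round robin: each packet gets a finish time, and the router transmits the queued packet with the smallest finish time. The formulas below follow the standard fair-queuing scheme (Demers, Keshav, and Shenker); the function names are invented for this sketch.

```python
def finish_time(last_finish, virtual_clock, packet_bits):
    """Bit-by-bit round-robin finish time of a packet:
    F_i = max(F_{i-1} for this flow, virtual time at arrival) + length."""
    return max(last_finish, virtual_clock) + packet_bits

def next_to_send(head_finish_times):
    """Given {flow: finish time of its head packet}, transmit the
    packet whose bit-by-bit finish time is smallest."""
    return min(head_finish_times, key=head_finish_times.get)
```

This way a flow sending short packets is not starved by a flow sending long ones: the long packet's large finish time lets several short packets go first, approximating fairness in bits rather than in packets.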

Page 19

Congestion Avoidance Mechanisms

• Try to prevent congestion before it occurs
  – Unlike TCP, which reacts to existing congestion

• Strategy 1: Routers watch their queues
  – Routers set a bit in outgoing packets if avg. queue length > 1
  – Receiver copies bit into its ACK
  – Sender increases/decreases send window based on # of packets that report congestion
  – Called the DECbit algorithm
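The DECbit feedback loop can be sketched in two small functions. The 50% threshold, the +1 additive increase, and the 0.875 multiplicative decrease below are the constants from the published DECbit scheme; treat the exact code as an illustrative sketch rather than the lecture's specification.

```python
def mark_bit(avg_queue_length):
    """Router side: set the congestion bit on outgoing packets
    when the average queue length exceeds 1."""
    return avg_queue_length > 1

def adjust_window(window, acks_with_bit, total_acks):
    """Sender side (DECbit-style sketch): additive increase when
    fewer than half of the ACKs report congestion, multiplicative
    decrease (factor 0.875) otherwise."""
    if total_acks == 0:
        return window
    if acks_with_bit / total_acks < 0.5:
        return window + 1                    # additive increase
    return max(1, int(window * 0.875))       # multiplicative decrease
```

The asymmetry (slow increase, fast decrease) keeps the system stable: overshooting the optimal load is penalized more strongly than undershooting it.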

Page 20

Congestion Avoidance Continued

• Strategy 2: Random Early Detection (RED)
  – Router monitors queue length
  – If length > dropLevel, then drop packet with a certain probability
  – Source times out on dropped packets
  – TCP causes send window to decrease
  – Much tuning of parameters to optimize performance
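The slide describes a single dropLevel; the standard RED formulation uses two thresholds with a drop probability that rises linearly between them. The sketch below shows that common variant (simplified: real RED also tracks the average queue with an EWMA and biases the probability by the count of packets since the last drop).

```python
import random

def red_drop(avg_queue, min_th, max_th, max_p=0.1):
    """Random Early Detection drop decision (simplified sketch).
    Below min_th: never drop. At or above max_th: always drop.
    In between: drop with probability rising linearly to max_p."""
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p
```

Dropping early and randomly desynchronizes TCP senders: instead of everyone hitting a full queue and backing off at once, a few flows slow down first, which is the congestion-avoidance goal of the slide.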

Page 21

Quality of Service Issues

• Sometimes best effort is not enough
• Application requirements
  – Real time: data must arrive within certain time constraints to be useful
    • Telephony, video conferencing
  – Jitter (variation in arrival times of packets) is bad
    • Audio/visual data need low jitter
  – Packet loss: can it be tolerated or not?
    • MPEG can interpolate missing frames
    • A remote robot surgeon cannot tolerate packet loss

Page 22

Playback Buffer Example

[Plot: sequence number vs. time, with three curves: packet generation, packet arrival, and playback. The horizontal gap between arrival and playback is the buffer; the gap between generation and playback is the end-to-end delay.]
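The trade-off in the playback-buffer figure can be shown numerically: delaying the playback point absorbs jitter (more packets arrive before they are needed) at the cost of added latency. This is an illustrative sketch with made-up timings; packets are assumed generated at times 0, 1, 2, ...

```python
def playable(arrivals, playback_delay):
    """Count packets that arrive before their playback point.
    Packet i is generated at time i and must arrive by
    i + playback_delay to be played on time."""
    return sum(1 for gen, arr in enumerate(arrivals)
               if arr <= gen + playback_delay)

# Jittery arrivals: the third packet is late relative to the others.
arrivals = [0.5, 1.8, 3.5, 3.6]
```

With a playback delay of 1 time unit the late packet misses its slot; stretching the delay to 2 units lets every packet play, which is exactly the buffer-vs-delay trade shown in the figure.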

Page 23

Integrated Services (RSVP)

• Proposed in 1995-1997
• Service Classes
  – Guaranteed arrival service
    • For delay-intolerant applications
    • Guarantee a maximum delay
  – Controlled Load
    • For loss-tolerant, adaptive applications
    • Emulate a lightly loaded network

Page 24

Implementation Mechanisms

• Flowspecs
  – Describe the kind of service needed
    • “I need maximum delay of 100ms”
    • “I need to use controlled load service”

• Admission Control
  – Network decides whether it can provide the desired service

• Resource Reservation
  – Mechanism to exchange info about requests

• Packet Scheduling
  – Manage queuing and scheduling.
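The admission-control step can be sketched with the simplest possible policy: accept a flow only if its requested rate still fits alongside the rates already reserved on the link. This is an invented, parameter-based illustration; real RSVP admission control also considers delay bounds and per-class policy.

```python
class AdmissionController:
    """Rate-based admission control sketch: a new flow is admitted
    only if its requested rate fits in the remaining link capacity."""

    def __init__(self, link_capacity_mbps):
        self.capacity = link_capacity_mbps
        self.reserved = 0.0

    def request(self, rate_mbps):
        """Try to reserve capacity for a new flow."""
        if self.reserved + rate_mbps <= self.capacity:
            self.reserved += rate_mbps
            return True   # reservation installed
        return False      # flow rejected: network cannot provide the service
```

Rejecting flows up front is what lets the network honor the guarantees it has already made, in contrast to best effort, where every flow is accepted and all of them degrade together under load.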