
Policy-based QoS Management on the Internet2 QoS Backbone (QBone)

Dr Tham Chen Khong, Michael Yuan Lihua, Zhang Yi, Mayank Agarwal, Paul Ng

Dept of Electrical & Computer Engineering, National University of Singapore

SingAREN Broadband 21 Seminar

Motivation
• Set up infrastructure for inter-domain end-to-end QoS (QBone: Internet2-SingAREN-NUS), including measurement infrastructure
• Deploy high-bandwidth and time-sensitive applications like video streaming and VoIP
• Develop and deploy QoS management functionality
  • Resource allocation in the local domain coordinated by Bandwidth Brokers
  • Policy-based QoS management
  • Accounting

Overview
• Need for Quality of Service (QoS)
• Differentiated Services (DiffServ)
• Internet2 QoS Backbone (QBone)
• QBone Measurement Architecture
• Inter-Domain QoS Management
  • Service Level Agreements (SLAs)
  • Bandwidth Broker
• NUS (CCN-CIR)-SingAREN-QBone project with iCAIR, NWU/IBM
• Policy-based QoS Management
• Reservations using Bandwidth Brokers and signalling
• Future Work

Need for Quality of Service
• Applications that need QoS
  • multimedia, real-time control, distributed interactive simulation (DIS), distributed computing, real-time information transmission
• Common QoS parameters
  • Throughput
  • Delay
  • Delay variation
  • Loss rate

Differentiated Services
• Goal: provide preferential service without increasing overhead in core routers
• Scalable service differentiation in the Internet: aggregate flows in the core of the network
• Each core and edge router has a set of pre-defined Per-Hop Behaviors (PHBs)
  • Expedited Forwarding (EF) / Virtual Leased Line (VLL)
  • Assured Forwarding (AF)
  • [mechanisms: PQ, WFQ, CBQ, CB-WFQ, WRR]
  • Best Effort (BE)

DiffServ Architecture
[Diagram: a source and destination host connected across DiffServ domains. Bandwidth Brokers (BBs) perform admission control, manage network resources, and configure leaf and edge devices. The leaf router polices and marks flows, the ingress router classifies, marks and polices, and the egress router shapes aggregates.]

DiffServ Architecture
• Traffic classification and conditioning
  • Classifiers
  • Traffic conditioners
    • Meter, Marker, Shaper/Dropper
• Location of traffic conditioners
  • Leaf, ingress, egress nodes
[Diagram: packets pass through a Classifier, then a Marker and Shaper/Dropper, with a Meter feeding the marking decision.]
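The meter element in this chain is typically a token bucket. Below is a minimal sketch of one (the rate and burst values are illustrative, not taken from the deck); the classifier feeds it packets and the marker or dropper acts on its verdict.

```python
class TokenBucket:
    """Token bucket meter: sustained rate `rate` (bytes/s), depth `burst` (bytes).
    A packet conforms if enough tokens have accumulated to cover its length."""
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst   # start with a full bucket
        self.last = 0.0       # virtual clock, in seconds

    def conforms(self, pkt_len, now):
        # Accumulate tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len   # in-profile: the marker sets the DSCP
            return True
        return False                 # out-of-profile: shape, drop or remark

# 1 Mbps (125 kB/s) with a 4 kB burst allowance:
tb = TokenBucket(rate=125_000, burst=4_000)
print([tb.conforms(1500, 0.0) for _ in range(3)])  # [True, True, False]
```

The third back-to-back 1500-byte packet fails because the 4 kB burst depth is exhausted; once time advances, tokens refill at the sustained rate.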

Differentiated Services
• Data packets are classified and marked at the leaf router by having their DS Code Point (DSCP) set to indicate the required PHB
• Packet classification, marking, policing and shaping need only be implemented at the network boundary or hosts
• Can have multiple DS domains, with inter-domain negotiation conducted by Bandwidth Brokers

DiffServ terminology
• Per-Hop Behavior (PHB)
  • PHB + policy rules = range of services
• Codepoint (in header of IP packet)
  • Codepoint-->PHB mapping (e.g. 101110 for EF)
• Service Level Agreement (SLA)
  • service contract between customer and service provider that specifies the forwarding service a customer should receive; may include traffic conditioning rules
• Traffic Conditioning Agreement (TCA)
  • agreement specifying classifier rules and any corresponding traffic profiles and metering, marking, discarding and shaping rules which apply to traffic streams selected by the classifier

DS field layout (bits 0-7 of the former IPv4 TOS byte): bits 0-5 = DSCP, bits 6-7 = CU (currently unused)
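The codepoint-to-wire mapping can be checked directly: since the 6-bit DSCP occupies the upper bits of the old TOS byte, EF's 101110 appears on the wire as TOS byte 0xB8. A small sketch of marking at a host via the standard `IP_TOS` socket option:

```python
import socket

EF_DSCP = 0b101110            # EF codepoint, as on the slide
tos_byte = EF_DSCP << 2       # DSCP in bits 0-5; the two CU bits are zero
assert tos_byte == 0xB8

# Mark all datagrams sent on this socket as EF.  Note the network edge
# may still re-mark or police this traffic, per the DiffServ architecture.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
```

This is host-side marking only; in the deployment described here the leaf router performs (and enforces) the marking.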

Bandwidth Brokers
• Manage the QoS resources within a given domain
  • Perform admission control and provisioning / manage domain resources
  • Configure leaf and edge routers
• Manage the inter-domain communication with adjacent regions' BBs
  • Enforcement of Service Level Agreements (SLAs)
• Communicate with the policy manager and perform a subset of policy management

[Diagram: end users send Resource Allocation Requests (RARs) to BB1 in AS1. BB1 holds Service Level Descriptions (SLDs) and performs inter-domain communication with BB2 (AS2) and BB3 (AS3), and intra-domain communication with the edge routers.]

Internet2 – vBNS (Backbone)
Internet2: a university-led effort involving academia, government & industry
Note: GigaPOPs connected by a high-speed ATM backbone

UCAID Abilene POS Network
The new preferred backbone network, with OC-48 (2.4 Gbps) trunks
Real-time measurements on Abilene
Note: vBNS and SingAREN peer at STAR TAP; QBone peering is on, with 1 Mbps CBR initially. Abilene and SingAREN peer at Indianapolis; we will soon have Abilene-SingAREN QBone peering.

Internet2 QBone Initiative
• Build inter-domain testbed infrastructure
  • Experiment with and improve understanding of DiffServ
  • Incrementally improve the testbed
  • Support intra-domain & inter-domain deployment
• Lead and follow IETF standards work
  • Some parts of the DiffServ architecture are mature; others far from it
  • Experience gained will inform the standards process
• Openness of the R&E community makes it easier
  • Users will tolerate the flakiness of an experimental infrastructure
  • Engineers will share experience and measurement data

QBone Architecture (High Level)
• IETF "Diff" (EF PHB) + QBone "Serv"
• QBone Premium Service (QPS)
  • Uses Van Jacobson's VLL "Premium" service
  • Well-defined SLS:
    • Peak rate R & "service MTU" M, implying a token bucket meter
    • Near-zero loss
    • Low jitter: delay variation due to queuing effects should be no greater than the packet transmission time of a service-MTU-sized packet (provided the reserved interdomain route does not flap)
• QPS has been implemented in the core of Abilene and ESnet
• Plus important value-adds:
  • Integrated measurement/dissemination infrastructure
  • Experimentation with pre-standards inter-domain bandwidth brokering and signaling
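The QPS jitter bound follows directly from the SLS: delay variation is capped by the serialization time of one service-MTU-sized packet at the reserved peak rate. A one-line calculation makes the bound concrete:

```python
def qps_jitter_bound(service_mtu_bytes, peak_rate_bps):
    """QPS delay-variation bound: the transmission time of one
    service-MTU-sized packet at the reserved peak rate."""
    return service_mtu_bytes * 8 / peak_rate_bps

# A 1500-byte service MTU on a 1 Mbps QPS reservation bounds jitter at 12 ms;
# the same MTU at 1.5 Mbps (the initial CBR pipe) gives 8 ms.
print(qps_jitter_bound(1500, 1_000_000))   # 0.012
print(qps_jitter_bound(1500, 1_500_000))   # 0.008
```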

QBone B/W Broker architecture
• Based on Simple Interdomain Bandwidth Broker Signaling (SIBBS)
  • an end-to-end signaling architecture for QBone
  • a product of the Bandwidth Broker Advisory Council (BBAC)
• Defines the BB model to be used in the QBone
• Recommends deployment phasing (Phase 0, 1, 2)
• Specifies a common inter-domain protocol

Reservations
• Reservation setup protocol
  • Reservation Request {source, dest, route, startTime, endTime, peakRate, MTU, jitter}
• Now: long-lived manual setup
• Proposed: Simple Interdomain Bandwidth Broker Signaling (SIBBS) protocol between QBone domains, RSVP end-to-end between hosts

Inter-domain Communication (SIBBS)
• Simple Inter-domain Bandwidth Broker Signalling
• TCP is the transport
• Fundamental messages:
  • Resource Allocation Request (RAR)
  • Resource Allocation Answer (RAA)
• Simple request-response protocol
• Requires some basic authentication
• Supports setup, modification, takedown
• Intended to be flexible and extensible

Draft SIBBS RAR/RAA fields:
• SIBBS Version, RAR ID
• Sender ID, Sender Signature
• Source Prefix, Destination Prefix
• Ingress Router ID
• Start Time, Stop Time, Flags
• GWS ID, Service Parameter TLV
• Other (optional) TLVs
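As a data structure, a RAR carrying these fields might look like the sketch below. The field names follow the slide, but the TLV wire format (2-byte type, 2-byte length, then the value) is an illustrative assumption, not taken from the SIBBS draft.

```python
from dataclasses import dataclass, field
import struct

@dataclass
class SibbsRar:
    version: int
    rar_id: int
    sender_id: int
    src_prefix: str          # e.g. "202.8.94.168/29"
    dst_prefix: str
    ingress_router_id: str
    start_time: int          # epoch seconds
    stop_time: int
    flags: int = 0
    gws_id: int = 0
    # (type, value_bytes) pairs; the first is the Service Parameter TLV
    tlvs: list = field(default_factory=list)

    def encode_tlvs(self) -> bytes:
        """Pack TLVs as 2-byte type, 2-byte length, then the value bytes."""
        return b"".join(struct.pack("!HH", t, len(v)) + v for t, v in self.tlvs)

rar = SibbsRar(1, 42, 7, "202.8.94.168/29", "202.8.94.144/29",
               "202.8.94.161", 1012435200, 1015113600,
               tlvs=[(1, struct.pack("!I", 1_000_000))])   # peak rate in bps
assert rar.encode_tlvs() == b"\x00\x01\x00\x04\x00\x0f\x42\x40"
```

The TLV tail is what makes the message extensible: new service parameters can be added without changing the fixed header.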

Sample Setup
[Diagram: three BBs, one per domain, above a chain of routers (RTR) between two end systems (ES). RARs propagate hop-by-hop from the first BB to the last; RAAs return along the reverse path. Reservations run from end system to end system, e.g. for microflows.]

Policy-based Networking
[Diagram: a Policy Server sits between the information stores and the network. It reads policy-related data from a Directory Server via a directory access protocol, consults other (unspecified) policy-relevant information servers (e.g. time, certificates), talks to hosts via a QoSS protocol, and to the leaf device (access router) via a policy transaction protocol.]

[Diagram, detailed policy management plane: the Directory Server is an LDAP server holding policy-related data, accessed via LDAPv3. The Policy Server comprises an LDAP client, a Policy Interpreter and a PT server, and speaks a policy transaction protocol (e.g. COPS, DIAMETER) to the PT client in the access device. The access device also contains a Policy Enforcer and a QoSS server reached from the host's QoSS client; a signalling protocol (e.g. RSVP) and other access protocols complete the picture. The host runs a policy-controlled application.]

PBNM research
• Traffic Engineering for Quality of Service in the Internet, at Large Scale (TEQUILA) project (Europe) (www.ist-tequila.org)
  • Objectives: study, specify, implement and validate a set of service definition and traffic engineering tools to obtain quantitative end-to-end QoS guarantees through careful planning, dimensioning and dynamic control of scalable and simple qualitative traffic management techniques within the Internet (i.e. DiffServ)
• Commercial offerings: Orchestream, Allot, Cisco etc.

Policy Based Network Management
[Diagram: a Java-based GUI Policy Manager writes high-level policy and PIB entries over SQL into a MySQL policy database. The Policy Server reads the database over SQL and pushes policy to the edge routers via COPS; the Bandwidth Broker sends it RARs.]

DiffServ QoS PIB
• A Traffic Control Block (TCB) in the DiffServ interface consists of zero or more classifiers, meters, actions, algorithmic droppers, queues and schedulers
• The PIB models the individual elements that make up the TCBs
• Designed for functional abstractions rather than device implementations (very flexible, but hard to map directly to a proprietary device)
(draft-ietf-diffserv-pib-06.txt, March 2002)
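A TCB in this abstraction is simply a chain of functional elements. The toy sketch below wires a classifier, a meter and an action/dropper into one data path; the port 3288 selector and EF marking mirror the sample policy shown later in the deck, while the meter callback stands in for a token-bucket element.

```python
EF = 0b101110

def mark_ef(pkt):
    """Action element: set the EF codepoint on an in-profile packet."""
    pkt["dscp"] = EF
    return pkt

def tcb(pkt, meter_conforms):
    """One PIB data path: classifier -> meter -> action / algorithmic dropper."""
    if pkt.get("dport") != 3288:      # classifier: select the video flow
        return pkt                    # everything else passes untouched
    if meter_conforms(pkt):           # meter element (e.g. a token bucket)
        return mark_ef(pkt)           # action: mark in-profile packets EF
    return None                      # algorithmic dropper: discard the excess

print(tcb({"dport": 3288, "len": 1500}, lambda p: True))
```

The PIB's tables (next slide) describe exactly such elements and the links between them, which is why they compose freely but do not map one-to-one onto any particular router's hardware.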

QoS DiffServ PIB
• Data Path Table
• Classifier Table
• Meter Table
• Action Table
• Queue Table
• Scheduler Table
• Algorithmic Dropper Table
• Classification Capabilities Table
• Metering Capabilities Table
• Scheduling Capabilities Table
• Element Depth Capabilities Table
• Element Link Capabilities Table

Policy Manager
• Defines the policy condition and action: high-level and PIB
• Controls the deployment of a policy on an interface or a group of interfaces
• Adds, views and deletes the users, servers, applications, services, devices and their groups

Policy Client
• Communicates with the Policy Server using the COPS protocol, e.g. issues policy requests (REQ)
• Receives policy from the Policy Server and configures the network element, e.g. an edge router
• Usually deployed within the network element

Policy Server
• Communicates with the Policy Client to service policy requests
• Executes an SQL query to retrieve the policy from the policy repository

Policy Client for BB
• Accepts the Resource Allocation Request (RAR) from the BB in the same domain
• Contacts the Policy Server, which checks the request against the policy in the database
• If the request conforms to the policy, replies to the BB with an accept message; otherwise sends a reject message

COPS Message Types
• Request (REQ): PEP → PDP
• Decision (DEC): PDP → PEP
• Report State (RPT): PEP → PDP
• Delete Request State (DRQ): PEP → PDP
• Synchronize State Request (SSQ): PDP → PEP
• Client Open (OPN): PEP → PDP
• Client Accept (CAT): PDP → PEP
• Client Close (CC): PDP ↔ PEP
• Keep Alive (KA): PDP ↔ PEP
• Synchronize Complete (SSC): PEP → PDP

Common Open Policy Service (COPS)
• COPS policy client models
  • Outsourcing (RSVP)
  • Provisioning (DiffServ)

The COPS common header:

   0              1              2              3
  +--------------+--------------+--------------+--------------+
  |Version| Flags|   Op Code    |         Client-type         |
  +--------------+--------------+--------------+--------------+
  |                       Message Length                      |
  +--------------+--------------+--------------+--------------+

Op Code = <Request>:
  <Request> ::= <Common Header> <Client Handle>
                <Context = config request> *(<Named ClientSI>) [<Integrity>]

Op Code = <Decision>:
  <Decision Message> ::= <Common Header> <Client Handle>
                         *(<Decision>) | <Error> [<Integrity>]
  <Decision> ::= <Context> <Decision: Flags>
                 [<Named Decision Data: Provisioning>]

(also, SIBBS)
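The common header packs the 4-bit version and flags into one byte, followed by the op code, client-type and a length covering the whole message (RFC 2748). A sketch of building it; the client-type value below is illustrative, not a registered assignment:

```python
import struct

# Op codes from RFC 2748
OP = {"REQ": 1, "DEC": 2, "RPT": 3, "DRQ": 4, "SSQ": 5,
      "OPN": 6, "CAT": 7, "CC": 8, "KA": 9, "SSC": 10}

def cops_header(op, client_type, body_len, version=1, flags=0):
    """8-byte COPS common header: Version|Flags share the first byte;
    Message Length counts the header plus the object body."""
    return struct.pack("!BBHI", (version << 4) | flags,
                       OP[op], client_type, 8 + body_len)

hdr = cops_header("REQ", client_type=2, body_len=24)
assert hdr == bytes([0x10, 0x01, 0x00, 0x02, 0x00, 0x00, 0x00, 0x20])
```

Everything after the header is a sequence of self-describing COPS objects, which is what keeps the protocol extensible for both the outsourcing and provisioning client models.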

Program Interaction (Start Up)
[Diagram, steps 0-8: at start-up the Policy Manager writes configuration (PolicyCfg) into the policy repository / configuration files. prgPolicyServer loads policy via SQLQuery; prgPolicyClient opens a COPS session with the Policy Server, sends a COPS REQ on behalf of the network element it manages, and installs the returned COPS DEC decisions on that network element.]

A Sample Policy for Bandwidth Broker

A high-level policy:
IF the traffic from the CCN video server to the CIR video client, requesting resources from the ingress router in CCN, is marked EF, runs from 28/01/2002 to 28/02/2002, and does not exceed the peak rate of 1 Mbps, THEN it is accepted. IF it exceeds the peak rate, THEN it is rejected.

Policy data in the policy database:
• Bandwidth Broker Address: 202.8.94.186
• Source Address: 202.8.94.169/29
• Destination Address: 202.8.94.149/29
• Ingress Router Address: 202.8.94.161/30
• Peak Rate: 1 Mbps
• Class: EF
• Start Time: 28/01/2002
• Stop Time: 28/02/2002
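The Policy Server's check of an RAR against this entry amounts to a conjunction of matches. A sketch: the addresses and limits are the slide's, while the function and dictionary names are ours for illustration.

```python
from ipaddress import ip_address, ip_network

# The sample policy entry, as stored in the policy database.
POLICY = {
    "src": ip_network("202.8.94.169/29", strict=False),
    "dst": ip_network("202.8.94.149/29", strict=False),
    "cls": "EF",
    "peak_bps": 1_000_000,
    "start": "2002-01-28",
    "stop": "2002-02-28",
}

def check_rar(src, dst, cls, peak_bps, date):
    """Accept iff the request conforms to the stored policy.
    ISO dates compare lexicographically, so string comparison suffices."""
    ok = (ip_address(src) in POLICY["src"]
          and ip_address(dst) in POLICY["dst"]
          and cls == POLICY["cls"]
          and peak_bps <= POLICY["peak_bps"]
          and POLICY["start"] <= date <= POLICY["stop"])
    return "accept" if ok else "reject"

print(check_rar("202.8.94.169", "202.8.94.149", "EF", 1_000_000, "2002-02-01"))
```

In the real system this answer travels back to the BB as the accept/reject message described on the "Policy Client for BB" slide.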

Policy-based Management System

Policy [policy_11][general][in][clfr_11]:
[clfr_11][clfr_11]:
[clfrem111][clfr_11][1][meter_11][filter_11]:
[filter_11][Internet][196.168.1.2][0.0.0.255][Internet][202.8.94.139][0.0.0.255][0][tcp][3288][3288][0][32767][Enabled]:
[meter_11][action_11s][0.0][tbmeter_11]:
[tbmeter_11][tokenBucket][200][4000][50000]:
[action_11s][algdrop_11][marker_11][2]:
[marker_11][ef]:
[algdrop_11][wred][queue_11][11][0][rddrop_11]:
[queue_11][schdlr_11][schdpm_11]:
[schdpm_11][4][200][0][500][0]:
[schdlr_11][][4][0.0]:
[rddrop_11][500][0][1000][0][10][10][10]:

Input interface configuration (Expect script):

set timeout 20
spawn telnet 192.168.1.4
expect "Username:"
send "zhangyi\r"
expect "Password:"
send "*****\r"
expect "ccn-4700#"
send "configure terminal\r"
expect "ccn-4700(config)#"
send "access-list 111 permit tcp 202.8.94.139 0.0.0.255 196.168.1.2 0.0.0.255 eq 3288\r"
expect "ccn-4700(config)#"
send "interface ethernet2\r"
expect "ccn-4700(config-if)#"
send "ip access-group 111 in\r"
expect "ccn-4700(config-if)#"
send "exit\r"
expect "ccn-4700(config)#"
send "class-map clfr_11\r"
expect "ccn-4700(config-cmap)#"
send "match access-group 111\r"
expect "ccn-4700(config-cmap)#"
send "exit\r"
expect "ccn-4700(config)#"
send "policy-map meter_11\r"
expect "ccn-4700(config-pmap)#"
send "class clfr_11\r"
expect "ccn-4700(config-pmap-c)#"
send "police 200 4000 4000 conform-action set-dscp-transmit ef exceed-action drop\r"
expect "ccn-4700(config-pmap-c)#"
send "exit\r"
expect "ccn-4700(config-pmap)#"
send "exit\r"
expect "ccn-4700(config)#"
send "interface ethernet2\r"
expect "ccn-4700(config-if)#"
send "service-policy input meter_11\r"
expect "ccn-4700(config-if)#"
send "exit\r"
expect "ccn-4700(config)#"

Output interface configuration (Expect script):

set timeout 20
spawn telnet 192.168.1.4
expect "Username:"
send "zhangyi\r"
expect "Password:"
send "*****\r"
expect "ccn-4700#"
send "configure terminal\r"
expect "ccn-4700(config)#"
send "class-map action_11s\r"
expect "ccn-4700(config-cmap)#"
send "match ip dscp ef\r"
expect "ccn-4700(config-cmap)#"
send "exit\r"
expect "ccn-4700(config)#"
send "policy-map schdlr_11\r"
expect "ccn-4700(config-pmap)#"
send "class action_11s\r"
expect "ccn-4700(config-pmap-c)#"
send "random-detect 10\r"
expect "ccn-4700(config-pmap-c)#"
send "exit\r"
expect "ccn-4700(config-pmap)#"
send "exit\r"
expect "ccn-4700(config)#"
send "interface ethernet1\r"
expect "ccn-4700(config-if)#"
send "service-policy output schdlr_11\r"
expect "ccn-4700(config-if)#"
send "exit\r"
expect "ccn-4700(config)#"

Bandwidth Broker: A Resource Reservation System for DiffServ

Bandwidth Broker – Design
• One BB per DiffServ domain
• The BB manages resources in its own domain
• Negotiates with up- and down-stream BBs to establish the end-to-end path

Connection State Machine
[Diagram: from Idle, self-initialization gets the list of all peers. bb_start_peer starts a TCP session and moves to Connect; bb_connect_success moves to Established, bb_connect_fail back to Idle; an incoming bb_accept_peer also leads to Established (or is rejected); bb_stop_peer returns to Idle from either state. Keep-Alives maintain a long-lived TCP connection with up- and down-stream brokers (peers).]

Reservation State Machine
• One RSM instance per RAR
• Supports both Immediate Reservation and Advance Reservation
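The connection state machine above can be written down as a transition table; the state and event names follow the slide's labels, and in this sketch unknown events simply leave the state unchanged.

```python
from enum import Enum, auto

class ConnState(Enum):
    IDLE = auto()
    CONNECT = auto()
    ESTABLISHED = auto()

# Transitions reconstructed from the slide's connection state machine.
TRANSITIONS = {
    (ConnState.IDLE,        "bb_start_peer"):      ConnState.CONNECT,
    (ConnState.IDLE,        "bb_accept_peer"):     ConnState.ESTABLISHED,
    (ConnState.CONNECT,     "bb_connect_success"): ConnState.ESTABLISHED,
    (ConnState.CONNECT,     "bb_connect_fail"):    ConnState.IDLE,
    (ConnState.CONNECT,     "bb_stop_peer"):       ConnState.IDLE,
    (ConnState.ESTABLISHED, "bb_stop_peer"):       ConnState.IDLE,
}

def step(state, event):
    """Apply one event; events with no entry leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# An outbound peering session: start, succeed, then tear down.
s = ConnState.IDLE
for ev in ("bb_start_peer", "bb_connect_success", "bb_stop_peer"):
    s = step(s, ev)
assert s is ConnState.IDLE
```

The Keep-Alive exchange shown on the slide would run inside the Established state without changing it.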

Provisioning of QPS Bandwidth at Edge Routers
[Diagram: each QPS flow passes through its own U32 classifier, EF marker and EF meter/dropper chain; non-conforming traffic and other EF traffic go to a dropper.]

NUS (CCN/CIR)-SingAREN-QBone project
• Partners: iCAIR USA, Japan, Korea (APAN)
• Applications: video streaming; later: VoIP
• Objectives
  • Connect to QBone (initially, 1.5 Mbps CBR between STAR TAP and SingAREN QBone routers)
  • Set up measurement architecture (Surveyor, RTFM)
  • Deploy real-time applications: video and VoIP
  • Develop and deploy QoS management & accounting system: BB, policy-based, AAA

[Network map: the NUS-SingAREN-QBone IP-level QoS-enabled network. ATM switches and routers (Cisco 7505/7507, 12000, 4700, 3Com) at NUS (CCN EE, NUS CC PROD/RES, CIR, NTRC), NTU, KRDL, 1-NET, SingAREN (STC, HQ) and SingTel interconnect the campus networks and the ATM & Fast/Gigabit Ethernet testbeds, mostly over 155 Mbps links with a 622 Mbps SingAREN local segment. International links (2, 14 and 34 Mbps) run via STAR TAP, the L.A. transit ATM, APAN Tokyo and TRANSPAC to vBNS & Abilene (USA), Ca*Net II (Canada) and the Japan research network, alongside the Singapore ONE network and SPRINGi connections.]

Digital Video Streaming: NUS/SingAREN + iCAIR (NWU) & IBM
• Based on IBM VideoCharger
  • Scalable and easy to use
  • Integrated into applications
  • Streaming and interactive
  • Real-time and asynchronous (stored)
  • Unicast and native multicast
  • Single source to multi-source
  • Resolutions up to HDTV
• Joint work looks at network QoS + server adaptation according to QoS

QBone Connectivity -- Physical
[Diagram: the DS domain in KRDL connects through the KRDL ATM switch, the SingAREN AOS switch and the STC 7507 to a 1 Mbps PVC to STAR TAP (Chicago) and Internet2 (QBone). On campus, the NUSCC FORE switch, CIR 7505, Fore ASX-200BX, LS1010 and the Linux routers Humper and Dumper link the hosts Clark and CEO; the CCN FORE switch, CCN 4700 and CCN edge router reach the video server. Links are ATM OC-3 and Ethernet.]

QBone Connectivity – IP-level testbed
[Diagram: two DS domains across the NUS, SingAREN/NUS and SingAREN segments. In DS domain CiR, the CIR 7505 fronts Internet2/QBone (192.5.172.69-70/30), and the Linux routers Humper and Dumper act as core and edge routers, serving the hosts Clark (202.8.94.185), CEO (202.8.94.186) and 202.8.94.187 on the 202.8.94.165-166/29 segment. In DS domain CCN, the CCN 4700 edge router and the Linux leaf router ccnedge serve the video stream server. Interconnecting /30 links use addresses in 202.8.94.129-130, .145-146, .153-154, .161-162 and .249-250, with the STC 7507 in the SingAREN segment.]

CCN/CIR Video testbed
[Diagram: the VideoCharger server streams over the 14 Mbps or 1 Mbps QBone path, 9,000 miles end-to-end, with CB-WFQ at the routers. Note: not traversing Abilene or vBNS.]

DiffServ over MPLS
• Multi-Protocol Label Switching (MPLS) domain consisting of Cisco and Linux routers
• Differentiated Services over MPLS via EXP-Inferred-PSC Label Switched Paths (E-LSPs)
• SNMP probes in the Linux routers provide MPLS-specific information, which is then graphed using MRTG

Measurements
• Cisco NetFlow export from the Cisco routers
• Capture and analyse using Splintered flow-tools (used also by the Abilene NOC)

Future Work
• Design and deployment of traffic-aware policies, including automatic discovery of good policies
• Advance reservations using a Probing Request mechanism
• Traffic Engineering over MPLS
  • Provisioning
  • LSP selection
• Wireless access network, multicast QoS
• Server QoS
• QoS in Grid and cluster computing


Proposed Clearing House Architecture
• Introduce a logical hierarchy
• Distributed database
  • reservation status, % link utilization, optimum path
[Diagram: source and destination edge routers connect through ISP1 ... ISPn, each with a Local Clearing House (LCH); the LCHs report to first-level clearing houses (CH1), which report to CH2.]
Note: make advance reservations based on traffic prediction to reduce latency


Conclusion
• QBone as a testbed to:
  • implement QoS services and the applications which use these services
  • experiment with advanced inter-domain signalling, measurement and QoS management
• We welcome collaboration with other groups working on QoS, measurements and policy-based management
• QoS in the Internet has to be deployed and not remain a research curiosity

Acknowledgements
• Funding
  • SingAREN/A*STAR Broadband 21 Grant
• NUS QBone project team members
  • A/Prof A. L. Ananda (CIR/SoC)
  • Michael Yuan (CIR/ECE)
  • Zhang Yi, Mayank Agarwal, Paul Ng, He You Ming, Kong Kean Fong, Tan Eng Theng (ECE)
• NUS Computer Centre staff
  • Roland Yeo
• SingAREN staff
  • Jonathan Ong
  • Pua Chin Kok

Bandwidth Broker and Policy-Based Management
Demo by Michael Yuan Lihua, ECE/CIR

Bandwidth Broker
• What is it?
  • An agent-based resource reservation system
    • Receiver-based – RSVP
    • Sender-based – ST-II
• Advantages:
  • Signaling only between BBs at the control plane
  • Reservation can be initiated by anybody (sender, receiver, even a third party)
  • Per-domain reservation – scalability

Bandwidth Broker: The nodal model

Bandwidth Broker: The Implementation
• Cisco IOS-like TTY user interface
  • Most network admins are familiar with it
• Apply changes on-the-fly
• A single configuration file for all the modules
• A GUI under development
• Immediate Reservation (IR)
  • Reserve and use
  • Release the reservation when finished
• Book-Ahead (BA)
  • Reserve in advance
  • Necessary if resources are limited
  • Resource usage/reservation information needs to be kept over a long time period
  • Implemented using an AVL tree
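The Book-Ahead admission test sweeps the stored reservation profile and rejects any request whose addition would push the load above capacity anywhere in its interval. The broker keeps this profile in an AVL tree for efficiency; the sketch below uses a plain sorted event list to show the same test.

```python
class BookAhead:
    """Advance-reservation admission test over a capacity profile.
    Reservations are stored as (time, +rate) / (time, -rate) deltas."""
    def __init__(self, capacity_bps):
        self.capacity = capacity_bps
        self.events = []   # sorted (time, delta) pairs

    def admit(self, start, stop, rate):
        # Sweep: would adding `rate` over [start, stop) exceed capacity?
        load = 0
        for t, delta in sorted(self.events + [(start, rate), (stop, -rate)]):
            load += delta
            if start <= t < stop and load > self.capacity:
                return False          # reject: profile would overflow
        self.events += [(start, rate), (stop, -rate)]
        self.events.sort()
        return True                   # accept and record the reservation

ba = BookAhead(capacity_bps=1_000_000)
print(ba.admit(0, 10, 600_000))   # True
print(ba.admit(5, 15, 600_000))   # False: overlap would need 1.2 Mbps
print(ba.admit(10, 20, 600_000))  # True: starts when the first one ends
```

Each `admit` here is linear in the number of stored deltas; the AVL-tree representation mentioned above keeps the long-lived profile balanced so lookups and updates stay logarithmic.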

The Demo: Testbed setup

The Demo: BB-based deployment
• Best-Effort is not enough
• A DiffServ network with admission control
  • No reservation? Go away. Or: ask your BB to talk to me (RAR)
  • Made a reservation? You are most welcome

The Demo: Policy-based Deployment
• Different policy for different users/user groups
  • Nobody? Poor guy: "I will try my best to help, but ..."
  • VIP? Premium Service
