Bandwidth Management for Supporting Differentiated-Service-Aware Traffic Engineering

Tong Shan, Member, IEEE, and Oliver W.W. Yang, Senior Member, IEEE

Abstract: This paper presents a bandwidth management framework for the support of Differentiated-Service-aware Traffic Engineering (DS-TE) in multiprotocol label switching (MPLS) networks. Our bandwidth management framework contains both bandwidth allocation and preemption mechanisms in which the link bandwidth is managed in two dimensions: class type (CT) and preemption priority. We put forward a Max-Min bandwidth constraint model in which we propose a novel "use it or lend it" strategy. The new model is able to guarantee a minimum bandwidth for each CT without causing resource fragmentation. Furthermore, we design three new bandwidth preemption algorithms for three bandwidth constraint models, respectively. An extensive simulation study is carried out to evaluate the effectiveness of the bandwidth constraint models and preemption algorithms. When compared with the existing constraint models and preemption rules, the proposed Max-Min constraint model and preemption algorithms improve not only bandwidth efficiency, but also robustness and fairness. They achieve significant performance improvement for the well-behaving traffic classes in terms of bandwidth utilization and bandwidth blocking and preemption probability. We also provide guidelines for selecting different DS-TE bandwidth management mechanisms.

Index Terms: Resource management, admission control, differentiated service, traffic engineering.

    1 INTRODUCTION

As an important application of multiprotocol label switching (MPLS), Traffic Engineering (TE) has attracted much attention for its ability to achieve end-to-end quality of service (QoS) [4], [5]. In the classical TE mechanism, bandwidth is managed on an aggregate basis. All packets toward the same destination are routed collectively according to the single constraint of available link bandwidth and will follow the same Label Switched Path (LSP).

Since classical TE operates without referring to different classes of services, it may not be optimal in a Differentiated Service (DiffServ) environment [1], [2]. To address this issue, the Internet Engineering Task Force (IETF) proposed the DiffServ-aware TE (DS-TE), which performs TE at a per-class level [6]. Traffic flows toward a given destination can be transported on separate LSPs on the basis of service classes and may follow different paths.

In the DS-TE solution, the IETF proposed to advertise the available bandwidth on a per-class-type (CT) basis to improve the scalability of link-state advertisement, with a CT being a group of traffic trunks crossing a link that is governed by a specific set of bandwidth constraints [6]. The TE class is introduced in [6] as a pair of a CT and a preemption priority allowed for that CT.

DS-TE brings the following benefits over classical TE. First, by enforcing different bandwidth commitments for different service classes, DS-TE ensures that the amount of traffic of each class routed over a link matches the configuration of the link scheduler so that the appropriate QoS is provided to different service classes. Second, DS-TE simplifies the configuration of link schedulers and avoids frequent adaptive adjustment of the scheduling parameters. Third, by balancing the traffic load of different service classes on links, DS-TE prevents performance interference between the real-time and non-real-time traffic classes.

In general, DS-TE consists of two major functions: bandwidth management and route computation. Although many studies have been conducted on TE, most of them focused on the route selection algorithms [19], [20], [21], and little effort has been put into DS-TE bandwidth management techniques. Obviously, when TE is performed in a DiffServ scenario, a bandwidth manager is indispensable for every network node to enforce different bandwidth constraints for each class, to perform separate admission controls for each class, and to flood the network with bandwidth availability information on a per-CT level. Designing a bandwidth manager for the support of TE in the DiffServ environment is what we focus on in this paper.

There are more requirements for DS-TE bandwidth management than for conventional bandwidth management. First, the DS-TE bandwidth management must be able to enforce different bandwidth constraints for different CTs. Second, it must be able to process bandwidth requests with at least three parameters: the requested bandwidth, the CT, and the priority. Third, it is desirable to support bandwidth preemption [1], [6].

Bandwidth preemption is becoming more useful in the DiffServ environment, where it can be used to assure high-priority traffic trunks with relatively favorable paths and reliable resources. So far, the IETF only proposed the basic rules [4], [5], in which each connection is assigned one holding priority for keeping resources and one setup priority for taking resources. A connection with a high setup priority is allowed to preempt a connection with a lower holding priority. For a system to be stable, the holding priority of a connection must be higher (numerically smaller) than its setup priority.

Previous works on preemption [15], [16], [17], [18] focused on connection preemption selection, that is, selecting an appropriate set of connections to be preempted. In these works, each connection is essentially associated with two parameters: bandwidth and priority. Without considering the CT together with the priority, their algorithms are not feasible for a full support of DS-TE. Besides, their algorithms did not provide preemption fairness among CTs, nor did they protect the normal-load or underloaded CTs from being overwhelmed by the overloaded CTs.

T. Shan is with Nortel Networks, Ottawa, Ontario, Canada K2H 8E9. E-mail: [email protected].

O.W.W. Yang is with the School of Information Technology and Engineering, University of Ottawa, Ottawa, Ontario, Canada K1N 6N5. E-mail: [email protected].

Manuscript received 23 June 2005; revised 13 Mar. 2006; accepted 12 Oct. 2006; published online 9 Jan. 2007. Recommended for acceptance by S. Das. For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference IEEECS Log Number TPDS-0303-0605. Digital Object Identifier no. 10.1109/TPDS.2007.1052. 1045-9219/07/$25.00 2007 IEEE. Published by the IEEE Computer Society.

1320 IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 18, NO. 9, SEPTEMBER 2007

Among the previous works, [6], [7], [8], [9], [25] are the most closely related to this paper, as they deal with the DS-TE bandwidth management. The Maximum Allocation Model (MAM) [7], the Russian Doll Model (RDM) [8], and the Maximum Allocation with Reservation (MAR) [25] are three IETF-proposed bandwidth constraint models for supporting DS-TE. The performances of MAM, RDM, and MAR have been evaluated in [9], [25], but the results are very preliminary. First, each CT is associated with a different priority. Such a one-to-one mapping implies that only inter-CT preemption was studied. Second, only the blocking/delay and blocking/preemption probabilities were evaluated, without the bandwidth efficiency being discussed. Bandwidth preemption was recognized as an important piece of DS-TE bandwidth management, but no preemption strategy was proposed. In summary, none of these previous works proposed a comprehensive solution to the DS-TE bandwidth management, with each focused on one or two aspects of the issue. To the best of our knowledge, this work is the most complete solution on the DS-TE bandwidth management.

The Virtual Partitioning (VP) in [11], [12], [13] is a well-known resource management scheme, which provides a fair, robust, and efficient way of resource sharing. At first glance, the Max-Min constraint model proposed in Section 2.3 might look similar to the VP, but it is actually a very different model. The VP is essentially a bandwidth protection mechanism, where each traffic class is allocated a nominal amount of bandwidth on the basis of its traffic load forecast and QoS requirements. A traffic class is allowed to exceed its nominal allocation as long as there is a certain amount of protection bandwidth still unreserved. Hence, the VP cannot enforce different maximum or minimum bandwidth constraints for different classes of services. Besides, it does not deal with bandwidth preemption. Therefore, it is not feasible for the DS-TE solution.

As we shall see in Section 2, both the maximum reservable bandwidth and the minimum guaranteed bandwidth of our proposed Max-Min constraint model are bandwidth constraints of a CT, which govern the bandwidth allocation of a group of LSPs belonging to the CT. This is quite different from the Max-Min rate allocation in asynchronous transfer mode (ATM) networks in [26], [27], where the minimum bandwidth guarantee and the maximum allowable bandwidth are both parameters of an individual flow.

The bandwidth management and admission control mechanisms can generally be categorized as centralized or distributed. In the centralized approach, the bandwidth reservation information is maintained by a centralized bandwidth broker (BB) in each network domain. Bandwidth management and admission control modules are installed and executed only at the centralized BB [23], [24].

Although the centralized mechanism tends to achieve optimized network resource utilization, it has the single-point failure problem and scalability issues [3], [14]. Thus, the distributed approach is used in this work, as described in Section 1.1, where the bandwidth management and admission control modules are installed and executed at individual routers rather than at a centralized BB. The distributed approach is closer to the current Internet implementation. Nevertheless, the proposed constraint model, preemption algorithms, and admission control mechanism are feasible for networks applying the centralized approach.

    1.1 Network Model and Operations

We consider an MPLS network where DS-TE is applied. Each LSP is classified into one CT and assigned one holding priority and one setup priority. Hence, an LSP establishment request contains four parameters (bw, ct, hp, sp), indicating that bw amount of bandwidth is requested for establishing an LSP of class type ct at holding priority hp and setup priority sp. Note that connections in the same CT can have different priorities to access and retain the resources [6].

With the distributed bandwidth management, there are four main steps in establishing a new LSP. First, the source node computes a route to the destination based on the network topology, the requested bandwidth bw, and the available bandwidth of class type ct on all the links along the path. Second, the source sends a request with parameters (bw, ct, hp, sp) to all the routers along the computed path. Third, each router on the path exercises admission control and sends a positive reply to the source if its outgoing link has enough free bandwidth available to the new connection. Fourth, if there is not enough free bandwidth, then the router would activate bandwidth preemption and return a positive reply if the preemption is successful; otherwise, it would return a negative reply. If all the routers along the path return positive replies, then the LSP setup is successful, and they would reserve the requested bandwidth on the output links.
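The four setup steps above can be sketched as follows. The Router class and its admission hook are our illustrative simplifications (a real router would also attempt preemption in step 4 before replying negatively); none of these names come from the paper.

```python
# Sketch of the four-step distributed LSP setup. Names are illustrative.

class Router:
    def __init__(self, free_bw):
        self.free_bw = free_bw  # unreserved bandwidth on the outgoing link

    def admit(self, bw, ct, hp, sp):
        # Step 3: positive reply if enough free bandwidth is available.
        return bw <= self.free_bw

def establish_lsp(path, bw, ct, hp, sp):
    # Steps 2-4: query every router along the computed path; the LSP is
    # set up only if all replies are positive, and only then is the
    # requested bandwidth reserved on the output links.
    if all(r.admit(bw, ct, hp, sp) for r in path):
        for r in path:
            r.free_bw -= bw
        return True
    return False

path = [Router(10.0), Router(6.0), Router(8.0)]
print(establish_lsp(path, 5.0, ct=1, hp=3, sp=2))  # True
print(establish_lsp(path, 5.0, ct=1, hp=3, sp=2))  # False: middle router has only 1.0 left
```

The all-or-nothing reservation mirrors the text: a single negative reply anywhere on the path fails the whole setup.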

This paper addresses bandwidth management issues at the LSP level. The packet-level behaviors (for example, burstiness) are contained in the bandwidth requirement of individual LSPs, which may be calculated as the effective bandwidth of an LSP on the basis of forecast or measurement. This kind of approach was used in [11], [12], [13].

    1.2 Contributions

There are five major contributions in this paper. First, we conduct DS-TE bandwidth management in two dimensions: CT and priority. This allows us to use matrices in bandwidth management.

Second, we propose a new constraint model in which both minimum guaranteed and maximum reservable constraints are configured for each CT. The key novel idea is a "use it or lend it" strategy, which guarantees a minimum bandwidth for each CT without causing fragmentation and waste.

Third, three novel bandwidth preemption algorithms are presented for the MAM, the RDM, and the proposed Max-Min constraint model, respectively. In addition to the objective of minimizing the disruption to the existing connections as in [15], [16], [17], [18], our design aims at achieving preemption fairness and performance robustness. We propose that the preemption decision be made not only on the basis of priority level, but also on the basis of the reservation status and the constraint model characteristics. This way, the bandwidth reservation of the overloaded CTs is more susceptible to preemption than that of the normal-load and underloaded ones so that better robustness and fairness can be achieved. The proposed algorithms also improve bandwidth efficiency.

Fourth, we present a new DS-TE admission control mechanism. Unlike the traditional methods that only consider the aggregate available link bandwidth, the proposed mechanism makes the admission control decision on the basis of the incoming connection's CT, its priorities, the requested bandwidth, and the available bandwidth of the CT. It is feasible for the DS-TE environment.

Fifth, we conduct an extensive simulation study to demonstrate the effectiveness of the proposed bandwidth management framework. When compared with the existing schemes, the proposed constraint model and preemption algorithms improve not only bandwidth efficiency, but also robustness and fairness. They achieve significant performance improvement for the well-behaving traffic classes in terms of both bandwidth utilization and bandwidth blocking and preemption probability. We also provide guidelines for selecting different DS-TE bandwidth management mechanisms.

The rest of this paper is organized as follows: In Section 2, we first review two existing DS-TE constraint models and then put forward a new model. In Section 3, three bandwidth preemption algorithms are designed for the three DS-TE constraint models, respectively. The DS-TE admission control mechanism is presented in Section 4. The performance evaluation is in Section 5. Finally, Section 6 gives conclusions.

    2 BANDWIDTH ALLOCATION

We first introduce two matrices to be used in this paper. A matrix R is used to record bandwidth reservation, with its element R(i, j) being the bandwidth reserved by the connections of CT i at holding priority j. R is an N_CT × N_PP matrix, where N_CT is the number of CTs, and N_PP is the number of priority levels. Every time a certain amount of bandwidth bw is granted to or released from class type ct at holding priority hp, the matrix R is updated as follows:

R(ct, hp) = R(ct, hp) + bw,   (1)

where bw has a positive value for a bandwidth grant and a negative value for a bandwidth release.

A matrix A is used to record bandwidth availability, with each element A(i, j) recording the available bandwidth to the LSPs of CT i at setup priority j. Because DS-TE supports up to eight TE classes [6], matrix A has up to eight nonzero elements, which are advertised to other network nodes for the purpose of constraint-based routing.

As will be seen in both Sections 3 and 4, with the matrices R and A recording the bandwidth reservation and availability, respectively, the computation complexity of bandwidth allocation, preemption, and admission control is independent of the number of admitted connections and the network size. Thus, we have a scalable bandwidth management framework. Furthermore, compared with the traditional one-dimensional schemes that only considered the classes, the complexity of our two-dimensional mechanisms is hardly affected because both the number of priorities and the number of CTs are hard-coded (for example, four CTs and eight priorities) in the algorithms rather than being variables.
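The two-dimensional bookkeeping and the update rule (1) can be sketched in a few lines; the helper names and the list-of-lists representation of R are our choices, with the four-CT, eight-priority example sizes from the text:

```python
# Minimal sketch of the N_CT x N_PP reservation matrix R and the
# per-grant/per-release update of Eq. (1). Names are illustrative.

N_CT, N_PP = 4, 8  # example sizes from the text; fixed, not variables

def make_R():
    # All-zero reservation matrix: nothing reserved yet.
    return [[0.0] * N_PP for _ in range(N_CT)]

def update_R(R, ct, hp, bw):
    # Eq. (1): bw > 0 on a bandwidth grant, bw < 0 on a release.
    R[ct][hp] += bw

R = make_R()
update_R(R, ct=1, hp=2, bw=3.0)   # grant 3.0 to CT 1 at holding priority 2
update_R(R, ct=1, hp=2, bw=-1.0)  # release 1.0 again
print(R[1][2])  # 2.0
```

Because R has a fixed shape, every later computation over it (allocation, preemption, admission control) costs O(N_CT × N_PP) regardless of how many connections are admitted, which is the scalability point made above.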

    2.1 Maximum Allocation Model (MAM)

The MAM was proposed in [7] to enforce a maximum bandwidth allocation constraint for each CT. It has the following simple rules:

1. The bandwidth reserved by all the connections of CT i should not exceed the bandwidth constraint BC(i) of CT i. That is, Σ_{j=0}^{N_PP−1} R(i, j) ≤ BC(i).

2. The total reserved bandwidth should not exceed the link capacity; that is, Σ_{i=0}^{N_CT−1} Σ_{j=0}^{N_PP−1} R(i, j) ≤ C, where C is the link capacity.

3. For improving bandwidth efficiency, the sum of the bandwidth constraints is allowed to exceed the link capacity. That is, Σ_{i=0}^{N_CT−1} BC(i) ≥ C.

The available bandwidth for the TE class k can be computed as follows, where the TE class k is associated with class type ct and preemption priority sp in this paper:

availBw(TE-class k) = A(ct, sp) = min{ BC(ct) − Σ_{j=0}^{sp} R(ct, j),  C − Σ_{i=0}^{N_CT−1} Σ_{j=0}^{sp} R(i, j) }.   (2)
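Equation (2) translates directly into code. This is our sketch (function name and arguments are ours); R is the reservation matrix introduced above:

```python
def avail_mam(R, BC, C, ct, sp):
    """Eq. (2): available bandwidth of TE class (ct, sp) under the MAM.

    Only reservations at priorities 0..sp count against the new request,
    since lower-priority (numerically larger) reservations are preemptable.
    """
    within_ct = BC[ct] - sum(R[ct][j] for j in range(sp + 1))
    within_link = C - sum(R[i][j] for i in range(len(R)) for j in range(sp + 1))
    return min(within_ct, within_link)

# Two CTs, two priorities, BC = [5, 4] on a link of capacity 8.
print(avail_mam([[2, 1], [3, 0]], BC=[5, 4], C=8, ct=0, sp=1))  # 2
```

Note that the second min-term sums over all CTs but only over priorities up to sp, exactly as in (2).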

    2.2 Russian Doll Model (RDM)

The RDM was proposed in [8] to enforce different bandwidth constraints for different groups of CTs. Its rules are described as follows:

1. The RDM applies cascaded sharing, with each bandwidth constraint being the upper bound of the bandwidth reservation of a group of CTs; that is, Σ_{i=b}^{N_CT−1} Σ_{j=0}^{N_PP−1} R(i, j) ≤ BC(b) for 0 ≤ b ≤ N_CT − 1.

2. 0 ≤ BC(i) ≤ BC(j) ≤ C for 0 ≤ j < i ≤ N_CT − 1.

The available bandwidth of TE class k can be computed as follows:

availBw(TE-class k) = A(ct, sp) = min_{0 ≤ i ≤ ct} { BC(i) − Σ_{l=i}^{N_CT−1} Σ_{j=0}^{sp} R(l, j) }.   (3)
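Equation (3) can likewise be sketched in code (our names again); the minimization runs over all the "dolls" BC(ct), ..., BC(0) that contain class type ct:

```python
def avail_rdm(R, BC, ct, sp):
    """Eq. (3): available bandwidth of TE class (ct, sp) under the RDM.

    Each constraint BC(i) bounds the joint reservation of CTs i..N_CT-1,
    so the new request must fit inside every doll that covers ct.
    """
    n_ct = len(R)
    return min(
        BC[i] - sum(R[l][j] for l in range(i, n_ct) for j in range(sp + 1))
        for i in range(ct + 1)
    )

# Three CTs, cascaded constraints BC = [10, 6, 3] (BC(0) is the outermost doll).
print(avail_rdm([[1, 0], [2, 0], [1, 0]], BC=[10, 6, 3], ct=1, sp=0))  # 3
```

In the example, the binding doll is BC(1) = 6, which already holds 3 units from CTs 1 and 2.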

    It is obvious that if preemption is precluded, then theRDM cannot guarantee bandwidth isolation across CTs, asthe link capacity could be monopolized by CT0.

2.3 The Proposed Max-Min Bandwidth Constraint Model

Both the MAM and the RDM have no minimum bandwidth guarantees for CTs; thus, they cannot ensure that the service rate of the applications with stringent QoS requirements is independent of the intensity of other traffic classes. To address this issue, we propose a new constraint model on the basis of a conventional Max-Min approach in [10], in which both the minimum guaranteed and the maximum reservable bandwidth constraints are configured for each CT.

The conventional approach could cause bandwidth fragmentation and waste because the minimum bandwidth of one CT cannot be used by the connections of other CTs even if it is idle. To address this problem, we devise a novel "use it or lend it" strategy and an associated preemption scheme in the new model. Specifically, if a certain CT consumes less than its minimum bandwidth, its unused minimum bandwidth can be lent to other CTs' connections with a nonzero holding priority. In case that CT increases its demand up to its minimum guaranteed bandwidth later on, it is able to preempt the connections of other CTs borrowing bandwidth so as to obtain at least its minimum bandwidth. Note that connections with zero setup or holding priority are not allowed to borrow the minimum guaranteed bandwidth of other CTs because they have the highest priority and cannot be preempted once they are established.

The proposed Max-Min model has five rules:

1. CT i has a maximum reservable bandwidth constraint RBmax(i), where 0 ≤ i ≤ N_CT − 1. That is, Σ_{j=0}^{N_PP−1} R(i, j) ≤ RBmax(i).

2. CT i has a minimum guaranteed bandwidth constraint GBmin(i). Obviously, GBmin(i) ≤ RBmax(i), where 0 ≤ i ≤ N_CT − 1.

3. The sum of the maximum reservable bandwidth is allowed to exceed the link capacity; that is, Σ_{i=0}^{N_CT−1} RBmax(i) ≥ C.

4. To avoid congestion, the sum of the minimum guaranteed bandwidth should not exceed the link capacity; that is, Σ_{i=0}^{N_CT−1} GBmin(i) ≤ C.

5. The total reservation should not exceed the link capacity; that is, Σ_{i=0}^{N_CT−1} Σ_{j=0}^{N_PP−1} R(i, j) ≤ C.

Thus, the available bandwidth of TE class k can be computed as follows:

availBw(TE-class k) = A(ct, sp)
  = min{ f2 + f3, RBmax(ct) − R(ct, 0) }   for sp = 0,
  = min{ f2 + f3 + (f4 − f5)^+, RBmax(ct) − Σ_{j=0}^{sp} R(ct, j) }   for sp > 0,   (4)

where (x)^+ = max(0, x), and

shared = C − Σ_{i=0}^{N_CT−1} GBmin(i),   (5)

f1 = Σ_{i=0}^{N_CT−1} ( Σ_{j=0}^{sp} R(i, j) − GBmin(i) )^+,   (6)

f2 = ( GBmin(ct) − Σ_{j=0}^{sp} R(ct, j) )^+,   (7)

f3 = ( shared − f1 )^+,   (8)

f4 = Σ_{i≠ct} ( GBmin(i) − Σ_{j=0}^{N_PP−1} R(i, j) )^+,   (9)

f5 = ( f1 − shared )^+.   (10)

The available bandwidth availBw(TE-class k) in (4) is calculated differently for zero and nonzero setup priorities, because the zero-priority connections cannot be preempted once they are established and are not allowed to borrow the minimum bandwidth from other CTs. The fraction of link bandwidth shared in (5) is shareable among all the CTs.

In (6), f1 computes the bandwidth reservation of the connections with priorities higher (numerically smaller) than or equal to sp, which either occupies the shared bandwidth or borrows other CTs' minimum guaranteed bandwidth. In (7), f2 computes the portion of class type ct's minimum guaranteed bandwidth that is available to the new connections of class type ct with setup priority sp. In (8), f3 is the portion of the shared bandwidth that is available to the new connections of class type ct with setup priority sp.

In (9), ( GBmin(i) − Σ_{j=0}^{N_PP−1} R(i, j) )^+ computes the fraction of CT i's minimum guaranteed bandwidth that is not reserved by CT i; hence, it can be borrowed by other CTs. Therefore, f4 computes the total amount of the minimum guaranteed bandwidth belonging to the CTs other than ct that can be borrowed.

In (10), f5 is the reservation of the connections with priorities higher than or equal to sp that borrows other CTs' minimum bandwidth. (f4 − f5)^+ is the amount of bandwidth that can be borrowed by the new connections of class type ct with nonzero sp.

Although the proposed model has different configuration and sharing methods from both the MAM and the RDM, a router implementing it is able to interoperate with other nodes implementing the MAM or the RDM. The IETF does not mandate the bandwidth constraint model used at routers. That is, the nodal bandwidth management scheme can be proprietary without affecting interoperability. The only requirement is that each node maintains a record of available bandwidth per TE class, which is computed according to the constraint model in use and advertised by the Open Shortest Path First (OSPF) or Intermediate System-to-Intermediate System (IS-IS) routing system [22].

    3 BANDWIDTH PREEMPTION

When a router receives a request (bw, ct, hp, sp), it may discover that the requested bandwidth bw is less than or equal to the available bandwidth A(ct, sp) of class type ct at priority level sp, but there is inadequate unreserved bandwidth to accommodate the new demand because a portion of the available bandwidth has been reserved by the existing connections with holding priorities lower than sp. In this case, if bandwidth preemption is allowed, then a certain amount of bandwidth would be preempted from the existing connections with lower holding priorities so that the new request can be accepted. Otherwise, the request would be rejected.

First, we compute the unreserved bandwidth of the TE class k at the arrival time of the new request (bw, ct, hp, sp):


unrsvd(TE-class k) = min{ C − Σ_{i=0}^{N_CT−1} Σ_{j=0}^{N_PP−1} R(i, j),  BC(ct) − Σ_{j=0}^{N_PP−1} R(ct, j) }
  when the MAM is used;   (11)

unrsvd(TE-class k) = min_{0 ≤ h ≤ ct} { BC(h) − Σ_{i=h}^{N_CT−1} Σ_{j=0}^{N_PP−1} R(i, j) }
  when the RDM is used;   (12)

unrsvd(TE-class k) = min{ C − Σ_{i=0}^{N_CT−1} Σ_{j=0}^{N_PP−1} R(i, j),  RBmax(ct) − Σ_{j=0}^{N_PP−1} R(ct, j) }
  when the proposed model is used for sp > 0;   (13a)

unrsvd(TE-class k) = min{ s1 + s2 − (s3 − s4)^+,  RBmax(ct) − Σ_{j=0}^{N_PP−1} R(ct, j) }
  when the proposed model is used for sp = 0,   (13b)

where

s1 = ( shared − Σ_{i=0}^{N_CT−1} ( Σ_{j=0}^{N_PP−1} R(i, j) − GBmin(i) )^+ )^+,   (14)

s2 = ( GBmin(ct) − Σ_{j=0}^{N_PP−1} R(ct, j) )^+,   (15)

s3 = ( Σ_{i=0}^{N_CT−1} ( Σ_{j=0}^{N_PP−1} R(i, j) − GBmin(i) )^+ − shared )^+,   (16)

s4 = Σ_{i≠ct} ( GBmin(i) − Σ_{j=0}^{N_PP−1} R(i, j) )^+.   (17)

Both (11) and (12) compute the unreserved bandwidth of the TE class k when the MAM and the RDM are used, respectively.

Both (13a) and (13b) compute the unreserved bandwidth when the proposed Max-Min model is used. A new connection with nonzero setup and holding priorities can be accommodated with any unreserved portion of the link capacity, that is, C − Σ_{i=0}^{N_CT−1} Σ_{j=0}^{N_PP−1} R(i, j). Besides, the accommodation of the new connection should not cause class type ct's reservation to exceed RBmax(ct). Hence, we obtain (13a).

Because the bandwidth once allocated to a zero-priority connection cannot be preempted from it, the new connection with zero priority can only be allocated with two portions of bandwidth. The first is the unreserved portion of the shared bandwidth, which is s1 in (14). The second is the unreserved portion of GBmin(ct). Equation (16) calculates s3, which is the total amount of bandwidth borrowed by all CTs, and (17) obtains s4, which is the amount of bandwidth that can be borrowed from CTs other than ct. Thus, (s3 − s4)^+ is the bandwidth borrowed from ct. Equation (15) computes s2, which is the portion of GBmin(ct) that is not reserved by the class type ct. Hence, s2 − (s3 − s4)^+ obtains the unreserved portion of GBmin(ct). Therefore, we obtain (13b).

In Sections 3.1, 3.2, and 3.3, we present three bandwidth preemption algorithms for the MAM, the RDM, and the proposed Max-Min model, respectively. They are responsible for locating both the CTs and the priorities for bandwidth preemption and for sending preemption requests to the connection management. It is up to the connection management to select the connections and to decide whether to tear down the connections or to reduce the bandwidth allocation of some elastic connections, which is not the concern of this paper. In each algorithm, a matrix R' records the reservation after the preemption, and it is used to update R if and only if the preemption is successful. A vector b_bw is used to record the amount of bandwidth to be preempted, and the vectors b_ct and b_pp record the CTs and the priority levels, respectively, from which b_bw is to be preempted.

    3.1 Preemption Algorithm for MAM

A simple preemption algorithm for the MAM is provided in Fig. 1. It is activated by a router when the requested bandwidth bw in an incoming request (bw, ct, hp, sp) of TE class k is smaller than or equal to availBw(k), as obtained in (2), but larger than the unreserved bandwidth unrsvd(k), as calculated in (11).

Unlike the basic preemption rules in [4], [5], where the preemption decision is made based only on the priority level, we propose to make preemption decisions based on both the priority level and the bandwidth reservation status so as to achieve preemption fairness and to minimize the amount of bandwidth preempted.

In Fig. 1, lines 3-4, we check whether granting bw to the new connection of class type ct would cause the constraint BC(ct) to be exceeded. If so, we need to preempt bump_bw amount of bandwidth from class type ct, where bump_bw is calculated in line 5. The while loop (lines 6-11) searches the row ct of matrix R' for the nonzero items with the lowest priorities until bump_bw amount of bandwidth is located for preemption and updates the vectors b_bw, b_ct, and b_pp, where b_bw(n_b) is the amount of bandwidth to be preempted from the connections of CT b_ct(n_b) at priority level b_pp(n_b).

In Fig. 1, lines 13-15, we identify whether granting bw to the new connection would cause the total reservation to exceed the link capacity C. If so, bump_bw amount of bandwidth needs to be preempted, which is computed in line 14. The while loop in line 15 searches in matrix R' for the nonzero items with the lowest priorities until bump_bw amount is located for preemption and updates b_bw, b_ct, b_pp, and R'. This way, the low-priority connections are preempted before the high-priority ones, which minimizes the number of rerouted sessions. At last, a bandwidth preemption request with parameters b_ct, b_pp, and b_bw is sent to the connection management, which selects connections for disestablishment or bandwidth compression and returns an acknowledgment of preemption success or failure.
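The core of the search, the while loop that walks a row of R' from the numerically largest (lowest) priority upward until bump_bw has been located, can be sketched as below. The paper's Fig. 1 listing is not reproduced here; this is our reconstruction of that loop, with variable names following the text:

```python
# Lowest-priority-first search over row ct of R', as described for Fig. 1.
# Our reconstruction; not the paper's listing.

def locate_preemption(R_prime, ct, sp, bump_bw):
    b_bw, b_ct, b_pp = [], [], []  # amounts, CTs, and priorities to preempt
    n_pp = len(R_prime[ct])
    # Only connections with holding priority numerically larger (lower)
    # than sp may be preempted, lowest priority first.
    for j in range(n_pp - 1, sp, -1):
        if bump_bw <= 0:
            break
        if R_prime[ct][j] > 0:
            take = min(R_prime[ct][j], bump_bw)
            R_prime[ct][j] -= take      # R' records the post-preemption state
            b_bw.append(take)
            b_ct.append(ct)
            b_pp.append(j)
            bump_bw -= take
    return (bump_bw <= 0), b_bw, b_ct, b_pp

R_prime = [[0, 2.0, 3.0, 1.0]]
ok, b_bw, b_ct, b_pp = locate_preemption(R_prime, ct=0, sp=0, bump_bw=2.5)
print(ok, b_bw, b_pp)  # True [1.0, 1.5] [3, 2]
```

R is updated from R' only if the connection management later acknowledges the preemption as successful, as stated above.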


    3.2 Preemption Algorithm for RDM

Because the RDM has a cascaded bandwidth sharing mechanism, where a certain CT i is restricted by a set of bandwidth constraints BC(0), ..., BC(i), we propose to design its preemption algorithm based on the cascaded sharing characteristics, in addition to the priority levels and bandwidth reservation status, as shown in Fig. 2.

The for loop in line 3 varies the value of h. For each value of h, it is determined whether granting bandwidth bw to the class type ct would cause the constraint BC(h) to be exceeded. The key idea is to check the constraints in the sequence of increasing constraint sizes, that is, in the order of BC(ct), ..., BC(0). The purpose is to prevent preempting more bandwidth than necessary. If the largest constraint BC(0) were checked first, then some low-priority connections of CT 0 would get preempted unnecessarily because later checks of BC(h) for h = 1, ..., ct could still require further preemption of the CT h connections. This design also provides better fairness, as BC(ct) is the first to be checked when class type ct's request is processed.

If it is determined that accommodation of the requested bw would cause BC(h) to be exceeded, then the while loop in line 6 searches the rows h, ..., N_CT − 1 of matrix R' for the nonzero items with the lowest priorities until bump_bw amount of bandwidth is located for preemption and updates the vectors b_bw, b_ct, and b_pp and the matrix R'.
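The RDM-specific part, checking the cascaded constraints from BC(ct) down to BC(0) and computing how much must be preempted under each, can be sketched as follows. This is only our illustration of the check order, not the full preemption bookkeeping of Fig. 2:

```python
# Check the cascaded RDM constraints in the order BC(ct), ..., BC(0)
# (smallest doll first) and report the excess under each violated one.
# Illustrative sketch; names are ours.

def rdm_bump_amounts(R, BC, bw, ct):
    """Return {h: excess} for every constraint BC(h) that granting bw
    to class type ct would cause to be exceeded."""
    n_ct = len(R)
    bumps = {}
    for h in range(ct, -1, -1):  # increasing doll size: BC(ct), ..., BC(0)
        # BC(h) bounds the joint reservation of CTs h..N_CT-1.
        reserved = sum(sum(R[i]) for i in range(h, n_ct))
        excess = reserved + bw - BC[h]
        if excess > 0:
            bumps[h] = excess
    return bumps

# Dolls BC = [10, 6, 3]; CT 1 asks for 2 units.
R = [[1, 0], [2, 1], [2, 0]]
print(rdm_bump_amounts(R, BC=[10, 6, 3], bw=2, ct=1))  # {1: 1}
```

In the example, only BC(1) would be violated, so preemption is confined to CTs 1 and 2, leaving CT 0's connections untouched, which is the fairness point made above.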

3.3 Preemption Algorithm for the Proposed Max-Min Model

The preemption algorithm for the proposed Max-Min model is similar to that for the MAM. The only difference lies in the while loop (Fig. 3, lines 15-23), where a search is conducted in matrix R' for the nonzero items with the lowest priorities until bump_bw amount of bandwidth is located for preemption. The searched nonzero item R'(i, j) must satisfy Σ_{j=0}^{N_PP−1} R'(i, j) > GBmin(i) for i ≠ ct, because a new connection of class type ct is not allowed to preempt another CT i's existing connections if CT i's reservation is below its guaranteed bandwidth GBmin(i). Thus, the maximum amount of bandwidth that can be preempted from a CT i ≠ ct is Σ_{j=0}^{N_PP−1} R'(i, j) − GBmin(i).


    Fig. 1. The bandwidth preemption algorithm for MAM.

    Fig. 2. The bandwidth preemption algorithm for RDM.


    4 ADMISSION CONTROL

    The DS-TE admission control procedure is presented in Fig. 4, which is activated by a router when a bandwidth request (bw, ct, hp, sp) is received. The admission control decision must be made on the basis of the incoming connection's requested bandwidth, CT, priorities, and the available bandwidth of the CT.

    The algorithm returns the decision of either accepting or rejecting the new request and both the updated matrices R and A. First, it determines whether the request is for granting or releasing bandwidth. If the value of bw is negative, then it is a bandwidth-releasing request, which is handled in lines 24-26. Otherwise, it is a bandwidth demand, and the requested bandwidth bw is compared with the available bandwidth A(ct, sp) (line 3). If bw > A(ct, sp), then the request is rejected (line 22). Otherwise, the unreserved bandwidth unrsvd_k is computed (line 4). If bw ≤ unrsvd_k, then there is adequate free bandwidth available to accommodate the new demand, and the request is accepted (lines 6-8). Otherwise, the free bandwidth is inadequate, and preemption is needed. If preemption is not allowed, then the request is rejected (line 20). Otherwise, the preemption procedure is invoked. If the preemption is successful, then the request is accepted (lines 13-16); otherwise, the request is rejected (line 18).
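    The decision flow just described can be condensed into a short sketch. The callbacks available(), unreserved(), and try_preempt() are our own hypothetical stand-ins for the paper's matrix lookups and preemption procedure:

```python
def admit(bw, ct, sp, available, unreserved, allow_preempt, try_preempt):
    """Return True iff request (bw, ct, setup priority sp) is accepted."""
    if bw < 0:
        return True                    # negative bw: a release, always honored
    if bw > available(ct, sp):
        return False                   # exceeds the CT's available bandwidth
    if bw <= unreserved():
        return True                    # enough free bandwidth: accept outright
    if not allow_preempt:
        return False                   # no free bandwidth and no preemption
    return try_preempt(bw, ct, sp)     # accept iff preemption finds enough
```

The ordering mirrors the text: the hard availability check comes first, preemption is attempted only as a last resort, and a release request bypasses admission entirely.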

    5 PERFORMANCE EVALUATION

    Simulations have been conducted to evaluate the performance of the DS-TE bandwidth constraint models and preemption algorithms. In our simulations, there are three CTs, and the LSPs in each CT have two possible priorities. Priority-zero LSPs have higher priority and can preempt the priority-one LSPs. We manage bandwidth at the LSP level, and each LSP may carry a number of service-level Internet flows (for example, voice and data); thus, it is reasonable to assume that the amount of bandwidth requested by one LSP is uniformly distributed between 64 kilobits per second and 5 megabits per second. The establishment requests of LSPs arrive according to a Poisson process, which is a reasonable assumption, since we are dealing with LSP session arrivals and not individual packet arrivals. The holding times of LSPs are modeled with the exponential distribution.
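    The workload model described above is easy to reproduce; the following sketch uses Python's standard random module, with function and parameter names chosen purely for illustration:

```python
import random

def generate_lsp_requests(n, mean_interarrival, mean_holding):
    """Yield n LSP setup requests as (arrival_time, holding_time, bw_kbps):
    Poisson arrivals (exponential interarrival gaps), exponentially
    distributed holding times, and uniform 64 kbps - 5 Mbps demands."""
    t = 0.0
    for _ in range(n):
        t += random.expovariate(1.0 / mean_interarrival)  # next arrival
        holding = random.expovariate(1.0 / mean_holding)  # session lifetime
        bw = random.uniform(64, 5000)                     # 64 kbps .. 5 Mbps
        yield (t, holding, bw)
```

Feeding such a stream into the admission and preemption routines, while retiring each LSP after its holding time, reproduces the offered-load conditions the paper varies per CT.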

    Six bandwidth management mechanisms are studied, including the basic MAM (bMAM), the basic RDM (bRDM), the enhanced MAM (eMAM), the enhanced RDM (eRDM), the proposed Max-Min, and the conventional Max-Min mechanisms. Both the bMAM and the bRDM use the basic preemption rules in [4], [5]; that is, a connection with a high setup priority is allowed to preempt an existing connection of lower holding priority. The eMAM and the eRDM use the proposed preemption algorithms in Sections 3.1 and 3.2, respectively. The proposed Max-Min mechanism uses the preemption algorithm in Section 3.3. The conventional Max-Min mechanism in [10] does not use preemption.

    In the simulations, the link capacity is 100 Mbps. In the two MAM and two Max-Min mechanisms, the maximum bandwidth constraints of CTs 0, 1, and 2 are set, respectively, to 80 percent, 75 percent, and 55 percent of the link capacity. In the two RDM mechanisms, BC0, BC1, and BC2 are set, respectively, to 100 percent, 75 percent, and 55 percent of the link capacity, because BC0 is inherently equal to 100 percent of the link capacity in the RDM. In the two Max-Min mechanisms, the minimum guaranteed bandwidth constraints of CTs 0, 1, and 2 are set, respectively, to 25 percent, 30 percent, and 40 percent of the link capacity.

    After running simulations with different traffic loads, we find that the bandwidth blocking and preemption rates are in the range of 10^−4 to 10^−2 when the loads of CT0, CT1, and CT2

    1326 IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 18, NO. 9, SEPTEMBER 2007

    Fig. 3. Preemption algorithm for the proposed Max-Min model.


    are 5, 5, and 4 erlangs, respectively. This set of traffic loads can be viewed as the forecast loads, and satisfactory QoS can be obtained when the bandwidth constraints are configured accordingly as above. It is also called the normal load.

    We study two scenarios to evaluate the performance of the above six mechanisms. In both scenarios, CT1 and CT2 have their normal loads, whereas the traffic load of CT0 increases from a lighter-than-normal load to a heavier one, which brings the total load on the link from the normal to an overloaded situation. In Scenario 1, we investigate the different behaviors of the six mechanisms under both the normal-load and overloaded situations. We then study the impact of the percentage of LSPs with priority zero (the highest priority) on performance, where the percentage of CT0 LSPs with priority zero increases from 50 percent in Scenario 1 to 75 percent in Scenario 2.

    5.1 Scenario 1

    In this scenario, the CT0 LSPs have an average holding time of 33.3 slots, and their interarrival time decreases from 33.3 to 1.33 slots; that is, the CT0 traffic load increases from 1 to 25 erlangs. The CT1 LSPs have a load of 5 erlangs, with an average interarrival time of 4 slots and an average holding time of 20 slots. The CT2 LSPs have a load of 4 erlangs, with an average interarrival time of 6.25 slots and an average holding time of 25 slots. Half of the LSPs in each CT are assigned priority zero, whereas the other half have priority one. Note that it is for the sake of simplicity that we assign all LSPs in the same CT the same average interarrival time and the same average holding time in our simulations. In reality, the LSPs in the same CT may carry a number of various service-level flows, thus having different traffic parameters.
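    The quoted loads follow from the standard relation offered load (erlangs) = mean holding time / mean interarrival time, which is easy to verify against this scenario's parameters (note that 25/6.25 yields 4 erlangs for CT2):

```python
def erlangs(mean_holding, mean_interarrival):
    """Offered load in erlangs for a session-level traffic source."""
    return mean_holding / mean_interarrival

# Checking the scenario's figures:
assert round(erlangs(33.3, 33.3)) == 1    # CT0, lightest load
assert round(erlangs(33.3, 1.33)) == 25   # CT0, heaviest load
assert erlangs(20.0, 4.0) == 5            # CT1
assert erlangs(25.0, 6.25) == 4           # CT2
```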

    In Fig. 5a, the bandwidth utilization of CT0 increases with the increasing load of CT0. The increase is slower when the two Max-Min mechanisms are used than when the RDM or the MAM is used. This is because a minimum guaranteed bandwidth is provisioned for each CT in the Max-Min mechanisms, which provides better bandwidth isolation and makes the misbehaving CT0 less intrusive.

    In Fig. 5b, the bandwidth blocking and preemption probability of CT0 increases slightly with the increasing CT0 load when the RDM or the MAM is used, and it increases sharply when the Max-Min mechanisms are used. This means that the misbehaving CT0 is punished more severely in the Max-Min models. The eRDM and eMAM induce sharper increases than the two basic methods. This is because preemption fairness is supported in our proposed preemption algorithms, and the reservation of the misbehaving CT0 is more likely to be preempted than that of the conforming CTs.

    In Figs. 5c and 5e, the bandwidth utilizations of CT1 and CT2 decrease with the increasing load of CT0 when the bRDM, bMAM, eMAM, or eRDM is used, in that order of decreasing speed. This means that the enhanced methods provide better robustness for the conforming CTs than the two basic ones. When the Max-Min mechanisms are used, the bandwidth utilizations of CT1 and CT2 decrease only slightly, which means that the conforming CTs are well protected.

    In Figs. 5d and 5f, the bandwidth blocking and preemption probabilities of CT1 and CT2 increase slightly with the increasing CT0 traffic load when the two Max-Min mechanisms are used. They increase quickly when the RDM or the MAM is used, and the eRDM and the eMAM induce smoother increases than the two basic methods. Again, the proposed preemption algorithms achieve significant improvement over the basic preemption rules in terms of better protection and robustness for the conforming CTs.

    In Fig. 5g, the eRDM achieves the highest average bandwidth utilization, and the eMAM obtains the second highest utilization. The utilization achieved by the proposed Max-Min mechanism is nearly 10 percent lower than the highest value. This is because, as per (2), (3), and (4), the available bandwidth for the zero-priority LSPs is less than that for the nonzero-priority LSPs in the proposed Max-Min model. With 50 percent of LSPs having priority zero, it is an


    Fig. 4. Admission control mechanism.


    expected result. The eRDM and eMAM achieve the two highest bandwidth efficiency values, but they induce four to seven times higher bandwidth blocking and preemption rates for the conforming CT1 and CT2 than the proposed Max-Min mechanism. Thus, there is a trade-off between bandwidth efficiency and robustness.

    Summarizing the results, we can see that with our proposed preemption algorithms, the eRDM and the eMAM obtain better fairness, robustness, and efficiency than the basic methods. The proposed Max-Min mechanism achieves much better robustness and protection than both the eRDM and the eMAM, but it may not be the most desirable mechanism when there is a high percentage of connections with the highest priority (say, 50 percent) due to its low bandwidth efficiency.


    Fig. 5. Simulation results of Scenario 1.


    5.2 Scenario 2

    In Scenario 2, the average interarrival time and holding time of each CT are the same as those in Scenario 1. The only difference from Scenario 1 is that 75 percent of the CT0 LSPs have priority zero, and 25 percent of the CT0 LSPs are assigned priority one. This means that CT0 is more misbehaving than it was in Scenario 1.

    Comparing Fig. 6 with Fig. 5, we can see that when the proposed Max-Min mechanism is used, CT0 obtains lower bandwidth utilization and higher blocking and preemption probability than in Scenario 1. This is because the zero-priority LSPs have less available bandwidth than the nonzero-priority LSPs in the proposed mechanism. With 25 percent more CT0 LSPs having zero priority than in Scenario 1, less bandwidth is available to CT0, which


    Fig. 6. Simulation results of Scenario 2.


    causes lower bandwidth utilization and a higher blocking rate for CT0. We also observe that both CT1 and CT2 obtain higher bandwidth utilization and lower blocking and preemption probability than in Scenario 1. Thus, when the proposed mechanism is used, if a certain CT becomes more misbehaving in terms of having more connections with the highest priority than forecast, then its performance will deteriorate, whereas the other conforming CTs will obtain better performance.

    When the RDM or the MAM is used, CT0 achieves slightly higher bandwidth utilization and lower blocking and preemption probability than in Scenario 1, whereas CT1 and CT2 obtain slightly lower bandwidth utilization and higher blocking and preemption probability. This is because the available-bandwidth computation is the same for the zero- and nonzero-priority LSPs in the RDM and the MAM. With 25 percent more zero-priority CT0 LSPs than in Scenario 1, more CT0 LSPs can be established by preempting lower priority LSPs of other CTs. Hence, the misbehaving CT0 obtains better performance than in Scenario 1, whereas the CT1 and CT2 performances become worse. Thus, with the RDM or the MAM, if a certain CT becomes more misbehaving in terms of having more connections with the highest priority than forecast, then its performance will improve, whereas the performance of the other conforming CTs will deteriorate.

    In Fig. 6g, it can be seen that the total bandwidth utilization achieved by the proposed Max-Min mechanism is lower than in Scenario 1, because 25 percent more CT0 LSPs have priority zero, and zero-priority LSPs have less available bandwidth than nonzero-priority LSPs in the proposed mechanism. The RDM and the MAM achieve higher total bandwidth utilization than in Scenario 1, but they induce much higher blocking and preemption probabilities for the conforming CT1 and CT2, as shown in Figs. 6d and 6f. Again, there is a trade-off between bandwidth efficiency and robustness.

    5.3 Discussions

    We have also studied many other scenarios, whose results are not presented due to space limitations. In one scenario, we use the same traffic parameters for each CT as in Scenario 1, except that only 20 percent of LSPs have priority zero. We observe that the bandwidth utilization achieved by the proposed Max-Min mechanism is much higher than in Scenario 1 and only 1.5 percent lower than the highest value, achieved by the eRDM. This is expected, because having fewer zero-priority LSPs implies that more bandwidth can be shared flexibly among CTs in the proposed Max-Min mechanism. Thus, the proposed Max-Min mechanism is the most desirable mechanism when there is just a small percentage of connections with the highest priority (say, 20 percent).

    In another scenario, we use the same traffic parameters as those in Scenario 1. The only difference is that the maximum bandwidth constraint of CT0 is reduced to 50 Mbps. We observe that when the MAM or the proposed Max-Min method is used, the total bandwidth utilization is much lower than in Scenario 1, whereas the conforming CT1 and CT2 obtain better performance. Thus, reducing the bandwidth constraint values may not be an appropriate way to improve bandwidth protection, because small constraint values put a curb on bandwidth sharing and further result in low bandwidth utilization. It is better to apply traffic shaping to make the traffic flows more conforming, rather than provisioning small bandwidth constraint values.

    Note that one-hop simulation is sufficient in this paper because we focus on the nodal bandwidth management mechanisms, without addressing the routing aspect and network load balancing. The proposed mechanisms manage bandwidth in both the CT and priority dimensions, without considering the selection of individual end-to-end connections for preemption or bandwidth compression. Multihop simulation will be useful in our future work to investigate the impact of the nodal mechanisms on the routing computation and global network performance. Our future work also includes the mathematical analysis of the constraint models, the scheduling algorithms for implementing the constraint models, and so forth.

    6 CONCLUSIONS

    We have presented a bandwidth management framework for the support of DS-TE, in which the link bandwidth is managed in two dimensions: CT and priority. We put forward a Max-Min constraint model, in which we proposed a novel "use it or lend it" strategy. Furthermore, we designed three new bandwidth preemption algorithms for the three DS-TE constraint models, respectively. An extensive simulation study demonstrated that the proposed Max-Min constraint model and preemption algorithms improve not only bandwidth efficiency, but also robustness and fairness. They achieve significant performance improvement for the well-behaving traffic classes in terms of both bandwidth utilization and bandwidth blocking and preemption probability. We also provided guidelines for selecting different DS-TE bandwidth management mechanisms.

    REFERENCES

    [1] F. Le Faucheur et al., "MPLS Support of Differentiated Services," IETF RFC 3270, May 2002.
    [2] S. Blake et al., "An Architecture for Differentiated Services," IETF RFC 2475, Dec. 1998.
    [3] D. Awduche et al., "RSVP-TE: Extensions to RSVP for LSP Tunnels," IETF RFC 3209, Dec. 2001.
    [4] D. Awduche et al., "Overview and Principles of Internet Traffic Engineering," IETF RFC 3272, May 2002.
    [5] D. Awduche et al., "Requirements for Traffic Engineering over MPLS," IETF RFC 2702, Sept. 1999.
    [6] F. Le Faucheur and W. Lai, "Requirements for Support of Differentiated Service-Aware MPLS Traffic Engineering," IETF RFC 3564, July 2003.
    [7] F. Le Faucheur et al., "Maximum Allocation Bandwidth Constraints Model for Diff-Serv-Aware MPLS Traffic Engineering," IETF Internet draft, 2004.
    [8] F. Le Faucheur et al., "Russian Dolls Bandwidth Constraints Model for Diff-Serv-Aware MPLS Traffic Engineering," IETF Internet draft, 2004.
    [9] W.S. Lai, "Traffic Engineering for MPLS," Proc. Internet Performance and Control of Network Systems III Conf., vol. 4865, pp. 256-267, July 2002.
    [10] F. Kamoun and L. Kleinrock, "Analysis of Shared Finite Storage in a Computer Network Node Environment under General Traffic Conditions," IEEE Trans. Comm., vol. 28, no. 7, pp. 992-1003, July 1980.
    [11] D. Mitra and I. Ziedins, "Hierarchical Virtual Partitioning: Algorithms for Virtual Private Networking," Proc. IEEE GLOBECOM '97, pp. 1784-1891, 1997.
    [12] S.C. Borst and D. Mitra, "Virtual Partitioning for Robust Resource Sharing: Computational Techniques for Heterogeneous Traffic," IEEE J. Selected Areas in Comm., vol. 16, no. 5, pp. 668-678, June 1998.
    [13] E. Bouillet, D. Mitra, and K.G. Ramakrishnan, "The Structure and Management of Service Level Agreements in Networks," IEEE J. Selected Areas in Comm., vol. 20, no. 4, pp. 691-699, May 2002.
    [14] W. Jia et al., "Distributed Admission Control for Anycast Flows," IEEE Trans. Parallel and Distributed Systems, vol. 15, no. 8, pp. 673-686, Aug. 2004.
    [15] M. Peyravian and A.D. Kshemkalyani, "Decentralized Network Connection Preemption Algorithms," Computer Networks and ISDN Systems, vol. 30, no. 11, pp. 1029-1043, June 1998.
    [16] S. Jeon, R.T. Abler, and A.E. Goulart, "The Optimal Connection Preemption Algorithm in a Multi-Class Network," Proc. IEEE Int'l Conf. Comm. (ICC '02), pp. 2294-2298, Apr. 2002.
    [17] J.C. de Oliveira et al., "A New Preemption Policy for DiffServ-Aware Traffic Engineering to Minimize Rerouting," Proc. IEEE INFOCOM '02, May 2002.
    [18] C. Scoglio et al., "TEAM: A Traffic Engineering Automated Manager for DiffServ-Based MPLS Networks," IEEE Comm. Magazine, pp. 134-145, Oct. 2004.
    [19] R. Guerin, A. Orda, and D. Williams, "QoS Routing Mechanisms and OSPF Extensions," Proc. GLOBECOM '97, pp. 1903-1908, Nov. 1997.
    [20] Q. Ma and P. Steenkiste, "Supporting Dynamic Inter-Class Resource Sharing: A Multi-Class QoS Routing Algorithm," Proc. IEEE INFOCOM '99, pp. 649-660, 1999.
    [21] W.C. Lee, M.G. Hluchyj, and P.A. Humblet, "Routing Subject to Quality-of-Service Constraints in Integrated Communication Networks," IEEE Network, vol. 9, no. 4, pp. 14-16, July-Aug. 1995.
    [22] F. Le Faucheur et al., "Protocol Extensions for Support of Differentiated-Service-Aware MPLS Traffic Engineering," IETF Internet draft, Mar. 2004.
    [23] Z. Duan et al., "A Core Stateless Bandwidth Broker Architecture for Scalable Support of Guaranteed Services," IEEE Trans. Parallel and Distributed Systems, vol. 15, no. 2, pp. 167-181, Feb. 2004.
    [24] B. Teitelbaum et al., "Internet2 QBone: Building a Testbed for Differentiated Services," IEEE Network, vol. 13, no. 5, pp. 8-16, Sept. 1999.
    [25] J. Ash, "Max Allocation with Reservation Bandwidth Constraints Model for DiffServ-Aware MPLS Traffic Engineering and Performance Comparison," IETF Internet draft, Jan. 2004.
    [26] L. Kalampoukas, A. Verma, and K.K. Ramakrishnan, "An Efficient Rate Allocation Algorithm for ATM Networks Providing Max-Min Fairness," Proc. Sixth IFIP Int'l Conf. High-Performance Networking, pp. 143-154, Sept. 1995.
    [27] Y.T. Hou, S.S. Panwar, and H.H.-Y. Tzeng, "On Generalized Max-Min Rate Allocation and Distributed Convergence Algorithm for Packet Networks," IEEE Trans. Parallel and Distributed Systems, vol. 15, no. 5, pp. 401-416, May 2004.

    Tong Shan received the bachelor's degree from the Nanjing University of Posts and Telecommunications, the master's degree from the China Academy of Posts and Telecommunications, and the PhD degree from the University of Ottawa, all in electrical engineering. Her research interests include traffic control, resource management, quality of service, network restoration, and security in both wired and wireless networks. She is a member of the IEEE.

    Oliver W.W. Yang received the PhD degree in electrical engineering from the University of Waterloo, Ontario, Canada. He is currently a professor in the School of Information Technology and Engineering at the University of Ottawa, Ontario, Canada. He has worked for Northern Telecom Canada Ltd. and has done various consulting. He was an editor of the IEEE Communications Magazine and an associate director of OCIECE (Ottawa-Carleton Institute of Electrical and Computer Engineering). He is currently on the editorial board of IEEE Communications Surveys & Tutorials. His research interests are in the modeling, analysis, and performance evaluation of computer communication networks, their protocols, services, and interconnection architectures. The CCNR Lab, under his leadership, has been working on various projects in switch architecture, traffic control, traffic characterization, and other traffic engineering issues in both wireless and photonic networks, of which the results can be found in many technical papers. He is a senior member of the IEEE.

