Lecture 3: Introduction to Queueing Theory
ELL 785 – Computer Communication Networks
(web.iitd.ac.in/~jbseo/ell785/Queueing_theory_2017.pdf, posted Mar 19, 2020)

3-1

Contents

Motivations

Discrete-time Markov processes

Review on Poisson process

Continuous-time Markov processes

Queueing systems

3-2

Circuit switching networks - I

Traffic fluctuates as calls are initiated and terminated
• Telephone calls come and go
• People's activity follows patterns: mid-morning and mid-afternoon at the office, evening at home, summer vacation, etc.
• Outlier days are extra busy (Mother's Day, Christmas, ...); disasters and other events cause surges in traffic

Providing resources so that
• call requests are always met is too expensive
• call requests are met most of the time is cost-effective

Switches concentrate traffic onto shared trunks: blocking of requests will occur from time to time

[Figure: many lines concentrated onto fewer trunks]

3-3

Circuit switching networks - II

Fluctuation in trunk occupancy

[Figure: active periods of trunks 1-7 over time, and the resulting number of busy trunks; when all trunks are busy, new call requests are blocked]

– minimize the number of trunks subject to a blocking probability

3-4


Packet switching networks - I

Statistical multiplexing

• Dedicated lines involve no waiting for other users, but lines are used inefficiently when user traffic is bursty
• Shared lines concentrate packets onto a shared line; packets are buffered (delayed) when the line is not immediately available

[Figure: (a) dedicated lines carrying flows A, B, C vs. (b) input lines A, B, C feeding a buffer and one shared output line]

3-5

Packet switching networks - II

Fluctuations in Packets in the System

[Figure: packet streams A1-A2, B1-B2, C1-C2 on (a) dedicated lines vs. (b) a shared line, with the resulting number of packets in the system over time]

3-6

Packet switching networks - III

Delay = waiting time + service time

[Figure: timeline for packets P1-P5, marking for each packet its arrival at the queue, the start of transmission (after its waiting time), and the completion of transmission (after its service time)]

• Packet arrival process
• Packet service time
– R bps transmission rate and a packet of L bits
– Service time: L/R (transmission time for a packet)
– Packet length can be a constant or a random variable

3-7

Random (or Stochastic) Processes

General notion
• Suppose a random experiment is specified by the outcomes ζ from some sample space S, with ζ ∈ S
• A random (or stochastic) process is a mapping from ζ to a function of time t: X(t, ζ)
– For fixed t, e.g., t1, t2, ...: X(ti, ζ) is a random variable
– For fixed ζ: X(t, ζi) is a sample path or realization

[Figure: a sample path over time slots 0, 1, 2, ..., n, n+1, n+2]

– e.g., # of people in Café Coffee Day, # of rickshaws at the IIT main gate

3-8


Discrete-time Markov process I

A sequence of integer-valued random variables Xn, n = 0, 1, . . ., is called a discrete-time Markov process if the following Markov property holds:

Pr[X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, . . . , X_0 = i_0] = Pr[X_{n+1} = j | X_n = i]

• State: the value of Xn at time n
• State space: the set S = {0, 1, . . .}
– An integer-valued Markov process is called a Markov chain (MC)
– With an independent Bernoulli sequence Xi with prob. 1/2, is Yn = 0.5(Xn + X_{n-1}) a Markov process?
– Is the vector process Yn = (Xn, X_{n-1}) a Markov process?

3-9

Discrete-time Markov process II

The chain is time-homogeneous if, for any n,

p_ij = Pr[X_{n+1} = j | X_n = i] (independent of time n),

which is called the one-step (state) transition probability.

State transition probability matrix:

P = [p_ij] = | p00  p01  p02  ··· |
             | p10  p11  p12  ··· |
             |  ⋮    ⋮    ⋮       |
             | pi0  pi1  pi2  ··· |
             |  ⋮    ⋮    ⋮    ⋱  |

which is called a stochastic matrix, with p_ij ≥ 0 and Σ_{j=0}^∞ p_ij = 1.

3-10

Discrete-time Markov process III

A mouse in a maze

Cells numbered 1-9 in a 3×3 grid:

    1 2 3
    4 5 6
    7 8 9

• The mouse chooses the next cell to visit with probability 1/k, where k is the number of adjacent cells.
• The mouse does not move any more once it is caught by the cat (in cell 7) or it has the cheese (in cell 9).

P (rows and columns indexed 1-9):

        1    2    3    4    5    6    7    8    9
    1 [ 0   1/2   0   1/2   0    0    0    0    0 ]
    2 [1/3   0   1/3   0   1/3   0    0    0    0 ]
    3 [ 0   1/2   0    0    0   1/2   0    0    0 ]
    4 [1/3   0    0    0   1/3   0   1/3   0    0 ]
    5 [ 0   1/4   0   1/4   0   1/4   0   1/4   0 ]
    6 [ 0    0   1/3   0   1/3   0    0    0   1/3]
    7 [ 0    0    0    0    0    0    1    0    0 ]
    8 [ 0    0    0    0   1/3   0   1/3   0   1/3]
    9 [ 0    0    0    0    0    0    0    0    1 ]

3-11

Discrete-time Markov process IV

n-step transition probabilities:

p_ij^(n) = Pr[X_{l+n} = j | X_l = i] for n ≥ 0, i, j ≥ 0.

– Consider a two-step transition probability:

Pr[X_2 = j, X_1 = k | X_0 = i] = Pr[X_2 = j, X_1 = k, X_0 = i] / Pr[X_0 = i]
                              = Pr[X_2 = j | X_1 = k] Pr[X_1 = k | X_0 = i] Pr[X_0 = i] / Pr[X_0 = i]
                              = p_ik p_kj

– Summing over k, we have

p_ij^(2) = Σ_k p_ik p_kj

3-12
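As a quick numerical sketch (in Python, illustrative only; the lecture's own code is Matlab), the two-step relation can be checked by multiplying the one-step matrix with itself, here using the three-state weather chain that appears later in the lecture:

```python
# Check of p_ij^(2) = sum_k p_ik * p_kj using the sunny/cloudy/rainy chain.
P = [[0.70, 0.10, 0.20],
     [0.50, 0.25, 0.25],
     [0.40, 0.30, 0.30]]

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = mat_mul(P, P)          # P^2 via the Chapman-Kolmogorov sum
# Each row of P^2 is still a probability distribution
for row in P2:
    assert abs(sum(row) - 1.0) < 1e-12
print(P2[0])                # two-step probabilities starting from 'sunny'
```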


Discrete-time Markov process IV

The Chapman-Kolmogorov equations:

p_ij^(n+m) = Σ_{k=0}^∞ p_ik^(n) p_kj^(m) for n, m ≥ 0, i, j ∈ S

Proof:

Pr[X_{n+m} = j | X_0 = i] = Σ_{k∈S} Pr[X_{n+m} = j | X_0 = i, X_n = k] Pr[X_n = k | X_0 = i]
(Markov property)        = Σ_{k∈S} Pr[X_{n+m} = j | X_n = k] Pr[X_n = k | X_0 = i]
(Time homogeneity)       = Σ_{k∈S} Pr[X_m = j | X_0 = k] Pr[X_n = k | X_0 = i]

In matrix form: P^{n+m} = P^n P^m ⇒ P^{n+1} = P^n P

3-13

Discrete-time Markov process V

In a place, the weather each day is classified as sunny, cloudy or rainy. The next day's weather depends only on the weather of the present day and not on the weather of previous days. If the present day is sunny, the next day will be sunny, cloudy or rainy with respective probabilities 0.70, 0.10 and 0.20. The transition probabilities are 0.50, 0.25 and 0.25 when the present day is cloudy, and 0.40, 0.30 and 0.30 when the present day is rainy.

[State transition diagram: Sunny stays sunny w.p. 0.7, goes to Cloudy w.p. 0.1, to Rainy w.p. 0.2; Cloudy goes to Sunny w.p. 0.5, stays w.p. 0.25, to Rainy w.p. 0.25; Rainy goes to Sunny w.p. 0.4, to Cloudy w.p. 0.3, stays w.p. 0.3]

         S     C     R
    S [ 0.7   0.1   0.2  ]
P = C [ 0.5   0.25  0.25 ]
    R [ 0.4   0.3   0.3  ]

– Using the n-step transition probability matrix,

      [ 0.601 0.168 0.230 ]             [ 0.596 0.172 0.231 ]
P^3 = [ 0.596 0.175 0.233 ]  and P^12 = [ 0.596 0.172 0.231 ] = P^13
      [ 0.585 0.179 0.234 ]             [ 0.596 0.172 0.231 ]

3-14
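The convergence of P^n can be reproduced with a short Python sketch (illustrative; not part of the lecture), multiplying the weather matrix by itself until the rows agree:

```python
# Watch the rows of P^n converge to a common limiting distribution.
P = [[0.70, 0.10, 0.20],
     [0.50, 0.25, 0.25],
     [0.40, 0.30, 0.30]]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Pn = P
for _ in range(11):         # Pn = P^12 after 11 more multiplications
    Pn = mat_mul(Pn, P)

# All rows agree to ~3 decimals: the initial state no longer matters
for row in Pn[1:]:
    assert all(abs(a - b) < 1e-3 for a, b in zip(row, Pn[0]))
print([round(x, 3) for x in Pn[0]])   # ≈ [0.596, 0.172, 0.231]
```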

Discrete-time Markov process VI

State probabilities at time n
– π_i^(n) = Pr[X_n = i] and π^(n) = [π_0^(n), . . . , π_i^(n), . . .] (row vector)
– π_i^(0): the initial state probability

Pr[X_n = j] = Σ_{i∈S} Pr[X_n = j | X_0 = i] Pr[X_0 = i], i.e., π_j^(n) = Σ_{i∈S} p_ij^(n) π_i^(0)

– In matrix notation: π^(n) = π^(0) P^n

Limiting distribution: given an initial probability distribution π^(0),

~π = lim_{n→∞} π^(n), i.e., π_j^(∞) = lim_{n→∞} p_ij^(n)

– As n → ∞: π^(n) = π^(n-1) P → ~π = ~π P with ~π · ~1 = 1
– The system reaches "equilibrium" or "steady-state"

3-15

Discrete-time Markov process VII

Consider a Markov model for packetized speech: if the nth packet contains silence, then the probability of silence in the next packet is 1 − α and the probability of speech activity is α. Similarly, if the nth packet contains speech activity, then the probability of speech activity in the next packet is 1 − β and the probability of silence is β.

(a) Find the state transition probability matrix P. (b) Find an expression for P^n.

(a) With state 0 = silence and state 1 = speech activity,

P = [ 1−α    α  ]
    [  β    1−β ]

(b) We can write P^n as

P^n = N^{-1} Λ^n N.

3-16


Discrete-time Markov process VIII

Using the spectral decomposition of P, i.e.,

|P − λI | = (1− β − λ)(1− α− λ) = 0

we have λ1 = 1 and λ2 = 1− α− β.The eigenvectors are ~e1 = [1, β/α] and ~e2 = [1,−1]. Thus, we have

N =[~e1

~e2

]=[

1 βα

1 −1

]and N−1 = 1

α+ β

[α β

α −α

].

We can write Pn as

Pn = N−1

[1 00 (1− α− β)n

]N = 1

α+ β

[α+ βθn β − βθn

α− αθn β + αθn

],

where θ = 1− α− β.

3-17
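A numerical cross-check (Python, illustrative only): brute-force matrix powers of the two-state chain should match the standard spectral closed form with θ = 1 − α − β:

```python
# Brute-force P^n vs. the closed form for the silence/speech chain.
a, b = 0.3, 0.2            # alpha (silence->speech), beta (speech->silence)
P = [[1 - a, a], [b, 1 - b]]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 7
Pn = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(n):
    Pn = mat_mul(Pn, P)

theta = 1 - a - b
closed = [[(b + a * theta**n) / (a + b), (a - a * theta**n) / (a + b)],
          [(b - b * theta**n) / (a + b), (a + b * theta**n) / (a + b)]]
for i in range(2):
    for j in range(2):
        assert abs(Pn[i][j] - closed[i][j]) < 1e-12
```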

Discrete-time Markov process IX

If P^(n) has identical rows, then so does P^(n+1). Suppose every row of P^(n) equals the same row vector r:

P^(n) = [ r ]
        [ r ]
        [ ⋮ ]
        [ r ]

Then row j of P P^(n) is

[ pj1  pj2  · · ·  pjn ] P^(n) = pj1 r + pj2 r + · · · + pjn r = (Σ_k pjk) r = r,

so P P^(n) = P^(n).

3-18

Discrete-time Markov process X

Stationary distribution:
– zj and z = [zj] denote the probability of being in state j and its vector:

z = z · P and z · ~1 = 1

• If z is chosen as the initial distribution, i.e., π_j^(0) = zj for all j, we have π_j^(n) = zj for all n
• A limiting distribution, when it exists, is always a stationary distribution, but the converse is not true:

P = [ 0 1 ],  P^2 = [ 1 0 ],  P^3 = [ 0 1 ], . . .
    [ 1 0 ]         [ 0 1 ]         [ 1 0 ]

Global balance equation:

~π = ~π P ⇒ (each row) π_j Σ_i p_ji = Σ_i π_i p_ij

3-19

Discrete-time Markov process XI

Back to the weather example on page 3-14

• Using ~π P = ~π, we have

π0 = 0.7π0 + 0.5π1 + 0.4π2
π1 = 0.1π0 + 0.25π1 + 0.3π2
π2 = 0.2π0 + 0.25π1 + 0.3π2

– Note that one equation is always redundant
• Using 1 = π0 + π1 + π2, we have

[  0.3  −0.5  −0.4 ] [π0]   [0]
[ −0.1  0.75  −0.3 ] [π1] = [0]
[   1     1     1  ] [π2]   [1]

π0 = 0.596, π1 = 0.1722, π2 = 0.2318

3-20
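The same stationary probabilities can be obtained in Python by power iteration (illustrative; an alternative to solving the linear system above):

```python
# Solve pi = pi P for the weather chain by repeatedly applying P.
P = [[0.70, 0.10, 0.20],
     [0.50, 0.25, 0.25],
     [0.40, 0.30, 0.30]]

pi = [1.0, 0.0, 0.0]        # any initial distribution works
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

assert abs(sum(pi) - 1.0) < 1e-12
# Global balance: pi_j = sum_i pi_i p_ij
for j in range(3):
    assert abs(pi[j] - sum(pi[i] * P[i][j] for i in range(3))) < 1e-12
print([round(x, 4) for x in pi])   # ≈ [0.596, 0.1722, 0.2318]
```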


Discrete-time Markov process XII

Classes of states:
• State j is accessible from state i if p_ij^(n) > 0 for some n
• States i and j communicate if they are accessible from each other
• Two states belong to the same class if they communicate with each other
• An MC having a single class is said to be irreducible

[State transition diagram with states 0, 1, 2, 3]

Recurrence property
• State j is recurrent if Σ_{n=1}^∞ p_jj^(n) = ∞
– Positive recurrent if πj > 0
– Null recurrent if πj = 0
• State j is transient if Σ_{n=1}^∞ p_jj^(n) < ∞

3-21

Discrete-time Markov process XIII

Periodicity and aperiodicity:
• State i has period d if p_ii^(n) = 0 whenever n is not a multiple of d, where d is the largest integer with this property.
• State i is aperiodic if it has period d = 1.
• All states in a class have the same period
– An irreducible Markov chain is said to be aperiodic if the states in its single class have period one

[Classification diagram: a state is recurrent or transient; a recurrent state is positive recurrent or null recurrent; a positive recurrent state is periodic or aperiodic; a positive recurrent, aperiodic state is called ergodic]

3-22

Discrete-time Markov process XIV

In a place, a mosquito is produced every hour with prob. p, and one dies with prob. 1 − p

• Show the state transition diagram

[Birth-death chain on states 0, 1, 2, 3, . . ., moving up with prob. p and down with prob. 1 − p]

• Using the global balance equations, find the (stationary) state probabilities:

p πi = (1 − p) π_{i+1} → π_{i+1} = (p/(1−p)) πi, so πi = (p/(1−p))^i π0

• All states are positive recurrent if p < 1/2, null recurrent if p = 1/2 (see Σ_{i=0}^∞ πi = 1), and transient if p > 1/2

3-23

Discrete-time Markov process XV

An autorickshaw driver provides service in two zones of New Delhi.Fares picked up in zone A will have destinations in zone A withprobability 0.6 or in zone B with probability 0.4. Fares picked up inzone B will have destinations in zone A with probability 0.3 or in zoneB with probability 0.7. The driver’s expected profit for a trip entirelyin zone A is 40 Rupees (Rps); for a trip entirely in zone B is 80 Rps;and for a trip that involves both zones is 110 Rps.

• Find the stationary prob. that the driver is in each zone.
– Balance across the two zones: 0.4 πA = 0.3 πB with πA + πB = 1, so πA = 3/7 and πB = 4/7.
• What is the expected profit of the driver?

(40 × 0.6 + 110 × 0.4) πA + (80 × 0.7 + 110 × 0.3) πB
= 68 πA + 89 πB
= 68 πA + 89 (1 − πA) = 89 − 21 πA = 89 − 21(3/7) = 80 Rps

3-24
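A small exact check of the autorickshaw example (Python with exact fractions, illustrative only):

```python
# Stationary zone probabilities and expected profit, done exactly.
from fractions import Fraction

piA = Fraction(3, 7)
piB = Fraction(4, 7)
# Stationarity for P = [[0.6, 0.4], [0.3, 0.7]]: piA = 0.6*piA + 0.3*piB
assert piA == Fraction(6, 10) * piA + Fraction(3, 10) * piB
assert piA + piB == 1

# Expected profit per trip (Rupees), as derived on the slide
profit = 68 * piA + 89 * piB
assert profit == 80
```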


Discrete-time Markov process XVI

Diksha possesses 5 umbrellas which she employs in going from her home to the office, and vice versa. If she is at home (the office) at the beginning (end) of a day and it is raining, then she will take an umbrella with her to the office (home), provided there is one to be taken. If it is not raining, then she never takes an umbrella. Assume that, independent of the past, it rains at the beginning (end) of a day with probability p.

• By defining a Markov chain with 6 states which enables us to determine the proportion of time that our TA gets wet, draw its state transition diagram, specifying all state transition probabilities. (Note: she gets wet if it is raining and all umbrellas are at her other location.)
• Find the probability that our TA gets wet.
• At what value of p is the chance for our TA to get wet highest?

3-25

Review on Poisson process I

Properties of a Poisson process Λ(t):
P1) Independent increments, for some finite λ (arrivals/sec):
The numbers of arrivals in disjoint intervals, e.g., [t1, t2] and [t3, t4], are independent random variables. The probability mass function is

Pr[Λ(t) = k] = ((λt)^k / k!) e^{−λt} for k = 0, 1, . . .

P2) Stationary increments:
The number of events (or arrivals) in (t, t + h] is independent of t. Using the PGF of the distribution of Λ(t), i.e., E[z^{Λ(t)}] = Σ_{k=0}^∞ z^k Pr[Λ(t) = k] = e^{λt(z−1)},

E[z^{Λ(t+h)}] = E[z^{Λ(t)} · z^{Λ(t+h)−Λ(t)}]
             = E[z^{Λ(t)}] · E[z^{Λ(t+h)−Λ(t)}], due to P1,
⇒ E[z^{Λ(t+h)−Λ(t)}] = e^{λ(t+h)(z−1)} / e^{λt(z−1)} = e^{λh(z−1)}.

3-26

Review on Poisson process II

P3) Interarrival (or inter-occurrence) times between Poisson arrivals are exponentially distributed:
Suppose τ1, τ2, τ3, . . . are the epochs of the first, second and third arrivals; then the interarrival times t1, t2 and t3 are given by t1 = τ1, t2 = τ2 − τ1 and t3 = τ3 − τ2; generally, tn = τn − τ_{n−1} with τ0 = 0.

[Timeline of arrival epochs]

1. For t1, we have Pr[Λ(t) = 0] = e^{−λt} = Pr[t1 > t] for t ≥ 0, which means that t1 is exponentially distributed with mean 1/λ.
2. For t2, we get Pr[t2 > t | t1 = x] = Pr[Λ(t + x) − Λ(x) = 0] = Pr[Λ(t) = 0] = e^{−λt}, which also means that t2 is independent of t1 and has the same distribution as t1. Similarly t3, t4, . . . are i.i.d.

3-27

Review on Poisson process III

P4) The converse of P3 is also true:
If the sequence of interarrival times {ti} is i.i.d. with exponential density λe^{−λt}, t ≥ 0, then the number of arrivals in the interval [0, t], Λ(t), is a Poisson process.

[Timeline of arrivals]

Let Y denote the sum of j independent rv's with this exponential density; then Y is Erlang-j distributed, fY(y) = (λ(λy)^{j−1} / (j−1)!) e^{−λy}:

Pr[Λ(t) = j] = ∫_0^t Pr[0 arrivals in (y, t] | Y = y] fY(y) dy
             = ∫_0^t e^{−λ(t−y)} fY(y) dy = ((λt)^j e^{−λt}) / j!

3-28

Page 8: Lecture 3 Introduction to Queueing theoryweb.iitd.ac.in/~jbseo/ell785/Queueing_theory_2017.pdf · ELL785–ComputerCommunicationNetworks Lecture 3 Introduction to Queueing theory

Review on Poisson process IV

P5) For a short interval, the probability that an arrival occurs in an interval is proportional to the interval size, i.e.,

lim_{h→0} Pr[Λ(h) = 1]/h = lim_{h→0} e^{−λh} λh / h = λ.

Or, Pr[Λ(h) = 1] = λh + o(h), where lim_{h→0} o(h)/h = 0.

P6) The probability of two or more arrivals in an interval of length h becomes negligible as h → 0. For every t ≥ 0,

lim_{h→0} Pr[Λ(h) ≥ 2]/h = lim_{h→0} (1 − e^{−λh} − λh e^{−λh}) / h = 0 (by L'Hôpital's rule)

3-29

Review on Poisson process V

P7) Merging: if the Λi(t) are mutually independent Poisson processes with rates λi, the superposition process Λ(t) = Σ_{i=1}^k Λi(t) is a Poisson process with rate λ = Σ_{i=1}^k λi.
Note: if the interarrival times of the ith stream are a sequence of i.i.d. rv's but not necessarily exponentially distributed, then Λ(t) tends to a Poisson process as k → ∞. [D. Cox, Renewal Theory]

[Figure: merging of k streams into one stream; splitting of one stream into branches]

P8) Splitting: if an arrival randomly chooses the ith branch with probability πi, the arrival process at the ith branch, Λi(t), is Poisson with rate λi = πi λ. Moreover, Λi(t) is independent of Λj(t) for any pair i ≠ j.

3-30
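A Monte Carlo sanity check of P7 (Python, illustrative only; the rates, horizon and run count below are arbitrary choices): superposing two independent Poisson streams should give counts with mean, and variance, (λ1 + λ2)t.

```python
# Simulate Poisson streams via exponential interarrivals and merge them.
import random

random.seed(1)
l1, l2, t, runs = 2.0, 3.0, 10.0, 2000

def poisson_count(rate, horizon):
    """Count arrivals in [0, horizon) using exponential interarrival times."""
    n, clock = 0, random.expovariate(rate)
    while clock < horizon:
        n += 1
        clock += random.expovariate(rate)
    return n

counts = [poisson_count(l1, t) + poisson_count(l2, t) for _ in range(runs)]
mean = sum(counts) / runs
var = sum((c - mean) ** 2 for c in counts) / runs
# For a Poisson process, mean and variance of the count both equal rate*t
assert abs(mean - (l1 + l2) * t) < 1.0     # expected value 50
assert abs(var - (l1 + l2) * t) < 8.0
```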

Continuous-time Markov process I

A stochastic process is called a continuous-time MC if it satisfies

Pr[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k, X(t_{k−1}) = x_{k−1}, . . . , X(t_1) = x_1] = Pr[X(t_{k+1}) = x_{k+1} | X(t_k) = x_k]

X(t) is a time-homogeneous continuous-time MC if

Pr[X(t + s) = j | X(s) = i] = p_ij(t) (independent of s),

which is analogous to p_ij in a discrete-time MC.

[Figure: a sample path of a continuous-time MC through states 1-4, marking the sojourn time in each state and the times of state change] 3-31

Continuous-time Markov process II

State occupancy times follow an exponential distribution

• Let Ti be the sojourn (or occupancy) time of X(t) in state i before making a transition to any other state.
– Ti is assumed to be exponentially distributed with mean 1/vi.
• For all s ≥ 0 and t ≥ 0, due to the Markov property of this process,

Pr[Ti > s + t | Ti > s] = Pr[Ti > t] = e^{−vi t}.

Only the exponential distribution satisfies this memoryless property.

Semi-Markov process:
• The process jumps to state j; such a jump depends only on the previous state.
• Tj, for all j, follows a general (independent) distribution.

3-32

Page 9: Lecture 3 Introduction to Queueing theoryweb.iitd.ac.in/~jbseo/ell785/Queueing_theory_2017.pdf · ELL785–ComputerCommunicationNetworks Lecture 3 Introduction to Queueing theory

Continuous-time Markov process III

State transition rate

q_ii(δ) = Pr[the process remains in state i during δ sec]
       = Pr[Ti > δ] = e^{−vi δ} = 1 − vi δ + (vi δ)²/2! − · · · = 1 − vi δ + o(δ)

Or, letting vi be the rate at which the process moves out of state i,

lim_{δ→0} (1 − q_ii(δ))/δ = lim_{δ→0} (vi δ + o(δ))/δ = vi

: departures from state i occur as a Poisson process with mean rate vi

3-33

Continuous-time Markov Process IV

Comparison between discrete- and continuous-time MC

[Figure: a discrete-time Markov process changes state at unit time steps; a continuous-time Markov process has exponentially distributed sojourn times between state changes]

3-34

Continuous-time Markov Process V

A discrete-time MC is embedded in a continuous-time MC.

[Figure: a sample path of a continuous-time MC through states 1-4]

Each time a state, say i, is entered, an exponentially distributed state occupancy time is selected. When the time is up, the next state j is selected according to the transition probabilities p_ij.

When the process moves from state i to state j,

q_ij(δ) = (1 − q_ii(δ)) p_ij = vi p_ij δ + o(δ) = γ_ij δ + o(δ),

where γ_ij = lim_{δ→0} q_ij(δ)/δ = vi p_ij, i.e., the rate from state i to j.

3-35

Continuous-time Markov process VI

State probabilities πj(t) = Pr[X(t) = j]. For δ > 0,

πj(t + δ) = Pr[X(t + δ) = j]
          = Σ_i Pr[X(t + δ) = j | X(t) = i] Pr[X(t) = i]   (the conditional probability is q_ij(δ))
          = Σ_i q_ij(δ) πi(t)   ⇐⇒   π_i^(n+1) = Σ_j p_ji π_j^(n) (DTMC)

[Figure: transitions into state j from any other state over time]

3-36

Page 10: Lecture 3 Introduction to Queueing theoryweb.iitd.ac.in/~jbseo/ell785/Queueing_theory_2017.pdf · ELL785–ComputerCommunicationNetworks Lecture 3 Introduction to Queueing theory

Continuous-time Markov process VII

Subtracting πj(t) from both sides,

πj(t + δ) − πj(t) = Σ_i q_ij(δ) πi(t) − πj(t)
                  = Σ_{i≠j} q_ij(δ) πi(t) + (q_jj(δ) − 1) πj(t)

Dividing both sides by δ and letting δ → 0 (note lim_{δ→0} (q_jj(δ) − 1)/δ = γ_jj = −vj),

lim_{δ→0} (πj(t + δ) − πj(t))/δ = dπj(t)/dt = Σ_i γ_ij πi(t),

which is a form of the Chapman-Kolmogorov equations:

dπj(t)/dt = Σ_i γ_ij πi(t)

3-37

Continuous-time Markov process VIII

As t → ∞, the system reaches 'equilibrium' or 'steady-state':

dπj(t)/dt → 0 and πj(∞) = πj

0 = Σ_i γ_ij πi, or vj πj = Σ_{i≠j} γ_ij πi   (γ_jj = −vj = −Σ_{i≠j} γ_ji),

which is called the global balance equation, together with Σ_j πj = 1.

[State transition rate diagram]

3-38

Continuous-time Markov process IX

In matrix form,

d~π(t)/dt = ~π(t) Q and ~π(t) ~1 = 1,

whose solution is given by

~π(t) = ~π(0) e^{Qt}

As t → ∞, with ~π(∞) ≜ ~π = [πi],

~π Q = 0 and ~π · ~1 = 1, where

Q = [ −v0  γ01  γ02  γ03  . . . ]
    [ γ10  −v1  γ12  γ13  . . . ]
    [ γ20  γ21  −v2  γ23  . . . ]
    [  ⋮    ⋮    ⋮    ⋮    ⋱   ]

Q is called the infinitesimal generator or rate matrix.

3-39

Two-state CTMC I

A queueing system alternates between two states. In state 0, the system is idle, waiting for a customer to arrive; this idle time is an exponential random variable with mean 1/α. In state 1, the system is busy servicing a customer; the time in the busy state is an exponential random variable with mean 1/β.

• Find the state transition rate matrix.

Q = [ γ00 γ01 ] = [ −α   α ]
    [ γ10 γ11 ]   [  β  −β ]

• Draw the state transition rate diagram

[Diagram: state 0 to state 1 at rate α; state 1 to state 0 at rate β]

3-40

Page 11: Lecture 3 Introduction to Queueing theoryweb.iitd.ac.in/~jbseo/ell785/Queueing_theory_2017.pdf · ELL785–ComputerCommunicationNetworks Lecture 3 Introduction to Queueing theory

Two-state CTMC II

Find the state probabilities with initial state probabilities π0(0) and π1(0): use dπj(t)/dt = Σ_i γ_ij πi(t):

π0′(t) = −α π0(t) + β π1(t) and π1′(t) = α π0(t) − β π1(t)

• Using π0(t) + π1(t) = 1, we have

π0′(t) = −α π0(t) + β(1 − π0(t)) = −(α + β) π0(t) + β, with π0(0) = p0

• Assume π0(t) = C1 e^{−at} + C2:
(a) Find the homogeneous part: π0′(t) + (α + β) π0(t) = 0
(b) Find a particular solution: using the form in (a), determine the coefficients from π0′(t) + (α + β) π0(t) = β
(c) The solution of the above is

π0(t) = β/(α + β) + C e^{−(α+β)t} with C = p0 − β/(α + β)

3-41
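The closed-form transient solution can be checked against a direct numerical integration of the ODE (Python, illustrative only; α, β and p0 below are arbitrary test values):

```python
# Forward-Euler integration of dpi0/dt = -(a+b) pi0 + b vs. the closed form.
import math

a, b, p0 = 1.5, 0.5, 1.0     # alpha, beta, initial Pr[state 0]
dt, T = 1e-4, 3.0

pi0 = p0
for _ in range(int(T / dt)):
    pi0 += dt * (-(a + b) * pi0 + b)

closed = b / (a + b) + (p0 - b / (a + b)) * math.exp(-(a + b) * T)
assert abs(pi0 - closed) < 1e-3
# As t grows, the probability settles at beta/(alpha+beta) = 0.25 here
assert abs(closed - 0.25) < 0.01
```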

Cartridge Inventory I

An office orders laser printer cartridges in batches of four cartridges.Suppose that each cartridge lasts for an exponentially distributed timewith mean 1 month. Assume that a new batch of four cartridgesbecomes available as soon as the last cartridge in a batch runs out.

• Find the state transition rate matrix:

Q = [ −1   0   0   1 ]
    [  1  −1   0   0 ]
    [  0   1  −1   0 ]
    [  0   0   1  −1 ]

• Find the stationary pmf for N(t), the number of cartridges available at time t.

3-42
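For the stationary pmf, each column of this Q sums to zero, so the uniform pmf is stationary; a quick Python check (illustrative only):

```python
# Verify that pi = [1/4, 1/4, 1/4, 1/4] satisfies pi Q = 0.
Q = [[-1,  0,  0,  1],
     [ 1, -1,  0,  0],
     [ 0,  1, -1,  0],
     [ 0,  0,  1, -1]]

pi = [0.25, 0.25, 0.25, 0.25]
# Global balance for the CTMC: every component of pi Q is zero
for j in range(4):
    assert abs(sum(pi[i] * Q[i][j] for i in range(4))) < 1e-12
```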

Cartridge Inventory II

• Transient behavior of πi(t): ~π(t) = ~π(0) e^{Qt} = ~π(0) E e^{Λt} E^{-1}
– E and Λ are given by

E = (1/2) [ 1   1   1   1 ]         [ 0    0     0    0 ]
          [ 1   i  −i  −1 ]   and   [ 0  −1−i    0    0 ]
          [ 1  −1  −1   1 ]     Λ = [ 0    0   −1+i   0 ]
          [ 1  −i   i  −1 ]         [ 0    0     0   −2 ]

– note that i = √−1; use 'expm' in Matlab

[Plot: πi(t) for i = 1, . . . , 4 over 0 ≤ t ≤ 6, all converging to 1/4]

3-43

Barber shop I

Customers arrive at a barber shop according to a Poisson process with rate λ. One barber serves customers on a first-come first-served basis. The service time Si is exponentially distributed with mean 1/µ (sec). The number of customers in the system, N(t) for t ≥ 0, forms a Markov chain:

N(t + τ) = max(N(t) − B(τ), 0) + A(τ)

State transition probabilities (see the properties of the Poisson process):

Pr[0 arrivals (or departures) in (t, t + δ)] = 1 − λδ + o(δ) (or 1 − µδ + o(δ))
Pr[1 arrival (or departure) in (t, t + δ)] = λδ + o(δ) (or µδ + o(δ))
Pr[more than 1 arrival (or departure) in (t, t + δ)] = o(δ)

3-44

Page 12: Lecture 3 Introduction to Queueing theoryweb.iitd.ac.in/~jbseo/ell785/Queueing_theory_2017.pdf · ELL785–ComputerCommunicationNetworks Lecture 3 Introduction to Queueing theory

Barber shop II

Find Pn(t) ≜ Pr[N(t) = n]. For n ≥ 1,

Pn(t + δ) = Pn(t) Pr[0 arrivals & 0 departures in (t, t + δ)]
          + P_{n−1}(t) Pr[1 arrival & 0 departures in (t, t + δ)]
          + P_{n+1}(t) Pr[0 arrivals & 1 departure in (t, t + δ)] + o(δ)
          = Pn(t)(1 − λδ)(1 − µδ) + P_{n−1}(t)(λδ)(1 − µδ) + P_{n+1}(t)(1 − λδ)(µδ) + o(δ).

Rearranging and dividing by δ,

(Pn(t + δ) − Pn(t))/δ = −(λ + µ) Pn(t) + λ P_{n−1}(t) + µ P_{n+1}(t) + o(δ)/δ

As δ → 0, for n > 0 we have

dPn(t)/dt = −(λ + µ) Pn(t) + λ P_{n−1}(t) + µ P_{n+1}(t),

where (λ + µ) is the rate out of state n, λ is the rate from state n − 1 into n, and µ is the rate from state n + 1 into n. 3-45

Barber shop III

For n = 0, we have

dP0(t)/dt = −λ P0(t) + µ P1(t).

As t → ∞, i.e., in steady state, we have Pn(∞) = πn with dPn(t)/dt = 0:

λ π0 = µ π1
(λ + µ) πn = λ π_{n−1} + µ π_{n+1} for n ≥ 1.

[State transition rate diagram: birth-death chain with arrival rate λ and departure rate µ]

The solution of the above equations is (ρ = λ/µ)

πn = ρ^n π0 and 1 = π0 (1 + Σ_{i=1}^∞ ρ^i) ⇒ π0 = 1 − ρ

3-46
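A quick Python verification (illustrative only) that the geometric pmf satisfies both balance equations and yields the familiar mean ρ/(1 − ρ):

```python
# Check pi_n = (1-rho) rho^n against the M/M/1 balance equations.
rho = 0.8
lam, mu = rho, 1.0          # arrival and service rates with 1/mu = 1

pi = [(1 - rho) * rho ** n for n in range(200)]
# Boundary: lambda*pi_0 = mu*pi_1
assert abs(lam * pi[0] - mu * pi[1]) < 1e-12
# Interior: (lambda+mu) pi_n = lambda pi_{n-1} + mu pi_{n+1}
for n in range(1, 198):
    assert abs((lam + mu) * pi[n] - lam * pi[n - 1] - mu * pi[n + 1]) < 1e-12
# Truncated mean vs rho/(1-rho) = 4 (the truncation tail is negligible)
mean_N = sum(n * p for n, p in enumerate(pi))
assert abs(mean_N - rho / (1 - rho)) < 1e-3
```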

Barber shop IV

ρ: the server's utilization (< 1, i.e., λ < µ)
Mean number of customers in the system:

E[N] = Σ_{n=0}^∞ n πn = ρ/(1 − ρ) = ρ (in server) + ρ²/(1 − ρ) (in queue)

An M/M/1 system with 1/µ = 1:

[Plots vs. ρ from 0.1 to 0.9: number of customers in the system, and mean system response time (sec), simulation vs. analysis; both grow sharply as ρ → 1]

3-47

Barbershop V

Recall the state transition rate matrix Q on page 3-39:

~π Q = 0 with Q = [ −v0  γ01  γ02  γ03  . . . ]
                  [ γ10  −v1  γ12  γ13  . . . ]
                  [ γ20  γ21  −v2  γ23  . . . ]
                  [  ⋮    ⋮    ⋮    ⋮    ⋱   ]   and ~π · ~1 = 1.

– What are γ_ij and vi in the M/M/1 queue?

γ_ij = λ if j = i + 1; µ if j = i − 1; −(λ + µ) if j = i (for i ≥ 1, and −λ for i = 0); 0 otherwise.

If a and b denote the interarrival and service times, respectively, then the sojourn time in state i ≥ 1 is min(a, b), which is exponential with rate vi = λ + µ (and v0 = λ).

What are p_{i,i+1} and p_{i+1,i}?

p_{i,i+1} = Pr[a < b] = λ/(λ + µ) and p_{i+1,i} = Pr[b < a] = µ/(λ + µ). 3-48

Page 13: Lecture 3 Introduction to Queueing theoryweb.iitd.ac.in/~jbseo/ell785/Queueing_theory_2017.pdf · ELL785–ComputerCommunicationNetworks Lecture 3 Introduction to Queueing theory

Barbershop VI

Distribution of the sojourn time T:

T_N = S1 + S2 + · · · + S_N (customers ahead) + S_{N+1}

An arriving customer finds N customers in the system (including the customer in the server).
– By the memoryless property of the exponential distribution, the remaining service time of the customer in service is exponentially distributed:

fT(t) = Σ_{i=0}^∞ µ ((µt)^i / i!) e^{−µt} πi
      = Σ_{i=0}^∞ µ ((µt)^i / i!) e^{−µt} ρ^i (1 − ρ) = µ(1 − ρ) e^{−µ(1−ρ)t},

which can also be obtained via the Laplace transform of the distribution of the Si.

3-49
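A Monte Carlo sketch of this result (Python, illustrative only; ρ, µ and the run count are arbitrary): the sojourn time is a geometric number of exponential service times, and its sample mean should approach 1/(µ(1 − ρ)).

```python
# Sample T = S_1 + ... + S_{N+1} with N geometric (pi_n = (1-rho) rho^n).
import random

random.seed(7)
rho, mu, runs = 0.5, 1.0, 20000

def sojourn():
    # Number found in system on arrival: P[N = n] = (1-rho) rho^n
    n = 0
    while random.random() < rho:
        n += 1
    # N+1 exponential service times (memoryless residual service)
    return sum(random.expovariate(mu) for _ in range(n + 1))

samples = [sojourn() for _ in range(runs)]
mean_T = sum(samples) / runs
# Theoretical mean: 1/(mu*(1-rho)) = 2 for these parameters
assert abs(mean_T - 1 / (mu * (1 - rho))) < 0.1
```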

Barbershop simulation I

Discrete-event simulation

[Flowchart: set sim_time = 0; generate an arrival and advance sim_time by the interarrival time. On an arrival (next interarrival time < service time), Queue = Queue + 1 and schedule the next event. On a departure (service time < next interarrival time), Queue = Queue − 1; if the queue is empty, schedule the next arrival; otherwise advance sim_time by the service time and continue.]

3-50

Barbershop simulation II

clear
% Define variables
global arrival departure mservice_time
arrival = 1; departure = -1; mservice_time = 1;
% Set simulation parameters
sim_length = 30000; max_queue = 1000;
% To get delay statistics
system_queue = zeros(1,max_queue);
k = 0;
for arrival_rate = 0.1:0.025:0.97
    k = k + 1;
    % x(k) denotes utilization
    x(k) = arrival_rate*mservice_time;
    % Initialize
    sim_time = 0; num_arrivals = 0; num_system = 0;
    upon_arrival = 0; total_delay = 0; num_served = 0;
    % Assuming that the queue is empty
    event = arrival; event_time = exprnd(1/arrival_rate);
    sim_time = sim_time + event_time;
    while (sim_time < sim_length)
        % If an arrival occurs,
        if event == arrival
            num_arrivals = num_arrivals + 1;
            num_system = num_system + 1;
            % Record the arrival time of the customer
            system_queue(num_system) = sim_time;
            upon_arrival = upon_arrival + num_system;
            % See whether a new arrival or a departure occurs next
            [event, event_time] = schedule_next_event(arrival_rate);

3-51

Barbershop simulation III

        % If a departure occurs,
        elseif event == departure
            delay_per_arrival = sim_time - system_queue(1);
            system_queue(1:max_queue-1) = system_queue(2:max_queue);
            total_delay = total_delay + delay_per_arrival;
            num_system = num_system - 1;
            num_served = num_served + 1;
            if num_system == 0
                % Nothing to serve, schedule an arrival
                event = arrival;
                event_time = exprnd(1/arrival_rate);
            elseif num_system > 0
                % The system still has customers to serve
                [event, event_time] = schedule_next_event(arrival_rate);
            end
        end
        sim_time = sim_time + event_time;
    end
    ana_queue_length(k) = x(k)/(1-x(k));
    ana_response_time(k) = 1/(1/mservice_time - arrival_rate);
    % Queue length seen by arrivals
    sim_queue_length(k) = upon_arrival/sim_length;
    sim_response_time(k) = total_delay/num_served;
end

3-52

Page 14: Lecture 3 Introduction to Queueing theoryweb.iitd.ac.in/~jbseo/ell785/Queueing_theory_2017.pdf · ELL785–ComputerCommunicationNetworks Lecture 3 Introduction to Queueing theory

Barbershop simulation IV

function [event, event_time] = schedule_next_event(arrival_rate)

global arrival departure mservice_time

minter_arrival = 1/arrival_rate;
inter_arrival = exprnd(minter_arrival);
service_time = exprnd(mservice_time);
if inter_arrival < service_time
    event = arrival;
    event_time = inter_arrival;
else
    event = departure;
    event_time = service_time;
end

3-53

Relation between DTMC and CTMC I

Recall the embedded MC: each time a state, say i, is entered, an exponentially distributed state occupancy time is selected. When the time is up, the next state j is selected according to the transition probabilities p_ij.

[Figure: a sample path of a continuous-time MC]

• Ni(n): the number of times state i occurs in the first n transitions
• Ti(j): the occupancy time the jth time state i occurs

The proportion of time spent by X(t) in state i after the first n transitions:

time spent in state i / time spent in all states = Σ_{j=1}^{Ni(n)} Ti(j) / Σ_i Σ_{j=1}^{Ni(n)} Ti(j)

3-54

Relation between DTMC and CTMC II

As n → ∞, using πi = lim_{n→∞} Ni(n)/n, we have

[(Ni(n)/n) · (1/Ni(n)) Σ_{j=1}^{Ni(n)} Ti(j)] / [Σ_i (Ni(n)/n) · (1/Ni(n)) Σ_{j=1}^{Ni(n)} Ti(j)]
→ πi E[Ti] / Σ_i πi E[Ti]   (with E[Ti] = 1/vi)   = φi,

where πi is the unique pmf solution to

πj = Σ_i πi p_ij and Σ_j πj = 1   (∗)

The long-term proportion of time spent in state i approaches

φi = (πi/vi) / (Σ_i πi/vi) = c πi/vi → πi = vi φi / c

Substituting πi = vi φi / c into (∗) yields

vj φj / c = (1/c) Σ_i vi φi p_ij → vj φj = Σ_i φi vi p_ij = Σ_i φi γ_ij

3-55

Relation between DTMC and CTMC III

Recall the M/M/1 queue

[Diagrams: a) CTMC on states 0, 1, 2, 3, 4, . . .; b) embedded MC on the same states, moving up w.p. p and down w.p. q = 1 − p (from state 0, up w.p. 1)]

In the embedded MC, we have the following global balance equations:

π0 = q π1
π1 = π0 + q π2
⋮
πi = p π_{i−1} + q π_{i+1}  →  π_{i+1} = (p/q) πi for i ≥ 1

3-56

Page 15: Lecture 3 Introduction to Queueing theoryweb.iitd.ac.in/~jbseo/ell785/Queueing_theory_2017.pdf · ELL785–ComputerCommunicationNetworks Lecture 3 Introduction to Queueing theory

Relation between DTMC and CTMC IV

Using the normalization condition Σ_{i=0}^∞ πi = 1,

πi = (p/q)^{i−1} (1/q) π0 for i ≥ 1, and π0 = (1 − 2p)/(2(1 − p))

Converting the embedded MC into the CTMC,

φ0 = c π0/v0 = (c/λ) π0 and φi = c πi/vi = c πi/(λ + µ) for i ≥ 1

Determining c:

Σ_{i=0}^∞ φi = 1 → c (π0/λ + (1/(λ + µ)) Σ_{i=1}^∞ πi) = 1 → c = 2λ

Finally, we get φi = ρ^i (1 − ρ) for i = 0, 1, 2, . . .

3-57
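The DTMC-to-CTMC conversion can be checked numerically (Python, illustrative only; the truncation level K is an arbitrary choice large enough that the geometric tail is negligible):

```python
# Embedded-chain pmf pi_i, weighted by mean sojourn 1/v_i, should recover
# phi_i = (1-rho) rho^i for the M/M/1 queue.
rho = 0.5
lam, mu = rho, 1.0
p = lam / (lam + mu)         # up-step prob of the embedded chain (states >= 1)
q = 1 - p

K = 60                       # truncation level
pi = [0.0] * K
pi[0] = 1.0                  # unnormalized: pi_1 = pi_0/q, pi_{i+1} = (p/q) pi_i
pi[1] = pi[0] / q
for i in range(2, K):
    pi[i] = (p / q) * pi[i - 1]
s = sum(pi)
pi = [x / s for x in pi]

v = [lam] + [lam + mu] * (K - 1)   # sojourn rates: v_0 = lambda, v_i = lambda+mu
w = [pi[i] / v[i] for i in range(K)]
phi = [x / sum(w) for x in w]

for i in range(10):
    assert abs(phi[i] - (1 - rho) * rho ** i) < 1e-6
```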

Queueing systems I

The arrival times, the size of the demand for service, the service capacity and the size of the waiting room may be (random) variables.
Queueing discipline: specifies which customer to pick next for service.
• First come first served (FCFS, or FIFO)
• Last come first served (LCFS, LIFO)
• Random order; processor sharing (PS); round robin (RR)
• Priority (preemptive: resume, non-resume; non-preemptive)
• Shortest job first (SJF) and longest job first (LJF)

3-58

Queueing systems II

Customer behavior: jockeying, reneging, balking, etc.

Kendall's notation A/B/c/K/N:
• A: arrival (interarrival) time distribution
• B: service time distribution
• c: number of servers
• K: queue size (default ∞)
• N: population size (default ∞)

For A and B:
• M: Markovian, exponential dist.
• D: deterministic
• GI: general independent
• Ek: Erlang-k
• Hk: mixture of k exponentials
• PH: phase-type distribution
E.g.: M/D/2, M/M/c, G/G/1, etc.; the barbershop is an M/M/1 queue.

3-59

Queueing system III

Performance measures:

• N(t) = Nq(t) + NS(t): number in system
• Nq(t): number in queue
• NS(t): number in service
• W: waiting time in queue
• T: total time (or response time) in the system
• τ: service time
• Throughput: γ ≜ mean # of customers served per unit time
  1. γ for a non-blocking system = min(λ, mµ)
  2. γ for a blocking system = (1 − PB)λ, PB = blocking probability
• Utilization: ρ ≜ fraction of time the server is busy

ρ = load/capacity = lim_{T→∞} λT/(µT) = λ/µ for a single-server queue
                  = lim_{T→∞} λT/(mµT) = λ/(mµ) for an m-server queue
3-60


Little’s theorem I

Any queueing system in steady state: N = λT

[Figure: sample paths of the number of arrivals α(t) and departures β(t) vs time; the shaded area between them is the number in system]

• N: average number of customers in the system
• λ: steady-state arrival rate, need not be a Poisson process
• T: average delay per customer

Proof: For a system with N(0) = 0 and N(t) = 0 as t → ∞,

Nt = (1/t) ∫_0^t N(τ)dτ = (1/t) ∑_{i=1}^{α(t)} Ti = (α(t)/t) · (∑_{i=1}^{α(t)} Ti / α(t)) = λt · Tt.

If N(t) ≠ 0, we have

(β(t)/t) · (∑_{i=1}^{β(t)} Ti / β(t)) ≤ Nt ≤ λt · Tt.

3-61

Little’s theorem II

As an alternative, for the cumulative processes,

N(t) = α(t) − β(t), and for the cumulative area γ(t) = ∫_0^t N(τ)dτ → γ(t)/t = Nt

See the variable 'num_system' in the previous Matlab code ('num_arrivals' in the code; t corresponds to 'sim_length')

λt = α(t)/t

Response time per customer from 'total_delay':

Tt = γ(t)/α(t) = (γ(t)/t) · (t/α(t)) = Nt/λt

As t → ∞, we have

N = λT = λ(W + x̄) = Nq + ρ

valid for any queue (even with any service order) as long as the limits of λt and Tt exist as t → ∞

3-62
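The slides' simulation is in Matlab; as a cross-check, a minimal Python sketch (all names illustrative) that simulates an FCFS M/M/1 queue and verifies Little's law N = λT, using exactly the identity from the proof: the area under N(t) equals the sum of the sojourn times Ti.

```python
import random

def simulate_mm1(lam, mu, num_customers=200_000, seed=1):
    """Simulate an FCFS M/M/1 queue; return (time-average N, observed lambda, mean T)."""
    rng = random.Random(seed)
    t_arr = 0.0            # arrival time of the current customer
    depart_prev = 0.0      # departure time of the previous customer
    total_sojourn = 0.0    # sum of T_i = area under N(t)
    for _ in range(num_customers):
        t_arr += rng.expovariate(lam)              # Poisson arrivals, rate lam
        start = max(t_arr, depart_prev)            # service starts when the server frees up
        depart_prev = start + rng.expovariate(mu)  # exponential service, rate mu
        total_sojourn += depart_prev - t_arr
    T = total_sojourn / num_customers       # average delay per customer
    lam_hat = num_customers / t_arr         # observed arrival rate
    N = total_sojourn / depart_prev         # time-average number in system
    return N, lam_hat, T

N, lam_hat, T = simulate_mm1(lam=0.5, mu=1.0)
# Little's law: N should match lam_hat * T (theory for rho = 0.5: N = rho/(1-rho) = 1)
print(N, lam_hat * T)
```

Note that N and λ̂·T̄ are computed from the same cumulative sums, so they agree up to the end-of-run boundary effect, independently of the service order.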

Little’s theorem III

Finite queue

… …

Network of queues

3-63

Increasing the arrival and transmission rates by the same factor

In a packet transmission system,
• Arrival rate (packets/sec) is increased from λ to Kλ for K > 1
• The packet length distribution remains the same (exponential), with mean 1/µ bits
• The transmission capacity (C bps) is increased by a factor of K

Performance
• The average number of packets in the system remains the same

N = ρ/(1 − ρ) with ρ = λ/(µC) = Kλ/(µKC)

• Average delay per packet

N = (Kλ)T → T = N/(Kλ)

Aggregation is better: increasing a transmission line by K times can allow K times as many packets/sec with a K times smaller average delay per packet

3-64
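A short Python sketch (not from the slides; names illustrative) verifying the K-scaling claim with the M/M/1 delay formula T = 1/(µC − λ) used later in the deck:

```python
def mm1_delay(lam, mu, C):
    """Mean packet delay T = 1/(mu*C - lam) for an M/M/1 link of capacity C."""
    assert lam < mu * C, "queue must be stable"
    return 1.0 / (mu * C - lam)

lam, mu, C, K = 3.0, 1.0, 5.0, 10
T1 = mm1_delay(lam, mu, C)            # original system
TK = mm1_delay(K * lam, mu, K * C)    # arrival rate and capacity both scaled by K
# N = lam*T is unchanged, while the per-packet delay shrinks by a factor of K
print(T1, TK, T1 / TK)   # 0.5, 0.05, ratio 10
```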


Statistical multiplexing vs TDMA or FDMA

Multiplexing: m Poisson packet streams, each with rate λ/m (packets/sec), are transmitted over a communication link with exponentially distributed packet transmission times of mean 1/µ

……

a) Statistical multiplexing b) TDMA or FDMA

T = 1/(µ − λ) (statistical multiplexing) < T = m/(µ − λ) (TDMA or FDMA)

When do we need TDMA or FDMA?
– In a multiplexer, packet generation times overlap, so it must buffer and delay some of the packets

3-65
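The comparison above can be checked numerically; a hedged Python sketch (names illustrative), where each TDMA/FDMA substream is modeled as an M/M/1 queue with rates λ/m and µ/m:

```python
def stat_mux_delay(lam, mu):
    """One shared M/M/1 link: T = 1/(mu - lam)."""
    return 1.0 / (mu - lam)

def tdma_delay(lam, mu, m):
    """Each of the m substreams sees an M/M/1 queue with rates lam/m and mu/m."""
    return 1.0 / (mu / m - lam / m)

lam, mu, m = 0.6, 1.0, 8
# TDMA/FDMA delay is exactly m times the statistical-multiplexing delay
print(stat_mux_delay(lam, mu), tdma_delay(lam, mu, m))
```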

Little’s theorem: example I

Estimating throughput in a time-sharing system

[Figure 3.4: N terminals connected with a time-sharing computer system (average reflection time R, average job processing time P). To estimate the maximum attainable throughput, we assume that a departing user immediately reenters the system or, equivalently, is immediately replaced by a new user.]

Suppose a time-sharing computer system with N terminals. A user logs into the system through a terminal and, after an initial reflection period of average length R, submits a job that requires an average processing time P at the computer. Jobs queue up inside the computer and are served by a single CPU according to some unspecified priority or time-sharing rule.
What is the maximum sustainable throughput of the system?
– Assume that there is always a user ready to take the place of a departing user, so the number of users in the system is always N

3-66

Little’s theorem: example II

The average time a user spends in the system

T = R + D → R + P ≤ T ≤ R + NP

– D: the average delay between the time a job is submitted to the computer and the time its execution is completed, D ∈ [P, NP]

Combining this with λ = N/T,

N/(R + NP) ≤ λ ≤ min{1/P, N/(R + P)}

– throughput is bounded by 1/P, the maximum job execution rate

[Figure 3.5: Bounds on throughput and average user delay in a time-sharing system. (a) Bounds on attainable throughput. (b) Bounds on average user time in a fully loaded system; the delay rises essentially in proportion with the number of terminals N. The terminals are the bottleneck when N < 1 + R/P; the CPU is the bottleneck when N > 1 + R/P. The bounds are independent of the service statistics, thanks to the generality of Little's theorem.]

3-67

Little’s theorem: example III

Using T = N/λ, we can rewrite

max{NP,R + P} ≤ T ≤ R + NP


3-68
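The throughput and delay bounds of Eqs. (3.8)–(3.9) can be sketched in a few lines of Python (an illustration, not part of the slides):

```python
def throughput_bounds(N, R, P):
    """Eq. (3.8): N/(R + N*P) <= lambda <= min(1/P, N/(R + P))."""
    lower = N / (R + N * P)
    upper = min(1.0 / P, N / (R + P))
    return lower, upper

def delay_bounds(N, R, P):
    """Eq. (3.9): max(N*P, R + P) <= T <= R + N*P."""
    return max(N * P, R + P), R + N * P

# N > 1 + R/P here, so the CPU capacity 1/P is the binding upper bound
lo, hi = throughput_bounds(N=20, R=5.0, P=1.0)
print(lo, hi)   # 0.8 <= lambda <= 1.0
```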


Poisson Arrivals See Time Average (PASTA) theorem I

Suppose a random process which spends its time in different states Ej

In equilibrium, we can associate with each state Ej two differentprobabilities

• The probability of the state as seen by an outside random observer
  – πj: prob. that the system is in state Ej at a random instant
• The probability of the state seen by an arriving customer
  – πj*: prob. that the system is in state Ej just before a (randomly chosen) arrival

In general, we have πj ≠ πj*

When the arrival process is Poisson, we have

πj = πj*

3-69

PASTA theorem II

For a stochastic process, N ≡ {N(t), t ≥ 0}, and an arbitrary set B of states:

U(t) = 1 if N(t) ∈ B, 0 otherwise  ⇒  V(t) = (1/t) ∫_0^t U(τ)dτ.

For a Poisson arrival process A(t),

Y(t) = ∫_0^t U(τ)dA(τ)  ⇒  Z(t) = Y(t)/A(t)

Lack of Anticipation Assumption (LAA): for each t ≥ 0, {A(t + u) − A(t), u ≥ 0} and {U(s), 0 ≤ s ≤ t} are independent: future inter-arrival times and the service times of previously arrived customers are independent.
Under LAA, as t → ∞, PASTA ensures

V(t) → V(∞) w.p. 1 if Z(t) → V(∞) w.p. 1

3-70

PASTA theorem

Proof:
• For sufficiently large n, Y(t) is approximated as

Yn(t) = ∑_{k=0}^{n−1} U(kt/n) [A((k+1)t/n) − A(kt/n)]

• LAA decouples the above as

E[Yn(t)] = λt E[∑_{k=0}^{n−1} U(kt/n)/n]

• As n → ∞, if |Yn(t)| is bounded,

lim_{n→∞} E[Yn(t)] = E[Y(t)] = λt E[V(t)] = λ E[∫_0^t U(τ)dτ]. □

: the expected number of arrivals who find the system in state B equals the arrival rate times the expected length of time the system is there.

3-71

Systems where PASTA does not hold

Ex1) D/D/1 queue
• Deterministic arrivals every 10 msec
• Deterministic service times of 9 msec

[A sample path of the D/D/1 queue: busy during [0, 9], [10, 19], [20, 29], . . .]

• Arrivals always find the system empty.
• The system is occupied on average with probability 0.9.

Ex2) LAA violated: the service time of a current customer depends on an inter-arrival time of a future customer
• Your own PC (one customer, one server)
• Your own PC is always free when you need it, π0* = 1
• π0 = proportion of time the PC is free (< 1)

3-72
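Ex1 above is easy to verify directly; a small Python sketch (illustrative, not from the slides) counting what arrivals see versus the time-average occupancy:

```python
# D/D/1: deterministic arrivals every 10 ms, deterministic 9 ms service.
# Arrivals always see an empty system, yet the time-average occupancy is 0.9.
period, service, n = 10.0, 9.0, 1000
busy_time = n * service          # total time the server is busy
horizon = n * period             # total observation window
seen_busy = 0
for k in range(1, n):
    t = k * period                        # k-th arrival instant
    prev_departure = (k - 1) * period + service
    if prev_departure > t:                # would the arrival see a busy server?
        seen_busy += 1
print(seen_busy, busy_time / horizon)     # 0 arrivals see a busy server; occupancy 0.9
```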


M/M/1/K I

M/M/1/K: the system can accommodate K customers (including onein service)

… …

waiting customers

• State balance equations

λπ0 = µπ1
(λ + µ)πi = λπi−1 + µπi+1 for 1 ≤ i ≤ K − 1
µπK = λπK−1

After rearranging, we have

λπi−1 = µπi for 1 ≤ i ≤ K

• For i ∈ {0, 1, . . . , K}, the steady-state probabilities are

πn = ρ^n π0 and ∑_{n=0}^K πn = 1 ⇒ π0 = (1 − ρ)/(1 − ρ^(K+1))

3-73

M/M/1/K II

• πK: the probability that an arriving customer finds the system full. Due to PASTA, this is the blocking probability

πK = (1 − ρ)ρ^K / (1 − ρ^(K+1))

• Blocking probability in simulation

PB = (total # of arrivals blocked upon arrival) / (total # of arrivals at the system)

[Figure: PB vs ρ, analysis and simulation, for K = 10 and K = 5]

3-74
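The analytic curve in the figure is a one-liner; a Python sketch (illustrative) of the M/M/1/K blocking probability, including the ρ = 1 limit πK = 1/(K + 1):

```python
def mm1k_blocking(rho, K):
    """pi_K = (1-rho) rho^K / (1 - rho^(K+1)); by PASTA this is the blocking probability."""
    if rho == 1.0:
        return 1.0 / (K + 1)     # limit of the formula as rho -> 1
    return (1.0 - rho) * rho**K / (1.0 - rho**(K + 1))

print(mm1k_blocking(0.5, 10))
```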

M/M/1/K Simulation I

clear
% Define variables
global arrival departure mservice_time
arrival = 1; departure = -1; mservice_time = 1;
% Define simulation parameters
sim_length = 30000; K = 10; system_queue = zeros(1,K);
k = 0; max_iter = 5;
for arrival_rate = 0.1:0.025:0.97
    k = k + 1;
    x(k) = arrival_rate*mservice_time;
    % initialize
    sim_time = 0; num_arrivals = 0; num_system = 0;
    upon_arrival = 0; total_delay = 0; num_served = 0; dropped = 0;
    % Assuming that the queue is empty
    event = arrival; event_time = exprnd(1/arrival_rate);
    sim_time = sim_time + event_time;
    for iter = 1:max_iter
        while (sim_time < sim_length)
            % If an arrival occurs,
            if event == arrival
                num_arrivals = num_arrivals + 1;
                if num_system == K
                    dropped = dropped + 1;
                else
                    num_system = num_system + 1;
                    system_queue(num_system) = sim_time;
                    upon_arrival = upon_arrival + num_system;
                end
                % To see whether a new arrival comes or a departure occurs next
                [event, event_time] = schedule_next_event(arrival_rate);

3-75

M/M/1/K Simulation II

            % If a departure occurs,
            elseif event == departure
                delay_per_arrival = sim_time - system_queue(1);
                system_queue(1:K-1) = system_queue(2:K);
                total_delay = total_delay + delay_per_arrival;
                num_system = num_system - 1;
                num_served = num_served + 1;
                if num_system == 0
                    % nothing to serve, schedule an arrival
                    event = arrival;
                    event_time = exprnd(1/arrival_rate);
                elseif num_system > 0
                    % the system still has customers to serve
                    [event, event_time] = schedule_next_event(arrival_rate);
                end
            end
            sim_time = sim_time + event_time;
        end
        Pd_iter(iter) = dropped/num_arrivals;
    end
    piK(k) = x(k)^K*(1-x(k))./(1-x(k)^(K+1));
    Pd(k) = mean(Pd_iter);
end
% use the previous schedule_next_event function

3-76


M/M/m queue I

M/M/m: there are m parallel servers, whose service times are exponentially distributed with mean 1/µ.

…… …

State transition rate diagram of M/M/m

When m servers are busy, the time until the next departure, X, is

X = min(τ1, τ2, . . . , τm) ⇒ Pr[X > t] = Pr[min(τ1, τ2, . . . , τm) > t] = ∏_{i=1}^m Pr[τi > t] = e^(−mµt) (i.i.d.)

Global balance equations:

λπ0 = µπ1
(λ + min(n, m)µ)πn = λπn−1 + min(n + 1, m)µπn+1 for n ≥ 1

3-77

M/M/m queue II

The previous global balance equations can be rewritten as

λπn−1 = min(n, m)µπn for n ≥ 1

Using a = λ/µ and ρ = λ/(mµ),

πn = (a^n/n!) π0 for n ≤ m, and πn = ρ^(n−m) (a^m/m!) π0 for n ≥ m

From the normalization condition, π0 is obtained:

1 = ∑_{i=0}^∞ πi = π0 {∑_{i=0}^{m−1} a^i/i! + (a^m/m!) ∑_{i=m}^∞ ρ^(i−m)} = π0 {∑_{i=0}^{m−1} a^i/i! + (a^m/m!) · 1/(1 − ρ)}

Erlang C formula, C(m, a):

C(m, a) = Pr[W > 0] = Pr[N ≥ m] = ∑_{i=m}^∞ πi = ((mρ)^m/m!) · π0/(1 − ρ)

3-78
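A direct Python sketch of the Erlang C formula above (illustrative; for m = 1 it reduces to C(1, a) = ρ):

```python
from math import factorial

def erlang_c(m, a):
    """C(m, a) = Pr[W > 0] for M/M/m with a = lam/mu and rho = a/m < 1."""
    rho = a / m
    assert rho < 1.0, "need a < m for stability"
    tail = (a**m / factorial(m)) / (1 - rho)           # (a^m/m!) * sum of rho^(i-m)
    pi0_inv = sum(a**i / factorial(i) for i in range(m)) + tail
    return tail / pi0_inv

print(erlang_c(1, 0.5))   # single server: equals rho = 0.5
```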

M/M/c/c I

c servers, and only c customers can be accommodated (no waiting room)

… …

Balance equations are (a = λ/µ, called the offered load in Erlangs)

λπn−1 = nµπn ⇒ πn = (a/n)πn−1 = (a^n/n!) π0

Using ∑_{n=0}^c πn = 1, we have

πn = (a^n/n!) / {∑_{i=0}^c a^i/i!}

Erlang B formula: B(c, a) = πc
– valid for the M/G/c/c system: the result depends only on the mean of the service time distribution

3-79

M/M/c/c II

Erlang capacity: Telephone systems with c channels

[Figures: B(c, a) vs offered traffic intensity a for c = 1, 2, . . . , 10 and for c = 10, 20, . . . , 100; PB vs a, analysis and simulation, for c = 3 and c = 5]

3-80


Example: a system with blocking I

In the Select-city shopping mall, customers arrive at its underground parking lot according to a Poisson process with a rate of 60 cars per hour. Parking times follow a Weibull distribution with mean 2.5 hours, and the parking lot can accommodate 150 cars. When the parking lot is full, an arriving customer has to park somewhere else. Find the fraction of customers finding all places occupied upon arrival.

[Figure: two density functions with the same mean — Weibull (α = 2.7228, k = 5) and exponential]

f(x) = (k/α)(x/α)^(k−1) e^(−(x/α)^k)   (Weibull)
f(x) = (1/α) e^(−x/α)   (exponential)

– Mean of the Weibull distribution: αΓ(1 + 1/k), where Γ(x) = ∫_0^∞ t^(x−1) e^(−t) dt is the gamma function

3-81

Example: a system with blocking II

• c = 150 and a = λ/µ = 60 × 2.5 = 150

B(c, a)|_{c=150, a=150} = (a^c/c!) / ∑_{i=0}^c a^i/i!

• Divide the numerator and the denominator by ∑_{n=0}^{c−1} a^n/n!:

B(c, a) = (a^c/c!) / {∑_{i=0}^{c−1} a^i/i! + a^c/c!}
        = [(a^c/c!)/∑_{n=0}^{c−1} a^n/n!] / [1 + (a^c/c!)/∑_{n=0}^{c−1} a^n/n!]
        = (a/c)B(c − 1, a) / (1 + (a/c)B(c − 1, a)) = aB(c − 1, a) / (c + aB(c − 1, a))

with B(0, a) = 1

3-82
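The recursion above avoids the huge factorials in the direct formula; a Python sketch (illustrative) evaluating the parking-lot example:

```python
def erlang_b(c, a):
    """Erlang B via the recursion B(c, a) = a*B(c-1, a) / (c + a*B(c-1, a)), B(0, a) = 1."""
    b = 1.0
    for n in range(1, c + 1):
        b = a * b / (n + a * b)
    return b

# fraction of arriving cars finding the 150-space lot full at a = 150 Erlangs
print(erlang_b(150, 150))
```

By the insensitivity of the Erlang B formula, the Weibull holding time matters only through its mean, which is why a = 60 × 2.5 suffices.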

Finite source population: M/M/C/C/K system I

Consider the loss system (no waiting places) in the case where the arrivals originate from a finite population of sources: the total number of customers is K

… …

• The time to the next call attempt by a customer, the so-called thinking time (idle time) of the customer, obeys an exponential distribution with mean 1/λ (sec)
• Blocked calls are lost
  - a blocked call does not lead to reattempts; the customer starts a new thinking time, again exponentially distributed with mean 1/λ
  - the call holding time is exponentially distributed with mean 1/µ

3-83

M/M/C/C/K system II

If C ≥ K, each customer has its own server, i.e., no blocking.

• Each user alternates between two states: active with mean 1/µ and idle with mean 1/λ
• The probability for a user to be idle or active is

π0 = (1/λ)/(1/λ + 1/µ) and π1 = (1/µ)/(1/λ + 1/µ) = a/(1 + a), with a = λ/µ

• Call arrival rate: π0λ; offered load per user: π1 = a/(1 + a)

If C < K, this system can be described by the balance equations

((K − i)λ + iµ)πi = (K − i + 1)λπi−1 + (i + 1)µπi+1

3-84


M/M/C/C/K system III

• For j = 1, 2, . . . , C, we have

(K − j + 1)λπj−1 = jµπj ⇒ πj = [K!/(j!(K − j)!)] a^j π0.

• Applying ∑_{j=0}^C πj = 1,

πj = [K!/(j!(K − j)!)] a^j / ∑_{k=0}^C [K!/(k!(K − k)!)] a^k

Time blocking (or congestion): the proportion of time the system spends in state C; the equilibrium probability of state C is

PB = πC

– The probability of all resources being busy in a given observation period
– Insensitivity: like the Erlang B formula, this result is insensitive to the form of the holding time distribution (though the derivation above was explicitly based on the assumption of an exponential holding time distribution)

3-85

M/M/C/C/K system IV

Call blocking: the probability that an arriving call is blocked, i.e., PL

• The arrival rate is state-dependent, i.e., (K − N(t))λ: not Poisson.
• PASTA does not hold: time blocking PB cannot represent PL
• λT: total call arrivals on average

λT ∝ ∑_{i=0}^C (K − i)λπi

– PL: the probability that a call finds the system blocked
– If λT = 10000 and PL = 0.01, λT·PL = 100 calls are lost

• λC: call arrivals when the system is in the blocking state

λC ∝ (K − C)λπC

– PB·λC: blocked calls upon the arrival instant

PLλT = PBλC

– Among the total arrivals, the fraction that find the system blocked must equal the call arrivals that see the busy system

3-86

M/M/C/C/K system V

• Call blocking PL can be obtained from

PLλT = PBλC → PL = (λC/λT)PB ≤ PB

• Engset formula:

PL(K) = (K − C)λπC / ∑_{i=0}^C (K − i)λπi
      = (K − C)[K!/(C!(K − C)!)]a^C / ∑_{i=0}^C (K − i)[K!/(i!(K − i)!)]a^i
      = [(K − 1)!/(C!(K − 1 − C)!)]a^C / ∑_{i=0}^C [(K − 1)!/(i!(K − 1 − i)!)]a^i

– The state distribution seen by an arriving customer is the same as the equilibrium distribution in a system with one less customer. It is as if the arriving customer were an "outside observer"
– PL(K) = PB(K − 1); as K → ∞, PL → PB

3-87
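A Python sketch (illustrative) of the Engset formulas, exploiting the identity PL(K) = PB(K − 1) derived above:

```python
from math import comb

def engset_time_blocking(K, C, a):
    """PB = pi_C: time congestion of the finite-source M/M/C/C/K loss system."""
    norm = sum(comb(K, k) * a**k for k in range(C + 1))
    return comb(K, C) * a**C / norm

def engset_call_blocking(K, C, a):
    """PL(K) = PB(K-1): blocking seen by an arriving (finite-source) call."""
    return engset_time_blocking(K - 1, C, a)

K, C, a = 10, 4, 0.2
print(engset_time_blocking(K, C, a), engset_call_blocking(K, C, a))
```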

Where are we?

Elementary queueing models
– M/M/1, M/M/C, M/M/C/C/K: product-form solution
– Bulk queues (not discussed here)

Intermediate queueing models (product-form solution)
– Time-reversibility of Markov processes
– Detailed balance equations of time-reversible MCs
– Multidimensional birth-death processes
– Networks of queues: open and closed networks

Advanced queueing models
– M/G/1 type queue: embedded MC and mean-value analysis
– M/G/1 with vacations and priority queues
– G/M/m queue

More advanced queueing models (omitted)
– Algorithmic approaches to get steady-state solutions

3-88


Time Reversibility of discrete-time MC I

For an irreducible, aperiodic, discrete-time MC (Xn, Xn+1, . . .) having transition probabilities pij and stationary distribution πi for all i, the time-reversed MC is defined as Xn* = Xτ−n for an arbitrary τ > 0

Forward process Time reversed process

1) Transition probabilities of Xn*:

p*ij = πj pji / πi

2) Xn and Xn* have the same stationary distribution πi:

∑_{j=0}^∞ πj pji = ∑_{j=0}^∞ πj p*ji = πi

3-89

Time Reversibility of discrete-time MC II

• Proof for 1) p*ij = πj pji/πi:

p*ij = Pr[Xm = j | Xm+1 = i, Xm+2 = i2, . . . , Xm+k = ik]
     = Pr[Xm = j, Xm+1 = i, Xm+2 = i2, . . . , Xm+k = ik] / Pr[Xm+1 = i, Xm+2 = i2, . . . , Xm+k = ik]
     = Pr[Xm = j, Xm+1 = i] Pr[Xm+2 = i2, . . . | Xm = j, Xm+1 = i] / (Pr[Xm+1 = i] Pr[Xm+2 = i2, . . . | Xm+1 = i])
     = Pr[Xm = j, Xm+1 = i] / Pr[Xm+1 = i]
     = Pr[Xm+1 = i | Xm = j] Pr[Xm = j] / Pr[Xm+1 = i]
     = pji πj / πi

• Proof for 2) Using the above result,

∑_{i∈S} πi p*ij = ∑_{i∈S} πi (πj pji / πi) = πj ∑_{i∈S} pji = πj

3-90

Time Reversibility of discrete-time MC III

A Markov process Xn is said to be reversible if the transition probabilities of the forward and reversed chains are the same:

p*ij = Pr[Xm = j | Xm+1 = i] = pij = Pr[Xm+1 = j | Xm = i]

• Time reversibility ⇔ detailed balance equations (DBEs) hold

πi p*ij = πj pji → πi pij = πj pji (detailed balance eq.)

What type of Markov process satisfies this detailed balance equation? The discrete-time birth-death (BD) process
• Transitions occur only between neighboring states: pij = 0 for |i − j| > 1

0 1 2 … …

3-91

Time Reversibility of discrete-time MC IV

A transmitter's queue with stop-and-wait ARQ (θ = qr) in Mid-term I
• Is this process reversible?

0 1 2 … …

• Global balance equations (GBEs)

π0 = (1 − p)π0 + (1 − p)θπ1
π1 = pπ0 + (pθ + (1 − p)(1 − θ))π1 + (1 − p)θπ2

For i = 2, 3, . . ., we have

πi = p(1 − θ)πi−1 + (pθ + (1 − p)(1 − θ))πi + (1 − p)θπi+1

• Instead, we can use DBEs, or simplify the GBEs by balancing flows across a cut, e.g.,

p(1 − θ)πi = (1 − p)θπi+1 ↔ ∑_{j=0}^n ∑_{i=n+1}^∞ πj pji = ∑_{j=0}^n ∑_{i=n+1}^∞ πi pij

3-92


Time Reversibility of discrete-time MC V

Kolmogorov Criteria• A discrete-time Markov chain is reversible if and only if

pi1i2pi2i3 · · · pin−1in pini1 = pi1in pinin−1 · · · pi3i2pi2i1

for any finite sequence of states, i1, i2,. . . ,in and any nProof:• For a reversible chain, if detailed balance eqns. hold, we have

10

3 2

• Fixing two states, i1 = i, and in = j and multiplying over all states,

pi,i2pi2i3 · · · pin−1jpji = pijpjin−1 · · · pi3i2pi2i3-93

Time Reversibility of discrete-time MC VI

• From the Kolmogorov criterion, we can get

p(i,i2) p(i2,i3) · · · p(in−1,j) p(j,i) = p(i,j) p(j,in−1) · · · p(i3,i2) p(i2,i)

Summing over all intermediate states,

p(n−1)(i,j) p(j,i) = p(i,j) p(n−1)(j,i)

As n → ∞, we have

lim_{n→∞} p(n−1)(i,j) p(j,i) = lim_{n→∞} p(i,j) p(n−1)(j,i) → πj p(j,i) = πi p(i,j)

Inspect whether the following two-state MC is reversible:

P = [0 1; 0.5 0.5]

– It is a small BD process
– Using the state probabilities π0 = 1/3 and π1 = 2/3,

π0 p01 = (1/3) · 1 ... no: π0 p01 = 1/3 · 1 = π1 p10 = (2/3) · (1/2) = 1/3, so it is reversible

3-94

Time Reversibility of discrete-time MC VII

Inspect whether the following three-state MC is reversible

P = [0 0.6 0.4; 0.1 0.8 0.1; 0.5 0 0.5]

• Using the Kolmogorov criterion,

p12 p23 p31 = 0.6 × 0.1 × 0.5 = 0.03 ≠ p13 p32 p21 = 0.4 × 0 × 0.1 = 0

• Inspecting the state transition diagram, it is not a BD process

If the state transition diagram of a Markov process is a tree, then the process is time reversible
– A generalization of BD processes: at each cut boundary, the DBE is satisfied

3-95

– A generalization of BD processes: at the cut boundary, DBE issatisfied 3-95

Continuous-time reversible MC I

For a continuous-time MC X(t), we have a discrete-time embedded Markov chain whose stationary pmf and state transition probabilities are πi and p̃ij.

Embedded Markov process

Forward Process Reverse Process

time

There is a reversed embedded MC with πi p̃ij = πj p̃∗ji for all i 6= j.

0 1 32 4

0 1 32 4

… …

… …

CTMC

Embedded MC (BD process)

3-96


Continuous-time reversible MC II

Recall the state occupancy time of the forward process

Pr[Ti > t + s | Ti > t] = Pr[Ti > s] = e^(−vi·s)

If X(t) = i, the probability that the reversed process remains in state i for an additional s seconds is

Pr[X(t′) = i, t − s ≤ t′ ≤ t | X(t) = i] = e^(−vi·s)

; after staying for t, the probability that it stays s sec more

Embedded Markov process

Forward Process Reverse Process

time

3-97

Continuous-time reversible MC III

A continuous-time MC whose stationary probability of state i is θi, and whose state transition rate from j to i is γji, has a reversed MC with state transition rates γ*ij, if we find γ*ij satisfying

γ*ij = vi p̃*ij = vi πj p̃ji / πi |_{p̃ji = γji/vj} = vi πj γji / (πi vj) = θj γji / θi

– p̃*ij (= p̃ij): state transition probability of the reversed embedded MC
– a continuous-time MC whose state occupancy times are exponentially distributed is reversible if its embedded MC is reversible

Additionally, we have vj = v*j:

∑_{i≠j} θi γ*ij |_{γ*ij = θj γji/θi} = θj ∑_{i≠j} γji = θj vj = θj v*j ⇒ ∑_{j≠i} γij = ∑_{j≠i} γ*ij

3-98

Continuous-time reversible MC IV

Detailed balance equation holds for continuous-time reversible MCs

θj γji (input rate to i) = θi γij (output rate from i), e.g., for j = i + 1

– birth-death systems, with γij = 0 for |i − j| > 1
– since the embedded MC is reversible,

πi p̃ij = πj p̃ji → (vi θi/c) p̃ij = (vj θj/c) p̃ji → θi γij = θj γji

If there exists a set of positive numbers θi that sum up to 1 and satisfy

θi γij = θj γji for i ≠ j,

then the MC is reversible and θi is the unique stationary distribution
– birth and death processes, e.g., M/M/1, M/M/c, M/M/∞

Kolmogorov criterion for continuous-time MCs
– A continuous-time Markov chain is reversible if and only if

γ(i1,i2) γ(i2,i3) · · · γ(in,i1) = γ(i1,in) γ(in,in−1) · · · γ(i3,i2) γ(i2,i1)

– The proof is the same as in the discrete-time reversible MC

3-99

M/M/2 queue with heterogeneous servers I

Servers A and B with service rates µA and µB. When the system is empty, arrivals go to A with probability p and to B with probability 1 − p. Otherwise, the head of the queue takes the first free server

0

1A

2B

2 3

Under what condition is this system time-reversible?
• For n = 2, 3, . . .,

πn = π2 (λ/(µA + µB))^(n−2)

• Global balance equations along the cuts

λπ0 = µAπ1,A + µBπ1,B
(µA + µB)π2 = λ(π1,A + π1,B)
(µA + λ)π1,A = pλπ0 + µBπ2

3-100


M/M/2 queue with heterogeneous servers II

After some manipulations,

π1,A = π0 (λ/µA) (λ + p(µA + µB)) / (2λ + µA + µB)
π1,B = π0 (λ/µB) (λ + (1 − p)(µA + µB)) / (2λ + µA + µB)
π2 = π0 (λ²/(µAµB)) (λ + (1 − p)µA + pµB) / (2λ + µA + µB)

π0 can be determined from π0 + π1,A + π1,B + ∑_{n=2}^∞ πn = 1
• If it is reversible (which requires p = 1/2), use the detailed balance equations

(1/2)λπ0 = µAπ1,A → π1,A = 0.5(λ/µA)π0
(1/2)λπ0 = µBπ1,B → π1,B = 0.5(λ/µB)π0
π2 = 0.5 (λ²/(µAµB)) π0

3-101

Multidimensional Markov chains I

Suppose that X1(t) and X2(t) are independent reversible MCs
• Then X(t) = (X1(t), X2(t)) is a reversible MC
• Two independent M/M/1 queues, where the arrival and service rates at queue i are λi and µi
– (N1(t), N2(t)) forms an MC

Stationary distribution:

p(n1, n2) = (1 − λ1/µ1)(λ1/µ1)^n1 (1 − λ2/µ2)(λ2/µ2)^n2

Detailed balance equations:

µ1 p(n1 + 1, n2) = λ1 p(n1, n2)
µ2 p(n1, n2 + 1) = λ2 p(n1, n2)

Verify that the Markov chain is reversible – Kolmogorov criterion

[State transition diagram: two-dimensional lattice of states (n1, n2) with rates λ1, µ1 in one coordinate and λ2, µ2 in the other]

– Is this a reversible MC?

3-102

Multidimensional Markov chains II

– Owing to time-reversibility, detailed balance equations hold

µ1 π(n1 + 1, n2) = λ1 π(n1, n2)
µ2 π(n1, n2 + 1) = λ2 π(n1, n2)

– Stationary state distribution

π(n1, n2) = (1 − λ1/µ1)(λ1/µ1)^n1 (1 − λ2/µ2)(λ2/µ2)^n2

• Can be generalized to any number of independent queues, e.g., M/M/1, M/M/c or M/M/∞

π(n1, n2, . . . , nK) = π1(n1)π2(n2) · · · πK(nK)

– 'Product form' distribution

3-103
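The product-form distribution above is easy to verify numerically; a Python sketch (illustrative) for two independent M/M/1 queues:

```python
def mm1_dist(rho, n):
    """Marginal stationary probability of n customers in an M/M/1 queue."""
    return (1 - rho) * rho**n

def joint(rho1, rho2, n1, n2):
    """Product-form stationary distribution of two independent M/M/1 queues."""
    return mm1_dist(rho1, n1) * mm1_dist(rho2, n2)

rho1, rho2 = 0.5, 0.25
# detailed balance in each coordinate: pi(n1+1, n2) = rho1 * pi(n1, n2)
print(joint(rho1, rho2, 2, 2) / joint(rho1, rho2, 1, 2))
```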

Truncation of a Reversible Markov chain I

X(t) is a reversible Markov process with state space S and stationary distribution πj for j ∈ S
– Truncated to a set E ⊂ S such that the resulting chain Y(t) is irreducible. Then Y(t) is reversible and has the stationary distribution

π̂j = πj / ∑_{k∈E} πk, j ∈ E

– This is the conditional probability that, in steady state, the original process is in state j, given that it is somewhere in E

Proof:

π̂j qji = π̂i qij ⇒ (πj / ∑_{k∈E} πk) qji = (πi / ∑_{k∈E} πk) qij ⇒ πj qji = πi qij

∑_{k∈E} π̂k = ∑_{j∈E} πj / ∑_{k∈E} πk = 1

3-104


Truncation of a Reversible Markov chain II

Markov processes for M/M/1 and M/M/c are reversible
• State probabilities of the M/M/1/K queue

πi = (1 − ρ)ρ^i / ∑_{i=0}^K (1 − ρ)ρ^i = (1 − ρ)ρ^i / (1 − ρ^(K+1)) for ρ = λ/µ

– Truncated version of the M/M/1/∞ queue
• State probabilities of the M/M/c/c queue
– From the M/M/c/∞ queue with ρ = λ/(cµ) and a = λ/µ,

πn = ρ^max(0,n−c) (a^min(n,c)/min(n, c)!) π0

– Truncated version of the M/M/c/∞ queue:

π̂n = πn / ∑_{n=0}^c πn = (a^n/n!) / ∑_{i=0}^c a^i/i!

3-105

Truncation of a Reversible Markov chain III

Two independent M/M/1 queues of the previous example share a common buffer of size B (= 2)
• An arriving customer who finds B customers waiting is blocked

[State transition diagram for B = 2]

• State space: E = {(n1, n2) : (n1 − 1)^+ + (n2 − 1)^+ ≤ B}
• Stationary state distribution of the truncated MC

π(n1, n2) = π(0, 0) ρ1^n1 ρ2^n2 for (n1, n2) ∈ E

• π(0, 0) is obtained from π(0, 0) = 1/∑_{(n1,n2)∈E} ρ1^n1 ρ2^n2

– The theorem specifies the joint distribution up to the normalization constant; calculating the normalization constant is often tedious

3-106
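Enumerating E and the normalization constant is tedious by hand but trivial in code; a Python sketch (illustrative) for the shared-buffer example:

```python
def truncated_joint(rho1, rho2, B):
    """Stationary distribution of two M/M/1 queues sharing a waiting room of size B."""
    # E = {(n1, n2) : (n1-1)^+ + (n2-1)^+ <= B}; each queue holds at most B+1 customers
    E = [(n1, n2) for n1 in range(B + 2) for n2 in range(B + 2)
         if max(n1 - 1, 0) + max(n2 - 1, 0) <= B]
    norm = sum(rho1**n1 * rho2**n2 for (n1, n2) in E)
    return {(n1, n2): rho1**n1 * rho2**n2 / norm for (n1, n2) in E}

pi = truncated_joint(0.5, 0.25, B=2)
print(len(pi), sum(pi.values()))   # 13 states in E; total probability 1
```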

Truncation of a Reversible Markov chain IV

Two session classes in a circuit switching system with preferentialtreatment for one class for a total of C channels• Type 1: Poisson arrivals with λ1 require exponentially distributed

service rate µ1 – admissible only up to K• Type 2: Poisson arrivals with λ2 require exponentially distributed

service rate µ2 – can be accepted until C channels are used up

S = {(n1,n2)|0 ≤ n1 ≤ K ,n1 + n2 ≤ C}374 IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 51, NO. 2, MARCH 2002

[Figure: transition diagram for the new call bounding scheme, from IEEE Transactions on Vehicular Technology, vol. 51, no. 2, March 2002, Fig. 2]

3-107

Truncation of a Reversible Markov chain V

• The state probabilities can be obtained as

  P(n1, n2) = (ρ1^n1/n1!)(ρ2^n2/n2!) P(0, 0) for 0 ≤ n1 ≤ K, n1 + n2 ≤ C, n2 ≥ 0

  – P(0, 0) can be determined by Σ_{n1,n2} P(n1, n2) = 1

• Blocking probability of type 1

  Pb1 = [ (ρ1^K/K!) Σ_{n2=0}^{C−K} ρ2^n2/n2! + Σ_{n1=0}^{K−1} (ρ1^n1/n1!)(ρ2^{C−n1}/(C−n1)!) ]
        / [ Σ_{n1=0}^{K} (ρ1^n1/n1!) Σ_{n2=0}^{C−n1} ρ2^n2/n2! ]

• Blocking probability of type 2

  Pb2 = [ Σ_{n1=0}^{K} (ρ1^n1/n1!)(ρ2^{C−n1}/(C−n1)!) ]
        / [ Σ_{n1=0}^{K} (ρ1^n1/n1!) Σ_{n2=0}^{C−n1} ρ2^n2/n2! ]

For this kind of system, the blocking probabilities are valid for a broad class of holding-time distributions

3-108
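The two blocking probabilities can be evaluated by summing the product-form weights over the blocking states. A sketch; the parameter values used in the usage line are hypothetical:

```python
from math import factorial

def blocking_probs(rho1, rho2, K, C):
    """Type-1 and type-2 blocking probabilities for the truncated chain.

    Admissible states: 0 <= n1 <= K and n1 + n2 <= C, with product-form
    weights (rho1^n1 / n1!) * (rho2^n2 / n2!).
    """
    def w(n1, n2):
        return rho1 ** n1 / factorial(n1) * rho2 ** n2 / factorial(n2)

    G = sum(w(n1, n2) for n1 in range(K + 1) for n2 in range(C - n1 + 1))
    # Type 1 is blocked when n1 = K, or when all C channels are busy
    blocked1 = sum(w(n1, n2) for n1 in range(K + 1)
                   for n2 in range(C - n1 + 1)
                   if n1 == K or n1 + n2 == C)
    # Type 2 is blocked only when all C channels are busy
    blocked2 = sum(w(n1, C - n1) for n1 in range(K + 1))
    return blocked1 / G, blocked2 / G

pb1, pb2 = blocking_probs(2.0, 1.0, K=2, C=4)
```

When K = C the scheme gives no preferential treatment and the two blocking probabilities coincide, as the formulas predict.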


Network of queues

Open queueing networks
Closed queueing networks

Three things are needed:
• Kleinrock’s independence assumption – each link works as an M/M/1 queue
• Burke’s theorem – the output process of an M/M/1 queue is a Poisson process with mean rate λ
• Time-reversibility

3-109

Networks of queues

Two queues in tandem (BG, p. 210)
• Assume that service time is proportional to the packet length
• While a long packet is in service at queue 1, queue 2 empties; when it departs, the short packets queued behind it reach queue 2 almost together – arrivals at queue 2 get bursty
  – Interarrival times at the second queue are strongly correlated with the packet lengths at the first queue, i.e., with its service times!
• The first queue is an M/M/1, but the second queue cannot be considered as an M/M/1

3-110

Kleinrock’s Independence Approximation I

In real networks, many queues interact with each other
– a traffic stream departing from one or more queues enters one or more other queues, even after merging with other streams departing from yet other queues
• Packet interarrival times are correlated with packet lengths
• Service times at the various queues are not independent, e.g., under state-dependent flow control

Kleinrock’s independence approximation:
• An M/M/1 queueing model works for each link:
  – sufficient mixing of several packet streams on a transmission line makes interarrival times and packet lengths independent
• Good approximation when:
  ∗ Poisson arrivals at entry points of the network
  ∗ Packet transmission times ‘nearly’ exponential
  ∗ Several packet streams merged on each link
  ∗ Densely connected network and moderate to heavy traffic load

3-111

Kleinrock’s Independence Approximation II

Suppose several packet streams, each following a unique path through the network: appropriate for a virtual circuit network, e.g., ATM
• xs: arrival rate of packet stream s
• fij(s): the fraction of the packets of stream s crossing link (i, j)
• Total arrival rate at link (i, j):

  λij = Σ_{all packet streams s crossing link (i,j)} fij(s) xs

3-112


Kleinrock’s Independence Approximation III

Based on M/M/1 (with Kleinrock’s independence approximation), the average number of packets in queue or in service at link (i, j) is

  Nij = λij / (µij − λij)

– 1/µij is the average packet transmission time on link (i, j)
• The average number of packets over all queues and the average delay per packet are

  N = Σ_{(i,j)} Nij and T = (1/γ) Σ_{(i,j)} Nij

  – γ = Σ_s xs: total arrival rate into the system
• As a generalization with processing & propagation delay dij, the average delay of a traffic stream traversing path p is

  Tp = Σ_{all (i,j) on path p} [ λij/(µij(µij − λij)) + 1/µij + dij ]

  – the three terms are the average waiting time in queue, the average transmission time, and the processing & propagation delay

3-113
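A minimal sketch of these formulas; the link names, rates, and path below are made-up inputs, not from the slides:

```python
def avg_network_delay(mu, lam, gamma):
    """T = (1/gamma) * sum over links of lam_ij / (mu_ij - lam_ij).

    mu, lam: dicts mapping link -> service rate and total arrival rate;
    gamma: total external arrival rate into the network.
    """
    assert all(lam[l] < mu[l] for l in lam), "every link must be stable"
    return sum(lam[l] / (mu[l] - lam[l]) for l in lam) / gamma

def path_delay(path, mu, lam, d):
    """Tp: queueing + transmission + processing/propagation delay on a path."""
    return sum(lam[l] / (mu[l] * (mu[l] - lam[l])) + 1.0 / mu[l] + d[l]
               for l in path)
```

For a single link the formulas collapse to the M/M/1 delay 1/(µ − λ), which gives a quick consistency check.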

Kleinrock’s Independence Approximation IV

In datagram networks involving multiple-path routing for some origin–destination pairs, the M/M/1 approximation often fails (BG, Example 3.17, Fig. 3.29)
• Node A sends traffic to node B along two links, each with service rate µ
• Packets arrive at A according to a Poisson process with rate λ packets/sec; packet transmission times are exponentially distributed and independent of interarrival times, as in the M/M/1 system
• The arriving traffic is to be divided equally among the two links – how should this division be implemented?

– Random splitting: the queue at each link behaves like an M/M/1

  TR = 1/(µ − λ/2)

– Metering: arriving packets are assigned to the queue with the smallest backlog → approximated as an M/M/2 with a common queue; with ρ = λ/(2µ),

  TM = 2/((2µ − λ)(1 + ρ)) < TR

  ∗ Metering destroys the M/M/1 approximation

3-114
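The two splitting policies are easy to compare numerically; a small sketch:

```python
def t_random_split(lam, mu):
    # Each link behaves like an independent M/M/1 with arrival rate lam/2
    return 1.0 / (mu - lam / 2.0)

def t_metering(lam, mu):
    # One shared queue served by both links: approximated as an M/M/2
    rho = lam / (2.0 * mu)
    return 2.0 / ((2.0 * mu - lam) * (1.0 + rho))
```

Metering always wins: the shared queue never lets one link idle while packets wait at the other.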

Burke’s theorem I

For M/M/1, M/M/c, M/M/∞ with arrival rate λ (without bulk arrivals or bulk service):

B1. The departure process is Poisson with rate λ.

• The arrival process of the forward process corresponds to the departure process of the reverse process
  – The departure process of the forward process is the arrival process of the reverse process
• Because M/M/1 is time-reversible, the reverse process is statistically identical to the forward process
  – The departures in forward time form a Poisson process, which is the arrival process in backward time

3-115

Burke’s theorem II

B2. At each time t, the number of jobs in the system at time t is independent of the sequence of departure times prior to time t.

• The sequence of departure times prior to time t in the forward process is exactly the sequence of arrival times after time t in the reverse process
• Since the arrival process of the reverse process is an independent Poisson process, the future arrival process does not depend on the current number in the system
• Hence the past departure process of the forward process (which is the future arrival process of the reverse process) does not depend on the current number in the system

3-116


Two M/M/1 Queues in Tandem

The service times of a customer at the first and the second queues are mutually independent, as well as independent of the arrival process.
• By Burke’s theorem B1, queue 2 in isolation is an M/M/1
  – Pr[m at queue 2] = ρ2^m (1 − ρ2)
• By B2, the number of customers presently in queue 1 is independent of the sequence of departure times prior to t (the earlier arrivals at queue 2)
  – hence independent of the number of customers presently in queue 2

  Pr[n at queue 1 and m at queue 2] = Pr[n at queue 1] Pr[m at queue 2] = ρ1^n (1 − ρ1) ρ2^m (1 − ρ2)

3-117

Open queueing networks

Consider a network of K first-come-first-served single-server queues, each with unlimited queue size and exponentially distributed service times with rate µk.
• Traffic equations with routing probabilities pij, or matrix P = [pij]:

  λi = αi + Σ_{j=1}^K λj pji and Σ_{i=0}^K pji = 1

  – pi0: fraction of the flow going to the outside
  – λi can be uniquely determined by solving

  λ = α + λP ⇒ λ = α(I − P)^{−1}

3-118
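The traffic equations can also be solved numerically. The fixed-point iteration below is one simple alternative to the matrix inversion on the slide (an assumption of mine, not the slides' method); it converges for open networks, where every customer eventually leaves:

```python
def solve_traffic_equations(alpha, P, tol=1e-12, max_iter=100000):
    """Solve lam = alpha + lam P by fixed-point iteration.

    alpha: external arrival rates (length K); P[i][j]: routing
    probability from queue i to queue j.
    """
    K = len(alpha)
    lam = list(alpha)
    for _ in range(max_iter):
        new = [alpha[j] + sum(lam[i] * P[i][j] for i in range(K))
               for j in range(K)]
        if max(abs(new[j] - lam[j]) for j in range(K)) < tol:
            return new
        lam = new
    raise RuntimeError("traffic equations did not converge")
```

For the CPU/storage example later in this lecture (p = 0.5, α = 1), this returns λ1 = α/p = 2 and λ2 = (1 − p)α/p = 1.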

Open queueing networks II

Let n = (n1, . . . , nK) denote a state (row) vector of the network. The limiting queue length distribution π(n) is

  π(n) = lim_{t→∞} Pr[X1(t) = n1, . . . , XK(t) = nK]

Global balance equation (GBE): total rate out of n = total rate into n

  (α + Σ_{i=1}^K µi) π(n) = Σ_{i=1}^K αi π(n − ei)   [external arrivals]
    + Σ_{i=1}^K pi0 µi π(n + ei)   [go outside from i]
    + Σ_{i=1}^K Σ_{j=1}^K pji µj π(n + ej − ei)   [from j to i]

– ei = (0, . . . , 1, . . . , 0), i.e., the 1 is in the ith position
– π(n − ei) denotes π(n1, n2, . . . , ni − 1, . . . , nK)

3-119

Jackson’s theorem I

Using time-reversibility, guess detailed balance equations (DBEs):

  λi π(n − ei) = µi π(n), λi π(n) = µi π(n + ei), and λj π(n − ei) = µj π(n + ej − ei)

Substituting the DBEs into the GBE gives us

  RHS = π(n) [ Σ_{i=1}^K αi µi/λi + Σ_{i=1}^K pi0 λi + Σ_{i=1}^K (Σ_{j=1}^K pji λj) µi/λi ]
      = π(n) [ Σ_{i=1}^K pi0 λi (= α) + Σ_{i=1}^K (αi + Σ_{j=1}^K pji λj) µi/λi ]

– in the numerator: λi = αi + Σ_{j=1}^K pji λj, so RHS = π(n)(α + Σ_{i=1}^K µi) = LHS

3-120


Jackson’s theorem II

From the DBEs, we have

  π(n1, . . . , ni, . . . , nK) = (λi/µi) π(n1, . . . , ni − 1, . . . , nK)

and

  π(n1, . . . , ni − 1, . . . , nK) = (λi/µi) π(n1, . . . , ni − 2, . . . , nK)

which is finally rearranged as

  π(n1, . . . , ni, . . . , nK) = (λi/µi)^ni π(n1, . . . , 0, . . . , nK)

Repeating for i = 1, 2, . . . , K,

  π(n) = π(0) Π_{i=1}^K (λi/µi)^ni

– π(0) = ( Π_{i=1}^K Σ_{ni=0}^∞ ρi^ni )^{−1} and ρi = λi/µi

3-121

Summary of time-reversibility: CTMC

The forward CTMC has transition rates γij.

The reversed chain is a CTMC with transition rates γ*ij = φj γji / φi.

If we find positive numbers φi, summing to unity, such that the scalars γ*ij satisfy Σ_{j=0}^∞ γij = Σ_{j=0}^∞ γ*ij for all i ≥ 0, then φi is the stationary distribution of both the forward and reverse chains.
– Proof:

  Σ_{j≠i} φj γji = Σ_{j≠i} φi γ*ij = φi Σ_{j≠i} γ*ij = φi Σ_{j≠i} γij = Σ_{j≠i} φi γij

  : the global balance equation

3-122

Jackson’s theorem: proof of DBEs I

Proving the DBEs based on time-reversibility
• Construct a routing matrix P* = [p*ij] of the reversed process
• The rate from node i to j must be the same in the forward and reverse directions:

  (forward process) λi pij = λj p*ji (reverse process)

  – λj p*ji: the output rate from server j is λj, and p*ji is the probability of moving from j to i; α*i = λi pi0; p*i0 = αi/λi

We need to show (recall θi γij = θj γ*ji)

  π(n) vn,m = π(m) v*m,n and Σm vn,m = Σm v*n,m

– vn,m and v*n,m denote the state transition rates of the forward and reversed processes

3-123

Jackson’s theorem: proof of DBEs II

We need to consider the following three cases
• An arrival to server i from outside the network in the forward process corresponds to a departure out of the network from server i in the reversed process:

  π(n) vn,n+ei = π(n + ei) v*n+ei,n

• A departure to the outside in the forward process corresponds to an arrival from the outside in the reversed process:

  π(n) vn,n−ei = π(n − ei) v*n−ei,n

• Leaving queue i and joining queue j in the forward process (vn,n−ei+ej = µi pij) corresponds to leaving queue j and joining queue i in the reversed process (v*n−ei+ej,n = µj p*ji = λi pij µj/λj):

  π(n) vn,n−ei+ej = π(n − ei + ej) v*n−ei+ej,n

3-124


Jackson’s theorem: proof of DBEs III

1) π(n) vn,n+ei = π(n + ei) v*n+ei,n:
An arrival to server i from outside the network in the forward process corresponds to a departure out of the network from server i in the reversed process, i.e.,

  v*n+ei,n = µi (1 − Σ_{j=1}^K p*ij) = µi (1 − Σ_{j=1}^K λj pji/λi)   [using p*ij = λj pji/λi]
           = (µi/λi)(λi − Σ_{j=1}^K λj pji) = αi µi/λi = αi/ρi (= v*n,n−ei)

using λi = αi + Σ_{j=1}^K λj pji. Substituting this into 1) (with vn,n+ei = αi: arrival to server i from outside),

  Π_{i=1}^K πi(ni) · αi = πi(ni + 1) Π_{j=1, j≠i}^K πj(nj) · αi/ρi

3-125

Jackson’s theorem: proof of DBEs IV

Rearranging the previous equation yields

  πi(ni) αi Π_{j=1, j≠i}^K πj(nj) = πi(ni + 1)(αi/ρi) Π_{j=1, j≠i}^K πj(nj)

After canceling, we have

  πi(ni + 1) = ρi πi(ni) ⇒ πi(n) = ρi^n (1 − ρi)

2) π(n) vn,n−ei = π(n − ei) v*n−ei,n: a departure to the outside in the forward process corresponds to an arrival from the outside in the reversed process,

  v*n−ei,n = α*i = λi − Σ_{j=1}^K λj p*ji   [traffic equation for the reversed process]
           = λi − Σ_{j=1}^K λj (λi pij/λj) = λi (1 − Σ_{j=1}^K pij) = λi pi0 (= v*n,n+ei)

3-126

Jackson’s theorem: proof of DBEs V

Substituting this with vn,n−ei = µi pi0 (departure to the outside),

  (1 − ρi) ρi^ni Π_{k=1, k≠i}^K πk(nk) · µi pi0 = (1 − ρi) ρi^(ni−1) Π_{k=1, k≠i}^K πk(nk) · λi pi0

which holds since ρi µi = λi.

3) π(n) vn,n−ei+ej = π(n − ei + ej) v*n−ei+ej,n: leaving queue i and joining queue j in the forward process (vn,n−ei+ej = µi pij) corresponds to leaving queue j and joining queue i in the reversed process, i.e., v*n−ei+ej,n = µj p*ji = λi pij µj/λj:

  (1 − ρi) ρi^ni (1 − ρj) ρj^nj Π_{k=1, k≠i,j}^K πk(nk) · µi pij
    = (1 − ρi) ρi^(ni−1) (1 − ρj) ρj^(nj+1) Π_{k=1, k≠i,j}^K πk(nk) · µj p*ji   [using p*ji = λi pij/λj]

3-127

Jackson’s theorem: proof of DBEs VI

Summary of transition rates of the forward and reverse processes:

  Transition       Forward vn,m             Reverse v*n,m            Comment
  n → n + ei       αi                       λi(1 − Σ_{j=1}^K pij)    all i
  n → n − ei       µi(1 − Σ_{j=1}^K pij)    αi µi/λi                 all i: ni > 0
  n → n − ei + ej  µi pij                   λj pji µi/λi             all i: ni > 0, all j

4) Finally, we verify the total rate equation, Σm vn,m = Σm v*n,m:

  Σm v*n,m = Σi λi (1 − Σ_{j=1}^K pij) + Σ_{i: ni>0} ( αi µi/λi + Σj λj pji µi/λi )
           = Σi λi − Σj (λj − αj) + Σ_{i: ni>0} ( αi µi/λi + (µi/λi)(λi − αi) )   [using λj = αj + Σ_{i=1}^K λi pij]
           = Σi αi + Σ_{i: ni>0} µi = Σm vn,m. □

3-128


Open queueing networks: Extension I

The product-form solution of Jackson’s theorem remains valid for the following networks of queues
• State-dependent service rates
  – 1/µi(ni): the mean of queue i’s exponentially distributed service time, when ni is the number of customers in the ith queue just before the customer’s departure

  ρi(ni) = λi/µi(ni), i = 1, . . . , K, ni = 1, 2, . . .

  – λi: total arrival rate at queue i, determined by the traffic equations
  – Define P̂j(nj) as

  P̂j(nj) = { 1 if nj = 0; ρj(1) ρj(2) · · · ρj(nj) if nj > 0 }

3-129

Open queueing networks: Extension II

– For every state n = (n1, . . . , nK),

  P(n) = P̂1(n1) P̂2(n2) · · · P̂K(nK) / G,

  where G = Σ_{n1=0}^∞ · · · Σ_{nK=0}^∞ P̂1(n1) · · · P̂K(nK)

• Multiple classes of customers
  – Provided that the service time distribution at each queue is the same for all customer classes, the product-form solution remains valid for a system with different classes of customers, i.e.,

  λj(c) = αj(c) + Σ_{i=1}^K λi(c) pij(c)

  – αj(c): external arrival rate of class c at queue j; pij(c): routing probabilities of class c
  – See pp. 230–231 in the textbook for more details

3-130

Open queueing networks: Performance measures

• The state probability distribution has been derived
• Mean number of hops traversed:

  h = λ/α = Σ_{i=1}^K λi / Σ_{i=1}^K αi

• Throughput of queue i: λi
• Total throughput of the queueing network: α
• Mean number of customers at queue i (ρi = λi/µi):

  N̄i = ρi/(1 − ρi)

• System response time T:

  T = N/α = (1/α) Σ_{i=1}^K N̄i = (1/α) Σ_{i=1}^K λi Ti = (1/α) Σ_{i=1}^K λi/(µi − λi)

3-131

Open queueing networks: example A-I

New programs arrive at a CPU according to a Poisson process of rate α. A program spends an exponentially distributed execution time of mean 1/µ1 in the CPU. At the end of this service time, the program execution is complete with probability p, or it requires retrieving additional information from secondary storage with probability 1 − p. Suppose that the retrieval of information from secondary storage requires an exponentially distributed amount of time with mean 1/µ2. Find the mean time that each program spends in the system.

3-132


Open queueing networks: example A-II

Find the mean arrival rates:
• Arrival rate into each queue: λ1 = α + λ2 and λ2 = (1 − p)λ1, so

  λ1 = α/p and λ2 = (1 − p)α/p

• Each queue behaves like an M/M/1 system, so

  E[N1] = ρ1/(1 − ρ1) and E[N2] = ρ2/(1 − ρ2)

  where ρ1 = λ1/µ1 and ρ2 = λ2/µ2

Using Little’s result, the total time spent in the system is

  E[T] = E[N1 + N2]/α = (1/α) [ ρ1/(1 − ρ1) + ρ2/(1 − ρ2) ]

3-133
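A direct transcription of this example; the parameter values in the usage line are illustrative:

```python
def mean_system_time(alpha, p, mu1, mu2):
    """E[T] for the CPU / secondary-storage network, via Little's result."""
    lam1 = alpha / p                  # from lam1 = alpha + lam2
    lam2 = (1.0 - p) * alpha / p      # and lam2 = (1 - p) lam1
    rho1, rho2 = lam1 / mu1, lam2 / mu2
    assert rho1 < 1 and rho2 < 1, "both queues must be stable"
    return (rho1 / (1 - rho1) + rho2 / (1 - rho2)) / alpha

t = mean_system_time(alpha=1.0, p=0.5, mu1=4.0, mu2=2.0)
```

With these values both queues have load 0.5, so each holds one customer on average and E[T] = 2.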

Open queueing networks: example B-I

Consider the following network with three nodes (routers A, B, C connected by four links L1, L2, L3, L4)

• External packet arrivals: Poisson processes with γA = 350 packets/sec, γB = 150, γC = 150
• Packet length: exponentially distributed with mean 50 kbits/packet

Assumptions:
(a) Packets moving along a path from source to destination have their lengths selected independently at each outgoing link
→ Kleinrock’s independence assumption
(b) Channel capacity of link i: Ci = 17.5 Mbps for i = 1, 2, 3, 4
→ Service rate at link i: exponentially distributed with rate µi = Ci/50000 = 350 packets/sec

3-134

Open queueing networks: example B-II

• Traffic matrix (packets per second):

  from\to   A     B     C
  A         –     150   200 (50% through B, 50% directly to C)
  B         50    –     100
  C         100   50    –

• Find the mean delay from A to C
• First, we need the traffic carried by each link:
  – A→B: 150; A→C: 100 + 100 via B (two links) and 100 direct; B→A: 50 + 50 (two links); B→C: 100; C→A: 100; C→B: 50 + 50 (two links)
  – Totals: λ1 = 300, λ2 = 100, λ3 = 250, λ4 = 200

3-135

Open queueing networks: example B-II

• Since α = 650 and λ = 850, the mean number of hops is

  h = 850/650 = 1.3077

• Link utilization, mean number of packets, and response time:

        L1                L2                L3                L4
  ρi    300/350 = 0.857   100/350 = 0.286   250/350 = 0.714   200/350 = 0.571
  N̄i    300/50 = 6        100/250 = 0.4     250/100 = 2.5     200/150 ≈ 1.33
  Ti    1/50 = 0.02       1/250 = 0.004     1/100 = 0.01      1/150 ≈ 0.0067

  – N̄i = ρi/(1 − ρi) = λi/(µi − λi) and Ti = N̄i/λi = 1/(µi − λi)

• Mean delay from A to C:

  TAC = (T1 + T2) × 0.5 + T3 × 0.5 = 0.017 (sec)
        [T1: A to B, T2: B to C, T3: A to C direct]

  – propagation delay is ignored

3-136
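The numbers above can be reproduced in a few lines:

```python
mu = 350.0                                         # packets/sec per link
lam = {'L1': 300.0, 'L2': 100.0, 'L3': 250.0, 'L4': 200.0}

rho = {l: x / mu for l, x in lam.items()}          # link utilization
N = {l: x / (mu - x) for l, x in lam.items()}      # mean packets at link
T = {l: 1.0 / (mu - x) for l, x in lam.items()}    # mean delay per link

h = sum(lam.values()) / 650.0                      # mean number of hops
# A -> C: half the traffic goes via B (L1 then L2), half goes direct (L3)
T_AC = 0.5 * (T['L1'] + T['L2']) + 0.5 * T['L3']
```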


Closed queueing networks I

Consider a network of K first-come-first-served single-server queues, each with unlimited queue size and exponentially distributed service times with rate µk. There is also a fixed number of customers, say M, circulating endlessly in the closed network of queues.
• Traffic equations: no external arrivals!

  λi = Σ_{j=1}^K λj pji with Σ_{i=1}^K pji = 1

3-137

Closed queueing networks II

• Using π = π · P and π · 1 = 1, we have

  λi = λ(M) πi

  – λ(M): a constant of proportionality; it is the sum of the arrival rates over all the queues in the network, and note Σ_{i=1}^K λi ≠ 1 in general
  – G(M) (normalization constant) will take care of λ(M)

Assuming ρi(ni) = λi/µi(ni) < 1 for i = 1, . . . , K, we have for all ni ≥ 0:
– Define P̂j(nj) as

  P̂j(nj) = { 1 if nj = 0; ρj(1) ρj(2) · · · ρj(nj) if nj > 0 }

The joint state probability is expressed as

  π(n) = (1/G(M)) Π_{i=1}^K P̂i(ni), and G(M) = Σ_{n1+···+nK=M} Π_{i=1}^K P̂i(ni)

3-138

Closed queueing networks III

• ρi: no longer the actual utilization, because of the arbitrary constant λ(M); it is a relative utilization
• Setting λ(M) to a particular value does not change the results
• The maximum queue size of each queue is M

Proof: as in Jackson’s theorem for open queueing networks
• Use time-reversibility:
  – routing matrix of the reversed process: p*ji = λi pij/λj
• For a state transition between n and n′ = n − ei + ej,

  π(n′) v*n′,n = π(n) vn,n′ (∗)

• As in open queueing networks, we have for ni > 0

  v*n−ei+ej,n = µj p*ji = µj (λi pij/λj) (1)
  vn,n−ei+ej = µi pij (2)

Leaving queue i and joining queue j in the forward process corresponds to leaving queue j and joining queue i in the reversed process.

3-139

Closed queueing networks IV

• The GBE of open queueing networks,

  (α + Σ_{i=1}^K µi) π(n) = Σ_{i=1}^K αi π(n − ei) + Σ_{i=1}^K pi0 µi π(n + ei) + Σ_{i=1}^K Σ_{j=1}^K pji µj π(n − ei + ej),

  is reduced by setting α = αi = pi0 = 0 (no external arrivals or departures) to

  Σ_{i=1}^K µi π(n) = Σ_{i=1}^K Σ_{j=1}^K pji µj π(n − ei + ej)

• Substituting (1) and (2) into (∗), we have

  ρi π(n1, . . . , ni − 1, . . . , nj + 1, . . . , nK) = ρj π(n1, . . . , nK)

• The proof of Σ_{n′} vn,n′ = Σ_{n′} v*n,n′ is given on page 235 (BG)

3-140


Summary of time-reversibility: CTMC

The forward CTMC has transition rates γij.

The reversed chain is a CTMC with transition rates γ*ij = φj γji / φi.

If we find positive numbers φi, summing to unity, such that the scalars γ*ij satisfy Σ_{j=0}^∞ γij = Σ_{j=0}^∞ γ*ij for all i ≥ 0, then φi is the stationary distribution of both the forward and reverse chains.
– Proof:

  Σ_{j≠i} φj γji = Σ_{j≠i} φi γ*ij = φi Σ_{j≠i} γ*ij = φi Σ_{j≠i} γij = Σ_{j≠i} φi γij

  : the global balance equation

3-141

Closed queueing networks V

• µi(n): service rate when queue i has n customers
• vn,n−ei+ej = µi(ni) pij (leaving queue i and joining queue j):

  Σ_{(j,i): ni>0} µi(ni) pij = Σ_{j=1}^K Σ_{i: ni>0} µi(ni) pij = Σ_{i: ni>0} µi(ni) Σ_{j=1}^K pij = Σ_{i: ni>0} µi(ni)

• v*n,n−ei+ej = µi(ni) p*ij = µi(ni) λj pji/λi, using p*ij = λj pji/λi:

  Σ_{(j,i): ni>0} µi(ni) λj pji/λi = Σ_{j=1}^K Σ_{i: ni>0} µi(ni) λj pji/λi
    = Σ_{i: ni>0} (µi(ni)/λi) Σ_{j=1}^K λj pji = Σ_{i: ni>0} µi(ni)   [Σ_{j} λj pji = λi]

3-142

Closed queueing networks VI

Dealing with G(M, K) = Σ_{n1+···+nK=M} Π_{i=1}^K P̂i(ni):
Compute G(M, K), with M customers and K queues, iteratively via

  G(m, k) = G(m, k − 1) + ρk G(m − 1, k)

with boundary conditions G(m, 1) = ρ1^m for m = 0, 1, . . . , M, and G(0, k) = 1 for k = 1, 2, . . . , K.
• For m > 0 and k > 1, split the sum into two disjoint sums:

  G(m, k) = Σ_{n1+···+nk=m} ρ1^n1 ρ2^n2 · · · ρk^nk
          = Σ_{n1+···+nk=m, nk=0} ρ1^n1 · · · ρ(k−1)^n(k−1)   [= G(m, k − 1)]
            + Σ_{n1+···+nk=m, nk>0} ρ1^n1 ρ2^n2 · · · ρk^nk

3-143

Closed queueing networks VII

• Since nk > 0, we substitute nk = n′k + 1 with n′k ≥ 0:

  Σ_{n1+···+nk=m, nk>0} ρ1^n1 · · · ρk^nk = Σ_{n1+···+n′k+1=m, n′k≥0} ρ1^n1 · · · ρk^(n′k+1)
    = ρk Σ_{n1+···+n′k=m−1, n′k≥0} ρ1^n1 · · · ρk^n′k = ρk G(m − 1, k)

In a closed Jackson network with M customers, the steady-state probability that the number of customers at station j is greater than or equal to m is

  Pr[xj ≥ m] = ρj^m G(M − m)/G(M) for 0 ≤ m ≤ M

3-144


Closed queueing networks VIII

Implementation of G(M, K) when µi is not a function of m: fill an (M + 1) × K table column by column

  m\k   1      2                                         · · ·   K
  0     1      1                                                 1
  1     ρ1     G(1, 2) = G(1, 1) + ρ2 G(0, 2) = ρ1 + ρ2
  2     ρ1^2   G(2, 2) = ρ1^2 + ρ2 G(1, 2)
  ...
  M     ρ1^M                                                     G(M, K)

If µi(m) = mµi (multiple servers), we generalize:

  G(m, k) = Σ_{i=0}^m fk(i) G(m − i, k − 1), where fk(i) = (λ(M)πk)^i / Π_{j=1}^i µk(j)

with fk(0) = 1 for all k.

3-145

Closed queueing networks IX

• Proof: substitute nj = n′j + m with n′j ≥ 0:

  Pr[xj ≥ m] = Σ_{n1+···+nj+···+nK=M, nj≥m} ρ1^n1 · · · ρj^nj · · · ρK^nK / G(M)
             = Σ_{n1+···+n′j+m+···+nK=M, n′j≥0} ρ1^n1 · · · ρj^(n′j+m) · · · ρK^nK / G(M)
             = (ρj^m / G(M)) Σ_{n1+···+n′j+···+nK=M−m, n′j≥0} ρ1^n1 · · · ρj^n′j · · · ρK^nK
             = (ρj^m / G(M)) G(M − m)

• Pr[xj = m] = Pr[xj ≥ m] − Pr[xj ≥ m + 1] = ρj^m (G(M − m) − ρj G(M − m − 1))/G(M)

3-146

Closed queueing networks X

In a closed Jackson network with M customers, the average number of customers at queue j is

  N̄j(M) = Σ_{m=1}^M Pr[xj ≥ m] = Σ_{m=1}^M ρj^m G(M − m)/G(M)

In a closed Jackson network with M customers, the average throughput of queue j is

  γj(M) = µj Pr[xj ≥ 1] = µj ρj G(M − 1)/G(M) = λj G(M − 1)/G(M)

– The average throughput is the average rate at which customers are serviced at the queue. For a single-server queue, the service rate is µj when there are one or more customers in the queue, and 0 when the queue is empty.

3-147
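A sketch of the convolution algorithm together with the performance measures above, for constant (single-server) service rates:

```python
def convolution_G(rho, M):
    """G[m] = G(m, K) via G(m, k) = G(m, k-1) + rho_k * G(m-1, k).

    The table is filled in place, one queue (column) at a time.
    """
    G = [1.0] + [0.0] * M          # before any queue: only the empty state
    for r in rho:                  # fold in queues k = 1, ..., K
        for m in range(1, M + 1):
            G[m] += r * G[m - 1]
    return G

def tail_prob(rho_j, m, G, M):
    """Pr[x_j >= m] = rho_j^m * G(M - m) / G(M)."""
    return rho_j ** m * G[M - m] / G[M]

def mean_customers(rho_j, G, M):
    """N_j(M) = sum over m of Pr[x_j >= m]."""
    return sum(tail_prob(rho_j, m, G, M) for m in range(1, M + 1))

def throughput(lam_j, G, M):
    """gamma_j(M) = lam_j * G(M - 1) / G(M)."""
    return lam_j * G[M - 1] / G[M]
```

For K = 2 and M = 2 the enumeration gives G(2, 2) = ρ1² + ρ1ρ2 + ρ2², and the mean queue lengths sum to M, as they must in a closed network.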

Closed queueing networks: example A-I

Suppose that the computer system given in the open queueing network example is now operated so that there are always I programs in the system. Note that the feedback loop around the CPU signifies the completion of one job and its instantaneous replacement by another one. Find the steady-state pmf of the system. Find the rate at which programs are completed.

• Using λi = λ(I)πi with π = πP:

  π1 = p π1 + π2, π2 = (1 − p) π1, and π1 + π2 = 1,

we have

  λ1 = λ(I) π1 = λ(I)/(2 − p) and λ2 = λ(I) π2 = λ(I)(1 − p)/(2 − p)

3-148


Closed queueing networks: example A-II

• For 0 ≤ i ≤ I, with ρ1 = λ1/µ1 and ρ2 = λ2/µ2,

  Pr[N1 = i, N2 = I − i] = (1 − ρ1) ρ1^i (1 − ρ2) ρ2^(I−i) / S(I)

• The normalization constant S(I) is obtained by

  S(I) = (1 − ρ1)(1 − ρ2) Σ_{i=0}^I ρ1^i ρ2^(I−i) = (1 − ρ1)(1 − ρ2) ρ2^I (1 − (ρ1/ρ2)^(I+1))/(1 − ρ1/ρ2)

• We then have, for 0 ≤ i ≤ I,

  Pr[N1 = i, N2 = I − i] = (1 − β) β^i / (1 − β^(I+1)), where β = ρ1/ρ2 = µ2/((1 − p)µ1)

• Program completion rate: pλ1, where the CPU utilization gives

  λ1/µ1 = 1 − Pr[N1 = 0] = β(1 − β^I)/(1 − β^(I+1))

3-149
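A small sketch of this example (assuming β ≠ 1; the parameter values in the test are illustrative):

```python
def cpu_closed_pmf(I, p, mu1, mu2):
    """Pr[N1 = i] for i = 0..I, with beta = mu2 / ((1 - p) mu1)."""
    beta = mu2 / ((1.0 - p) * mu1)
    norm = (1.0 - beta) / (1.0 - beta ** (I + 1))
    return [norm * beta ** i for i in range(I + 1)]

def completion_rate(I, p, mu1, mu2):
    """p * lam1, with lam1 = mu1 * (1 - Pr[N1 = 0])."""
    beta = mu2 / ((1.0 - p) * mu1)
    util = beta * (1.0 - beta ** I) / (1.0 - beta ** (I + 1))
    return p * mu1 * util
```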

Arrival theorem for closed networks I

Theorem: In a closed Jackson network with M customers, the occupancy distribution seen by a customer upon arrival at queue j is the same as the occupancy distribution in a closed network with the arriving customer removed, i.e., the system with M − 1 customers.

• In a closed network with M customers, the expected number of customers found upon arrival at queue j equals the average number of customers at queue j when the total number of customers in the closed network is M − 1
• An arriving customer sees the system at a state that does not include itself

Proof:
• X(t) = [X1(t), X2(t), . . . , XK(t)]: state of the network at time t
• Tij(t): the event that a customer moves from queue i to j at time t+

3-150

Arrival theorem for closed networks II

• For any state n with ni > 0, the conditional probability that a customer moving from node i to j finds the network at state n is

  αij(n) = Pr[X(t) = n | Tij(t)] = Pr[X(t) = n, Tij(t)] / Pr[Tij(t)]
         = Pr[Tij(t) | X(t) = n] Pr[X(t) = n] / Σ_{m: mi>0} Pr[Tij(t) | X(t) = m] Pr[X(t) = m]
         = π(n) µi pij / Σ_{m: mi>0} π(m) µi pij
         = ρ1^n1 · · · ρi^ni · · · ρK^nK / Σ_{m: mi>0} ρ1^m1 · · · ρi^mi · · · ρK^mK

– Changing mi = m′i + 1, m′i ≥ 0,

  αij(n) = ρ1^n1 · · · ρi^ni · · · ρK^nK / Σ_{m1+···+m′i+1+···+mK=M} ρ1^m1 · · · ρi^(m′i+1) · · · ρK^mK
         = ρ1^n1 · · · ρi^(ni−1) · · · ρK^nK / Σ_{m1+···+m′i+···+mK=M−1, m′i≥0} ρ1^m1 · · · ρi^m′i · · · ρK^mK
         = ρ1^n1 · · · ρi^(ni−1) · · · ρK^nK / G(M − 1)

3-151

Mean Value Analysis I

Performance measures for closed networks with M customers:
• N̄j(M): average number of customers in queue j
• Tj(M): average time a customer spends (per visit) in queue j
• γj(M): average throughput of queue j

Mean Value Analysis calculates N̄j(M) and Tj(M) directly, without first computing G(M) or deriving the stationary distribution of the network.
a) The queue length observed by an arriving customer is the same as the queue length in a closed network with one less customer (arrival theorem)
b) Little’s result is applicable throughout the network

1. Based on a),

  Tj(s) = (1/µj)(1 + N̄j(s − 1)) for j = 1, . . . , K, s = 1, . . . , M

  – Tj(0) = N̄j(0) = 0 for j = 1, . . . , K

3-152


Mean Value Analysis II

2. Based on b), when there are s customers in the network,

  N̄j(s) = λj(s) Tj(s) = λ(s) πj Tj(s)   [step 2-b]   (1)

and

  s = Σ_{j=1}^K N̄j(s) = λ(s) Σ_{j=1}^K πj Tj(s) → λ(s) = s / Σ_{j=1}^K πj Tj(s)   [step 2-a]   (2)

Combining (1) and (2) yields

  N̄j(s) = s πj Tj(s) / Σ_{k=1}^K πk Tk(s)

This is done iteratively for s = 1, 2, . . . , M.

3-153
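The recursion above can be sketched directly, for single-server queues with visit ratios π and service rates µ as inputs:

```python
def mva(visit, mu, M):
    """Exact MVA for a closed single-server Jackson network.

    visit: visit ratios pi_j (any solution of pi = pi P);
    mu: service rates.  Returns mean queue lengths N, per-visit
    delays T, and the constant lam(M) after the M-th iteration.
    """
    K = len(visit)
    N = [0.0] * K
    T = [0.0] * K
    lam = 0.0
    for s in range(1, M + 1):
        T = [(1.0 + N[j]) / mu[j] for j in range(K)]      # arrival theorem
        lam = s / sum(visit[j] * T[j] for j in range(K))  # Little, whole network
        N = [lam * visit[j] * T[j] for j in range(K)]     # Little, per queue
    return N, T, lam
```

For two identical queues with equal visit ratios and M = 2, symmetry forces one customer per queue on average.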

Closed queueing networks: example B

Gupta's truck company owns m trucks; Gupta is interested in the probability that 90% of his trucks are in operation. Trucks circulate among three nodes: Operation (Op), Local maintenance (LM), and the Manufacturer (M).

• Set a routing matrix P (rows and columns in the order Op, LM, M):

$$P = \begin{pmatrix} 0 & 0.85 & 0.15\\ 0.9 & 0 & 0.1\\ 1 & 0 & 0 \end{pmatrix}$$

• With $\pi_0 = 0.4796$, $\pi_1 = 0.4077$, and $\pi_2 = 0.1127$, we have $\rho_0 = \lambda(m)\pi_0/\lambda_0$, $\rho_1 = \lambda(m)\pi_1/\mu_1$, and $\rho_2 = \lambda(m)\pi_2/\mu_2$

• We have $\Pr[O = i, L = j, M = k] = \frac{1}{i!}\,\rho_0^i\rho_1^j\rho_2^k / G(m)$ with $k = m - i - j$

3-154
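The product-form expression above can be evaluated numerically. A small sketch follows; the `rho` values are illustrative placeholders, since λ(m) and the maintenance rates are not fixed on the slide, and the 1/i! factor treats Op as an infinite-server node:

```python
from math import factorial

def prob_in_operation(m, rho, frac=0.9):
    """Probability that at least frac*m trucks are at the Operation node."""
    rho0, rho1, rho2 = rho
    # Unnormalized product-form weight of state (i, j, k), i + j + k = m
    weight = lambda i, j, k: (rho0**i / factorial(i)) * rho1**j * rho2**k
    G = sum(weight(i, j, m - i - j)                 # normalization G(m)
            for i in range(m + 1) for j in range(m - i + 1))
    target = sum(weight(i, j, m - i - j)
                 for i in range(m + 1) for j in range(m - i + 1)
                 if i >= frac * m)
    return target / G

p = prob_in_operation(m=10, rho=(5.0, 0.5, 0.2))
```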

Where are we?

Elementary queueing models
– M/M/1, M/M/C, M/M/C/C/K, ... and bulk queues
– either product-form solutions or use of PGF

Intermediate queueing models (product-form solutions)
– Time-reversibility of Markov processes
– Detailed balance equations of time-reversible MCs
– Multidimensional birth-death processes
– Networks of queues: open and closed networks

Advanced queueing models
– M/G/1 type queue: embedded MC and mean-value analysis
– M/G/1 with vacations and priority queues

More materials on queueing models (omitted)
– G/M/m queue, G/G/1, etc.
– Algorithmic approaches to obtain steady-state solutions

3-155

M/G/1 queue: Embedded MC I

Recall that a continuous-time MC description of the M/G/1 queue is the pair (n, r):

• n: number of customers in the system
• r: attained or remaining service time of the customer in service

Because of r, the state space of (n, r) is not countable. How can we get rid of r?

What if we observe the system at the end of each service?

$$X_{n+1} = \max(X_n - 1, 0) + Y_{n+1}$$

$X_n$: number of customers in the system left behind by the nth departure.
$Y_{n+1}$: number of arrivals that occur during the service time of the next departing customer.

Question: is $X_n$ equal to the queue length seen by an arriving customer (the queue length just before an arrival)? Recall PASTA.

3-156


Distribution Upon Arrival or Departure

α(t), β(t): number of arrivals and departures (respectively) in (0, t)
$U_n(t)$: number of times the system goes from n to n+1 in (0, t), i.e., the number of times an arriving customer finds n customers in the system
$V_n(t)$: number of times the system goes from n+1 to n, i.e., the number of times a departing customer leaves n behind

The transition n → n+1 cannot reoccur until the number in the system drops to n once more (i.e., until after the transition n+1 → n reoccurs), so $U_n(t)$ and $V_n(t)$ differ by at most one: $|U_n(t) - V_n(t)| \le 1$. Therefore

$$\lim_{t\to\infty}\frac{U_n(t)}{t} = \lim_{t\to\infty}\frac{V_n(t)}{t} \;\Rightarrow\; \lim_{t\to\infty}\frac{U_n(t)}{\alpha(t)}\cdot\frac{\alpha(t)}{t} = \lim_{t\to\infty}\frac{V_n(t)}{\beta(t)}\cdot\frac{\beta(t)}{t}$$

3-157

M/G/1 queue: Embedded MC II

Defining the probability generating function of the distribution of $X_{n+1}$,

$$Q_{n+1}(z) \triangleq E[z^{X_{n+1}}] = E[z^{\max(X_n-1,0)+Y_{n+1}}] = E[z^{\max(X_n-1,0)}]\,E[z^{Y_{n+1}}]$$

Let $U_{n+1}(z) = E[z^{Y_{n+1}}]$; as $n \to \infty$, $U_{n+1}(z) = U(z)$ (independent of n). Then we have

$$\begin{aligned} Q_{n+1}(z) &= U(z)\sum_{k=0}^{\infty} z^k \Pr[\max(X_n-1,0)=k]\\ &= U(z)\Bigl[z^0\Pr[X_n=0] + \sum_{k=1}^{\infty} z^{k-1}\Pr[X_n=k]\Bigr]\\ &= U(z)\Bigl[\Pr[X_n=0] + z^{-1}\bigl(Q_n(z)-\Pr[X_n=0]\bigr)\Bigr] \end{aligned}$$

As $n \to \infty$, we have $Q_{n+1}(z) = Q_n(z) = Q(z)$ and $\Pr[X_n = 0] = q_0$, so

$$Q(z) = \frac{U(z)(z-1)}{z - U(z)}\,q_0.$$

3-158

M/G/1 queue: Embedded MC III

We need to find U(z) and $q_0$. Using $U(z\,|\,X_i = x) = e^{\lambda x(z-1)}$,

$$U(z) = \int_0^\infty U(z\,|\,X_i = x)\,b(x)\,dx = B^*(\lambda(1-z)).$$

Since Q(1) = 1, we have $q_0 = 1 - U'(1) = 1 - \lambda\bar{X} = 1 - \rho$.

The transform version of the Pollaczek–Khinchin (P-K) formula is

$$Q(z) = \frac{B^*(\lambda(1-z))(z-1)}{z - B^*(\lambda(1-z))}\,(1-\rho)$$

Letting $\bar{q} = Q'(1)$, one gets $W = \bar{q}/\lambda - \bar{X}$.

Sojourn time distribution of an M/G/1 system with FIFO service: if a customer spends $T_j$ sec in the system, the number of customers it leaves behind is the number of customers that arrive during these $T_j$ sec, due to FIFO.

3-159

M/G/1 Queue: Embedded MC IV

Let $f_T(t)$ be the probability density function of $T_j$, i.e., the total delay. Then

$$Q(z) = \sum_{k=0}^{\infty} z^k \int_0^\infty \frac{(\lambda t)^k}{k!}\,e^{-\lambda t} f_T(t)\,dt = T^*(\lambda(1-z))$$

where $T^*(s)$ is the Laplace transform of $f_T(t)$. We have

$$T^*(\lambda(1-z)) = \frac{B^*(\lambda(1-z))(z-1)}{z - B^*(\lambda(1-z))}\,(1-\rho)$$

Letting $s = \lambda(1-z)$, one gets

$$T^*(s) = \frac{(1-\rho)s\,B^*(s)}{s-\lambda+\lambda B^*(s)} = W^*(s)B^*(s) \;\Rightarrow\; W^*(s) = \frac{(1-\rho)s}{s-\lambda+\lambda B^*(s)}$$

In an M/M/1 system, we have $B^*(s) = \mu/(s+\mu)$:

$$W^*(s) = (1-\rho)\Bigl(1 + \frac{\lambda}{s+\mu-\lambda}\Bigr)$$

3-160


M/G/1 Queue: Embedded MC V

• Taking the inverse transform of $W^*(s)$ (using $\mathcal{L}\{Ae^{-at}\} \leftrightarrow A/(s+a)$),

$$\mathcal{L}^{-1}\{W^*(s)\} = \mathcal{L}^{-1}\Bigl\{(1-\rho)\Bigl(1+\frac{\lambda}{s+\mu-\lambda}\Bigr)\Bigr\} = (1-\rho)\,\delta(t) + \lambda(1-\rho)\,e^{-\mu(1-\rho)t}, \quad t > 0$$

We can write $W^*(s)$ in terms of $R'^*(s)$:

$$W^*(s) = \frac{(1-\rho)s}{s-\lambda+\lambda B^*(s)} = \frac{(1-\rho)s}{s-\lambda(1-B^*(s))} = \frac{1-\rho}{1-\lambda\bar{X}\,\dfrac{1-B^*(s)}{s\bar{X}}} = \frac{1-\rho}{1-\rho\,R'^*(s)} = (1-\rho)\sum_{k=0}^{\infty}\bigl(\rho\,R'^*(s)\bigr)^k$$

3-161
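The inverted transform says the waiting time has an atom of mass 1 − ρ at zero plus an exponential tail with parameter μ(1 − ρ). A quick Monte-Carlo check via Lindley's recursion (a sketch with illustrative parameters and a fixed seed):

```python
import random

random.seed(1)
lam, mu = 0.5, 1.0
rho = lam / mu

W, waits = 0.0, []
for _ in range(200_000):
    waits.append(W)
    # Lindley's recursion: W_{k+1} = max(0, W_k + X_k - tau_k)
    W = max(0.0, W + random.expovariate(mu) - random.expovariate(lam))

mean_W = sum(waits) / len(waits)            # analytic: rho/(mu - lam) = 1.0
frac_zero = sum(w == 0.0 for w in waits) / len(waits)   # analytic: 1 - rho = 0.5
```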

Residual life time∗ I

Hitchhiker's paradox:

Cars pass a point on a road according to a Poisson process with rate λ = 1/10 (cars per minute), i.e., one car every 10 min on average.

A hitchhiker arrives at this roadside point at a random instant of time.

[figure: the hitchhiker arrives somewhere in the interval between the previous car and the next car]

What is his mean waiting time for the next car?

1. Since he arrives randomly in an interval, it would be 5 min.
2. Due to the memoryless property of the exponential distribution, it would be another 10 min.

∗ L. Kleinrock, Queueing Systems, Vol. 1: Theory

3-162
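A short simulation settles the paradox in favor of answer 2 (memorylessness): the mean wait is 10 min, not 5. The horizon and sample counts below are arbitrary choices:

```python
import bisect
import random

random.seed(7)
rate, horizon = 1 / 10, 1_000_000.0    # cars/min, minutes simulated

# Poisson process of car passing times on [0, horizon]
t, cars = 0.0, []
while t < horizon:
    t += random.expovariate(rate)
    cars.append(t)

waits = []
for _ in range(100_000):
    a = random.uniform(0.0, horizon - 100.0)   # hitchhiker's random arrival
    nxt = cars[bisect.bisect_right(cars, a)]   # first car after time a
    waits.append(nxt - a)

mean_wait = sum(waits) / len(waits)            # ≈ 10 min, not 5
```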

Residual life time II

The distribution of the interval that the hitchhiker falls into depends on both x and $f_X(x)$:

$$f_{X'}(x) = C\,x\,f_X(x), \qquad C:\ \text{proportionality constant}$$

Since $\int_0^\infty f_{X'}(x)\,dx = 1$, we have $C = 1/E[X] = 1/\bar{X}$:

$$f_{X'}(x) = \frac{x\,f_X(x)}{\bar{X}}$$

Since $\Pr[R' < y\,|\,X' = x] = y/x$ for $0 \le y \le x$, the joint pdf of $X'$ and $R'$ is

$$\Pr[y < R' < y+dy,\; x < X' < x+dx] = \frac{dy}{x}\cdot\frac{x\,f_X(x)\,dx}{\bar{X}} = \frac{f_X(x)\,dy\,dx}{\bar{X}}$$

Unconditioning over $X'$,

$$f_{R'}(y)\,dy = \frac{dy}{\bar{X}}\int_y^\infty f_X(x)\,dx = \frac{1-F_X(y)}{\bar{X}}\,dy \;\Rightarrow\; f_{R'}(y) = \frac{1-F_X(y)}{\bar{X}}$$

3-163

Residual life time III

If we take the Laplace transform of the pdf of $R'$ conditioned on $X' = x$ (so $0 \le R' \le x$),

$$E[e^{-sR'}\,|\,X' = x] = \int_0^x \frac{e^{-sy}}{x}\,dy = \frac{1-e^{-sx}}{sx}$$

Unconditioning over $X'$, we have $R'^*(s)$ and its moments as

$$R'^*(s) = \frac{1-F_X^*(s)}{s\bar{X}} \;\Rightarrow\; E[R'^n] = \frac{\overline{X^{n+1}}}{(n+1)\bar{X}}$$

where $F_X^*(s) = \int_0^\infty e^{-sx} f_X(x)\,dx$.

The mean residual time can be rewritten as

$$R = E[R'] = \frac{1}{2}\Bigl(\bar{X} + \frac{\sigma_X^2}{\bar{X}}\Bigr)$$

Surprisingly, the distribution of the elapsed waiting time, $X' - R'$, is identical to that of the remaining waiting time $R'$.

3-164


M/G/1 Queue: Embedded MC VI

State transition diagram of M/G/1 [figure omitted] and its state transition probability matrix

$$P = \begin{pmatrix} \alpha_0 & \alpha_1 & \alpha_2 & \alpha_3 & \cdots\\ \alpha_0 & \alpha_1 & \alpha_2 & \alpha_3 & \cdots\\ 0 & \alpha_0 & \alpha_1 & \alpha_2 & \cdots\\ 0 & 0 & \alpha_0 & \alpha_1 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} \quad\text{with}\quad \alpha_k = \int_0^\infty \frac{(\lambda x)^k}{k!}\,e^{-\lambda x}\,b(x)\,dx$$

3-165

M/G/1 Queue: Embedded MC VII

The GBE can be expressed as

$$\pi_n = \sum_{k=0}^{n} \pi_{n+1-k}\,\alpha_k + \alpha_n\pi_0 \quad \text{for } n = 0, 1, 2, \cdots$$

– $Q(z)$ can also be obtained using $Q(z) = \sum_{n=0}^{\infty} \pi_n z^n$

As an alternative, we define $\nu_0 = 1$ and $\nu_i = \pi_i/\pi_0$:

$$\nu_1 = \frac{1-\alpha_0}{\alpha_0}, \qquad \nu_2 = \frac{1-\alpha_1}{\alpha_0}\nu_1 - \frac{\alpha_1}{\alpha_0}, \qquad \ldots, \qquad \nu_i = \frac{1-\alpha_1}{\alpha_0}\nu_{i-1} - \frac{\alpha_2}{\alpha_0}\nu_{i-2} - \cdots - \frac{\alpha_{i-1}}{\alpha_0}\nu_1 - \frac{\alpha_{i-1}}{\alpha_0}$$

– $\sum_{i=0}^{\infty}\nu_i = 1 + \sum_{i=1}^{\infty}\frac{\pi_i}{\pi_0} = \frac{1}{\pi_0}$ and $\pi_i = \pi_0\nu_i$

3-166
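The ν-recursion is easy to run. As a sketch, specialize to exponential service, where $\alpha_k = (1-\sigma)\sigma^k$ with $\sigma = \lambda/(\lambda+\mu)$, so the resulting $\pi_i$ should match the M/M/1 geometric distribution $(1-\rho)\rho^i$; all parameters are illustrative:

```python
lam, mu = 0.5, 1.0
sig = lam / (lam + mu)
K = 50                                   # truncation level
alpha = [(1 - sig) * sig**k for k in range(K + 1)]

nu = [1.0, (1 - alpha[0]) / alpha[0]]    # nu_0 = 1, nu_1 = (1 - a0)/a0
for i in range(2, K + 1):
    v = (1 - alpha[1]) / alpha[0] * nu[i - 1]
    for j in range(2, i):                # subtract alpha_j/alpha_0 * nu_{i-j}
        v -= alpha[j] / alpha[0] * nu[i - j]
    v -= alpha[i - 1] / alpha[0]         # trailing - alpha_{i-1}/alpha_0 term
    nu.append(v)

pi0 = 1 / sum(nu)                        # since sum_i nu_i = 1/pi_0
pi = [pi0 * v for v in nu]
rho = lam / mu                           # expect pi[i] ≈ (1 - rho) * rho**i
```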

M/G/1 queue: Mean value analysis I

We are interested in $E[W_i] = -\frac{dW^*(s)}{ds}\big|_{s=0}$ (see BG p. 186)

• $W_i$: waiting time in queue of customer i
• $R_i$: residual service time seen by customer i
• $X_i$: service time of customer i
• $N_i$: number of customers in queue found by customer i

$$W_i = R_i + \sum_{j=i-N_i}^{i-1} X_j$$

Taking expectations and using the independence among the $X_j$,

$$E[W_i] \triangleq W = E[R_i] + E\Bigl[\sum_{j=i-N_i}^{i-1} E[X_j\,|\,N_i]\Bigr] = R + \frac{1}{\mu}N_q$$

Since $N_q = \lambda W$ and $E[R_i] = R$ for all i, we have

$$W = \frac{R}{1-\rho}$$

3-167

M/G/1 queue: Mean value analysis II

A sample path of an M/G/1 queue

[figure: three stacked plots over the time axis 0–16, showing packet arrivals and departures, the number of customers in the system, and the virtual workload (unfinished work)]

3-168


M/G/1 queue: Mean value analysis III

The time-averaged residual service time $r(\tau)$ over the interval $[0, t]$ is

$$R(t) = \frac{1}{t}\int_0^t r(\tau)\,d\tau = \frac{1}{t}\sum_{i=1}^{M(t)}\frac{1}{2}X_i^2 = \frac{1}{2}\cdot\frac{M(t)}{t}\cdot\frac{\sum_{i=1}^{M(t)} X_i^2}{M(t)}$$

– $M(t)$ is the number of service completions within $[0, t]$
– [figure: residual service time $r(\tau)$ versus time] e.g., upon a new service of duration $X_1$, $r(\tau)$ starts at $X_1$ and decays linearly for $X_1$ time units

As $t \to \infty$, $\lim_{t\to\infty} R(t) = R = \lambda\overline{X^2}/2$, so

$$W = \frac{\lambda\overline{X^2}}{2(1-\rho)} \longleftrightarrow -\frac{dW^*(s)}{ds}\Big|_{s=0}$$

3-169

M/G/1 queue: Mean value analysis IV

From the hitchhiker's paradox, we have $E[R'] = E[X^2]/(2E[X])$, so

$$R = 0\cdot\Pr[N(t)=0] + E[R']\cdot\Pr[N(t)>0] = \frac{E[X^2]}{2E[X]}\cdot\lambda E[X] = \frac{\lambda\overline{X^2}}{2}$$

P-K formula for the mean waiting time in queue:

$$W = -W^{*\prime}(s)\big|_{s=0} = \frac{\lambda\overline{X^2}}{2(1-\rho)}\bigg|_{\overline{X^2}=\sigma_X^2+\bar{X}^2} = \frac{\lambda(\sigma_X^2+\bar{X}^2)}{2(1-\rho)} = \frac{1+C_x^2}{2}\cdot\frac{\rho}{1-\rho}\bar{X} = \frac{1+C_x^2}{2}\,W_{M/M/1}$$

– $C_x^2 = \sigma_X^2/\bar{X}^2$ is the squared coefficient of variation of the service time
– The average time in the system: $T = W + \bar{X}$

E.g.: since $C_x = 1$ in an M/M/1 queue and $C_x = 0$ in an M/D/1 queue,

$$W_{M/M/1} = \frac{\rho}{1-\rho}\bar{X} > W_{M/D/1} = \frac{\rho}{2(1-\rho)}\bar{X}$$

3-170
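The M/D/1 prediction can be checked by simulating Lindley's recursion with deterministic service (a sketch with illustrative parameters and a fixed seed):

```python
import random

random.seed(3)
lam, X = 0.5, 1.0                 # Poisson arrival rate, constant service time
rho = lam * X

W, total, n = 0.0, 0.0, 300_000
for _ in range(n):
    total += W
    # Lindley's recursion with deterministic service time X
    W = max(0.0, W + X - random.expovariate(lam))

sim_W = total / n
pk_W = rho * X / (2 * (1 - rho))  # P-K: half the M/M/1 waiting time
```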

Delay analysis of an ARQ system

Suppose a Go-Back-N ARQ system, where a packet is successfully transmitted with probability 1 − p (i.e., rejected at the receiver with probability p, independently of other packets), and the ACK arrives within the transmission time of N − 1 frames when there is no error.
• Packet arrivals to the transmitter's queue follow a Poisson process with rate λ (packets/slot)
• Each retransmission adds N slots, so the effective service time of a packet is 1 + kN slots with probability (1 − p)p^k (k retransmissions following the last transmission of the previous packet); the transmitter's queue thus behaves like an M/G/1 queue
[figure (BG Fig. 3.17): illustration of the effective service times of packets in the ARQ system]

• We need the first two moments of the service time to use the P-K formula:

$$\bar{X} = \sum_{k=0}^{\infty}(1+kN)(1-p)p^k = 1 + \frac{Np}{1-p}$$

$$\overline{X^2} = \sum_{k=0}^{\infty}(1+kN)^2(1-p)p^k = 1 + \frac{2Np}{1-p} + \frac{N^2(p+p^2)}{(1-p)^2}$$

3-171
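Plugging these moments into the P-K formula gives the mean queueing delay at the ARQ transmitter. A small sketch with made-up values of λ, N, and p:

```python
lam, N, p = 0.1, 4, 0.2           # packets/slot, go-back number, error prob.

Xbar = 1 + N * p / (1 - p)                                    # mean service
X2bar = 1 + 2 * N * p / (1 - p) + N**2 * (p + p**2) / (1 - p)**2
rho = lam * Xbar                  # must be < 1 for stability
W = lam * X2bar / (2 * (1 - rho))                             # P-K formula
T = W + Xbar                      # mean time in the transmitter's system
```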

M/G/1 Queue with vacations I

Server takes a vacation at the end of each busy period
• It takes an additional vacation if no customers are found at the end of a vacation: V_1, V_2, ... are the durations of the successive vacations
• A customer that finds the system idle (on vacation) waits for the end of the vacation period

[figure (BG Figs. 3.12–3.13): an M/G/1 system with vacations; at the end of a busy period the server goes on vacation, busy periods alternate with vacation periods, and an arrival to an empty system waits until the current vacation ends]

• Residual service time including vacation periods:

$$\frac{1}{t}\int_0^t r(\tau)\,d\tau = \frac{1}{t}\sum_{i=1}^{M(t)}\frac{1}{2}X_i^2 + \frac{1}{t}\sum_{i=1}^{L(t)}\frac{1}{2}V_i^2$$

– $M(t)$: number of services completed by time t
– $L(t)$: number of vacations completed by time t

3-172


M/G/1 Queue with vacations II

• The residual service time including vacation periods is rewritten as

$$\underbrace{\frac{1}{t}\int_0^t r(\tau)\,d\tau}_{R\ \text{as}\ t\to\infty} = \underbrace{\frac{M(t)}{t}}_{\lambda\ \text{as}\ t\to\infty}\cdot\frac{\sum_{i=1}^{M(t)}\frac{1}{2}X_i^2}{M(t)} + \underbrace{\frac{L(t)}{t}}_{\frac{1-\rho}{\bar{V}}\ \text{as}\ t\to\infty}\cdot\frac{\sum_{i=1}^{L(t)}\frac{1}{2}V_i^2}{L(t)} = \frac{\lambda\overline{X^2}}{2} + \frac{(1-\rho)\overline{V^2}}{2\bar{V}} = R$$

• Using $W = R/(1-\rho)$, we have

$$W = \frac{\lambda\overline{X^2}}{2(1-\rho)} + \frac{\overline{V^2}}{2\bar{V}}$$

– The sum of the waiting time in an M/G/1 queue and the mean residual vacation time

3-173
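As a numeric sketch of the vacation formula (assumed exponential service and deterministic vacations; the parameters are made up):

```python
lam, mu, V = 0.4, 1.0, 2.0        # arrival rate, service rate, vacation length
rho = lam / mu
X2bar = 2 / mu**2                 # second moment of exponential service
Vbar, V2bar = V, V**2             # deterministic vacation moments

# Waiting time = plain M/G/1 term + mean residual vacation V2bar/(2*Vbar)
W = lam * X2bar / (2 * (1 - rho)) + V2bar / (2 * Vbar)
```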

FDM and TDM on a Slot Basis I

Suppose m traffic streams of equal-length packets, each arriving according to a Poisson process with rate λ/m

• If the traffic streams are frequency-division multiplexed on m subchannels, the transmission time of each packet is m time units
– Using the P-K formula $\lambda\overline{X^2}/(2(1-\rho))$ with ρ = λ and μ = 1/m,

$$W_{FDM} = \frac{\lambda m}{2(1-\lambda)}$$

• Consider the same FDM, but packet transmissions can start only at times m, 2m, 3m, ...: slotted FDM
– This system gives the streams a vacation of m slots:

$$W_{SFDM} = W_{FDM} + \frac{m}{2}\quad\Bigl(\frac{m}{2} = \frac{\overline{V^2}}{2\bar{V}}\Bigr)$$

3-174

FDM and TDM on a Slot Basis II

• m traffic streams are time-division multiplexed, with one slot dedicated to each traffic stream per frame [figure (BG Fig. 3.20): TDM with m = 4 traffic streams, one time unit per slot]
– (BG remark: the customer's average total delay is more favorable in TDM than in FDM for m > 2; the longer average waiting time in queue under TDM is more than compensated by the faster service time)
– Service time for each queue: X = m slots → $\overline{X^2} = m^2$
– Frame synchronization delay: m/2
– Using the P-K formula, we have

$$W_{TDM} = \frac{m}{2(1-\lambda)} = W_{SFDM}$$

– System response time: $T = 1 + W_{TDM}$

3-175
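The three formulas compare as follows (a sketch; the values of λ and m are arbitrary):

```python
def waits(lam, m):
    """Mean waits for FDM, slotted FDM, and TDM with m unit-packet streams."""
    W_fdm = lam * m / (2 * (1 - lam))
    W_sfdm = W_fdm + m / 2        # the m-slot vacation adds V2/(2V) = m/2
    W_tdm = m / (2 * (1 - lam))   # identical to W_sfdm
    return W_fdm, W_sfdm, W_tdm

lam, m = 0.5, 4
W_fdm, W_sfdm, W_tdm = waits(lam, m)
# Total delays: FDM transmits in m units, TDM in 1, so TDM wins for m > 2
T_fdm, T_tdm = m + W_fdm, 1 + W_tdm
```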

M/G/1 Queue with Non-Preemptive Priorities I

Customers are divided into K priority classes, k = 1, . . . , K.

Non-preemptive priority
• Service of a customer completes uninterrupted, even if customers of higher priority arrive in the meantime
• A separate (logical) queue is maintained for each class; each time the server becomes free, the first customer in the highest-priority non-empty queue enters service
• Due to the non-preemptive policy, the mean residual service time R′ seen by an arriving customer is the same for all priority classes if all customers have the same service time distribution

Notations
• $N_q^{(k)}$: mean number of waiting customers belonging to class k in the queue
• $W_k$: mean waiting time of class-k customers
• $\rho_k$: utilization, or load, of class k: $\rho_k = \lambda_k\bar{X}_k$
• $R$: mean residual service time in the server upon arrival

3-176


M/G/1 Queue with Non-Preemptive Priorities II

Stability condition: $\rho_1 + \rho_2 + \cdots + \rho_K < 1$.

Priority 1: similar to the P-K formula,

$$W_1 = R + \frac{1}{\mu_1}N_q^{(1)} \quad\text{and}\quad N_q^{(1)} = \lambda_1 W_1 \;\Rightarrow\; W_1 = \frac{R}{1-\rho_1}$$

Priority 2:

$$W_2 = R + \underbrace{\frac{1}{\mu_1}N_q^{(1)} + \frac{1}{\mu_2}N_q^{(2)}}_{\substack{\text{time needed to serve class-1 and class-2}\\\text{customers ahead in the queue}}} + \underbrace{\frac{1}{\mu_1}\lambda_1 W_2}_{\substack{\text{time needed to serve higher-class}\\\text{customers that arrive during the}\\\text{waiting time of the class-2 customer}}}$$

From $N_q^{(2)} = \lambda_2 W_2$,

$$W_2 = R + \rho_1 W_1 + \rho_2 W_2 + \rho_1 W_2 \;\Rightarrow\; W_2 = \frac{R + \rho_1 W_1}{1-\rho_1-\rho_2}$$

– Using $W_1 = R/(1-\rho_1)$, we have

$$W_2 = \frac{R}{(1-\rho_1)(1-\rho_1-\rho_2)}$$

3-177

M/G/1 Queue with Non-Preemptive Priorities III

From $W_2 = R/((1-\rho_1)(1-\rho_1-\rho_2))$, we can generalize:

$$W_k = \frac{R}{(1-\rho_1-\cdots-\rho_{k-1})(1-\rho_1-\cdots-\rho_k)}$$

As before, the mean residual service time R is

$$R = \frac{1}{2}\lambda\overline{X^2}, \quad\text{with}\quad \lambda = \sum_{i=1}^{K}\lambda_i \quad\text{and}\quad \overline{X^2} = \frac{1}{\lambda}\sum_{i=1}^{K}\lambda_i\overline{X_i^2}$$

Mean waiting time for class-k customers:

$$W_k = \frac{\sum_{i=1}^{K}\lambda_i\overline{X_i^2}}{2(1-\rho_1-\cdots-\rho_{k-1})(1-\rho_1-\cdots-\rho_k)}$$

Note that the average queueing time of a customer depends on the arrival rates of lower-priority customers (through R).

3-178
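The class-k formula iterates naturally over cumulative loads. A sketch with two illustrative exponential-service classes (class 1 has the highest priority):

```python
lams = [0.2, 0.3]                       # class arrival rates
mus = [1.0, 1.0]                        # class service rates (exponential)
X2s = [2 / m**2 for m in mus]           # second moments of service times

R = 0.5 * sum(l * x2 for l, x2 in zip(lams, X2s))   # mean residual service
rhos = [l / m for l, m in zip(lams, mus)]

W = []
cum = 0.0
for k in range(len(lams)):
    prev, cum = cum, cum + rhos[k]
    W.append(R / ((1 - prev) * (1 - cum)))          # W_k formula
# Higher-priority classes wait less: W[0] < W[1]
```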

M/G/1 Queue with Preemptive Resume Priorities I

Preemptive resume priority
• Service of a customer is interrupted when a higher-priority customer arrives
• It resumes from the point of interruption when all higher-priority customers have been served
• In this case the lower-priority customers are completely "invisible" and do not affect the queues of the higher classes in any way

The time in the system $T_k$ of a class-k customer consists of:
(i) the customer's own mean service time $\bar{X}_k$;
(ii) the mean time to serve the customers of classes 1, . . . , k ahead in the queue,

$$W_k = \frac{R_k}{1-\rho_1-\cdots-\rho_k} \quad\text{with}\quad R_k = \frac{1}{2}\sum_{i=1}^{k}\lambda_i\overline{X_i^2}.$$

This is equal to the average waiting time in an M/G/1 system where customers of priority lower than k are neglected.

3-179

M/G/1 Queue with Preemptive Priorities II

(iii) the average time required to serve customers of priority higher than k that arrive while the customer is in the system for time $T_k$:

$$\sum_{i=1}^{k-1}\lambda_i\bar{X}_i T_k = \sum_{i=1}^{k-1}\rho_i T_k \quad\text{for } k > 1,\ \text{and } 0 \text{ if } k = 1$$

• Combining these three terms,

$$T_k = \bar{X}_k + \frac{R_k}{1-\rho_1-\cdots-\rho_k} + T_k\underbrace{\sum_{i=1}^{k-1}\rho_i}_{\text{zero for } k=1} \;\Rightarrow\; T_k = \frac{(1-\rho_1-\cdots-\rho_k)\bar{X}_k + \frac{1}{2}\sum_{i=1}^{k}\lambda_i\overline{X_i^2}}{\underbrace{(1-\rho_1-\cdots-\rho_{k-1})}_{\text{equals 1 for } k=1}\,(1-\rho_1-\cdots-\rho_k)}.$$

3-180


Upper Bound for G/G/1 System I

Waiting time of the kth customer
• $W_k$: waiting time of the kth customer
• $X_k$: service time of the kth customer
• $\tau_k$: interarrival time between the kth and the (k+1)st customers
• $I_k$: idle period between the kth and the (k+1)st customers

$$W_{k+1} = \max\{0,\, W_k + X_k - \tau_k\} \quad\text{and}\quad I_k = -\min\{0,\, W_k + X_k - \tau_k\}$$

The average waiting time in queue satisfies

$$W \le \frac{\lambda(\sigma_a^2+\sigma_b^2)}{2(1-\rho)}$$

• $\sigma_a^2$: variance of the interarrival times
• $\sigma_b^2$: variance of the service times
• $\lambda$: average arrival rate ($1/\lambda$: average interarrival time)

3-181

Upper Bound for G/G/1 System II

Notations, for any random variable Y:
• $Y^+ = \max\{0, Y\}$ and $Y^- = -\min\{0, Y\}$
• $\bar{Y} = E[Y]$ and $\sigma_Y^2 = E[Y^2] - \bar{Y}^2$
• $Y = Y^+ - Y^-$ and $Y^+\cdot Y^- = 0$
• $E[Y] = \bar{Y} = \overline{Y^+} - \overline{Y^-}$
• $\sigma_Y^2 = \sigma_{Y^+}^2 + \sigma_{Y^-}^2 + 2\,\overline{Y^+}\cdot\overline{Y^-}$

Using the above with $V_k = X_k - \tau_k$, we can express
• $W_{k+1} = \max\{0,\, W_k+X_k-\tau_k\} = \max\{0,\, W_k+V_k\} = (W_k+V_k)^+$
• $I_k = -\min\{0,\, W_k+X_k-\tau_k\} = -\min\{0,\, W_k+V_k\} = (W_k+V_k)^-$

so that

$$\sigma^2_{W_k+V_k} = \sigma^2_{(W_k+V_k)^+} + \sigma^2_{(W_k+V_k)^-} + 2\,\overline{(W_k+V_k)^+}\cdot\overline{(W_k+V_k)^-} = \sigma^2_{W_{k+1}} + \sigma^2_{I_k} + 2\,\bar{W}_{k+1}\cdot\bar{I}_k$$

and, by the independence of $W_k$ and $V_k$,

$$\sigma^2_{W_k+V_k} = \sigma^2_{W_k} + \sigma^2_{V_k} = \sigma^2_{W_k} + \sigma_a^2 + \sigma_b^2$$

3-182

Upper Bound for G/G/1 System III

As $k \to \infty$,

$$\sigma^2_{W_{k+1}} + \sigma^2_{I_k} + 2\,\bar{W}_{k+1}\cdot\bar{I}_k = \sigma^2_{W_k} + \sigma_a^2 + \sigma_b^2$$

becomes

$$\sigma^2_W + \sigma^2_I + 2\,\bar{W}\cdot\bar{I} = \sigma^2_W + \sigma_a^2 + \sigma_b^2$$

We get W as

$$W = \frac{\sigma_a^2+\sigma_b^2}{2\bar{I}} - \frac{\sigma_I^2}{2\bar{I}} \le \frac{\sigma_a^2+\sigma_b^2}{2\bar{I}} = \frac{\lambda(\sigma_a^2+\sigma_b^2)}{2(1-\rho)}$$

– The average idle time $\bar{I}$ between two successive arrivals is $\frac{1}{\lambda}(1-\rho)$

3-183
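The bound can be compared against a simulated M/M/1 queue, whose exact mean wait ρ/(μ − λ) is known (a sketch with illustrative parameters and a fixed seed):

```python
import random

random.seed(5)
lam, mu = 0.5, 1.0
rho = lam / mu
sig_a2, sig_b2 = 1 / lam**2, 1 / mu**2   # variances of exp interarrival/service

bound = lam * (sig_a2 + sig_b2) / (2 * (1 - rho))   # G/G/1 upper bound
exact = rho / (mu - lam)                            # M/M/1 mean wait

W, total, n = 0.0, 0.0, 200_000
for _ in range(n):
    total += W
    # Lindley's recursion for the waiting-time sequence
    W = max(0.0, W + random.expovariate(mu) - random.expovariate(lam))
sim = total / n                                     # ≈ exact, below the bound
```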