
Stochastic Processes - lesson 8

Bo Friis Nielsen

Institute of Mathematical Modelling

Technical University of Denmark

2800 Kgs. Lyngby – Denmark

Email: bfn@imm.dtu.dk

Bo Friis Nielsen – 3/10-2000 – C04141

Outline

• Generating functions
  ◦ Sums of random variables - products of generating functions
  ◦ Moments of random variables - derivatives of generating functions
• Classification of Markov chain states
• Classification of irreducible Markov chains
• Limiting/stationary distribution
• Reading recommendations

Once again: highlights of generating functions

• X with pmf f(x): GX(s) = E(s^X) = Σ_{x=0}^∞ s^x f(x)

• X with GX(s) and Y with GY(s) independent: Z = X + Y has GZ(s) = GX(s)GY(s)

• X with GX(s) has E(X) = lim_{s→1} G′X(s), usually written G′X(1)

  ◦ and V(X) = G′′X(1) + G′X(1) − (G′X(1))²
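These identities are easy to check numerically: store a pmf on {0, 1, ...} as its coefficient array, so the pgf is a polynomial, the product rule becomes a convolution, and the moment formulas become weighted sums. A minimal sketch (the fair-die example is my own, not from the slides):

```python
import numpy as np

# pmf of a fair die on {1,...,6}: entry x of the array is P(X = x),
# i.e. exactly the coefficient list of G_X(s) = sum_x s^x f(x)
die = np.array([0, 1, 1, 1, 1, 1, 1]) / 6.0

def pgf_moments(f):
    """Mean and variance via the pgf: E(X) = G'(1), V(X) = G''(1) + G'(1) - G'(1)^2."""
    x = np.arange(len(f))
    g1 = np.sum(x * f)              # G'(1)  = sum_x x f(x)
    g2 = np.sum(x * (x - 1) * f)    # G''(1) = sum_x x(x-1) f(x)
    return g1, g2 + g1 - g1 ** 2

# Z = X + Y independent  <=>  G_Z = G_X G_Y  <=>  the pmf of Z is the convolution
two_dice = np.convolve(die, die)

m, v = pgf_moments(two_dice)
print(m, v)   # mean 7, variance 2 * 35/12 = 35/6
```

The convolution is the coefficient-level statement of GZ(s) = GX(s)GY(s), so the moments of the sum come out with no extra work.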


Classification of Markov chain states - 6.2

• States which cannot be left, once entered - absorbing states
• States where a return some time in the future is certain - recurrent or persistent states
  ◦ The mean time to return can be
    ∗ finite - positive recurrent/non-null persistent
    ∗ infinite - null recurrent
• States where a return some time in the future is uncertain - transient states
• States which can only be visited at certain time epochs - periodic states

First passage probabilities

The first passage probability (p. 201)

fij(n) = P{X1 ≠ j, X2 ≠ j, . . . , Xn−1 ≠ j, Xn = j | X0 = i}

This is the probability of reaching j for the first time at time n, having started in i.

The probability of ever reaching j is

fij = Σ_{n=1}^∞ fij(n) ≤ 1

The probabilities fij(n) constitute a (possibly defective) probability distribution, since fij may be less than 1. By contrast, we cannot say anything in general about Σ_{n=1}^∞ pij(n) (the n-step transition probabilities).
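The fij(n) can be computed numerically with a standard trick (not spelled out on the slides): make j absorbing in a copy of P; in the modified chain the n-step probability of sitting in j is exactly P(Tij ≤ n), and differencing gives fij(n). A sketch on a small illustrative chain of my own:

```python
import numpy as np

# A small illustrative chain (my own, not from the slides)
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])

def first_passage(P, i, j, nmax):
    """f_ij(n) for n = 1..nmax: with j made absorbing, (Q^n)[i, j] = P(T_ij <= n)."""
    Q = P.copy()
    Q[j, :] = 0.0
    Q[j, j] = 1.0                      # once in j, stay in j
    f, prev, Qn = [], 0.0, np.eye(len(P))
    for _ in range(nmax):
        Qn = Qn @ Q
        f.append(Qn[i, j] - prev)      # f_ij(n) = P(T_ij = n)
        prev = Qn[i, j]
    return np.array(f)

f = first_passage(P, 0, 2, 200)
print(f[:3], f.sum())   # f_02(1) = 0 since p_02 = 0; the sum approaches f_02 <= 1
```

Here the chain is finite and irreducible, so f.sum() converges to 1; in a chain where j is not certain to be reached, the same sum would settle below 1.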

First passage and first return times

The random variable Tij denotes the first passage time:

P{Tij = n} = fij(n)

For the first return time we use the shorter Ti. We can define Ti by

Ti = min{n > 0 : Xn = i | X0 = i}
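The mean of Ti can also be estimated directly by simulating returns to i, which is a useful sanity check against the analytical formulas that come later. A sketch on a two-state chain of my own, for which the mean return time to state 0 works out to 1.25:

```python
import numpy as np

rng = np.random.default_rng(1)

# A small illustrative two-state chain (my own, not from the slides)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def sample_return_time(P, i):
    """One draw of T_i = min{n > 0 : X_n = i} given X_0 = i."""
    state, n = i, 0
    while True:
        state = rng.choice(len(P), p=P[state])
        n += 1
        if state == i:
            return n

samples = [sample_return_time(P, 0) for _ in range(20000)]
print(np.mean(samples))   # estimates mu_0; the exact value here is 1.25
```

The estimate ties directly into the stationary distribution later in the lesson, where πi = 1/µi.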

State classification by fii(n)

• A state is recurrent (persistent) if fii = 1
  ◦ A state is positive recurrent or non-null persistent if E(Ti) = µi < ∞
  ◦ A state is null recurrent if E(Ti) = µi = ∞
• A state is transient if fii < 1. In this case we define µi = ∞ for later convenience.
• A periodic state with period d > 1 has pii(n) non-zero only when n is a multiple of d.
• A state is ergodic if it is positive recurrent and aperiodic.

Classification of Markov chains - 6.3

• We can identify subclasses of states with the same properties
• All states which can mutually reach each other will be of the same type
• Once again the formal analysis is a little heavy, but try to stick to the fundamentals: definitions (concepts) and results

Communicating states

• We say that i communicates with j if fij > 0
• If i communicates with j and j communicates with i, they intercommunicate, written i ↔ j
• The set of states of a Markov chain can be partitioned into sets of intercommunicating states

Properties of sets of intercommunicating states

Theorem 1 (2) page 204 If i ↔ j then

• (a) i and j have the same period
• (b) i is transient if and only if j is transient
• (c) i is null persistent (null recurrent) if and only if j is null persistent

Definition 2 (3) page 205 A set C of states is called

• (a) closed if pij = 0 for all i ∈ C, j ∉ C
• (b) irreducible if i ↔ j for all i, j ∈ C

Theorem 3 Decomposition theorem (4) page 205 The state space S can be partitioned uniquely as

S = T ∪ C1 ∪ C2 ∪ . . .

where T is the set of transient states and the Ci are irreducible closed sets of persistent states.

Lemma 4 If S is finite, then at least one state is persistent (recurrent) and all persistent states are non-null (positive recurrent).
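For a finite chain, this decomposition can be computed mechanically from the zero pattern of P: mutual reachability partitions S into intercommunicating classes, and a class is closed iff no probability leaves it. A sketch on a small reducible chain of my own:

```python
import numpy as np

# A small reducible chain (my own): {0, 1} is a closed class, state 2 is transient.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.7, 0.0],
              [0.2, 0.3, 0.5]])

n = len(P)
R = ((P > 0) | np.eye(n, dtype=bool)).astype(int)   # one-step reachability, incl. self
for _ in range(n):                                  # repeated squaring saturates reachability
    R = (R @ R > 0).astype(int)
inter = (R * R.T) > 0                               # i <-> j : mutual reachability

# Partition S into sets of intercommunicating states
classes = {frozenset(np.flatnonzero(inter[i])) for i in range(n)}
for C in sorted(classes, key=min):
    leaks = any(P[i, j] > 0 for i in C for j in range(n) if j not in C)
    print(sorted(C), "not closed" if leaks else "closed")
```

The closed classes found this way are the Ci of the decomposition theorem; whatever remains is the transient set T.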

An example chain (random walk with reflecting barriers)

P =

  0.6 0.4 0.0 0.0 0.0 0.0 0.0 0.0
  0.3 0.3 0.4 0.0 0.0 0.0 0.0 0.0
  0.0 0.3 0.3 0.4 0.0 0.0 0.0 0.0
  0.0 0.0 0.3 0.3 0.4 0.0 0.0 0.0
  0.0 0.0 0.0 0.3 0.3 0.4 0.0 0.0
  0.0 0.0 0.0 0.0 0.3 0.3 0.4 0.0
  0.0 0.0 0.0 0.0 0.0 0.3 0.3 0.4
  0.0 0.0 0.0 0.0 0.0 0.0 0.3 0.7

With initial probability distribution ~µ(0) = (1, 0, 0, 0, 0, 0, 0, 0), i.e. X0 = 1.
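The matrix has a simple band structure, so it is easy to build and sanity-check in code. A sketch that also draws one sample path of the chain (the random seed is arbitrary; states are indexed 0-7 internally and reported as 1-8 to match the slide):

```python
import numpy as np

rng = np.random.default_rng(0)

# The reflecting-random-walk matrix from the slide
P = np.zeros((8, 8))
for i in range(8):
    if i > 0: P[i, i - 1] = 0.3     # step left
    if i < 7: P[i, i + 1] = 0.4     # step right
for i in range(1, 7): P[i, i] = 0.3 # hold in the interior
P[0, 0], P[7, 7] = 0.6, 0.7         # reflecting barriers

assert np.allclose(P.sum(axis=1), 1.0)   # every row is a probability distribution

# One sample path X_0, ..., X_70 started in state 1 (index 0)
path = [0]
for _ in range(70):
    path.append(rng.choice(8, p=P[path[-1]]))
print([s + 1 for s in path[:10]])
```

Repeating the last loop with different seeds reproduces the kind of sample-path plots shown on the next slide.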

Properties of that chain

• We have a finite number of states
• From state 1 we can reach state j with probability f1j ≥ 0.4^(j−1), j > 1
• From state j we can reach state 1 with probability fj1 ≥ 0.3^(j−1), j > 1
• Thus all states intercommunicate and the chain is irreducible. Generally we won't bother with bounds for the fij's.
• Since the chain is finite, all states are positive recurrent
• A look at the behaviour of the chain

A number of different sample paths Xn

[Figure: four simulated sample paths of Xn for n = 0, ..., 70 over the states 1-8]

The state probabilities µ(n)_j

[Figure: the state probabilities µ(n)_j for n = 0, ..., 70; values between 0 and 1]
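The plotted probabilities come from iterating ~µ(n) = ~µ(n−1)P from ~µ(0) = (1, 0, ..., 0). A sketch reproducing the computation and showing that ~µ(n) has essentially settled after 70 steps:

```python
import numpy as np

# Rebuild the slide's reflecting-random-walk matrix
P = np.zeros((8, 8))
for i in range(8):
    if i > 0: P[i, i - 1] = 0.3
    if i < 7: P[i, i + 1] = 0.4
for i in range(1, 7): P[i, i] = 0.3
P[0, 0], P[7, 7] = 0.6, 0.7

mu = np.zeros(8); mu[0] = 1.0      # mu(0) = (1, 0, ..., 0), i.e. X_0 = 1
for n in range(70):
    mu = mu @ P                    # mu(n) = mu(n-1) P
print(np.round(mu, 4))

# after 70 steps mu barely changes any more: it is near the limiting distribution
assert np.allclose(mu, mu @ P, atol=1e-2)
```

The fixed point this iteration approaches is exactly the stationary distribution ~π = ~πP introduced on the following slides.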

Limiting distribution

Theorem 5 (17) page 214 For an irreducible aperiodic chain, we have that

pij(n) → 1/µj as n → ∞, for all i and j

Three important remarks (also on page 214)

• If the chain is transient or null-persistent (null-recurrent), pij(n) → 0
• If the chain is positive recurrent, pij(n) → 1/µj
• The limiting probability of Xn = j does not depend on the starting state X0 = i

The stationary distribution

• A distribution that does not change with n
• The elements of ~µ(n) are all constant
• The implication of this is ~µ(n) = ~µ(n−1)P = ~µ(n−1), by our assumption of ~µ(n) being constant
• Expressed differently: ~π = ~πP

Stationary distribution

Definition 6 The vector ~π is called a stationary distribution of the chain if ~π has entries (πj : j ∈ S) such that

• (a) πj ≥ 0 for all j, and Σj πj = 1
• (b) ~π = ~πP, which is to say that πj = Σi πi pij for all j.

Theorem 7 (3) page 208 VERY IMPORTANT

An irreducible chain has a stationary distribution ~π if and only if all the states are non-null persistent (positive recurrent); in this case, ~π is the unique stationary distribution and is given by πi = 1/µi for each i ∈ S, where µi is the mean recurrence time of i.
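In the finite case, ~π = ~πP together with the normalisation Σj πj = 1 is just a linear system: the balance equations are redundant (they sum to zero), so one of them can be replaced by the normalisation constraint. A sketch (the two-state matrix is an illustrative example of mine):

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P with sum(pi) = 1 by replacing one (redundant)
    balance equation with the normalisation constraint."""
    n = len(P)
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])  # drop one balance row
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A, b)

# Illustrative two-state chain (my own, not from the slides)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
pi = stationary(P)
print(pi)                      # [0.8, 0.2]
assert np.allclose(pi @ P, pi)
```

For this chain π0 = 0.8, so the mean recurrence time of state 0 is µ0 = 1/π0 = 1.25, matching Theorem 7.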

Limiting distributionLimiting distribution

Theorem 8 (17) page 214 For an irreducible aperiodic

chain, we have that

pij(n) →1

µj

as n →∞, for all i and j

Three important remarks (also on page 214)

• If the chain is transient or null-persistent (null-recurrent)

pij(n) → 0

• If the chain is positive recurrent pij(n) → 1µj

= πj.

• The limiting probability of Xn = j does not depend on the

starting state X0 = i
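Theorem 8 can be illustrated by raising P to a high power: all rows of Pⁿ converge to the same vector, so the limit of pij(n) does not depend on the starting state i. A sketch, assuming a hypothetical irreducible aperiodic 3-state chain:

```python
import numpy as np

# Hypothetical irreducible aperiodic chain (illustrative, not from the slides).
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

# p_ij(n) is the (i, j) entry of P^n.
Pn = np.linalg.matrix_power(P, 100)

# Every row of P^n is (numerically) the same vector, i.e. the limit of
# p_ij(n) is independent of the starting state i.
assert np.allclose(Pn, Pn[0], atol=1e-8)
```

The common row is the stationary distribution (πj = 1/µj), here (0.25, 0.5, 0.25).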

Bo Friis Nielsen – 3/10-2000 19C04141

The example chain (random walk with reflecting barriers)

P =

0.6 0.4 0.0 0.0 0.0 0.0 0.0 0.0
0.3 0.3 0.4 0.0 0.0 0.0 0.0 0.0
0.0 0.3 0.3 0.4 0.0 0.0 0.0 0.0
0.0 0.0 0.3 0.3 0.4 0.0 0.0 0.0
0.0 0.0 0.0 0.3 0.3 0.4 0.0 0.0
0.0 0.0 0.0 0.0 0.3 0.3 0.4 0.0
0.0 0.0 0.0 0.0 0.0 0.3 0.3 0.4
0.0 0.0 0.0 0.0 0.0 0.0 0.3 0.7

~π = ~πP

Elementwise the matrix equation is πi = ∑j πj pji

π1 = π1·0.6 + π2·0.3
π2 = π1·0.4 + π2·0.3 + π3·0.3
π3 = π2·0.4 + π3·0.3 + π4·0.3
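These balance equations can be verified numerically. A sketch with the slide's 8-state matrix, using plain power iteration (~π ← ~πP) rather than the recursive solution that follows:

```python
import numpy as np

# The 8-state reflecting random walk from the slide: up with 0.4,
# down with 0.3, stay with the remaining probability.
n = 8
P = np.zeros((n, n))
for i in range(n):
    if i > 0:
        P[i, i - 1] = 0.3
    if i < n - 1:
        P[i, i + 1] = 0.4
    P[i, i] = 1.0 - P[i].sum()   # 0.6, 0.3, ..., 0.3, 0.7 on the diagonal

# Power iteration: repeatedly apply pi <- pi P from the uniform start.
pi = np.ones(n) / n
for _ in range(10000):
    pi = pi @ P

# First balance equation (1-based on the slide): pi_1 = pi_1*0.6 + pi_2*0.3
assert np.isclose(pi[0], pi[0] * 0.6 + pi[1] * 0.3)
assert np.allclose(pi, pi @ P)
```

Power iteration converges here because the chain is irreducible and aperiodic (every state has a self-loop).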

Bo Friis Nielsen – 3/10-2000 20C04141

π1 = π1 · 0.6 + π2 · 0.3

πj = πj−1 · 0.4 + πj · 0.3 + πj+1 · 0.3

π8 = π7 · 0.4 + π8 · 0.7

Or

π2 = ((1 − 0.6)/0.3) π1

πj+1 = (1/0.3)((1 − 0.3)πj − 0.4πj−1)

Can be solved recursively to find:

πj = (0.4/0.3)^(j−1) π1
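The recursion and its closed form can be checked in a few lines. This sketch fixes π1 = 1 and ignores normalisation:

```python
# Common ratio of the stationary probabilities.
r = 0.4 / 0.3

# Recursive solution: pi_2 = ((1 - 0.6)/0.3) pi_1, then
# pi_{j+1} = ((1 - 0.3) pi_j - 0.4 pi_{j-1}) / 0.3, starting from pi_1 = 1.
pi = [1.0, (1 - 0.6) / 0.3]
for _ in range(6):                 # pi_3 .. pi_8
    pi.append(((1 - 0.3) * pi[-1] - 0.4 * pi[-2]) / 0.3)

# Agrees with the closed form pi_j = (0.4/0.3)^(j-1) * pi_1.
for j, p in enumerate(pi, start=1):
    assert abs(p - r ** (j - 1)) < 1e-9
```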

Bo Friis Nielsen – 3/10-2000 21C04141

The normalising condition

• We note that we don’t have to use the last equation

• We need a solution which is a probability distribution

∑_{j=1}^{8} πj = 1, ∑_{j=1}^{8} (0.4/0.3)^(j−1) π1 = π1 ∑_{k=0}^{7} (0.4/0.3)^k

The finite geometric sum is

∑_{i=0}^{N} a^i = (1 − a^(N+1))/(1 − a) for N < ∞, a ≠ 1
               = N + 1 for N < ∞, a = 1
               = 1/(1 − a) for N = ∞, |a| < 1

Such that

1 = π1 · (1 − (0.4/0.3)^8)/(1 − 0.4/0.3) ⇔ π1 = (1 − 0.4/0.3)/(1 − (0.4/0.3)^8)
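The normalisation can be carried out directly with the finite geometric sum. A sketch:

```python
# Geometric ratio and number of states from the slide.
a = 0.4 / 0.3
N = 8

# sum_{k=0}^{7} a^k via the closed form (a != 1 here, since a = 4/3).
geom_sum = (1 - a ** N) / (1 - a)

# pi_1 = (1 - 0.4/0.3) / (1 - (0.4/0.3)^8), then pi_j = a^(j-1) pi_1.
pi1 = 1.0 / geom_sum
pi = [pi1 * a ** (j - 1) for j in range(1, N + 1)]

assert abs(sum(pi) - 1.0) < 1e-12   # the pi_j now sum to one
```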

Bo Friis Nielsen – 3/10-2000 22C04141

Interpretation of πj’s

• Limiting probabilities

• Long term averages

• Stationary distribution

Bo Friis Nielsen – 3/10-2000 23C04141

Reading recommendations

• For Tuesday October 3, read 6.4

• For Friday October 6, read 6.4-6.5, exercise 10 (solution exercise 10?).

• For Tuesday October 10, read 6.8

• For Friday October 13, read 6.9
