Markov Analysis
Page 1: Markov Analysis

Markov Analysis

Page 2: Markov Analysis

Introduction

Markov Analysis: a technique dealing with the probabilities of future occurrences based on currently known probabilities

Numerous applications, including:

◦ Business (e.g., market share analysis)

◦ Bad debt prediction

◦ University enrollment predictions

◦ Machine breakdown prediction

Page 3: Markov Analysis

Markov Analysis

Matrix of Transition Probabilities: shows the likelihood that the system will change from one time period to the next. This is the Markov process.

◦ It enables the prediction of future states or conditions.

Page 4: Markov Analysis

States and State Probabilities

States are used to identify all possible conditions of a process or system.

A system can exist in only one state at a time. Examples include:

◦ Working and broken states of a machine

◦ Three shops in town, with a customer able to patronize one at a time

◦ Courses in a student schedule with the student able to occupy only one class at a time

Page 5: Markov Analysis

Assumptions of Markov Analysis

1. A finite number of possible states

2. The probability of changing states remains the same over time

3. A future state is predictable from the previous state and the matrix of transition probabilities

4. The size and states of the system remain the same during the analysis

5. States are collectively exhaustive

◦ All possible states have been identified

6. States are mutually exclusive

◦ Only one state at a time is possible

Page 6: Markov Analysis

States and State Probabilities continued

1. Identify all states.

2. Determine the probability that the system is in each state.

◦ This information is placed into a vector of state probabilities.

p(i) = vector of state probabilities for period i

= (p1, p2, p3,…,pn)

where

n = number of states

p1, p2,…,pn = P (being in state 1, 2, …, state n)

Most of the time, problems deal with more than one item!

Page 7: Markov Analysis

States and State Probabilities continued

Three Grocery Stores example: 100,000 customers shop monthly at the 3 grocery stores

◦ State 1 = store 1 = 40,000/100,000 = 40%

◦ State 2 = store 2 = 30,000/100,000 = 30%

◦ State 3 = store 3 = 30,000/100,000 = 30%

vector of state probabilities:

p(0) = (0.4, 0.3, 0.3)

where

p(0) = vector of state probabilities in the initial period (period 0)

p1 = 0.4 = P(a person being in store 1)

p2 = 0.3 = P(a person being in store 2)

p3 = 0.3 = P(a person being in store 3)
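In code, the initial state vector is just an array whose entries sum to 1. A minimal sketch in Python with NumPy (the variable names are illustrative, not from the slides):

```python
import numpy as np

# Vector of state probabilities for the initial period:
# (P(store 1), P(store 2), P(store 3))
p0 = np.array([0.4, 0.3, 0.3])

# States are collectively exhaustive, so the probabilities must sum to 1
assert np.isclose(p0.sum(), 1.0)
```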

Page 8: Markov Analysis

States and State Probabilities continued

Three Grocery Stores example, continued: The probabilities in the vector of state probabilities represent the stores' market shares in the initial period.

In the initial period, the market shares are

◦ Store 1: 40%

◦ Store 2: 30%

◦ Store 3: 30%

But every month, customers who frequent one store have some probability of visiting another store.

Customers from each store have different probabilities for visiting other stores.

Page 9: Markov Analysis

States and State Probabilities continued

Three Grocery Stores example, continued:

Store-specific probabilities that a customer visits each store in the next month:

Store 1:
◦ Return to Store 1 = 80%
◦ Visit Store 2 = 10%
◦ Visit Store 3 = 10%

Store 2:
◦ Visit Store 1 = 10%
◦ Return to Store 2 = 70%
◦ Visit Store 3 = 20%

Store 3:
◦ Visit Store 1 = 20%
◦ Visit Store 2 = 20%
◦ Return to Store 3 = 60%

Page 10: Markov Analysis

States and State Probabilities continued

Three Grocery Stores example, continued:

Combining the starting market shares with the customers' probabilities of visiting each store next period yields the market shares in the next period:

             Initial Share   P(to Store 1)   P(to Store 2)   P(to Store 3)
Store 1:          40%             80%             10%             10%
  contributes                     32%              4%              4%
Store 2:          30%             10%             70%             20%
  contributes                      3%             21%              6%
Store 3:          30%             20%             20%             60%
  contributes                      6%              6%             18%
New shares:                       41%             31%             28%   = 100%
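The table above is each store's share scaled against its row of switching probabilities, with columns summed. A minimal NumPy sketch that reproduces it (names are illustrative):

```python
import numpy as np

share = np.array([0.40, 0.30, 0.30])     # initial market shares
P = np.array([[0.8, 0.1, 0.1],           # row i: where store-i customers
              [0.1, 0.7, 0.2],           # go next month
              [0.2, 0.2, 0.6]])

contributions = share[:, None] * P       # the per-store rows of the table
print(contributions)                     # e.g. row 1: [0.32 0.04 0.04]
print(contributions.sum(axis=0))         # new shares: [0.41 0.31 0.28]
```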

Page 11: Markov Analysis

Matrix of Transition Probabilities

To calculate periodic changes, it is much more convenient to use a matrix of transition probabilities:

◦ a matrix of the conditional probabilities of being in each future state given the current state.

Let Pij = the conditional probability of being in state j in the future given the current state i:

Pij = P(state j at time 1 | state i at time 0)

For example, P12 is the probability of being in state 2 in the next period given that the system was in state 1 in the prior period.

Page 12: Markov Analysis

Matrix of Transition Probabilities continued

Let P = matrix of transition probabilities:

    | P11  P12  P13  ...  P1n |
P = | P21  P22  P23  ...  P2n |
    |  .    .    .    .    .  |
    | Pm1  Pm2  Pm3  ...  Pmn |

Important:

Each row must sum to 1, because each row lists every state the system can move to in the next period.

But the columns do NOT necessarily sum to 1.

Page 13: Markov Analysis

Matrix of Transition Probabilities continued

Three Grocery Stores, revisited

The previously identified transition probabilities for each of the stores can now be put into a matrix:

    | 0.8  0.1  0.1 |
P = | 0.1  0.7  0.2 |
    | 0.2  0.2  0.6 |

Row 1 interpretation:

0.8 = P11 = P (in state 1 after being in state 1)

0.1 = P12 = P (in state 2 after being in state 1)

0.1 = P13 = P (in state 3 after being in state 1)
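A quick sanity check on any transition matrix is that each row sums to 1 while the columns need not. A minimal sketch (illustrative):

```python
import numpy as np

P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.2, 0.6]])

assert np.allclose(P.sum(axis=1), 1.0)   # every row sums to 1
print(P.sum(axis=0))                     # columns: [1.1 1.0 0.9] -- not 1
```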

Page 14: Markov Analysis

Predicting Future Market Shares

Grocery Store example

One purpose of Markov analysis is to predict the future.

Given

1. the vector of state probabilities and

2. the matrix of transition probabilities,

it is easy to find the state probabilities in the future.

This type of analysis allows the computation of the probability that a person will be at one of the grocery stores in the future.

Since this probability equals market share, it is possible to determine the future market shares of the grocery stores.

Page 15: Markov Analysis

Predicting Future Market Shares continued

Grocery Store example

When the current period is 0, the state probabilities for the next period (period 1) can be found using:

p(1) = p(0)P

In general, for any period n, the state probabilities for period n+1 can be computed as:

p(n+1) = p(n)P
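The update rule is a single vector-matrix product. A minimal sketch (illustrative names):

```python
import numpy as np

def next_state(p_n: np.ndarray, P: np.ndarray) -> np.ndarray:
    """One Markov step: p(n+1) = p(n) P."""
    return p_n @ P

P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.2, 0.6]])
print(next_state(np.array([0.4, 0.3, 0.3]), P))   # [0.41 0.31 0.28]
```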

Page 16: Markov Analysis

Predicting Future States continued

p(0) = [0.4  0.3  0.3] = initial state probabilities

    | 0.8  0.1  0.1 |
P = | 0.1  0.7  0.2 |
    | 0.2  0.2  0.6 |

p(1) = p(0)P = [0.4  0.3  0.3] P

     = [(0.4)(0.8) + (0.3)(0.1) + (0.3)(0.2),
        (0.4)(0.1) + (0.3)(0.7) + (0.3)(0.2),
        (0.4)(0.1) + (0.3)(0.2) + (0.3)(0.6)]

p(1) = [0.41  0.31  0.28]

Page 17: Markov Analysis

Predicting Future Market Shares continued

In general,

p(n) = p(0)P^n

Therefore, the state probabilities n periods in the future can be obtained from the current state probabilities and the matrix of transition probabilities.
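In code, the n-period-ahead formula uses a matrix power. A minimal sketch (illustrative):

```python
import numpy as np

P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.2, 0.6]])
p0 = np.array([0.4, 0.3, 0.3])

# p(3) = p(0) P^3: state probabilities three periods ahead
p3 = p0 @ np.linalg.matrix_power(P, 3)
print(p3)
```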

Page 18: Markov Analysis

Another Example of Markov Analysis: Machine Operations

Page 19: Markov Analysis

States and State Probabilities

For example, consider a single machine that is currently known to be functioning correctly.

The vector of state probabilities is then:

p(0) = (1, 0)

where

p(0) = vector of states for the machine in period 0

p1 = 1 = P(being in state 1) = P(machine working)

p2 = 0 = P(being in state 2) = P(machine broken)

Page 20: Markov Analysis

Markov Analysis of Machine Operations

    | 0.8  0.2 |
P = | 0.1  0.9 |

where

P11 = 0.8 = probability the machine is working this period, given it was working last period

P12 = 0.2 = probability the machine is not working this period, given it was working last period

P21 = 0.1 = probability the machine is working this period, given it was not working last period

P22 = 0.9 = probability the machine is not working this period, given it was not working last period

Page 21: Markov Analysis

Markov Analysis of Machine Operations continued

What is the probability the machine will be working next month?

p(1) = p(0)P

     = (1, 0) | 0.8  0.2 |
              | 0.1  0.9 |

     = [(1)(0.8) + (0)(0.1), (1)(0.2) + (0)(0.9)] = (0.8, 0.2)

Thus, if the machine is working this month, then there is
• an 80% chance it will be working next month and
• a 20% chance it will be broken.

Page 22: Markov Analysis

Markov Analysis of Machine Operations continued

What is the probability the machine will be working in two months?

p(2) = p(1)P

     = (0.8, 0.2) | 0.8  0.2 |
                  | 0.1  0.9 |

     = [(0.8)(0.8) + (0.2)(0.1), (0.8)(0.2) + (0.2)(0.9)] = (0.66, 0.34)

Thus, if the machine is working this month, then in two months there is
• a 66% chance it will be working and
• a 34% chance it will be broken.
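Both machine questions reduce to repeated multiplication by P. A minimal sketch (illustrative):

```python
import numpy as np

P = np.array([[0.8, 0.2],    # working -> (working, broken)
              [0.1, 0.9]])   # broken  -> (working, broken)

p = np.array([1.0, 0.0])     # the machine is working now
for month in (1, 2):
    p = p @ P
    print(f"month {month}: P(working) = {p[0]:.2f}, P(broken) = {p[1]:.2f}")
# month 1: 0.80 / 0.20;  month 2: 0.66 / 0.34
```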

Page 23: Markov Analysis

Equilibrium State and Absorbing State (only an understanding of the concepts is required; no maths required)

Page 24: Markov Analysis

Equilibrium Conditions

Equilibrium state probabilities are the long-run average probabilities for being in each state.

Equilibrium conditions exist if the state probabilities do not change after a large number of periods.

At equilibrium, the state probabilities for the next period equal the state probabilities for the current period.

Page 25: Markov Analysis

Equilibrium Conditions continued

One way to compute the equilibrium share of the market is to use Markov analysis for a large number of periods and see if the future amounts approach stable values.

On the next slide, the Markov analysis is repeated for 15 periods for the machine example.

By the 15th period, the share of time the machine spends working and broken is around 34% and 66%, respectively.

Page 26: Markov Analysis

Machine Example: Periods to Reach Equilibrium

Period   State 1 (working)   State 2 (broken)
   1        1.0                 0.0
   2         .8                  .2
   3         .66                 .34
   4         .562                .438
   5         .4934               .5066
   6         .44538              .55462
   7         .411766             .588234
   8         .388236             .611763
   9         .371765             .628234
  10         .360235             .639754
  11         .352165             .647834
  12         .346515             .653484
  13         .342560             .657439
  14         .339792             .660207
  15         .337854             .662145
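The table can be reproduced by iterating the update rule. A minimal sketch (illustrative):

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.1, 0.9]])
p = np.array([1.0, 0.0])     # period 1: machine working

for period in range(1, 16):
    print(f"period {period:2d}: working = {p[0]:.6f}, broken = {p[1]:.6f}")
    p = p @ P
# the probabilities settle toward the equilibrium values (1/3, 2/3)
```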

Page 27: Markov Analysis

The Markov Process

[Diagram: the current state p(n) is multiplied by the matrix of transition probabilities P to give the new state p(n+1); at equilibrium, the new state equals the current state.]

Page 28: Markov Analysis

Equilibrium Equations

Assume a two-state system with state probabilities p(i) = [p1, p2] and

P = | P11  P12 |
    | P21  P22 |

At equilibrium, p(n+1) = p(n), so:

[p1, p2] = [p1, p2] P

Then:

p1 = p1 P11 + p2 P21

p2 = p1 P12 + p2 P22

and, since the state probabilities sum to 1:

p1 + p2 = 1

or: p1 = 1 - p2 and p2 = 1 - p1

Page 29: Markov Analysis

Equilibrium Equations continued

It is always true that

p(next period) = p(this period) P

or

p(n+1) = p(n)P

At equilibrium, p(n+1) = p(n), so:

p(n) = p(n)P

Dropping the n term:

p = pP

Page 30: Markov Analysis

Equilibrium Equations continued

Machine Breakdown example

At equilibrium: p = pP

(p1, p2) = (p1, p2) | 0.8  0.2 |
                    | 0.1  0.9 |

Applying matrix multiplication:

(p1, p2) = [(p1)(0.8) + (p2)(0.1), (p1)(0.2) + (p2)(0.9)]

Multiplying through yields:

p1 = 0.8 p1 + 0.1 p2

p2 = 0.2 p1 + 0.9 p2

Page 31: Markov Analysis

Equilibrium Equations continued

Machine Breakdown example

The state probabilities must sum to 1, therefore: Σ pi = 1

In this example, then:

p1 + p2 = 1

In a Markov analysis, there are always n state equilibrium equations and 1 equation of state probabilities summing to 1.

Page 32: Markov Analysis

Equilibrium Equations continued

Machine Breakdown example

Summarizing the equilibrium equations:

p1 = 0.8 p1 + 0.1 p2

p2 = 0.2 p1 + 0.9 p2

p1 + p2 = 1

Solving the simultaneous equations:

p1 = 0.333333     p2 = 0.666667

Therefore, in the long run, the machine will be functioning 33.33% of the time and broken down 66.67% of the time.
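The same equilibrium can be found with a linear solver: drop one of the (redundant) equilibrium equations and add the normalization p1 + p2 = 1. A minimal sketch (illustrative):

```python
import numpy as np

# From p = pP: p1 = 0.8 p1 + 0.1 p2  =>  -0.2 p1 + 0.1 p2 = 0
# Normalization:                           p1 +      p2 = 1
A = np.array([[-0.2, 0.1],
              [ 1.0, 1.0]])
b = np.array([0.0, 1.0])

p1, p2 = np.linalg.solve(A, b)
print(p1, p2)   # 0.3333... and 0.6666...
```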

Page 33: Markov Analysis

Absorbing States

Any state that has no probability of moving to another state (once entered, it cannot be left) is called an absorbing state.

If an entity is in an absorbing state now, the probability of being in an absorbing state in the future is 100%.

An example of such a process is accounts receivable.

◦ Bills are either paid, delinquent, or written off as bad debt.

◦ Once paid or written off, the debt stays paid or written off.

Page 34: Markov Analysis

Absorbing States continued

Accounts Receivable example

The possible states are:

◦ Paid
◦ Bad debt
◦ Debt less than 1 month old
◦ Debt 1 to 3 months old

A transition matrix for this would look similar to:

        Paid   Bad    <1    1-3
Paid      1     0      0     0
Bad       0     1      0     0
<1       0.6    0     0.2   0.2
1-3      0.4   0.1    0.3   0.2

Page 35: Markov Analysis

Markov Process: Fundamental Matrix

    |  1    0   |  0    0  |
P = |  0    1   |  0    0  |
    | 0.6   0   | 0.2  0.2 |
    | 0.4  0.1  | 0.3  0.2 |

       I           0
       A           B

Partition the probability matrix into 4 quadrants to make 4 new sub-matrices: I, 0, A, and B.

Page 36: Markov Analysis

Markov Process: Fundamental Matrix continued

Let P = | I   0 |
        | A   B |

where I = identity matrix and 0 = null matrix.

Then the fundamental matrix is:

F = (I - B)^-1

Once F is found, multiply it by the A matrix: FA.

FA indicates the probability that an amount in one of the non-absorbing states will end up in one of the absorbing states.

Page 37: Markov Analysis

Markov Process: Fundamental Matrix continued

Once the FA matrix is found, premultiply it by the vector M of starting values for the non-absorbing states, giving MFA,

where

M = (M1, M2, M3, …, Mn)

The resulting vector indicates how much of the starting amounts ends up in the first absorbing state and the second absorbing state, respectively.
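Putting the accounts receivable pipeline together: F = (I - B)^-1, then FA, then MFA. A minimal sketch (illustrative; the dollar amounts in M are hypothetical example values, not from the slides):

```python
import numpy as np

# Non-absorbing states: <1 month, 1-3 months; absorbing: Paid, Bad debt
A = np.array([[0.6, 0.0],    # <1  -> (Paid, Bad)
              [0.4, 0.1]])   # 1-3 -> (Paid, Bad)
B = np.array([[0.2, 0.2],    # <1  -> (<1, 1-3)
              [0.3, 0.2]])   # 1-3 -> (<1, 1-3)

F = np.linalg.inv(np.eye(2) - B)   # fundamental matrix F = (I - B)^-1
FA = F @ A                         # absorption probabilities (rows sum to 1)

M = np.array([2000.0, 5000.0])     # hypothetical $ currently in <1 and 1-3
print(M @ FA)                      # $ eventually paid vs written off as bad
```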