
CSE 190: Reinforcement Learning: An Introduction

Chapter 4: Dynamic Programming

Acknowledgment: A good number of these slides are cribbed from Rich Sutton


Administrivia

• Any email sent to me about the course should have “CSE 190” in the subject line!


Goals for this chapter

• Overview of a collection of classical solution methods for MDPs known as dynamic programming (DP)

• Show how DP can be used to compute value functions, and hence, optimal policies

• Discuss efficiency and utility of DP


Last Time: Value Functions

• The value of a state is the expected return starting from that state; it depends on the agent's policy.

State-value function for policy π:

V^\pi(s) = E_\pi\{ R_t \mid s_t = s \} = E_\pi\Big\{ \sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \;\Big|\; s_t = s \Big\}

• The value of taking an action in a state under policy π is the expected return starting from that state, taking that action, and thereafter following π.

Action-value function for policy π:

Q^\pi(s,a) = E_\pi\{ R_t \mid s_t = s, a_t = a \} = E_\pi\Big\{ \sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \;\Big|\; s_t = s, a_t = a \Big\}


Last Time: Bellman Equation for a Policy π

The basic idea:

R_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \gamma^3 r_{t+4} + \cdots
    = r_{t+1} + \gamma ( r_{t+2} + \gamma r_{t+3} + \gamma^2 r_{t+4} + \cdots )
    = r_{t+1} + \gamma R_{t+1}

So:

V^\pi(s) = E_\pi\{ R_t \mid s_t = s \} = E_\pi\{ r_{t+1} + \gamma V^\pi(s_{t+1}) \mid s_t = s \}

Or, without the expectation operator:

V^\pi(s) = \sum_a \pi(s,a) \sum_{s'} P^a_{ss'} [ R^a_{ss'} + \gamma V^\pi(s') ]


Last Time: More on the Bellman Equation

V^\pi(s) = \sum_a \pi(s,a) \sum_{s'} P^a_{ss'} [ R^a_{ss'} + \gamma V^\pi(s') ]

This is a set of equations (in fact, linear), one for each state. The value function for π is its unique solution.

Backup diagrams: for V^π and for Q^π


Last Time: Bellman Optimality Equation for V*

The value of a state under an optimal policy must equal the expected return for the best action from that state:

V^*(s) = \max_{a \in A(s)} Q^{\pi^*}(s,a)
       = \max_{a \in A(s)} E\{ r_{t+1} + \gamma V^*(s_{t+1}) \mid s_t = s, a_t = a \}
       = \max_{a \in A(s)} \sum_{s'} P^a_{ss'} [ R^a_{ss'} + \gamma V^*(s') ]

The relevant backup diagram:

V* is the unique solution of this system of nonlinear equations.

Last Time: Bellman Optimality Equation for Q*

Q^*(s,a) = E\{ r_{t+1} + \gamma \max_{a'} Q^*(s_{t+1}, a') \mid s_t = s, a_t = a \}
         = \sum_{s'} P^a_{ss'} [ R^a_{ss'} + \gamma \max_{a'} Q^*(s', a') ]

The relevant backup diagram:

Q* is the unique solution of this system of nonlinear equations.


This Time

• How to solve these equations using iteration

• Can solve for optimal V*

• But often it is faster to evaluate and improve the policy

• Alternating between figuring out V^π and improving π


Policy Evaluation

Policy Evaluation: for a given policy π, compute the state-value function V^π.

Recall the state-value function for policy π:

V^\pi(s) = E_\pi\{ R_t \mid s_t = s \} = E_\pi\Big\{ \sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \;\Big|\; s_t = s \Big\}

and the Bellman equation for V^π:

V^\pi(s) = \sum_a \pi(s,a) \sum_{s'} P^a_{ss'} [ R^a_{ss'} + \gamma V^\pi(s') ]

— a system of |S| simultaneous linear equations
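Since the Bellman equation for V^π is linear in the state values, a small MDP can be evaluated by solving this system directly. A minimal sketch (mine, not from the slides), assuming tabular arrays P[a, s, s'] and R[a, s, s'] and a stochastic policy pi[s, a]:

```python
import numpy as np

def evaluate_policy_direct(P, R, pi, gamma):
    """Exact policy evaluation: solve (I - gamma * P_pi) V = r_pi.

    P  : (A, S, S) array, P[a, s, s2] = P^a_{s s2}
    R  : (A, S, S) array, R[a, s, s2] = R^a_{s s2}
    pi : (S, A) array, pi[s, a] = probability of taking a in s
    """
    S = P.shape[1]
    P_pi = np.einsum('sa,ast->st', pi, P)        # state-to-state transition matrix under pi
    r_pi = np.einsum('sa,ast,ast->s', pi, P, R)  # expected one-step reward under pi
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
```

Solving the system exactly costs roughly O(|S|^3), which is one reason the next slides turn to iterative methods.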


Iterative Methods

V_0 \to V_1 \to \cdots \to V_k \to V_{k+1} \to \cdots \to V^\pi

A full policy-evaluation backup:

V_{k+1}(s) \leftarrow \sum_a \pi(s,a) \sum_{s'} P^a_{ss'} [ R^a_{ss'} + \gamma V_k(s') ]

A “sweep” consists of applying this backup operation to each state.



Iterative Policy Evaluation
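The algorithm amounts to repeating the backup above until the values stop changing. A minimal sketch (mine, not from the slides), reusing the tabular P, R, and pi arrays from the exact-solution sketch, with an in-place update:

```python
import numpy as np

def iterative_policy_evaluation(P, R, pi, gamma, theta=1e-8):
    """Sweep the policy-evaluation backup over all states until the largest
    change in any state's value falls below theta."""
    A, S = P.shape[0], P.shape[1]
    V = np.zeros(S)
    while True:
        delta = 0.0
        for s in range(S):
            v = V[s]
            # V(s) <- sum_a pi(s,a) sum_s' P^a_{ss'} [ R^a_{ss'} + gamma V(s') ]
            V[s] = sum(pi[s, a] * np.sum(P[a, s] * (R[a, s] + gamma * V))
                       for a in range(A))
            delta = max(delta, abs(v - V[s]))
        if delta < theta:
            return V
```

The update here is "in place": new values are used as soon as they are computed within a sweep, which usually converges a little faster than keeping two separate arrays.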


A Small Gridworld

• An undiscounted episodic task

• Nonterminal states: 1, 2, . . ., 14;

• One terminal state (shown twice as shaded squares)

• Actions that would take the agent off the grid leave the state unchanged

• Reward is –1 until the terminal state is reached


A Small Gridworld

• Note here that the actions are deterministic, so this equation:

V_{k+1}(s) \leftarrow \sum_a \pi(s,a) \sum_{s'} P^a_{ss'} [ R^a_{ss'} + \gamma V_k(s') ]

• Becomes:

V_{k+1}(s) \leftarrow \sum_a \pi(s,a) [ R^a_{ss'} + \gamma V_k(s') ]


A Small Gridworld

• And… it is undiscounted (γ = 1), so this:

V_{k+1}(s) \leftarrow \sum_a \pi(s,a) [ R^a_{ss'} + \gamma V_k(s') ]

• Becomes:

V_{k+1}(s) \leftarrow \sum_a \pi(s,a) [ R^a_{ss'} + V_k(s') ]


A Small Gridworld

V_{k+1}(s) \leftarrow \sum_a \pi(s,a) [ R^a_{ss'} + V_k(s') ]

expands over the four actions (with s', s'', s''', s'''' the successor states for UP, RIGHT, DOWN, LEFT) to:

V_{k+1}(s) \leftarrow \pi(s,\mathrm{UP}) [ R^{\mathrm{UP}}_{ss'} + V_k(s') ]
            + \pi(s,\mathrm{RIGHT}) [ R^{\mathrm{RIGHT}}_{ss''} + V_k(s'') ]
            + \pi(s,\mathrm{DOWN}) [ R^{\mathrm{DOWN}}_{ss'''} + V_k(s''') ]
            + \pi(s,\mathrm{LEFT}) [ R^{\mathrm{LEFT}}_{ss''''} + V_k(s'''') ]


A Small Gridworld

For the equiprobable random policy, π(s,a) = 0.25 for every action, and the reward is −1 on every transition, so:

V_{k+1}(s) \leftarrow 0.25 [ -1 + V_k(s') ] + 0.25 [ -1 + V_k(s'') ] + 0.25 [ -1 + V_k(s''') ] + 0.25 [ -1 + V_k(s'''') ]


A Small Gridworld

• For state 4, for example, we have:

V_{k+1}(4) \leftarrow 0.25 [ -1 + V_k(\mathrm{terminal}) ]   (UP)
            + 0.25 [ -1 + V_k(5) ]   (RIGHT)
            + 0.25 [ -1 + V_k(8) ]   (DOWN)
            + 0.25 [ -1 + V_k(4) ]   (LEFT)





A Small Gridworld

Plugging in the values after the first sweep (0 for the terminal state, −1 for every nonterminal state):

V_{k+1}(4) \leftarrow 0.25 [ -1 + 0 ]   (UP)
            + 0.25 [ -1 + (-1) ]   (RIGHT)
            + 0.25 [ -1 + (-1) ]   (DOWN)
            + 0.25 [ -1 + (-1) ]   (LEFT)
            = -1.75


Iterative Policy Evaluation for the Small Gridworld

π = equiprobable random action choices


Iterative Policy Evaluation for the Small Gridworld

π = equiprobable random action choices

But look what happens if these values are used to make a new policy!

(Note: this won't always happen!)

Exercise for the reader: What are the values of the states under the optimal policy?
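The whole evaluation fits in a few lines of code. A minimal sketch (mine, not from the slides) of iterative policy evaluation on the 4×4 gridworld under the equiprobable random policy, with the terminal state in the two shaded corners:

```python
import numpy as np

def gridworld_policy_evaluation(sweeps=1000):
    """Evaluate the equiprobable random policy on the 4x4 gridworld:
    reward -1 per step, undiscounted, terminal corners at cells 0 and 15."""
    V = np.zeros(16)
    terminal = {0, 15}
    moves = {'UP': -4, 'DOWN': 4, 'LEFT': -1, 'RIGHT': 1}
    for _ in range(sweeps):
        V_new = np.zeros(16)
        for s in range(16):
            if s in terminal:
                continue
            total = 0.0
            for name, d in moves.items():
                s2 = s + d
                # Moves that would take the agent off the grid leave the state unchanged
                if (s2 < 0 or s2 > 15
                        or (name == 'LEFT' and s % 4 == 0)
                        or (name == 'RIGHT' and s % 4 == 3)):
                    s2 = s
                total += 0.25 * (-1 + V[s2])
            V_new[s] = total
        V = V_new
    return V.reshape(4, 4)
```

Replacing the 0.25-weighted sum with a max over the four moves turns this into value iteration and gives an answer to the exercise above.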


Policy Improvement

Suppose we have computed V^π for a deterministic policy π.

For a given state s, would it be better to do an action a ≠ π(s)?

The value of doing a in state s is:

Q^\pi(s,a) = E_\pi\{ r_{t+1} + \gamma V^\pi(s_{t+1}) \mid s_t = s, a_t = a \}
           = \sum_{s'} P^a_{ss'} [ R^a_{ss'} + \gamma V^\pi(s') ]

It is better to switch to action a for state s if and only if Q^\pi(s,a) > V^\pi(s).


Policy Improvement Cont.

Do this for all states to get a new policy π' that is greedy with respect to V^π:

\pi'(s) = \arg\max_a Q^\pi(s,a) = \arg\max_a \sum_{s'} P^a_{ss'} [ R^a_{ss'} + \gamma V^\pi(s') ]

Then V^{π'} ≥ V^π.
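A minimal sketch of this greedification step (mine, not from the slides), again using the tabular P and R arrays from the earlier sketches:

```python
import numpy as np

def greedy_policy(P, R, V, gamma):
    """Return a deterministic policy that is greedy with respect to V.

    P, R : (A, S, S) arrays of transition probabilities and expected rewards
    V    : (S,) array of state values for the current policy
    Returns an (S,) array of action indices: pi'(s) = argmax_a Q(s, a).
    """
    # Q[a, s] = sum_s' P^a_{ss'} [ R^a_{ss'} + gamma * V(s') ]
    Q = np.einsum('ast,ast->as', P, R + gamma * V[None, None, :])
    return np.argmax(Q, axis=0)
```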


Policy Improvement Cont.

What if V^{π'} = V^π?

i.e., for all s ∈ S,

V^{\pi'}(s) = \max_a \sum_{s'} P^a_{ss'} [ R^a_{ss'} + \gamma V^\pi(s') ] ?

But this is the Bellman optimality equation. So V^{π'} = V^* and both π and π' are optimal policies.


Policy Iteration

\pi_0 \to V^{\pi_0} \to \pi_1 \to V^{\pi_1} \to \cdots \to \pi^* \to V^* \to \pi^*

Each π → V step is policy evaluation; each V → π step is policy improvement (“greedification”).


Policy Iteration
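A compact sketch of the policy-iteration loop (mine, not from the slides), combining the exact-evaluation and greedification steps sketched above:

```python
import numpy as np

def policy_iteration(P, R, gamma):
    """Alternate policy evaluation and greedy policy improvement
    until the policy no longer changes."""
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)  # start from an arbitrary deterministic policy
    while True:
        # Policy evaluation: exact solve for the current deterministic policy
        P_pi = P[policy, np.arange(S)]                         # (S, S)
        r_pi = np.sum(P_pi * R[policy, np.arange(S)], axis=1)  # (S,)
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily with respect to V
        Q = np.einsum('ast,ast->as', P, R + gamma * V[None, None, :])
        new_policy = np.argmax(Q, axis=0)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy
```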


Jack’s Car Rental

• $10 for each car rented (must be available when request rec’d)

• Two locations, maximum of 20 cars at each

• Cars returned and requested randomly
  • Poisson distribution: n returns/requests with probability \frac{\lambda^n}{n!} e^{-\lambda} (where λ is the expected number; see the sketch below)

• 1st location: average requests = 3, average returns = 3

• 2nd location: average requests = 4, average returns = 2

• Can move up to 5 cars between locations overnight at $2/car.

• States, Actions, Rewards?

• Transition probabilities?
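One way to start answering the last question (my sketch, not the slides'): the Poisson probabilities above are the only ingredient needed to build the request/return distributions, e.g. for daily rental requests at the first location (λ = 3):

```python
from math import exp, factorial

def poisson(n, lam):
    """Probability of exactly n events when the expected number is lam."""
    return lam ** n / factorial(n) * exp(-lam)

# Daily rental requests at the first location (expected number = 3):
request_probs = [poisson(n, 3) for n in range(21)]
```

The transition probabilities and expected rewards for each (state, action) pair then follow by summing over the possible numbers of requests and returns at both locations.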


Jack’s Car Rental

Note that this makes sense: location 2 loses 2 cars per day on average.


Jack’s CR Exercise

• Suppose the first car moved is free
  • From the 1st to the 2nd location
  • Because an employee travels that way anyway (by bus)

• Suppose only 10 cars can be parked for free at each location
  • More than 10 cars cost $4 for using an extra parking lot

• Such arbitrary nonlinearities are common in real problems


Policy iteration: Can we do better?

• Each iteration involves policy evaluation, which is itself an iterative process

• From the previous example, it looks like policy evaluation may converge long after the greedy policy based on the values has converged

• Can we skip steps somehow?

• Yes: policy evaluation can be stopped early, and in most cases convergence is still guaranteed!

• A very special case: stopping after one sweep of policy evaluation

• This is called value iteration


Value Iteration

Recall the full policy-evaluation backup:

V_{k+1}(s) \leftarrow \sum_a \pi(s,a) \sum_{s'} P^a_{ss'} [ R^a_{ss'} + \gamma V_k(s') ]

Here is the full value-iteration backup:

V_{k+1}(s) \leftarrow \max_a \sum_{s'} P^a_{ss'} [ R^a_{ss'} + \gamma V_k(s') ]

Note how this combines policy improvement and evaluation. It is simply the Bellman optimality equation turned into an update equation! In practice, policy evaluation (sum) is often performed several times between policy improvement (max) sweeps.
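A minimal sketch of this backup as code (mine, not from the slides), using the same tabular P and R arrays as the earlier sketches:

```python
import numpy as np

def value_iteration(P, R, gamma, theta=1e-8):
    """Repeat the full value-iteration backup until the value function stops
    changing by more than theta, then return it with the greedy policy."""
    S = P.shape[1]
    V = np.zeros(S)
    while True:
        # Q[a, s] = sum_s' P^a_{ss'} [ R^a_{ss'} + gamma * V(s') ]
        Q = np.einsum('ast,ast->as', P, R + gamma * V[None, None, :])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < theta:
            return V_new, Q.argmax(axis=0)
        V = V_new
```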


Value Iteration Cont.


Gambler’s Problem

• Gambler can repeatedly bet $ on a coin flip

• Heads he wins his stake, tails he loses it

• Initial capital ∈ {$1, $2, …, $99}

• Gambler wins if his capital becomes $100, loses if it becomes $0

• Coin is unfair
  • Heads (gambler wins) with probability p = 0.4

• States, Actions, Rewards?

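The question on the slide can be answered directly with value iteration. A sketch (mine, not from the slides): states are capital levels 0–100, actions are stakes, the only nonzero reward is +1 for reaching $100, and the task is undiscounted:

```python
import numpy as np

def gamblers_problem(p_heads=0.4, goal=100, theta=1e-10):
    """Value iteration for the gambler's problem. Pinning V[goal] to 1 is
    equivalent to a reward of +1 on the transition that reaches the goal."""
    V = np.zeros(goal + 1)
    V[goal] = 1.0
    while True:
        delta = 0.0
        for s in range(1, goal):
            stakes = range(1, min(s, goal - s) + 1)
            q = [p_heads * V[s + a] + (1 - p_heads) * V[s - a] for a in stakes]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            break
    # Greedy policy: for each capital level, a stake that maximizes the value
    policy = np.zeros(goal + 1, dtype=int)
    for s in range(1, goal):
        stakes = list(range(1, min(s, goal - s) + 1))
        q = [p_heads * V[s + a] + (1 - p_heads) * V[s - a] for a in stakes]
        policy[s] = stakes[int(np.argmax(q))]
    return V, policy
```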


Gambler’s Problem Solution


Herd Management

• You are a consultant to a farmer managing a herd of cows

• Herd consists of 5 kinds of cows:
  • Young

• Milking

• Breeding

• Old

• Sick

• Number of each kind is the State

• Number sold of each kind is the Action (one possible encoding is sketched below)

• Cows transition from one kind to another

• Young cows can be born
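One possible encoding of this MDP (my sketch, not part of the slides): the state is a tuple of counts, and an action is a tuple saying how many of each kind to sell:

```python
from itertools import product

# State: number of cows of each kind (young, milking, breeding, old, sick)
state = (3, 10, 4, 2, 1)

def actions(state):
    """All legal actions: sell anywhere from 0 up to all cows of each kind."""
    return list(product(*(range(n + 1) for n in state)))

print(len(actions(state)))  # 4 * 11 * 5 * 3 * 2 = 1320 actions for this one state
```

Even this toy enumeration hints at how quickly the state and action spaces grow, which is exactly the "curse of dimensionality" discussed a few slides later.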


Asynchronous DP

• All the DP methods described so far require exhaustive sweeps of the entire state set.

• Asynchronous DP does not use sweeps. Instead it works like this (see the sketch below):
  • Repeat until a convergence criterion is met:
    • Pick a state at random and apply the appropriate backup

• Still need lots of computation, but does not get locked into hopelessly long sweeps

• Can you select states to back up intelligently? YES: an agent's experience can act as a guide.
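A minimal sketch of that loop with value-iteration-style backups (mine, not from the slides); it stops after a fixed number of backups rather than a formal convergence test:

```python
import numpy as np

def asynchronous_value_iteration(P, R, gamma, n_backups=100_000, seed=0):
    """Apply the value-iteration backup to one randomly chosen state at a
    time instead of sweeping the whole state set in order."""
    rng = np.random.default_rng(seed)
    S = P.shape[1]
    V = np.zeros(S)
    for _ in range(n_backups):
        s = rng.integers(S)
        # V(s) <- max_a sum_s' P^a_{ss'} [ R^a_{ss'} + gamma * V(s') ]
        V[s] = np.max(np.sum(P[:, s, :] * (R[:, s, :] + gamma * V), axis=1))
    return V
```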


Generalized Policy Iteration

Generalized Policy Iteration (GPI): any interaction of policy evaluation and policy improvement, independent of their granularity.

A geometric metaphor for convergence of GPI:


Efficiency of DP

• Finding an optimal policy is polynomial in the number of states…

• BUT, the number of states is often astronomical, e.g., often growing exponentially with the number of state variables (what Bellman called “the curse of dimensionality”).

• In practice, classical DP can be applied to problems with a few million states.

• Asynchronous DP can be applied to larger problems, and is appropriate for parallel computation.

• It is surprisingly easy to come up with MDPs for which DP methods are not practical.


Summary

• Policy evaluation: backups without a max

• Policy improvement: form a greedy policy, if only locally

• Policy iteration: alternate the above two processes

• Value iteration: backups with a max

• Full backups (to be contrasted later with sample backups)

• Asynchronous DP: a way to avoid exhaustive sweeps

• Generalized Policy Iteration (GPI)

• Bootstrapping: updating estimates based on other estimates

END