Chapter 4: Dynamic Programming


R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction

Objectives of this chapter:

Overview of a collection of classical solution methods for MDPs known as dynamic programming (DP)

Show how DP can be used to compute value functions, and hence, optimal policies

Discuss efficiency and utility of DP


Policy Evaluation

Policy evaluation: for a given policy $\pi$, compute the state-value function $V^{\pi}$.

Recall the state-value function for policy $\pi$:

$$V^{\pi}(s) = E_{\pi}\{ R_t \mid s_t = s \} = E_{\pi}\left\{ \sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \,\middle|\, s_t = s \right\}$$

and the Bellman equation for $V^{\pi}$:

$$V^{\pi}(s) = \sum_a \pi(s,a) \sum_{s'} P_{ss'}^{a}\left[ R_{ss'}^{a} + \gamma V^{\pi}(s') \right]$$

This is a system of $|S|$ simultaneous linear equations.
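Since the Bellman equation is linear in the values, a small MDP can be evaluated with a direct linear solve rather than iteration. A minimal sketch, assuming NumPy and two hypothetical dense arrays: P_pi, the $|S| \times |S|$ state-transition matrix under $\pi$, and R_pi, the expected one-step rewards under $\pi$ (these names are not from the slides).

```python
import numpy as np

# Direct solution of the Bellman linear system for a fixed policy pi:
#   V = R_pi + gamma * P_pi @ V   =>   (I - gamma * P_pi) V = R_pi
# P_pi (|S| x |S|) and R_pi (|S|) are hypothetical dense arrays holding the
# state-transition probabilities and expected rewards under pi.

def solve_bellman(P_pi: np.ndarray, R_pi: np.ndarray, gamma: float = 0.9) -> np.ndarray:
    n = P_pi.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * P_pi, R_pi)
```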


Iterative Methods

$$V_0 \to V_1 \to \cdots \to V_k \to V_{k+1} \to \cdots \to V^{\pi}$$

Each arrow is a "sweep": a sweep consists of applying a backup operation to each state.

A full policy evaluation backup:

$$V_{k+1}(s) \leftarrow \sum_a \pi(s,a) \sum_{s'} P_{ss'}^{a}\left[ R_{ss'}^{a} + \gamma V_k(s') \right]$$


Iterative Policy Evaluation
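The book's boxed iterative policy evaluation algorithm is not reproduced in this transcript. Below is a minimal Python sketch of the procedure, assuming a tabular MDP given as a dict P[s][a] of (probability, next state, reward) triples and a stochastic policy given as action probabilities policy[s][a]; both representations are hypothetical, not notation from the slides.

```python
# Iterative policy evaluation, sweeping all states until the largest change
# falls below a small threshold theta. The MDP is assumed to be given as
# P[s][a] = [(probability, next_state, reward), ...] and the policy as
# policy[s][a] = probability of taking action a in state s.

def policy_evaluation(P, policy, gamma=0.9, theta=1e-6):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:                       # one sweep over the state set
            v_new = 0.0
            for a, pi_sa in policy[s].items():
                for prob, s2, r in P[s][a]:
                    v_new += pi_sa * prob * (r + gamma * V[s2])
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new                  # "in place" update
        if delta < theta:
            return V
```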


A Small Gridworld

An undiscounted episodic task.

Nonterminal states: 1, 2, ..., 14; one terminal state (shown twice as shaded squares).

Actions that would take the agent off the grid leave the state unchanged.

Reward is -1 until the terminal state is reached.


Iterative Policy Evaluation for the Small Gridworld

Random (uniform) action choices.
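The slide's figure (not reproduced in the transcript) shows the sequence of value functions $V_k$ under the equiprobable random policy. A sketch that reproduces this computation with the policy_evaluation helper above, using a hypothetical encoding of the 4x4 grid:

```python
# A hypothetical encoding of the 4x4 gridworld: states 0..15 with 0 as the
# single (merged) terminal state, reward -1 on every step, and moves that
# would leave the grid keeping the state unchanged.

def gridworld_mdp(size=4):
    moves = {"up": -size, "down": size, "left": -1, "right": 1}
    P = {0: {"stay": [(1.0, 0, 0.0)]}}    # absorbing terminal state
    for s in range(1, size * size - 1):   # nonterminal states 1..14
        P[s] = {}
        for a, step in moves.items():
            s2 = s + step
            off_grid = (
                (a == "left" and s % size == 0)
                or (a == "right" and s % size == size - 1)
                or not (0 <= s2 < size * size)
            )
            if off_grid:
                s2 = s
            if s2 == size * size - 1:     # both shaded squares are one terminal
                s2 = 0
            P[s][a] = [(1.0, s2, -1.0)]
    return P

# Equiprobable random policy, evaluated without discounting (gamma = 1):
# P = gridworld_mdp()
# random_policy = {s: {a: 1.0 / len(P[s]) for a in P[s]} for s in P}
# V = policy_evaluation(P, random_policy, gamma=1.0)
```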


Policy Improvement

Suppose we have computed $V^{\pi}$ for a deterministic policy $\pi$.

For a given state $s$, would it be better to do an action $a \neq \pi(s)$?

The value of doing $a$ in state $s$ is:

$$Q^{\pi}(s,a) = E\left\{ r_{t+1} + \gamma V^{\pi}(s_{t+1}) \mid s_t = s, a_t = a \right\} = \sum_{s'} P_{ss'}^{a}\left[ R_{ss'}^{a} + \gamma V^{\pi}(s') \right]$$

It is better to switch to action $a$ for state $s$ if and only if

$$Q^{\pi}(s,a) > V^{\pi}(s).$$


Policy Improvement Cont.

Do this for all states to get a new policy $\pi'$ that is greedy with respect to $V^{\pi}$:

$$\pi'(s) = \arg\max_a Q^{\pi}(s,a) = \arg\max_a \sum_{s'} P_{ss'}^{a}\left[ R_{ss'}^{a} + \gamma V^{\pi}(s') \right]$$

Then $V^{\pi'} \geq V^{\pi}$.
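A minimal sketch of this greedy improvement step, reusing the same hypothetical P[s][a] = [(probability, next_state, reward), ...] model as the earlier sketch:

```python
# Greedy policy improvement with respect to a value function V.

def greedy_policy(P, V, gamma=0.9):
    policy = {}
    for s in P:
        q = {
            a: sum(prob * (r + gamma * V[s2]) for prob, s2, r in P[s][a])
            for a in P[s]
        }
        policy[s] = max(q, key=q.get)     # argmax_a Q(s, a)
    return policy
```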


Policy Improvement Cont.

What if $V^{\pi'} = V^{\pi}$?

i.e., for all $s \in S$,

$$V^{\pi}(s) = \max_a \sum_{s'} P_{ss'}^{a}\left[ R_{ss'}^{a} + \gamma V^{\pi}(s') \right]?$$

But this is the Bellman optimality equation.

So $V^{\pi'} = V^{\pi} = V^{*}$, and both $\pi$ and $\pi'$ are optimal policies.


Policy Iteration

$$\pi_0 \to V^{\pi_0} \to \pi_1 \to V^{\pi_1} \to \cdots \to \pi^{*} \to V^{*} \to \pi^{*}$$

Each step alternates policy evaluation with policy improvement ("greedification").


Policy Iteration
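The boxed policy iteration algorithm from the original slide is not reproduced here. A compact sketch that combines the policy_evaluation and greedy_policy helpers above, under the same hypothetical MDP representation:

```python
# Policy iteration: evaluate the current deterministic policy, then replace it
# with the greedy policy, stopping when the policy no longer changes.

def policy_iteration(P, gamma=0.9):
    policy = {s: next(iter(P[s])) for s in P}       # arbitrary initial policy
    while True:
        as_probs = {s: {policy[s]: 1.0} for s in P} # deterministic -> stochastic
        V = policy_evaluation(P, as_probs, gamma)
        improved = greedy_policy(P, V, gamma)
        if improved == policy:                      # policy stable: done
            return policy, V
        policy = improved
```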


Jack's Car Rental

Jack owns two locations for car rental.

Customers come to both locations at random with a Poisson distribution: the probability that the number of requests or returns is $n$ is $\frac{\lambda^n}{n!} e^{-\lambda}$.

The expected numbers are $\lambda$ = 3 and 4 for rentals and 3 and 2 for returns at the two locations.

He gets $10 for each car rented.

Moving a car from one location to the other costs $2.

Maximum of 20 cars at each location.

A maximum of 5 cars can be moved overnight.

Formulate as an MDP with discount factor $\gamma$ = 0.9:

States are the numbers of cars at each location.

Actions are the numbers of cars moved each night.

The starting policy never moves any cars.


Jack's Car Rental


Value Iteration

Recall the full policy evaluation backup:

$$V_{k+1}(s) \leftarrow \sum_a \pi(s,a) \sum_{s'} P_{ss'}^{a}\left[ R_{ss'}^{a} + \gamma V_k(s') \right]$$

Here is the full value iteration backup:

$$V_{k+1}(s) \leftarrow \max_a \sum_{s'} P_{ss'}^{a}\left[ R_{ss'}^{a} + \gamma V_k(s') \right]$$


Value Iteration Cont.
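The algorithm box from the original slide is not reproduced in this transcript. A minimal sketch of the value iteration loop, again assuming the hypothetical P[s][a] model and reusing greedy_policy to read off a final policy:

```python
# Value iteration: repeatedly apply the max backup to every state until the
# largest change falls below theta, then return a greedy policy.

def value_iteration(P, gamma=0.9, theta=1e-6):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            best = max(
                sum(prob * (r + gamma * V[s2]) for prob, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            return V, greedy_policy(P, V, gamma)
```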


Gambler’s Problem

A gambler bets on a sequence of coin flips; if the coin comes up heads, he wins as much as he staked.

The game ends when he reaches $100 or loses all his money.

The state is his capital, $s \in \{1, 2, \ldots, 99\}$, and the actions are his bets, $a \in \{1, 2, \ldots, \min(s, 100 - s)\}$.

The optimal policy maximizes the probability of reaching the goal.

If the probability of the coin coming up heads is p = 0.4, the optimal policy is as shown.
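As a concrete illustration, here is a hypothetical encoding of the gambler's problem in the same P[s][a] form, solvable with the value_iteration sketch above (undiscounted, so gamma = 1):

```python
# The gambler's problem as a P[s][a] model: capital 1..99, bets up to
# min(s, 100 - s), reward +1 only on the transition that reaches the goal.
# Both "broke" and "goal reached" are merged into absorbing state 0.

def gambler_mdp(p_heads=0.4, goal=100):
    P = {0: {0: [(1.0, 0, 0.0)]}}                    # absorbing terminal
    for s in range(1, goal):
        P[s] = {}
        for a in range(1, min(s, goal - s) + 1):
            win, lose = s + a, s - a
            P[s][a] = [
                (p_heads, 0 if win == goal else win, 1.0 if win == goal else 0.0),
                (1.0 - p_heads, lose, 0.0),
            ]
    return P

# V, policy = value_iteration(gambler_mdp(), gamma=1.0)
```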


Asynchronous DP

All the DP methods described so far require exhaustive sweeps of the entire state set.

Asynchronous DP does not use sweeps. Instead it works like this:

Repeat until convergence criterion is met:

– Pick a state at random and apply the appropriate backup

Still need lots of computation, but does not get locked into hopelessly long sweeps

Can you select states to back up intelligently? YES: an agent's experience can act as a guide.
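A minimal sketch of such a random-order backup scheme (here a value-iteration-style max backup), with a fixed backup budget standing in for a real convergence test and the same hypothetical P[s][a] model as above:

```python
import random

# Asynchronous DP sketch: back up one randomly chosen state at a time
# instead of sweeping the whole state set.

def asynchronous_value_iteration(P, gamma=0.9, n_backups=100_000):
    V = {s: 0.0 for s in P}
    states = list(P)
    for _ in range(n_backups):
        s = random.choice(states)
        V[s] = max(
            sum(prob * (r + gamma * V[s2]) for prob, s2, r in P[s][a])
            for a in P[s]
        )
    return V
```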


Generalized Policy Iteration

Generalized Policy Iteration (GPI): any interaction of policy evaluation and policy improvement, independent of their granularity.

A geometric metaphor for convergence of GPI:


Efficiency of DP

Finding an optimal policy is polynomial in the number of states…

BUT, the number of states is often astronomical, e.g., often growing exponentially with the number of state variables (what Bellman called “the curse of dimensionality”).

In practice, classical DP can be applied to problems with a few million states.

Asynchronous DP can be applied to larger problems, and is appropriate for parallel computation.

It is surprisingly easy to come up with MDPs for which DP methods are not practical.


Summary

Policy evaluation: backups without a max

Policy improvement: form a greedy policy, if only locally

Policy iteration: alternate the above two processes

Value iteration: backups with a max

Full backups (to be contrasted later with sample backups)

Generalized Policy Iteration (GPI)

Asynchronous DP: a way to avoid exhaustive sweeps

Bootstrapping: updating estimates based on other estimates
