
Algorithmic Trading: Reinforcement Learning

Sebastian Jaimungal, University of Toronto

many thanks to Álvaro Cartea, Oxford

Apr, 2017

(c) Cartea & Jaimungal, 2016


Reinforcement Learning – Intro

- Reinforcement learning is unsupervised – based only on the rewards from actions & how the system reacts
- As in continuous-time stochastic control, the actions affect the reward and the system
- Can be model-free
- The goal is to maximize the performance criterion
  $$H^a(s) = \mathbb{E}\left[\, \sum_{k=0}^{\infty} \gamma^k\, R(a_k;\, S^a_k,\, S^a_{k+1}) \,\middle|\, S^a_t = s \right]$$
- $S_t \in \mathcal{S}$ is the state of the system at time $t$
- $a \in \mathcal{A}$, the admissible set of actions, which depend only on the state of the system
- The system evolves in an action-dependent manner: $S^a_{t+1} \sim F(S_t;\, a_t)$


Figure: Directed graphical representation of the state process $S_k$, the processes $X_k$ and $Y_k$, the actions $a_k$ and the rewards $R_{k+1}$. When $Y_t = X_t$ the environment is fully observed.


- Reinforcement learning aims to discover the best policy by:
  - Exploration – try an action, see what the response is, and update the action choices (learn from the environment)
  - Exploitation – use what you already know to make the best action choice
- Both are important!


- A Bellman principle applies and one can show that $H = H^{a^*}$ satisfies
  $$H(s) = \max_{a\in\mathcal{A}} \mathbb{E}\left[\, R(a;\, s,\, S^{s,a}_1) + \gamma\, H(S^{s,a}_1) \right]$$
- With known transition probabilities of the states, this can be applied recursively to find $H$ and hence $a^*$:
  $$H(s) \leftarrow \max_{a\in\mathcal{A}} \sum_{s'} P^a_{ss'}\left\{ R(a;\, s,\, s') + \gamma\, H(s') \right\}$$
  - Make an initial assumption on $H$ – e.g., zeros
  - Iterate until "converged"
  - Choose the actions which maximize the expression (a sketch in code follows below)
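A minimal sketch of this recursion (value iteration) in Python, assuming the transition probabilities and rewards are known and stored as NumPy arrays `P[a, s, s']` and `R[a, s, s']`; the array layout, tolerance and iteration cap are illustrative assumptions, not from the slides:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8, max_iter=10_000):
    """Bellman recursion H(s) <- max_a sum_{s'} P[a,s,s'] * (R[a,s,s'] + gamma * H(s')).

    P, R : arrays of shape (n_actions, n_states, n_states) holding the (known)
           transition probabilities and the rewards R(a; s, s').
    """
    n_actions, n_states, _ = P.shape
    H = np.zeros(n_states)                                    # initial assumption on H: zeros
    for _ in range(max_iter):
        # Q[a, s] = sum_{s'} P[a,s,s'] * (R[a,s,s'] + gamma * H(s'))
        Q = (P * (R + gamma * H[None, None, :])).sum(axis=2)
        H_new = Q.max(axis=0)
        if np.max(np.abs(H_new - H)) < tol:                   # "converged"
            H = H_new
            break
        H = H_new
    return H, Q.argmax(axis=0)                                # value function and greedy policy a*(s)
```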


Q-learning


- Q-learning is an "off-policy" learning method.
  [NB: "on-policy" means the algorithm estimates the value function of the policy which generates the data.]
- Define
  $$Q(s, a) = \mathbb{E}\left[\, R(a;\, s,\, S^{s,a}_1) + \gamma\, H(S^{s,a}_1) \right] = \mathbb{E}\left[\, R(a;\, s,\, S^{s,a}_1) + \gamma \max_{a'\in\mathcal{A}} Q(S^{s,a}_1,\, a') \right]$$
  because $H(s) = \max_{a\in\mathcal{A}} Q(s, a)$


- We wish to approximate the expectation from actual observations while we learn...
- The algorithm is ε-greedy: at iteration $k$
  - Select a random action $a_k$ with probability $\varepsilon_k$ (Explore)
  - Otherwise select the current best policy (Exploit):
    $$a^*(s) = \arg\max_{a\in\mathcal{A}} Q(s, a)$$
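A minimal sketch of the ε-greedy selection rule, assuming a tabular Q stored as a NumPy array of shape `(n_states, n_actions)`; the array layout and the `rng` argument are assumptions for illustration:

```python
import numpy as np

def epsilon_greedy(Q, s, eps, rng=np.random.default_rng()):
    """With probability eps explore (uniform random action); otherwise exploit argmax_a Q(s, a)."""
    n_actions = Q.shape[1]
    if rng.random() < eps:
        return int(rng.integers(n_actions))   # Explore
    return int(np.argmax(Q[s]))               # Exploit: current best action at state s
```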


1. Initialize $Q(s, a)$ (randomly, or often just to zero)
2. Repeat (for every run)
3. Initialize state $s$
4. Repeat (for each step in the run)
5. Select the ε-greedy action $a$
6. Take action $a$, observe $s'$ & reward $R$
7. Update $Q$ according to
   $$Q(s, a) \leftarrow (1 - \alpha_k)\, Q(s, a) + \alpha_k \left[ R + \gamma \max_{a'\in\mathcal{A}} Q(s', a') \right]$$
8. Update $s \leftarrow s'$
9. Go to 5 until the run is done
10. Go to 3 until all runs are done (a sketch in code follows below)
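A minimal tabular Q-learning sketch following these steps, assuming a generic environment object `env` with `reset()` and `step(a)` methods, and using the decaying schedules $\varepsilon_k = A/(B+k)$, $\alpha_k = C/(D+k)$ from the next slide; the environment interface, episode count and constants are illustrative assumptions:

```python
import numpy as np

def q_learning(env, n_states, n_actions, n_runs=500, gamma=0.9,
               A=1.0, B=10.0, C=1.0, D=10.0, rng=np.random.default_rng(0)):
    """Tabular Q-learning with epsilon-greedy exploration and decaying step sizes."""
    Q = np.zeros((n_states, n_actions))          # 1. initialize Q to zero
    k = 0                                        # global iteration counter
    for _ in range(n_runs):                      # 2. for every run
        s, done = env.reset(), False             # 3. initialize state s
        while not done:                          # 4. for each step in the run
            eps, alpha = A / (B + k), C / (D + k)
            if rng.random() < eps:               # 5. epsilon-greedy action
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(Q[s]))
            s_next, R, done = env.step(a)        # 6. take action a, observe s' and R
            # 7. Q(s,a) <- (1 - alpha) Q(s,a) + alpha [R + gamma max_a' Q(s',a')]
            Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (R + gamma * np.max(Q[s_next]))
            s = s_next                           # 8. update s <- s'
            k += 1
    return Q
```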


- Require decreasing $\alpha_k \to 0$ such that
  $$\sum_k \alpha_k = +\infty \quad \text{and} \quad \sum_k \alpha_k^2 < +\infty$$
- Often
  $$\varepsilon_k = \frac{A}{B + k} \quad \text{and} \quad \alpha_k = \frac{C}{D + k}$$
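In code, these decaying schedules might look as follows; the constants $A$, $B$, $C$, $D$ are tuning parameters and the values below are placeholders:

```python
def schedules(k, A=1.0, B=10.0, C=1.0, D=10.0):
    """Exploration and learning-rate schedules: eps_k = A/(B+k), alpha_k = C/(D+k)."""
    return A / (B + k), C / (D + k)
```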


- The updating rule is akin to incrementally estimating the mean $\mu = \mathbb{E}[X]$ of a r.v. $X$ from its samples $x = \{x_1, \ldots, x_n\}$:
  $$\mu_k = \frac{1}{k} \sum_{i=1}^{k} x_i = \mu_{k-1} + \tfrac{1}{k}\left( x_k - \mu_{k-1} \right)$$
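A quick numerical check of this identity (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)

mu = 0.0
for k, xk in enumerate(x, start=1):
    mu += (xk - mu) / k              # mu_k = mu_{k-1} + (1/k) (x_k - mu_{k-1})

assert np.isclose(mu, x.mean())      # matches the batch sample mean
```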


Q-learning convergence


- Here, we look at why Q-learning converges to the optimal solution
- Note first that the operator $\mathcal{B}$, which acts as follows,
  $$(\mathcal{B}Q)(s, a) = \mathbb{E}\left[\, R(a;\, s,\, S^{s,a}_1) + \gamma \max_{a'\in\mathcal{A}} Q(S^{s,a}_1,\, a') \right]$$
  is a contraction operator in the $L^\infty$-norm, i.e.
  $$\|\mathcal{B}Q_1 - \mathcal{B}Q_2\|_\infty \le \gamma\, \|Q_1 - Q_2\|_\infty$$


Proof:
$$\begin{aligned}
\|\mathcal{B}Q_1 - \mathcal{B}Q_2\|_\infty
&= \gamma \max_{s,a} \left| \mathbb{E}\left[\, \max_{a'\in\mathcal{A}} Q_1(S^{s,a}_1, a') - \max_{a'\in\mathcal{A}} Q_2(S^{s,a}_1, a') \right] \right| \\
&\le \gamma \max_{s,a} \mathbb{E}\left[\, \left| \max_{a'\in\mathcal{A}} Q_1(S^{s,a}_1, a') - \max_{a'\in\mathcal{A}} Q_2(S^{s,a}_1, a') \right| \right] \\
&\le \gamma \max_{s,a} \mathbb{E}\left[\, \max_{s'\in\mathcal{S},\, a'\in\mathcal{A}} \left| Q_1(s', a') - Q_2(s', a') \right| \right] \\
&= \gamma\, \|Q_1 - Q_2\|_\infty
\end{aligned}$$

Hence, there is a chance the procedure converges... but we need more...


To illustrate that Q-learning converges to the optimal we will need a general stochastic approximation result...

Theorem. An iterative process
$$\zeta_{k+1}(x) = \left(1 - \alpha_k(x)\right) \zeta_k(x) + \beta_k(x)\, F_k(x)$$
converges to zero a.s. under the following assumptions:

1. $\sum_k \alpha_k = \infty$, $\sum_k \alpha_k^2 < \infty$, $\sum_k \beta_k^2 < \infty$ and $\mathbb{E}[\beta_k(x) \mid \mathcal{F}_k] \le \mathbb{E}[\alpha_k(x) \mid \mathcal{F}_k]$ uniformly a.s.
2. $\left\| \mathbb{E}[F_k(x) \mid \mathcal{F}_k, \beta_k] \right\|_W \le \delta\, \|\zeta_k\|_W$, for some $\delta \in (0, 1)$
3. $\mathbb{V}[F_k(x) \mid \mathcal{F}_k, \beta_k] \le C\left(1 + \|\zeta_k\|_W\right)^2$

Here, $\|\cdot\|_W$ denotes a weighted norm.


Next, set
$$\Delta_k(s, a) = Q_k(s, a) - Q^*(s, a)$$
where $Q_k$ is the $k$-th iteration, i.e., $Q_k = \mathcal{B}^k Q_0$, and
$$\mathcal{B}Q_k = (1 - \alpha_k)\, Q_k(s, a) + \alpha_k \left[ R_k + \gamma \max_{a'\in\mathcal{A}} Q_k(s', a') \right]$$


For Q-learning, by definition, we then have
$$\Delta_{k+1}(s_k, a_k) = (1 - \alpha_k)\, \Delta_k(s_k, a_k) + \alpha_k \underbrace{\left[\, R_k + \gamma \max_{a'\in\mathcal{A}} Q_k\!\left(S^{s_k, a_k}_{k+1},\, a'\right) - Q^*(s_k, a_k) \right]}_{\Psi_k(s_k,\, a_k)}$$
which has exactly the form of the iterative process in the theorem, with $\zeta_k = \Delta_k$ and $F_k = \Psi_k$.


- Writing
  $$\Psi_k(s, a) := R^{s,a}_k + \gamma \max_{a'\in\mathcal{A}} Q_k\!\left(S^{s,a}_{k+1},\, a'\right) - Q^*(s, a)$$
  then
  $$\mathbb{E}[\Psi_k(s, a) \mid \mathcal{F}_k] = (\mathcal{B}Q_k)(s, a) - Q^*(s, a)$$
  and since $Q^*$ is a fixed point of $\mathcal{B}$,
  $$\mathbb{E}[\Psi_k(s, a) \mid \mathcal{F}_k] = (\mathcal{B}Q_k - \mathcal{B}Q^*)(s, a)$$
  so that
  $$\left\| \mathbb{E}[\Psi_k(s, a) \mid \mathcal{F}_k] \right\|_\infty \le \gamma\, \|Q_k - Q^*\|_\infty = \gamma\, \|\Delta_k\|_\infty,$$
  so part 2) of the general SA result holds (with $\delta = \gamma$)


Next, we need the variance to be bounded...

$$\begin{aligned}
\mathbb{V}[\Psi_k(s, a) \mid \mathcal{F}_k]
&= \mathbb{V}\!\left[\, R^{s,a}_k + \gamma \max_{a'\in\mathcal{A}} Q_k\!\left(S^{s,a}_{k+1},\, a'\right) \,\middle|\, \mathcal{F}_k \right] \\
&= \mathbb{V}\!\left[\, R^{s,a}_k \,\middle|\, \mathcal{F}_k \right]
 + 2\gamma\, \mathbb{C}\!\left[\, R^{s,a}_k,\; \max_{a'\in\mathcal{A}} Q_k\!\left(S^{s,a}_{k+1},\, a'\right) \,\middle|\, \mathcal{F}_k \right]
 + \gamma^2\, \mathbb{V}\!\left[\, \max_{a'\in\mathcal{A}} Q_k\!\left(S^{s,a}_{k+1},\, a'\right) \,\middle|\, \mathcal{F}_k \right] \\
&\le C\left(1 + \|\zeta_k\|_W\right)^2
\end{aligned}$$

Under the assumption of bounded rewards, the variance constraint holds.


Dyna-Q Learning


- The idea is to combine experience and a model
- Update $Q$ from experience (ε-greedy)
- Learn a model from experience
- Simulate from the model, and update $Q$


1. Initialize $Q(s, a)$ (randomly, or often just to zero)
2. Initialize state $s$
3. Select the ε-greedy action $a$ from $Q$
4. Take action $a$, observe $s'$ & reward $R$
5. Update $Q$ according to
   $$Q(s, a) \leftarrow (1 - \alpha_k)\, Q(s, a) + \alpha_k \left[ R + \gamma \max_{a'\in\mathcal{A}} Q(s', a') \right]$$
6. Update the model: record $(s, a) \mapsto (s', r)$. Then repeat $n$ times:
   - randomly select $s$ from the previously visited states
   - randomly select $a$ from the actions previously taken at state $s$
   - use the model to get $(s, a) \mapsto (s', r)$
   - update $Q$ according to
     $$Q(s, a) \leftarrow (1 - \alpha_k)\, Q(s, a) + \alpha_k \left[ r + \gamma \max_{a'\in\mathcal{A}} Q(s', a') \right]$$
7. Update $s \leftarrow s'$
8. Go to 3 (a sketch in code follows below)
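A minimal sketch of these steps with a deterministic tabular model, again assuming a Gym-style `env` with `reset()` and `step(a)`; the environment interface, the number of planning steps and the constant step size are illustrative assumptions:

```python
import numpy as np

def dyna_q(env, n_states, n_actions, n_steps=5_000, n_plan=10,
           gamma=0.9, alpha=0.1, eps=0.1, rng=np.random.default_rng(0)):
    """Tabular Dyna-Q: update Q from real transitions and from a learned (deterministic) model."""
    Q = np.zeros((n_states, n_actions))       # 1. initialize Q to zero
    model = {}                                # learned model: (s, a) -> (s', r)
    s = env.reset()                           # 2. initialize state s
    for _ in range(n_steps):
        # 3. epsilon-greedy action from Q
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, R, done = env.step(a)         # 4. take action a, observe s' and R
        # 5. Q-learning update from real experience
        Q[s, a] += alpha * (R + gamma * np.max(Q[s_next]) - Q[s, a])
        model[(s, a)] = (s_next, R)           # 6. update the model ...
        visited = list(model.keys())
        for _ in range(n_plan):               # ... then plan with n simulated transitions
            sp, ap = visited[rng.integers(len(visited))]   # previously visited (s, a)
            sp_next, r = model[(sp, ap)]                   # simulated transition from the model
            Q[sp, ap] += alpha * (r + gamma * np.max(Q[sp_next]) - Q[sp, ap])
        s = env.reset() if done else s_next   # 7./8. continue from s', restarting when an episode ends
    return Q
```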


A mean-reverting asset

[Figure: Dyna-Q applied to a simulated mean-reverting asset – price paths (levels roughly 9.5–11) and the learned actions (+1, −1, 0) as a function of the state.]


An execution strategy

[Figure: the learned execution strategy – sample inventory and price paths (prices roughly 9.5–10.5) over the trading horizon.]


[Figure: histogram summarizing the simulated performance of the learned execution strategy (prices roughly 9.9–10.1).]
