Real Time Learning

Posted on 10-May-2015


DESCRIPTION

A talk about real-time learning, especially using Bayesian bandits

Transcript

©MapR Technologies - Confidential

Real-time Learning


Contact:
– tdunning@maprtech.com
– @ted_dunning

Slides and such (available late tonight):
– http://slideshare.net/tdunning

Hash tags: #mapr #hivedata


We have a product to sell … from a web-site


What picture?

What tag-line?

What call to action?


The Challenge

Design decisions affect probability of success
– Cheesy web-sites don’t even sell cheese

The best designers do better when allowed to fail
– Exploration juices creativity

But failing is expensive
– If only because we could have succeeded
– But also because offending or disappointing customers is bad


More Challenges

Too many designs
– 5 pictures
– 10 tag-lines
– 4 calls to action
– 3 background colors
=> 5 x 10 x 4 x 3 = 600 designs
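The blow-up is easy to check by enumerating the combinations; a quick sketch (Python here for illustration, counts taken from the slide):

```python
from itertools import product

# Enumerate every combination of the design choices listed above.
pictures, tag_lines, calls_to_action, colors = range(5), range(10), range(4), range(3)
designs = list(product(pictures, tag_lines, calls_to_action, colors))
print(len(designs))  # 600
```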

It gets worse quickly
– What about changes on the back-end?
– Search engine variants?
– Checkout process variants?


Example – AB testing in real-time

I have 15 versions of my landing page
Each visitor is assigned to a version
– Which version?

A conversion or sale or whatever can happen
– How long to wait?

Some versions of the landing page are horrible
– Don’t want to give them traffic


A Quick Diversion

You see a coin
– What is the probability of heads?
– Could it be larger or smaller than that?

I flip the coin and, while it is in the air, ask again
I catch the coin and ask again
I look at the coin (and you don’t) and ask again
Why does the answer change?
– And did it ever have a single value?


A Philosophical Conclusion

Probability as expressed by humans is subjective and depends on information and experience


I Dunno


5 heads out of 10 throws


2 heads out of 12 throws
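The two coin examples above have a closed-form Bayesian answer: with a uniform Beta(1, 1) prior, h heads in n throws gives a Beta(h + 1, n − h + 1) posterior. A minimal sketch (Python here for illustration):

```python
# Posterior mean of the heads probability under a uniform Beta(1, 1) prior:
# h heads in n throws -> Beta(h + 1, n - h + 1), whose mean is (h + 1) / (n + 2).
def posterior_mean(h, n):
    return (h + 1) / (n + 2)

print(posterior_mean(5, 10))  # 0.5     -- 5 heads out of 10 throws
print(posterior_mean(2, 12))  # ~0.214  -- 2 heads out of 12 throws
```

Note how 2 heads out of 12 pulls the estimate well below one half, while 5 out of 10 leaves it there but with more certainty than before any throws.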


So now you understand Bayesian probability


Another Quick Diversion

Let’s play a shell game
– This is a special shell game
– It costs you nothing to play
– The pea has a constant probability of being under each shell (trust me)

How do you find the best shell?
How do you find it while maximizing the number of wins?


Pause for short con-game


Interim Thoughts

Can you identify winners or losers without trying them out?

Can you ever completely eliminate a shell with a bad streak?

Should you keep trying apparent losers?


Pause for second con-game


So now you understand multi-armed bandits


Conclusions

Can you identify winners or losers without trying them out? No.

Can you ever completely eliminate a shell with a bad streak? No.

Should you keep trying apparent losers? Yes, but at a decreasing rate.


Is there an optimum strategy?


Bayesian Bandit

Compute distributions based on the data so far
Sample p1, p2 and p3 from these distributions

Pick shell i where i = argmax_i p_i

Lemma 1: The probability of picking shell i will match the probability it is the best shell

Lemma 2: This is as good as it gets
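The rule above — draw one rate from each shell’s posterior and pick the largest draw — is Thompson sampling, and it fits in a few lines. A sketch (Python for illustration; the deck’s own code is R, and these names are mine):

```python
import random

def select(wins, losses):
    """Thompson sampling: draw one rate from each shell's Beta posterior
    and pick the shell with the largest draw."""
    samples = [random.betavariate(w + 1, l + 1) for w, l in zip(wins, losses)]
    return samples.index(max(samples))
```

A shell with a strong record usually produces the largest draw, but a shell with little data has a wide posterior and still wins sometimes — exploration and exploitation fall out of the same step.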


And it works!


Video Demo


The Code

Select an alternative

Select and learn

But we already know how to count!

select <- function(k) {
  # k has one row per alternative; assuming column 1 counts failures
  # and column 2 counts successes
  n <- dim(k)[1]
  p0 <- rep(0, length.out = n)
  for (i in 1:n) {
    # draw one conversion rate from each alternative's Beta posterior
    p0[i] <- rbeta(1, k[i, 2] + 1, k[i, 1] + 1)
  }
  return(which(p0 == max(p0)))
}

learn <- function(k, steps) {
  for (z in 1:steps) {
    i <- select(k)          # pick an alternative by sampling
    j <- test(i)            # observe the outcome for alternative i
    k[i, j] <- k[i, j] + 1  # count it
  }
  return(k)
}
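The select-and-learn loop above can also be run as a small simulation (Python for illustration; the conversion rates below are made up) to see traffic concentrate on the best alternative while the others are still probed occasionally:

```python
import random

def select(wins, losses):
    # One posterior draw per alternative; pick the argmax (Thompson sampling).
    samples = [random.betavariate(w + 1, l + 1) for w, l in zip(wins, losses)]
    return samples.index(max(samples))

def learn(true_rates, steps):
    wins = [0] * len(true_rates)
    losses = [0] * len(true_rates)
    for _ in range(steps):
        i = select(wins, losses)
        if random.random() < true_rates[i]:  # simulated conversion
            wins[i] += 1
        else:
            losses[i] += 1
    return wins, losses

random.seed(1)
wins, losses = learn([0.05, 0.10, 0.20], 2000)
trials = [w + l for w, l in zip(wins, losses)]
print(trials)  # most traffic ends up on the 0.20 alternative
```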


The Basic Idea

We can encode a distribution by sampling
Sampling allows unification of exploration and exploitation

Can be extended to more general response models
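Encoding a distribution by sampling also lets us read off the probability that each alternative is best — which, by Lemma 1 above, is exactly the rate at which the sampling rule picks it. A Monte Carlo sketch (Python for illustration, names mine):

```python
import random

def p_best(wins, losses, trials=20000):
    """Monte Carlo estimate of P(alternative i has the highest rate):
    one Beta-posterior draw per alternative per trial, count the argmax."""
    counts = [0] * len(wins)
    for _ in range(trials):
        samples = [random.betavariate(w + 1, l + 1) for w, l in zip(wins, losses)]
        counts[samples.index(max(samples))] += 1
    return [c / trials for c in counts]

random.seed(2)
print(p_best([5, 2], [5, 10]))  # the 5-of-10 coin beats the 2-of-12 coin most of the time
```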


The Original Problem

[Diagram: design variables x1, x2, x3]


Response Function


Generalized Banditry

Suppose we have an infinite number of bandits
– suppose they are each labeled by two real numbers x and y in [0,1]
– also that expected payoff is a parameterized function of x and y

– now assume a distribution for θ that we can learn online
Selection works by sampling θ, then computing f
Learning works by propagating updates back to θ
– If f is linear, this is very easy

We don’t just have to have two labels; we could have labels and context
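The sample-θ-then-compute-f step above can be sketched for the linear case. This is a hypothetical illustration in Python, not the talk’s implementation: it assumes an independent Gaussian posterior over each component of θ, with the means and variances as stand-ins for a real learned model:

```python
import random

# Linear response: f(x; theta) = theta . x, with an assumed independent
# Gaussian posterior per component of theta.
def sample_theta(means, variances):
    return [random.gauss(m, v ** 0.5) for m, v in zip(means, variances)]

def select_arm(arms, means, variances):
    theta = sample_theta(means, variances)  # sample theta ...
    scores = [sum(t * x for t, x in zip(theta, arm))  # ... then compute f
              for arm in arms]
    return scores.index(max(scores))
```

Learning would then propagate each observed payoff back into the means and variances; with a linear f that update is straightforward (e.g. Bayesian linear regression).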


Context Variables

[Diagram: design variables x1, x2, x3 plus context variables user.geo, env.time, env.day_of_week, env.weekend]


Caveats

The original Bayesian Bandit only requires real-time counting

The generalized bandit may require access to long history for learning
– Pseudo online learning may be easier than true online

Bandit variables can include content, time of day, day of week

Context variables can include user id, user features

Bandit × context variables provide the real power


You can do this yourself!


Thank You


Contact:
– tdunning@maprtech.com
– @ted_dunning

Slides and such (available late tonight):
– http://slideshare.net/tdunning

Hash tags: #mapr #hivedata
