Explaining the idea behind automatic relevance determination and bayesian interpolation

Dr. Florian Wilhelm, PyData Amsterdam, March 13th 2016
Transcript
Page 1: Explaining the idea behind automatic relevance determination and bayesian interpolation

Dr. Florian Wilhelm, March 13th 2016, PyData Amsterdam

Page 2: Explaining the idea behind automatic relevance determination and bayesian interpolation


What's the best model to describe our data?

And what does "best" actually mean?

Page 3: Explaining the idea behind automatic relevance determination and bayesian interpolation


Page 4: Explaining the idea behind automatic relevance determination and bayesian interpolation


Page 5: Explaining the idea behind automatic relevance determination and bayesian interpolation


Simple model ("generality") vs. complex model ("best fit")

Occam's Razor: "It is vain to do with more what can be done with fewer."

Page 6: Explaining the idea behind automatic relevance determination and bayesian interpolation


Simple model $\mathcal{H}_1$ and complex model $\mathcal{H}_2$ over the space of all possible datasets $D$

Page 7: Explaining the idea behind automatic relevance determination and bayesian interpolation


Simple model $\mathcal{H}_1$ vs. complex model $\mathcal{H}_2$: $\mathcal{H}_1$ fits only a small subset of $D$ well.

Page 8: Explaining the idea behind automatic relevance determination and bayesian interpolation


Simple model $\mathcal{H}_1$ vs. complex model $\mathcal{H}_2$: $\mathcal{H}_2$ can fit large parts of $D$ well.

Page 9: Explaining the idea behind automatic relevance determination and bayesian interpolation


Prefer the model with high evidence for a given dataset

Source: D. J. C. MacKay. Bayesian Interpolation. 1992

Page 10: Explaining the idea behind automatic relevance determination and bayesian interpolation

1. Model fitting: Assume $\mathcal{H}_i$ is the right model and fit its parameters $\boldsymbol{w}$ with Bayes:

$P(\boldsymbol{w} \mid D, \mathcal{H}_i) = \dfrac{P(D \mid \boldsymbol{w}, \mathcal{H}_i)\,P(\boldsymbol{w} \mid \mathcal{H}_i)}{P(D \mid \mathcal{H}_i)}$

"Business as usual"

2. Model comparison: Compare different models with the help of their evidence $P(D \mid \mathcal{H}_i)$ and model prior $P(\mathcal{H}_i)$:

$P(\mathcal{H}_i \mid D) \propto P(D \mid \mathcal{H}_i)\,P(\mathcal{H}_i)$

"Occam's razor at work"


Page 11: Explaining the idea behind automatic relevance determination and bayesian interpolation

Marginalize & approximate:

$P(D \mid \mathcal{H}_i) = \int P(D \mid \boldsymbol{w}, \mathcal{H}_i)\,P(\boldsymbol{w} \mid \mathcal{H}_i)\,d\boldsymbol{w}$

$P(D \mid \mathcal{H}_i) \cong P(D \mid \boldsymbol{w}_{MP}, \mathcal{H}_i)\,P(\boldsymbol{w}_{MP} \mid \mathcal{H}_i)\,\Delta\boldsymbol{w}$

evidence $\cong$ best-fit likelihood $\times$ Occam factor


Occam factor: $\dfrac{\Delta\boldsymbol{w}}{\Delta^0\boldsymbol{w}}$, the posterior width of the parameters divided by their prior width.

Source: D. J. C. MacKay. Bayesian Interpolation. 1992
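
A small, hypothetical one-parameter example may make this approximation concrete (the numbers and the Gaussian shapes are invented for illustration, not taken from the slides): the evidence obtained by numerical integration is compared with MacKay's "best-fit likelihood × Occam factor".

```python
import numpy as np
from scipy import integrate, stats

# Hypothetical 1-parameter setup: a likelihood sharply peaked at w_MP
# and a much broader prior centered at zero.
sigma_like, sigma_prior, w_mp = 0.1, 10.0, 1.3

def likelihood(w):
    return stats.norm.pdf(w, loc=w_mp, scale=sigma_like)

def prior(w):
    return stats.norm.pdf(w, loc=0.0, scale=sigma_prior)

# Evidence by direct numerical integration over the parameter w
evidence, _ = integrate.quad(lambda w: likelihood(w) * prior(w), -np.inf, np.inf)

# MacKay-style approximation: best-fit likelihood times Occam factor,
# where the Occam factor is P(w_MP) * Delta w (posterior width)
best_fit = likelihood(w_mp)
occam_factor = prior(w_mp) * np.sqrt(2 * np.pi) * sigma_like
print(evidence, best_fit * occam_factor)   # the two values nearly agree
```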

Page 12: Explaining the idea behind automatic relevance determination and bayesian interpolation


Page 13: Explaining the idea behind automatic relevance determination and bayesian interpolation

Given:

Dataset $D = \{(x_n, t_n)\}$ with $n = 1 \dots N$

Set of (non-linear) basis functions $\Phi = \{\phi_h : x \mapsto \phi_h(x)\}$ with $h = 1 \dots M$

Assumption:

$y(\boldsymbol{x}; \boldsymbol{w}) = \sum_{h=1}^{M} w_h \phi_h(\boldsymbol{x})$, $\quad t_n = y(\boldsymbol{x}_n; \boldsymbol{w}) + \nu_n$,

where $\nu_n$ is additive noise distributed as $\mathcal{N}(0, \alpha^{-1})$.

Task: Find $\min_{\boldsymbol{w}} \lVert \Phi\boldsymbol{w} - \boldsymbol{t} \rVert^2$ (Ordinary Least Squares)
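
A minimal NumPy sketch of this setup (the polynomial basis, sample size, true weights, and noise level are all illustrative assumptions, not taken from the slides); later snippets reuse Phi and t from here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: N samples, M polynomial basis functions phi_h(x) = x**(h-1)
N, M = 50, 4
x = rng.uniform(-1, 1, size=N)
Phi = np.vander(x, M, increasing=True)      # design matrix of shape (N, M)

w_true = np.array([0.5, -1.0, 0.0, 2.0])    # illustrative weights
alpha = 25.0                                # noise precision, i.e. variance 1/alpha
t = Phi @ w_true + rng.normal(0, alpha**-0.5, size=N)

# Ordinary Least Squares: min_w ||Phi w - t||^2
w_ols, *_ = np.linalg.lstsq(Phi, t, rcond=None)
print(w_ols)
```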


Page 14: Explaining the idea behind automatic relevance determination and bayesian interpolation


Problem:

Having too many features leads to overfitting!

Regularization

Assumption: "weights are small", i.e. $p(\boldsymbol{w}; \lambda) \sim \mathcal{N}(0, \lambda^{-1}\mathbb{I})$

Task: Given $\alpha, \lambda$, find

$\min_{\boldsymbol{w}} \; \alpha \lVert \Phi\boldsymbol{w} - \boldsymbol{t} \rVert^2 + \lambda \lVert \boldsymbol{w} \rVert^2$
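
This penalized problem has the familiar ridge closed form $\boldsymbol{w} = (\lambda\mathbb{I} + \alpha\Phi^T\Phi)^{-1}\alpha\Phi^T\boldsymbol{t}$. A short sketch, reusing the illustrative Phi and t from the OLS example above; note that sklearn's Ridge minimizes $\lVert\Phi\boldsymbol{w}-\boldsymbol{t}\rVert^2 + a\lVert\boldsymbol{w}\rVert^2$, so its `alpha` corresponds to the ratio $\lambda/\alpha$ here:

```python
import numpy as np
from sklearn.linear_model import Ridge

lam, alpha = 1.0, 25.0   # illustrative precisions for the weights and the noise

# Closed-form solution of min_w alpha*||Phi w - t||^2 + lam*||w||^2
M = Phi.shape[1]
w_reg = np.linalg.solve(lam * np.eye(M) + alpha * Phi.T @ Phi, alpha * Phi.T @ t)

# The same solution via scikit-learn (Ridge's alpha is the ratio lam/alpha here)
ridge = Ridge(alpha=lam / alpha, fit_intercept=False).fit(Phi, t)
print(np.allclose(w_reg, ridge.coef_))
```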

Page 15: Explaining the idea behind automatic relevance determination and bayesian interpolation


Consider each pair $(\alpha_i, \lambda_i)$ as defining a model $\mathcal{H}_i(\alpha, \lambda)$.

Yes! That means we can use our Bayesian Interpolation to find $\boldsymbol{w}, \alpha, \lambda$ with the highest evidence!

This is the idea behind BayesianRidge as found in sklearn.linear_model.
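
A minimal usage sketch, reusing the illustrative Phi and t from above; BayesianRidge maximizes the evidence over the noise precision and a single shared weight precision:

```python
from sklearn.linear_model import BayesianRidge

br = BayesianRidge(fit_intercept=False).fit(Phi, t)
print(br.coef_)     # posterior mean of the weights
print(br.alpha_)    # estimated noise precision alpha
print(br.lambda_)   # estimated weight precision lambda
```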

Page 16: Explaining the idea behind automatic relevance determination and bayesian interpolation

Consider that each weight has an individual variance, so that

$p(\boldsymbol{w} \mid \boldsymbol{\lambda}) \sim \mathcal{N}(0, \Lambda^{-1})$, where $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_M)$, $\lambda_h \in \mathbb{R}^+$.

Now, our minimization problem is:

$\min_{\boldsymbol{w}} \; \alpha \lVert \Phi\boldsymbol{w} - \boldsymbol{t} \rVert^2 + \boldsymbol{w}^T \Lambda \boldsymbol{w}$

Pruning: If the precision $\lambda_h$ of feature $h$ is high, its weight $w_h$ is very likely to be close to zero and is therefore pruned.

This is called Sparse Bayesian Learning or Automatic Relevance Determination. Found as ARDRegression under sklearn.linear_model.
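
Usage mirrors BayesianRidge, except that one precision per weight is estimated (again a sketch reusing the illustrative Phi and t):

```python
from sklearn.linear_model import ARDRegression

ard = ARDRegression(fit_intercept=False).fit(Phi, t)
print(ard.coef_)     # sparse posterior mean: pruned weights end up (close to) zero
print(ard.lambda_)   # one estimated precision per weight
print(ard.alpha_)    # estimated noise precision
```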

Page 17: Explaining the idea behind automatic relevance determination and bayesian interpolation

Cross-validation can also be used to estimate hyperparameters, but it suffers from the curse of dimensionality and is inappropriate when data is scarce (low statistics).

Source: Peter Ellerton, http://pactiss.org/2011/11/02/bayesian-inference-homo-bayesianis/

Page 18: Explaining the idea behind automatic relevance determination and bayesian interpolation

• Random $100 \times 100$ design matrix $\Phi$ with 100 samples and 100 features

• Weights $w_i$, $i \in I = \{1, \dots, 100\}$, with a random subset $J \subset I$, $|J| = 10$, and

  $w_i = 0$ for $i \in I \setminus J$, $\quad w_i \sim \mathcal{N}(0, 1/4)$ for $i \in J$

• Target $\boldsymbol{t} = \Phi\boldsymbol{w} + \boldsymbol{\nu}$ with random noise $\nu_i \sim \mathcal{N}(0, 1/50)$

Task: Reconstruct the weights, especially the 10 non-zero weights!

Source: http://scikit-learn.org/stable/auto_examples/linear_model/plot_ard.html#example-linear-model-plot-ard-py
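
A condensed sketch of this experiment, modelled on the linked scikit-learn example (the random seed is arbitrary; the constants follow the setup above):

```python
import numpy as np
from sklearn.linear_model import ARDRegression, LinearRegression

rng = np.random.default_rng(42)

n_samples, n_features = 100, 100
Phi = rng.standard_normal((n_samples, n_features))      # random 100 x 100 design matrix

w = np.zeros(n_features)
relevant = rng.choice(n_features, size=10, replace=False)
w[relevant] = rng.normal(0, np.sqrt(1 / 4), size=10)     # 10 non-zero weights, variance 1/4

t = Phi @ w + rng.normal(0, np.sqrt(1 / 50), size=n_samples)   # noise variance 1/50

ard = ARDRegression().fit(Phi, t)
ols = LinearRegression().fit(Phi, t)

# ARD should recover a sparse weight vector close to w, whereas OLS overfits badly
print("ARD error:", np.linalg.norm(ard.coef_ - w))
print("OLS error:", np.linalg.norm(ols.coef_ - w))
```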


Page 19: Explaining the idea behind automatic relevance determination and bayesian interpolation


Page 20: Explaining the idea behind automatic relevance determination and bayesian interpolation


Page 21: Explaining the idea behind automatic relevance determination and bayesian interpolation


Page 22: Explaining the idea behind automatic relevance determination and bayesian interpolation


Page 23: Explaining the idea behind automatic relevance determination and bayesian interpolation

We have to determine the parameters $\boldsymbol{w}, \boldsymbol{\lambda}, \alpha$ for

$P(\boldsymbol{w}, \boldsymbol{\lambda}, \alpha \mid \boldsymbol{t}) = P(\boldsymbol{w} \mid \boldsymbol{t}, \boldsymbol{\lambda}, \alpha)\,P(\boldsymbol{\lambda}, \alpha \mid \boldsymbol{t})$

1) Model fitting:

For the first factor, we have $P(\boldsymbol{w} \mid \boldsymbol{t}, \boldsymbol{\lambda}, \alpha) \sim \mathcal{N}(\boldsymbol{\mu}, \Sigma)$ with

$\Sigma = (\Lambda + \alpha\Phi^T\Phi)^{-1}$,
$\quad \boldsymbol{\mu} = \alpha\Sigma\Phi^T\boldsymbol{t}$.
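
For fixed hyperparameters these two quantities are straightforward to compute; a small sketch (the helper name is made up, and Phi, t are assumed given, e.g. from the illustrative examples above):

```python
import numpy as np

def posterior(Phi, t, lam, alpha):
    """Posterior N(mu, Sigma) of the weights for fixed precisions lam (vector) and alpha."""
    Sigma = np.linalg.inv(np.diag(lam) + alpha * Phi.T @ Phi)
    mu = alpha * Sigma @ Phi.T @ t
    return mu, Sigma
```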


Page 24: Explaining the idea behind automatic relevance determination and bayesian interpolation

2) Model comparison:

For the second factor, we have

$P(\boldsymbol{\lambda}, \alpha \mid \boldsymbol{t}) \propto P(\boldsymbol{t} \mid \boldsymbol{\lambda}, \alpha)\,P(\boldsymbol{\lambda})\,P(\alpha)$,

where $P(\boldsymbol{\lambda})$ and $P(\alpha)$ are hyperpriors which we assume uniform.

Using marginalization, we have

$P(\boldsymbol{t} \mid \boldsymbol{\lambda}, \alpha) = \int P(\boldsymbol{t} \mid \boldsymbol{w}, \alpha)\,P(\boldsymbol{w} \mid \boldsymbol{\lambda})\,d\boldsymbol{w}$,

i.e. the marginal likelihood or the "evidence for the hyperparameters".
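
For this linear-Gaussian model the integral has a closed form, $\boldsymbol{t} \mid \boldsymbol{\lambda}, \alpha \sim \mathcal{N}(0, \alpha^{-1}\mathbb{I} + \Phi\Lambda^{-1}\Phi^T)$; this is a standard result (not spelled out on the slide), sketched here with a made-up helper name:

```python
import numpy as np

def log_evidence(Phi, t, lam, alpha):
    """log P(t | lam, alpha): t ~ N(0, C) with C = (1/alpha) I + Phi diag(1/lam) Phi^T."""
    N = len(t)
    C = np.eye(N) / alpha + Phi @ np.diag(1.0 / lam) @ Phi.T
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (N * np.log(2 * np.pi) + logdet + t @ np.linalg.solve(C, t))
```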


Page 25: Explaining the idea behind automatic relevance determination and bayesian interpolation

Differentiating the log marginal likelihood with respect to $\lambda_i$ and $\alpha$ and setting the derivatives to zero, we get

$\lambda_i = \dfrac{\gamma_i}{\mu_i^2}$,
$\quad \alpha = \dfrac{N - \sum_i \gamma_i}{\lVert \boldsymbol{t} - \Phi\boldsymbol{\mu} \rVert^2}$,

with $\gamma_i = 1 - \lambda_i \Sigma_{ii}$.

These formulae are used to find the maximum points $\boldsymbol{\lambda}_{MP}$ and $\alpha_{MP}$.


Page 26: Explaining the idea behind automatic relevance determination and bayesian interpolation

1. Starting values: $\alpha = \sigma^{-2}(\boldsymbol{t})$, $\boldsymbol{\lambda} = \boldsymbol{1}$
2. Calculate $\Sigma = (\Lambda + \alpha\Phi^T\Phi)^{-1}$ and $\boldsymbol{w} = \boldsymbol{\mu} = \alpha\Sigma\Phi^T\boldsymbol{t}$
3. Update $\lambda_i = \gamma_i / \mu_i^2$ and $\alpha = (N - \sum_i \gamma_i) / \lVert \boldsymbol{t} - \Phi\boldsymbol{\mu} \rVert^2$, where $\gamma_i = 1 - \lambda_i \Sigma_{ii}$
4. Prune $\lambda_i$ and $\phi_i$ if $\lambda_i > \lambda_{threshold}$
5. If not converged, go to 2.
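
A NumPy sketch of this loop, following the numbered steps above (a simplified illustration, not the exact scikit-learn implementation, which additionally accounts for the Gamma hyperpriors described below):

```python
import numpy as np

def ard_fit(Phi, t, n_iter=300, lambda_threshold=1e4):
    """Simplified sketch of the evidence-maximization loop for ARD."""
    N, M = Phi.shape
    alpha = 1.0 / np.var(t)                  # 1. starting values: alpha = sigma^-2(t), lambda = 1
    lam = np.ones(M)
    keep = np.ones(M, dtype=bool)            # active (not yet pruned) basis functions
    mu_full = np.zeros(M)

    for _ in range(n_iter):
        P = Phi[:, keep]
        Sigma = np.linalg.inv(np.diag(lam[keep]) + alpha * P.T @ P)   # 2. posterior covariance
        mu = alpha * Sigma @ P.T @ t                                  #    and posterior mean
        gamma = 1.0 - lam[keep] * np.diag(Sigma)

        lam[keep] = gamma / mu**2                                     # 3. update lambda_i ...
        alpha = (N - gamma.sum()) / np.sum((t - P @ mu) ** 2)         #    ... and alpha

        mu_full[:] = 0.0
        mu_full[keep] = mu
        keep &= lam < lambda_threshold       # 4. prune weights whose precision explodes
        # 5. convergence check omitted in this sketch: a fixed number of iterations is used

    return mu_full, lam, alpha
```

On the synthetic 100 × 100 example from slide 18, this loop should behave similarly to ARDRegression and recover a sparse weight vector.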

Sklearn implementation:

The parameters $\alpha_1, \alpha_2$ as well as $\lambda_1, \lambda_2$ are the hyperprior parameters for $\alpha$ and $\boldsymbol{\lambda}$ with

$P(\alpha) \sim \Gamma(\alpha_1, \alpha_2^{-1})$, $\quad P(\lambda_i) \sim \Gamma(\lambda_1, \lambda_2^{-1})$,

where $E[\Gamma(\alpha, \beta)] = \alpha/\beta$ and $V[\Gamma(\alpha, \beta)] = \alpha/\beta^2$.
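
These hyperprior parameters correspond directly to the constructor arguments of the scikit-learn estimators; their defaults are tiny (1e-6), which makes the Gamma hyperpriors nearly flat:

```python
from sklearn.linear_model import ARDRegression

# alpha_1/alpha_2 parameterize the Gamma hyperprior on the noise precision alpha,
# lambda_1/lambda_2 the Gamma hyperprior on the weight precisions lambda_i.
ard = ARDRegression(alpha_1=1e-6, alpha_2=1e-6,
                    lambda_1=1e-6, lambda_2=1e-6,
                    threshold_lambda=1e4)
```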


Page 27: Explaining the idea behind automatic relevance determination and bayesian interpolation

Given new data $x_*$, a prediction for $t_*$ is made by

$P(t_* \mid \boldsymbol{t}, \boldsymbol{\lambda}_{MP}, \alpha_{MP}) = \int P(t_* \mid \boldsymbol{w}, \alpha_{MP})\,P(\boldsymbol{w} \mid \boldsymbol{t}, \boldsymbol{\lambda}_{MP}, \alpha_{MP})\,d\boldsymbol{w}$

$= \mathcal{N}\!\left(\boldsymbol{\mu}^T\phi(x_*),\; \alpha_{MP}^{-1} + \phi(x_*)^T \Sigma\, \phi(x_*)\right)$.

This is a good approximation of the predictive distribution

$P(t_* \mid \boldsymbol{t}) = \int P(t_* \mid \boldsymbol{w}, \boldsymbol{\lambda}, \alpha)\,P(\boldsymbol{w}, \boldsymbol{\lambda}, \alpha \mid \boldsymbol{t})\,d\boldsymbol{w}\,d\boldsymbol{\lambda}\,d\alpha$.
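
In scikit-learn the predictive mean and standard deviation are available directly; a sketch reusing the illustrative Phi and t from above, where Phi_new stands for the design matrix of the new inputs $x_*$:

```python
from sklearn.linear_model import ARDRegression

ard = ARDRegression(fit_intercept=False).fit(Phi, t)

# Predictive mean mu^T phi(x*) and std sqrt(1/alpha_MP + phi(x*)^T Sigma phi(x*))
t_mean, t_std = ard.predict(Phi_new, return_std=True)
```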


Page 28: Explaining the idea behind automatic relevance determination and bayesian interpolation

1. D. J. C. MacKay. Bayesian Interpolation. 1992 (… to understand the overall idea)

2. M. E. Tipping. Sparse Bayesian learning and the Relevance Vector Machine. June, 2001 (… to understand the ARD algorithm)

3. T. Fletcher. Relevance Vector Machines Explained. October, 2010 (… to understand the ARD algorithm in detail)

4. D. Wipf. A New View of Automatic Relevance Determination. 2008 (… not as good as the ones above)

Graphs from slides 7 and 9 were taken from [1] and the awesome tutorials of Scikit-Learn were consulted many times.


Page 29: Explaining the idea behind automatic relevance determination and bayesian interpolation
