
Modern regression 1: Ridge regression

Ryan Tibshirani, Data Mining: 36-462/36-662

March 19 2013

Optional reading: ISL 6.2.1, ESL 3.4.1


Reminder: shortcomings of linear regression

Last time we talked about:

1. Predictive ability: recall that we can decompose prediction error into squared bias and variance. Linear regression has low bias (zero bias) but suffers from high variance. So it may be worth sacrificing some bias to achieve a lower variance

2. Interpretative ability: with a large number of predictors, it can be helpful to identify a smaller subset of important variables. Linear regression doesn't do this

Also: linear regression is not defined when p > n (Homework 4)

Setup: given fixed covariates x_i ∈ R^p, i = 1, ..., n, we observe

y_i = f(x_i) + ε_i,  i = 1, ..., n,

where f : R^p → R is unknown (think f(x_i) = x_i^T β* for a linear model) and ε_i ∈ R with E[ε_i] = 0, Var(ε_i) = σ², Cov(ε_i, ε_j) = 0


Example: subset of small coefficients

Recall our example: we have n = 50, p = 30, and σ² = 1. The true model is linear with 10 large coefficients (between 0.5 and 1) and 20 small ones (between 0 and 0.3). Histogram:

[Histogram of the true coefficients: values between 0 and 1, frequency on the vertical axis]

The linear regression fit:

Squared bias ≈ 0.006, Variance ≈ 0.627
Pred. error ≈ 1 + 0.006 + 0.627 ≈ 1.633

We reasoned that we can do better by shrinking the coefficients, to reduce variance
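A minimal R sketch of this simulated setup (the slides give only the coefficient ranges, so the particular draws below are assumptions for illustration):

set.seed(1)
n <- 50; p <- 30; sigma <- 1
beta.star <- c(runif(10, 0.5, 1), runif(20, 0, 0.3))  # 10 large, 20 small true coefficients
X <- matrix(rnorm(n * p), n, p)                       # fixed covariates x_i in R^p
y <- drop(X %*% beta.star) + sigma * rnorm(n)         # y_i = x_i^T beta* + eps_i
fit.lm <- lm(y ~ X + 0)                               # the linear regression fit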


[Plot: prediction error versus amount of shrinkage (low to high), comparing linear regression and ridge regression]

Linear regression:
Squared bias ≈ 0.006, Variance ≈ 0.627
Pred. error ≈ 1 + 0.006 + 0.627 ≈ 1.633

Ridge regression, at its best:
Squared bias ≈ 0.077, Variance ≈ 0.403
Pred. error ≈ 1 + 0.077 + 0.403 ≈ 1.48


Ridge regression

Ridge regression is like least squares but shrinks the estimated coefficients towards zero. Given a response vector y ∈ R^n and a predictor matrix X ∈ R^{n×p}, the ridge regression coefficients are defined as

\[
\beta^{\mathrm{ridge}} \;=\; \underset{\beta \in \mathbb{R}^p}{\operatorname{argmin}} \; \sum_{i=1}^{n} (y_i - x_i^T \beta)^2 + \lambda \sum_{j=1}^{p} \beta_j^2
\;=\; \underset{\beta \in \mathbb{R}^p}{\operatorname{argmin}} \; \underbrace{\|y - X\beta\|_2^2}_{\text{Loss}} + \lambda \underbrace{\|\beta\|_2^2}_{\text{Penalty}}
\]

Here λ ≥ 0 is a tuning parameter, which controls the strength of the penalty term. Note that:

- When λ = 0, we get the linear regression estimate
- When λ = ∞, we get β^ridge = 0
- For λ in between, we are balancing two ideas: fitting a linear model of y on X, and shrinking the coefficients
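A minimal R sketch of computing these coefficients, using the closed-form solution (X^T X + λI)^{-1} X^T y (whose derivation the slides defer to Homework 4); it assumes y and the columns of X have been centered, so no intercept is included:

ridge.coef <- function(X, y, lambda) {
  # solves the penalized least squares criterion above
  solve(crossprod(X) + lambda * diag(ncol(X)), crossprod(X, y))
}
# lambda = 0 recovers the least squares estimate; large lambda shrinks towards 0
# bhat <- ridge.coef(X, y, lambda = 25)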


Example: visual representation of ridge coefficients

Recall our last example (n = 50, p = 30, and σ² = 1; 10 large true coefficients, 20 small). Here is a visual representation of the ridge regression coefficients for λ = 25:

[Plot: true, linear regression, and ridge regression (λ = 25) coefficient values for the 30 predictors]


Important details

When including an intercept term in the regression, we usually leave this coefficient unpenalized. Otherwise we could add some constant amount c to the vector y, and this would not result in the same solution. Hence ridge regression with intercept solves

\[
\beta_0, \beta^{\mathrm{ridge}} \;=\; \underset{\beta_0 \in \mathbb{R},\, \beta \in \mathbb{R}^p}{\operatorname{argmin}} \; \|y - \beta_0 \mathbf{1} - X\beta\|_2^2 + \lambda \|\beta\|_2^2
\]

If we center the columns of X, then the intercept estimate ends up just being β₀ = ȳ (the mean of y), so we usually just assume that y, X have been centered and don't include an intercept

Also, the penalty term ‖β‖₂² = Σ_{j=1}^p β_j² is unfair if the predictor variables are not on the same scale. (Why?) Therefore, if we know that the variables are not measured in the same units, we typically scale the columns of X (to have sample variance 1), and then we perform ridge regression
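A small R sketch of this preprocessing (the helper name is ours): center y, center and scale the columns of X to unit sample variance, and take the intercept estimate to be the mean of y:

standardize <- function(X, y) {
  list(X = scale(X),          # centered columns with sample variance 1
       y = y - mean(y),       # centered response
       intercept = mean(y))   # intercept estimate on the centered data
}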


Bias and variance of ridge regression

The bias and variance are not quite as simple to write down for ridge regression as they were for linear regression, but closed-form expressions are still possible (Homework 4). Recall that

\[
\beta^{\mathrm{ridge}} \;=\; \underset{\beta \in \mathbb{R}^p}{\operatorname{argmin}} \; \|y - X\beta\|_2^2 + \lambda \|\beta\|_2^2
\]

The general trend is:

- The bias increases as λ (amount of shrinkage) increases
- The variance decreases as λ (amount of shrinkage) increases

What is the bias at λ = 0? The variance at λ = ∞?
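A hedged R sketch of how these trends can be checked by simulation, estimating the average squared bias and variance of the ridge fitted values at the fixed covariates (the design, beta.star, and sigma are taken from the simulated example above; the settings are assumptions, not the slides' exact computation):

mc.bias.var <- function(X, beta.star, sigma = 1, lambda, reps = 500) {
  mu <- drop(X %*% beta.star)                  # true means f(x_i)
  ridge.fit <- function(y)                     # ridge fitted values for one draw of y
    drop(X %*% solve(crossprod(X) + lambda * diag(ncol(X)), crossprod(X, y)))
  preds <- replicate(reps, ridge.fit(mu + sigma * rnorm(nrow(X))))
  avg <- rowMeans(preds)
  c(bias2 = mean((avg - mu)^2),                # squared bias, averaged over i
    var   = mean(apply(preds, 1, var)))        # variance, averaged over i
}
# e.g. sapply(c(0, 5, 15, 25), function(l) mc.bias.var(X, beta.star, 1, l))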


Example: bias and variance of ridge regression

Bias and variance for our last example (n = 50, p = 30, σ² = 1; 10 large true coefficients, 20 small):

[Plot: squared bias and variance of ridge regression as functions of λ, for λ from 0 to 25]


Mean squared error for our last example:

[Plot: linear regression MSE compared with ridge MSE, squared bias, and variance, as functions of λ from 0 to 25]

Ridge regression in R: see the function lm.ridge in the package MASS, or the glmnet function and package
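A brief sketch of both options (the lambda grid is arbitrary, and note that glmnet parameterizes its penalty on a different scale, so its lambda values do not correspond one-to-one to the λ in these slides); X and y are from the simulation sketch above:

library(MASS)
fit.ridge <- lm.ridge(y ~ X, lambda = seq(0, 25, by = 0.5))   # ridge over a grid of lambda

library(glmnet)
fit.net <- glmnet(X, y, alpha = 0)    # alpha = 0 selects the ridge penalty
coef(fit.net, s = 0.5)                # coefficients at one value of glmnet's lambda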


What you may (should) be thinking now

Thought 1:

- “Yeah, OK, but this only works for some values of λ. So how would we choose λ in practice?”

This is actually quite a hard question. We'll talk about this in detail later

Thought 2:

- “What happens when none of the coefficients are small?”

In other words, if all the true coefficients are moderately large, is it still helpful to shrink the coefficient estimates? The answer is (perhaps surprisingly) still “yes”. But the advantage of ridge regression here is less dramatic, and the corresponding range for good values of λ is smaller


Example: moderate regression coefficients

Same setup as our last example: n = 50, p = 30, and σ² = 1. Except now the true coefficients are all moderately large (between 0.5 and 1). Histogram:

[Histogram of the true coefficients: all 30 between 0.5 and 1]

The linear regression fit:

Squared bias ≈ 0.006, Variance ≈ 0.628
Pred. error ≈ 1 + 0.006 + 0.628 ≈ 1.634

Why are these numbers essentially the same as those from the last example, even though the true coefficients changed?


Ridge regression can still outperform linear regression in terms of mean squared error:

[Plot: linear regression MSE compared with ridge MSE, squared bias, and variance, as functions of λ from 0 to 30]

This only works for λ less than ≈ 5; otherwise ridge is very biased. (Why?)


Variable selection

At the other extreme (from a subset of small coefficients), suppose that there is a group of true coefficients that are identically zero. This means that the mean response doesn't depend on these predictors at all; they are completely extraneous.

The problem of picking out the relevant variables from a larger set is called variable selection. In the linear model setting, this means estimating some coefficients to be exactly zero. Aside from predictive accuracy, this can be very important for the purposes of model interpretation

Thought 3:

- “How does ridge regression perform if a group of the true coefficients is exactly zero?”

The answer depends on whether we are interested in prediction or interpretation. We'll consider the former first


Example: subset of zero coefficients

Same general setup as our running example: n = 50, p = 30, and σ² = 1. Now, the true coefficients: 10 are large (between 0.5 and 1) and 20 are exactly 0. Histogram:

[Histogram of the true coefficients: 10 between 0.5 and 1, 20 exactly zero]

The linear regression fit:

Squared bias ≈ 0.006, Variance ≈ 0.627
Pred. error ≈ 1 + 0.006 + 0.627 ≈ 1.633

Note again that these numbers haven’t changed


Ridge regression performs well in terms of mean-squared error:

[Plot: linear regression MSE compared with ridge MSE, squared bias, and variance, as functions of λ from 0 to 25]

Why is the bias not as large here for large λ?


Remember that as we vary λ we get different ridge regression coefficients; the larger the λ, the more shrunken they are. Here we plot them against λ:

[Plot: ridge coefficient paths as functions of λ, for λ from 0 to 25]

The red paths correspond to the true nonzero coefficients; the gray paths correspond to true zeros. The vertical dashed line at λ = 15 marks the point above which ridge regression's MSE starts losing to that of linear regression

An important thing to notice is that the gray coefficient paths are not exactly zero; they are shrunken, but still nonzero


Ridge regression doesn’t perform variable selection

We can show that ridge regression doesn't set coefficients exactly to zero unless λ = ∞, in which case they're all zero. Hence ridge regression cannot perform variable selection, and even though it performs well in terms of prediction accuracy, it does poorly in terms of offering a clear interpretation
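A quick numeric check of this point (a sketch, reusing X and y from the simulated example above): even under heavy shrinkage, the ridge coefficients are small but none of them is exactly zero:

bhat <- solve(crossprod(X) + 1000 * diag(ncol(X)), crossprod(X, y))  # ridge at lambda = 1000
sum(bhat == 0)    # 0: no coefficient is set exactly to zero
max(abs(bhat))    # all coefficients are heavily shrunken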

E.g., suppose that we are studying the level of prostate-specific antigen (PSA), which is often elevated in men who have prostate cancer. We look at n = 97 men with prostate cancer, and p = 8 clinical measurements [1]. We are interested in identifying a small number of predictors, say 2 or 3, that drive PSA

[Figure: PSA score illustration (footnote 2)]

[1] Data from Stamey et al. (1989), "Prostate specific antigen in the diag..."
[2] Figure from http://www.mens-hormonal-health.com/psa-score.html


Example: ridge regression coefficients for prostate data

We perform ridge regression over a wide range of λ values (after centering and scaling). The resulting coefficient profiles:

[Coefficient profiles for lcavol, lweight, age, lbph, svi, lcp, gleason, and pgg45: plotted against λ (0 to 1000) in the left panel, and against df(λ) (0 to 8) in the right panel]

This doesn’t give us a clear answer to our question ...
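For reference, a hedged sketch of how coefficient profiles like these can be produced with glmnet, assuming the prostate measurements are available as a data frame prostate with response lpsa (e.g. as distributed with ESL); glmnet's lambda scale differs from the slides', so the profiles match only qualitatively:

library(glmnet)
vars <- c("lcavol", "lweight", "age", "lbph", "svi", "lcp", "gleason", "pgg45")
X <- scale(as.matrix(prostate[, vars]))      # center and scale the predictors
y <- prostate$lpsa - mean(prostate$lpsa)     # centered response
fit <- glmnet(X, y, alpha = 0)               # ridge penalty, default lambda grid
plot(fit, xvar = "lambda", label = TRUE)     # coefficient paths against log(lambda)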


Recap: ridge regression

We learned ridge regression, which minimizes the usual regression criterion plus a penalty term on the squared ℓ₂ norm of the coefficient vector. As such, it shrinks the coefficients towards zero. This introduces some bias, but can greatly reduce the variance, resulting in a better mean-squared error

The amount of shrinkage is controlled by λ, the tuning parameter that multiplies the ridge penalty. Large λ means more shrinkage, and so we get different coefficient estimates for different values of λ. Choosing an appropriate value of λ is important, and also difficult. We'll return to this later

Ridge regression performs particularly well when there is a subset of true coefficients that are small or even zero. It doesn't do as well when all of the true coefficients are moderately large; however, in this case it can still outperform linear regression over a pretty narrow range of (small) λ values


Next time: the lasso

The lasso combines some of the shrinking advantages of ridge with variable selection

[Figure from ESL page 71]
