Overview of Kernel Methods
Prof. Bennett, Math Model of Learning and Discovery, 2/27/05
Based on Chapter 2 of Shawe-Taylor and Cristianini
Transcript
Page 1

Overview of Kernel Methods

Prof. Bennett

Math Model of Learning and Discovery 2/27/05

Based on Chapter 2 of

Shawe-Taylor and Cristianini

Page 2

Outline

Overview

Ridge Regression

Kernel Ridge Regression

Other Kernels

Summary

Page 3

Kernel Method

Two parts:

Mapping into an embedding or feature space defined by the kernel.

A learning algorithm for discovering linear patterns in that space.
(The learning algorithm must be regularized.)
(The learning algorithm must work in dual space.)

Illustrate using linear regression.

Page 4

Linear Regression

Given training data (points and labels):

$S = \{(\mathbf{x}_1, y_1), (\mathbf{x}_2, y_2), \ldots, (\mathbf{x}_\ell, y_\ell)\}, \quad \mathbf{x}_i \in R^n, \; y_i \in R$

Construct linear function:

$g(\mathbf{x}) = \langle \mathbf{w}, \mathbf{x} \rangle = \mathbf{w}'\mathbf{x} = \sum_{i=1}^{n} w_i x_i$

Creates pattern function:

$f(\mathbf{x}, y) = |y - g(\mathbf{x})| = |y - \langle \mathbf{w}, \mathbf{x} \rangle| \approx 0$

Page 5

1-d Regression

[Figure: 1-d regression, a line $y = \langle w, x \rangle$ fit to data points in the $(x, y)$ plane.]

Page 6

Predict Drug Absorption

Human intestinal cell line CACO-2

Predict permeability

718 descriptors generated: Electronic (TAE), Shape/Property (PEST), and Traditional.

27 molecules with tested permeability

$\mathbf{x}_i \in R^{718}, \quad y \in R$


Page 7

Least Squares Approximation

Want:

$g(\mathbf{x}_i) \approx y_i$

Define error:

$\xi = f(\mathbf{x}, y) = y - g(\mathbf{x})$

Minimize loss:

$\mathcal{L}(g, S) = \mathcal{L}(\mathbf{w}, S) = \sum_{i=1}^{\ell} \left(y_i - g(\mathbf{x}_i)\right)^2 = \sum_{i=1}^{\ell} \xi_i^2$

Page 8

Optimal Solution

Want:

$\mathbf{y} \approx X\mathbf{w}$

Mathematical Model:

$\min_{\mathbf{w}} \mathcal{L}(\mathbf{w}, S) = \|\mathbf{y} - X\mathbf{w}\|^2 = (\mathbf{y} - X\mathbf{w})'(\mathbf{y} - X\mathbf{w})$

Optimality Condition:

$\frac{\partial \mathcal{L}(\mathbf{w}, S)}{\partial \mathbf{w}} = -2X'\mathbf{y} + 2X'X\mathbf{w} = 0$

Solution satisfies:

$X'X\mathbf{w} = X'\mathbf{y}$

Solving this $n \times n$ system is $O(n^3)$.
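The normal equations above can be solved directly; a minimal numpy sketch (the toy data here is illustrative, not from the lecture):

```python
import numpy as np

# Toy data: l = 5 points in R^2 (rows of X) with labels y = x1 + 2*x2.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]])
y = np.array([5.0, 4.0, 11.0, 10.0, 15.0])

# Solve the n x n normal equations X'X w = X'y (O(n^3) in n).
w = np.linalg.solve(X.T @ X, X.T @ y)

print(w)                          # recovers [1, 2] for this exact linear fit
print(np.linalg.norm(y - X @ w))  # residual ~0
```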

Page 9

Solution

Assume $(X'X)^{-1}$ exists; then

$X'X\mathbf{w} = X'\mathbf{y} \;\Rightarrow\; \mathbf{w} = (X'X)^{-1}X'\mathbf{y}$

Alternative Representation:

$\mathbf{w} = (X'X)^{-1}X'\mathbf{y} = X'X(X'X)^{-1}(X'X)^{-1}X'\mathbf{y} = X'\left[X(X'X)^{-2}X'\mathbf{y}\right] = X'\boldsymbol{\alpha}$

where $\boldsymbol{\alpha} = X(X'X)^{-2}X'\mathbf{y}$, so $\mathbf{w} = \sum_i \alpha_i \mathbf{x}_i$.

Is this a good assumption?

Page 10

Ridge Regression

Inverse typically does not exist.

Use least norm solution for fixed $\lambda > 0$.

Regularized problem:

$\min_{\mathbf{w}} \mathcal{L}(\mathbf{w}, S) = \lambda\|\mathbf{w}\|^2 + \|\mathbf{y} - X\mathbf{w}\|^2$

Optimality Condition:

$\frac{\partial \mathcal{L}(\mathbf{w}, S)}{\partial \mathbf{w}} = 2\lambda\mathbf{w} - 2X'\mathbf{y} + 2X'X\mathbf{w} = 0$

$(X'X + \lambda I_n)\,\mathbf{w} = X'\mathbf{y}$

Requires $O(n^3)$ operations.

Page 11

Ridge Regression (cont)

Inverse always exists for any $\lambda > 0$:

$\mathbf{w} = (X'X + \lambda I)^{-1}X'\mathbf{y}$

Alternative representation:

$(X'X + \lambda I)\mathbf{w} = X'\mathbf{y} \;\Rightarrow\; \lambda\mathbf{w} = X'\mathbf{y} - X'X\mathbf{w} \;\Rightarrow\; \mathbf{w} = \lambda^{-1}X'(\mathbf{y} - X\mathbf{w}) = X'\boldsymbol{\alpha}$

$\boldsymbol{\alpha} = \lambda^{-1}(\mathbf{y} - X\mathbf{w}) = \lambda^{-1}(\mathbf{y} - XX'\boldsymbol{\alpha}) \;\Rightarrow\; (XX' + \lambda I)\boldsymbol{\alpha} = \mathbf{y}$

$\boldsymbol{\alpha} = (G + \lambda I)^{-1}\mathbf{y}, \quad \text{where } G = XX'$

Solving this $\ell \times \ell$ system is $O(\ell^3)$.
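The primal and dual systems yield the same weight vector; a quick numpy check (random data and the value of $\lambda$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
l, n, lam = 6, 3, 0.1            # l points, n features, ridge parameter lambda
X = rng.standard_normal((l, n))
y = rng.standard_normal(l)

# Primal: (X'X + lambda I_n) w = X'y  -- an n x n system, O(n^3).
w_primal = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# Dual: alpha = (G + lambda I_l)^{-1} y with G = XX', then w = X'alpha -- O(l^3).
G = X @ X.T
alpha = np.linalg.solve(G + lam * np.eye(l), y)
w_dual = X.T @ alpha

print(np.allclose(w_primal, w_dual))  # True: both solve the same problem
```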

Page 12

Dual Ridge Regression

To predict new point:

$g(\mathbf{x}) = \langle \mathbf{w}, \mathbf{x} \rangle = \sum_i \alpha_i \langle \mathbf{x}_i, \mathbf{x} \rangle = \mathbf{y}'(G + \lambda I)^{-1}\mathbf{z}, \quad \text{where } z_i = \langle \mathbf{x}_i, \mathbf{x} \rangle$

Note: need only compute $G$, the Gram matrix:

$G = XX', \quad G_{ij} = \langle \mathbf{x}_i, \mathbf{x}_j \rangle$

Ridge regression requires only inner products between data points.
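A sketch of prediction from the Gram matrix alone, using the dual solution $\boldsymbol{\alpha} = (G + \lambda I)^{-1}\mathbf{y}$ from the previous slide (toy data and $\lambda$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
l, n, lam = 5, 2, 0.5
X = rng.standard_normal((l, n))
y = rng.standard_normal(l)

G = X @ X.T                      # Gram matrix, G_ij = <x_i, x_j>
alpha = np.linalg.solve(G + lam * np.eye(l), y)

x_new = np.array([0.3, -1.2])
z = X @ x_new                    # z_i = <x_i, x_new>
g = alpha @ z                    # g(x) = sum_i alpha_i <x_i, x>

# Same value as the primal prediction with w = X'alpha.
w = X.T @ alpha
print(np.isclose(g, w @ x_new))  # True
```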

Page 13

Efficiency

To compute $\mathbf{w}$ in primal ridge regression is $O(n^3)$.

To compute $\boldsymbol{\alpha}$ in dual ridge regression is $O(\ell^3)$.

To predict a new point $\mathbf{x}$:

primal: $g(\mathbf{x}) = \langle \mathbf{w}, \mathbf{x} \rangle = \sum_{i=1}^{n} w_i x_i$ is $O(n)$

dual: $g(\mathbf{x}) = \sum_{i=1}^{\ell} \alpha_i \langle \mathbf{x}_i, \mathbf{x} \rangle = \sum_{i=1}^{\ell} \alpha_i \sum_{j=1}^{n} x_{ij} x_j$ is $O(n\ell)$

Dual is better if $n \gg \ell$.

Page 14

Notes on Ridge Regression

"Regularization" is key to addressing both stability and generalization.

Regularization lets the method work when $n \gg \ell$.
The dual is more efficient when $n \gg \ell$.
The dual only requires inner products of the data.

Page 15

Linear Regression in Feature Space

Key Idea:

Map data to higher dimensional space (feature space) and perform linear regression in embedded space.

Embedding Map:

$\Phi: \mathbf{x} \in R^n \mapsto \Phi(\mathbf{x}) \in F \subseteq R^N, \quad N \gg n$

Page 16

Nonlinear Regression in Feature Space

In primal representation: for $\mathbf{x} = (a, b)$,

$\Phi(\mathbf{x}) = (a^2, b^2, \sqrt{2}\,ab) \in F$

$g(\mathbf{x}) = \langle \Phi(\mathbf{x}), \mathbf{w} \rangle_F = w_1 a^2 + w_2 b^2 + w_3 \sqrt{2}\,ab$

Page 17

Nonlinear Regression in Feature Space

In dual representation:

$g(\mathbf{x}) = \langle \Phi(\mathbf{x}), \mathbf{w} \rangle_F = \sum_{i=1}^{\ell} \alpha_i \langle \Phi(\mathbf{x}_i), \Phi(\mathbf{x}) \rangle = \sum_{i=1}^{\ell} \alpha_i K(\mathbf{x}_i, \mathbf{x})$

Page 18

Kernel Function

A kernel is a function K such that

$K(\mathbf{x}, \mathbf{u}) = \langle \Phi(\mathbf{x}), \Phi(\mathbf{u}) \rangle_F$,
where $\Phi$ is a mapping from the input space to the feature space $F$.

There are many possible kernels. The simplest is the linear kernel:

$K(\mathbf{x}, \mathbf{u}) = \langle \mathbf{x}, \mathbf{u} \rangle$

Page 19

Derivation of Kernel

$\langle \Phi(\mathbf{u}), \Phi(\mathbf{v}) \rangle = \langle (u_1^2, u_2^2, \sqrt{2}\,u_1 u_2), (v_1^2, v_2^2, \sqrt{2}\,v_1 v_2) \rangle$

$= u_1^2 v_1^2 + u_2^2 v_2^2 + 2 u_1 u_2 v_1 v_2$

$= (u_1 v_1 + u_2 v_2)^2 = \langle \mathbf{u}, \mathbf{v} \rangle^2$

Thus: $K(\mathbf{u}, \mathbf{v}) = \langle \mathbf{u}, \mathbf{v} \rangle^2$
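The identity above can be checked numerically; a small sketch with the feature map $\Phi(\mathbf{x}) = (x_1^2, x_2^2, \sqrt{2}\,x_1 x_2)$ (the test vectors are arbitrary):

```python
import numpy as np

def phi(x):
    """Feature map for 2-d input: (x1^2, x2^2, sqrt(2)*x1*x2)."""
    return np.array([x[0]**2, x[1]**2, np.sqrt(2.0) * x[0] * x[1]])

u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])

lhs = phi(u) @ phi(v)   # inner product in feature space
rhs = (u @ v) ** 2      # kernel K(u, v) = <u, v>^2

print(lhs, rhs)         # both equal 1.0 here
```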

Page 20

Ridge Regression in Feature Space

To predict new point:

$g(\mathbf{x}) = \langle \mathbf{w}, \Phi(\mathbf{x}) \rangle = \sum_i \alpha_i \langle \Phi(\mathbf{x}_i), \Phi(\mathbf{x}) \rangle = \mathbf{y}'(G + \lambda I)^{-1}\mathbf{z}, \quad \text{where } z_i = \langle \Phi(\mathbf{x}_i), \Phi(\mathbf{x}) \rangle$

To compute the Gram matrix:

$G = \Phi(X)\Phi(X)', \quad G_{ij} = \langle \Phi(\mathbf{x}_i), \Phi(\mathbf{x}_j) \rangle = K(\mathbf{x}_i, \mathbf{x}_j)$

Use the kernel to compute the inner product.
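Putting the pieces together, a minimal kernel ridge regression sketch using the degree-2 polynomial kernel derived earlier (the data, target function, and $\lambda$ are illustrative):

```python
import numpy as np

def poly_kernel(U, V, d=2):
    """K(u, v) = <u, v>^d, applied row-wise to two data matrices."""
    return (U @ V.T) ** d

rng = np.random.default_rng(2)
l, lam = 8, 0.1
X = rng.standard_normal((l, 2))
y = X[:, 0]**2 + X[:, 1]**2      # a nonlinear target, linear in feature space

# Fit: alpha = (G + lambda I)^{-1} y with G_ij = K(x_i, x_j).
G = poly_kernel(X, X)
alpha = np.linalg.solve(G + lam * np.eye(l), y)

# Predict at new points: g(x) = sum_i alpha_i K(x_i, x).
X_new = rng.standard_normal((3, 2))
preds = poly_kernel(X_new, X) @ alpha
print(preds)
```

Note that $\Phi$ is never formed explicitly; only kernel evaluations are needed.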

Page 21

Popular Kernels based on vectors

By Hilbert-Schmidt Kernels (Courant and Hilbert 1953):

$\langle \Phi(\mathbf{u}), \Phi(\mathbf{v}) \rangle = K(\mathbf{u}, \mathbf{v})$ for certain $\Phi$ and $K$, e.g.

Degree-$d$ polynomial: $K(\mathbf{u}, \mathbf{v}) = (\langle \mathbf{u}, \mathbf{v} \rangle + 1)^d$

Radial Basis Function Machine: $K(\mathbf{u}, \mathbf{v}) = \exp\left(-\dfrac{\|\mathbf{u} - \mathbf{v}\|^2}{2\sigma^2}\right)$

Two-Layer Neural Network: $K(\mathbf{u}, \mathbf{v}) = \mathrm{sigmoid}(\eta \langle \mathbf{u}, \mathbf{v} \rangle + c)$
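Sketches of these kernels as plain functions; the parameter defaults and the RBF normalization $2\sigma^2$ follow one common convention and are assumptions here, as is using $\tanh$ for the sigmoid:

```python
import numpy as np

def polynomial(u, v, d=3):
    # Degree-d polynomial kernel: (<u, v> + 1)^d
    return (u @ v + 1.0) ** d

def rbf(u, v, sigma=1.0):
    # Gaussian RBF kernel: exp(-||u - v||^2 / (2 sigma^2))
    return np.exp(-np.linalg.norm(u - v)**2 / (2.0 * sigma**2))

def sigmoid_kernel(u, v, eta=1.0, c=0.0):
    # "Two-layer neural network" kernel: sigmoid(eta <u, v> + c)
    return np.tanh(eta * (u @ v) + c)

u, v = np.array([1.0, 0.0]), np.array([1.0, 1.0])
print(polynomial(u, v), rbf(u, v), sigmoid_kernel(u, v))
```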

Page 22

Kernels Intuition

Kernels encode the notion of similarity to be used for a specific application. Documents can use the cosine of "bags of words". Gene sequences can use edit distance.

Similarity defines a distance:

$\|\mathbf{u} - \mathbf{v}\|^2 = (\mathbf{u} - \mathbf{v})'(\mathbf{u} - \mathbf{v}) = \langle \mathbf{u}, \mathbf{u} \rangle - 2\langle \mathbf{u}, \mathbf{v} \rangle + \langle \mathbf{v}, \mathbf{v} \rangle$

The trick is to get the right encoding for the domain.
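The distance identity above lets us compute feature-space distances from kernel evaluations alone; a small sketch, with the linear kernel as a sanity check:

```python
import numpy as np

def kernel_distance_sq(K, u, v):
    """||Phi(u) - Phi(v)||^2 computed from kernel values alone."""
    return K(u, u) - 2.0 * K(u, v) + K(v, v)

# With the linear kernel, this is just the squared Euclidean distance.
lin = lambda a, b: a @ b
u, v = np.array([1.0, 2.0]), np.array([4.0, 6.0])
print(kernel_distance_sq(lin, u, v))   # 25.0
print(np.linalg.norm(u - v)**2)        # 25.0
```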

Page 23

Important Points

Kernel method = linear method + embedding in feature space.

Kernel functions are used to do the embedding efficiently.
The feature space is a higher-dimensional space, so we must regularize.
Choose a kernel appropriate to the domain.

Page 24

Kernel Ridge Regression

Simple to derive kernel method

Works great in practice with some finessing.

Next time:

Practical issues.

More standard dual derivation.