Transcript
Page 1

Introduction to Machine Learning
Lecture 13

Mehryar Mohri
Courant Institute and Google Research

[email protected]

Page 2


Multi-Class Classification

Page 3


Motivation

Real-world problems often have multiple classes: text, speech, image, biological sequences.

Algorithms studied so far: designed for binary classification problems.

How do we design multi-class classification algorithms?

• can the algorithms used for binary classification be generalized to multi-class classification?

• can we reduce multi-class classification to binary classification?

Page 4


Multi-Class Classification Problem

Training data: sample $S = ((x_1, y_1), \ldots, (x_m, y_m)) \in (X \times Y)^m$ drawn i.i.d. from $X$ according to some distribution $D$,

• mono-label case: $\mathrm{Card}(Y) = k$.

• multi-label case: $Y = \{-1, +1\}^k$.

Problem: find classifier $h \colon X \to Y$ in $H$ with small generalization error,

• mono-label case: $R_D(h) = \mathbb{E}_{x \sim D}\big[1_{h(x) \neq f(x)}\big]$.

• multi-label case: $R_D(h) = \mathbb{E}_{x \sim D}\big[\tfrac{1}{k} \sum_{l=1}^{k} 1_{[h(x)]_l \neq [f(x)]_l}\big]$.

Page 5


Notes

In most tasks, the number of classes $k \le 100$.

For $k$ large or infinite, the problem is often not treated as a multi-class classification problem, e.g., automatic speech recognition.

Computational efficiency issues arise for larger $k$.

In general, classes are not balanced.

Page 6


Approaches

Single classifier:

• Decision trees.

• AdaBoost-type algorithm.

• SVM-type algorithm.

Combination of binary classifiers:

• One-vs-all.

• One-vs-one.

• Error-correcting codes.

Page 7


AdaBoost-Type Algorithm

Training data (multi-label case): $(x_1, y_1), \ldots, (x_m, y_m) \in X \times \{-1, +1\}^k$.

Reduction to binary classification:

• each example leads to $k$ binary examples: $(x_i, y_i) \to ((x_i, 1), y_i[1]), \ldots, ((x_i, k), y_i[k])$, $i \in [1, m]$.

• apply AdaBoost to the resulting problem.

• choice of $\alpha_t$.

Computational cost: $mk$ distribution updates at each round.

(Schapire and Singer, 2000)

Page 8


AdaBoost.MH


AdaBoost.MH(S = ((x_1, y_1), ..., (x_m, y_m))):
    for i ← 1 to m:
        for l ← 1 to k:
            D_1(i, l) ← 1/(mk)
    for t ← 1 to T:
        h_t ← base classifier in H with small error ε_t = Pr_{D_t}[h_t(x_i, l) ≠ y_i[l]]
        α_t ← choice of α to minimize Z_t
        Z_t ← Σ_{i,l} D_t(i, l) exp(−α_t y_i[l] h_t(x_i, l))
        for i ← 1 to m:
            for l ← 1 to k:
                D_{t+1}(i, l) ← D_t(i, l) exp(−α_t y_i[l] h_t(x_i, l)) / Z_t
    f_T ← Σ_{t=1}^T α_t h_t
    return h_T = sgn(f_T)

with $H \subseteq (\{-1, +1\})^{X \times Y}$.
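To make the loop concrete, here is a minimal Python sketch of the updates above. It is an illustration under stated assumptions, not the authors' implementation: the base-learner interface base_fit is hypothetical (given a distribution D over (example, class) pairs, it returns a hypothesis h with h(X) of shape (m, k) in {−1, +1}), and α_t is set as for {−1, +1}-valued base classifiers.

    import numpy as np

    def adaboost_mh(X, Y, base_fit, T=50):
        """Sketch of AdaBoost.MH. X: (m, d) features; Y: (m, k) labels in {-1, +1}."""
        m, k = Y.shape
        D = np.full((m, k), 1.0 / (m * k))        # D_1(i, l) = 1/(mk)
        alphas, hyps = [], []
        for _ in range(T):
            h = base_fit(X, Y, D)                 # hypothetical base learner
            P = h(X)                              # base predictions, (m, k)
            eps = np.clip(D[P != Y].sum(), 1e-10, 1 - 1e-10)
            alpha = 0.5 * np.log((1 - eps) / eps) # minimizes Z_t in this case
            D *= np.exp(-alpha * Y * P)
            D /= D.sum()                          # normalizer equals Z_t
            alphas.append(alpha)
            hyps.append(h)
        def predict(Xnew):                        # h_T = sgn(sum_t alpha_t h_t)
            return np.sign(sum(a * h(Xnew) for a, h in zip(alphas, hyps)))
        return predict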

Page 9


Bound on Empirical Error

Theorem: the empirical error of the classifier output by AdaBoost.MH verifies:
$$\widehat{R}(h) \le \prod_{t=1}^{T} Z_t.$$

Proof: similar to the proof for AdaBoost.

Choice of $\alpha_t$:

• for $H \subseteq (\{-1, +1\})^{X \times Y}$, as for AdaBoost, $\alpha_t = \frac{1}{2} \log \frac{1 - \epsilon_t}{\epsilon_t}$.

• for $H \subseteq ([-1, 1])^{X \times Y}$, same choice: minimize upper bound.

• other cases: numerical/approximation method.

Page 10


Multi-Class SVMs

Optimization problem:
$$\min_{w, \xi} \; \frac{1}{2} \sum_{l=1}^{k} \|w_l\|^2 + C \sum_{i=1}^{m} \sum_{l \neq y_i} \xi_{il}$$
$$\text{subject to: } w_{y_i} \cdot x_i + b_{y_i} \ge w_l \cdot x_i + b_l + 2 - \xi_{il},$$
$$\xi_{il} \ge 0, \quad (i, l) \in [1, m] \times (Y - \{y_i\}).$$

Decision function:
$$h \colon x \mapsto \operatorname{argmax}_{l \in Y} \, (w_l \cdot x + b_l).$$

(Weston and Watkins, 1999)
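Since the primal above is a quadratic program, for small problems it can be transcribed directly into a generic solver. Below is a hedged cvxpy sketch (cvxpy assumed available); it is an illustrative transcription, not an efficient SVM solver.

    import numpy as np
    import cvxpy as cp

    def weston_watkins_fit(X, y, k, C=1.0):
        """Direct transcription of the Weston-Watkins primal above."""
        m, d = X.shape
        W = cp.Variable((k, d))                  # rows are the vectors w_l
        b = cp.Variable(k)
        xi = cp.Variable((m, k), nonneg=True)    # xi[i, y_i] unconstrained above, driven to 0
        constraints = []
        for i in range(m):
            for l in range(k):
                if l != y[i]:
                    constraints.append(
                        W[y[i]] @ X[i] + b[y[i]] >= W[l] @ X[i] + b[l] + 2 - xi[i, l]
                    )
        objective = cp.Minimize(0.5 * cp.sum_squares(W) + C * cp.sum(xi))
        cp.Problem(objective, constraints).solve()
        return W.value, b.value

    def predict(W, b, X):
        return (X @ W.T + b).argmax(axis=1)      # h(x) = argmax_l (w_l . x + b_l)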

Page 11


Notes

Idea: slack variable $\xi_{il}$ penalizes the case where $(w_{y_i} \cdot x_i + b_{y_i}) - (w_l \cdot x_i + b_l) < 2$.

Binary SVM obtained as special case: $w_1 = -w_2$, $b_1 = -b_2$, $\xi_{i1} = \xi_{i2} = 2\xi_i$.

[Figure: two-class illustration with the hyperplanes $w_1 \cdot x + b_1 = \pm 1$ and $w_2 \cdot x + b_2 = \pm 1$.]

Page 12


Dual Formulation

Notation: $\alpha_i = \sum_{l=1}^{k} \alpha_{il}$, $c_{il} = 1_{y_i = l}$.

Optimization problem:
$$\max_{\alpha} \; 2 \sum_{i=1}^{m} \alpha_i + \sum_{i,j,l} \Big[ -\frac{1}{2} c_{j y_i} \alpha_i \alpha_j + \alpha_{il} \alpha_{j y_i} - \frac{1}{2} \alpha_{il} \alpha_{jl} \Big] (x_i \cdot x_j)$$
$$\text{subject to: } \forall l \in [1, k], \; \sum_{i=1}^{m} \alpha_{il} = \sum_{i=1}^{m} c_{il} \alpha_i,$$
$$\forall (i, l) \in [1, m] \times (Y - \{y_i\}), \; 0 \le \alpha_{il} \le C, \; \alpha_{i y_i} = 0.$$

Decision function:
$$h \colon x \mapsto \operatorname{argmax}_{l = 1, \ldots, k} \Big( \sum_{i=1}^{m} (c_{il} \alpha_i - \alpha_{il})(x_i \cdot x) + b_l \Big).$$

Page 13


Notes

PDS kernel instead of inner product.

Optimization: complex constraints, $mk$-size problem.

Generalization error: leave-one-out analysis and bounds of binary SVM apply similarly.

One-vs-all solution is a (non-optimal) feasible solution of the multi-class SVM problem.

Simplification: single slack variable per point, $\xi_{il} \to \xi_i$ (Crammer and Singer, 2002).

Page 14


Simplified Multi-Class SVMs

Optimization problem:
$$\min_{w, \xi} \; \frac{1}{2} \sum_{l=1}^{k} \|w_l\|^2 + C \sum_{i=1}^{m} \xi_i$$
$$\text{subject to: } w_{y_i} \cdot x_i + \delta_{y_i, l} \ge w_l \cdot x_i + 1 - \xi_i, \quad (i, l) \in [1, m] \times Y.$$

Decision function:
$$h \colon x \mapsto \operatorname{argmax}_{l \in Y} \, (w_l \cdot x).$$

(Crammer and Singer, 2001)
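For a linear version of this formulation, scikit-learn's LinearSVC exposes a Crammer-Singer solver; a brief, hedged usage sketch (assuming scikit-learn is installed, with the iris data standing in for real data):

    from sklearn.datasets import load_iris
    from sklearn.svm import LinearSVC

    X, y = load_iris(return_X_y=True)
    # multi_class='crammer_singer' selects the single-slack formulation above.
    clf = LinearSVC(multi_class='crammer_singer', C=1.0).fit(X, y)
    print(clf.score(X, y))  # decision function is argmax_l w_l . x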

Page 15


Notes

Single slack variable per point: maximum of the previous slack variables (penalty for the worst class), $\sum_{l=1}^{k} \xi_{il} \to \max_{l=1,\ldots,k} \xi_{il}$.

PDS kernel instead of inner product.

Optimization: complex constraints, $mk$-size problem.

• specific solution based on decomposition into $m$ disjoint sets of constraints (Crammer and Singer, 2001).

Page 16


Dual Formulation

Optimization problem: ($\alpha_i$ = $i$th row of matrix $\alpha = [\alpha_{ij}]$)
$$\max_{\alpha = [\alpha_{ij}]} \; \sum_{i=1}^{m} \alpha_i \cdot e_{y_i} - \frac{1}{2} \sum_{i,j=1}^{m} (\alpha_i \cdot \alpha_j)(x_i \cdot x_j)$$
$$\text{subject to: } \alpha_i \le C \, e_{y_i} \;\wedge\; \alpha_i \cdot \mathbf{1} = 0, \quad i \in [1, m].$$

Decision function:
$$h(x) = \operatorname{argmax}_{l = 1, \ldots, k} \Big( \sum_{i=1}^{m} \alpha_{il} (x_i \cdot x) \Big).$$

Page 17


Approaches

Single classifier:

• Decision trees.

• AdaBoost-type algorithm.

• SVM-type algorithm.

Combination of binary classifiers:

• One-vs-all.

• One-vs-one.

• Error-correcting codes.

Page 18


One-vs-All

Technique:

• for each class $l \in Y$, learn a binary classifier $h_l = \operatorname{sgn}(f_l)$.

• combine the binary classifiers via a voting mechanism, typically majority vote (see the sketch below):
$$h \colon x \mapsto \operatorname{argmax}_{l \in Y} f_l(x).$$

Problem: poor justification.

• calibration: classifier scores not comparable.

• nevertheless: simple and frequently used in practice, with computational advantages in some cases.
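A minimal one-vs-all sketch along these lines, using scikit-learn's LinearSVC as the binary learner (the helper names ova_fit/ova_predict are illustrative; any binary scorer $f_l$ would do):

    import numpy as np
    from sklearn.svm import LinearSVC

    def ova_fit(X, y, k, C=1.0):
        """Train one binary classifier per class: class l vs. the rest."""
        clfs = []
        for l in range(k):
            y_l = np.where(y == l, 1, -1)
            clfs.append(LinearSVC(C=C).fit(X, y_l))
        return clfs

    def ova_predict(clfs, X):
        """h(x) = argmax_l f_l(x); scores are uncalibrated (caveat above)."""
        scores = np.stack([c.decision_function(X) for c in clfs], axis=1)
        return scores.argmax(axis=1)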

Page 19


One-vs-One

Technique:

• for each pair $(l, l') \in Y^2$ with $l \neq l'$, learn a binary classifier $h_{ll'} \colon X \to \{0, 1\}$.

• combine the binary classifiers via majority vote (see the sketch below):
$$h(x) = \operatorname{argmax}_{l' \in Y} \big| \{ l : h_{ll'}(x) = 1 \} \big|.$$

Problem:

• computational: train $k(k-1)/2$ binary classifiers.

• overfitting: the training sample for a given pair could become small.
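A corresponding one-vs-one sketch (same illustrative caveats; the convention that output 1 of $h_{ll'}$ is a vote for $l'$ is an assumption of this sketch):

    import numpy as np
    from itertools import combinations
    from sklearn.svm import LinearSVC

    def ovo_fit(X, y, k, C=1.0):
        """One classifier per unordered pair (l, l'), trained on those two classes."""
        clfs = {}
        for l, lp in combinations(range(k), 2):
            mask = (y == l) | (y == lp)
            y_b = np.where(y[mask] == lp, 1, 0)   # output 1 votes for l'
            clfs[(l, lp)] = LinearSVC(C=C).fit(X[mask], y_b)
        return clfs

    def ovo_predict(clfs, X, k):
        """Majority vote over the k(k-1)/2 pairwise classifiers."""
        votes = np.zeros((X.shape[0], k), dtype=int)
        for (l, lp), c in clfs.items():
            winners = np.where(c.predict(X) == 1, lp, l)
            votes[np.arange(X.shape[0]), winners] += 1
        return votes.argmax(axis=1)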

Page 20


Computational Comparison

                Training                              Testing
One-vs-all      O(k B_train(m))                       O(k B_test)
One-vs-one      O(k^2 B_train(m/k)) (on average)      O(k^2 B_test)

Time complexity for SVMs (training time $O(m^\alpha)$, $\alpha$ less than 3): one-vs-all trains in $O(k m^\alpha)$ and one-vs-one in $O(k^2 (m/k)^\alpha)$; at test time, each one-vs-one binary classifier has a smaller number of support vectors.

Page 21


Heuristics

Training:

• reuse of computation between classifiers, e.g., sharing of kernel computations.

• caching.

Testing:

• directed acyclic graph (DAG).

• smaller number of tests (a decoding sketch follows the figure).

• ordering?

[Figure: decision DAG for k = 4. Root node "1 vs 4"; "not l" edges lead through the internal nodes "1 vs 3", "2 vs 4", "1 vs 2", "2 vs 3", "3 vs 4" down to the leaves, which are the predicted classes 4, 3, 2, 1.]

(Platt et al., 2000)
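A hedged sketch of DAG-style testing in the spirit of Platt et al. (2000): keep a list of candidate classes and let each pairwise test eliminate one class, so only k − 1 of the k(k − 1)/2 classifiers are evaluated per test point. It reuses the clfs dictionary from the one-vs-one sketch above.

    import numpy as np

    def dag_predict_one(clfs, x, k):
        """Evaluate a decision DAG: k - 1 pairwise tests per example."""
        candidates = list(range(k))
        while len(candidates) > 1:
            l, lp = candidates[0], candidates[-1]      # current extremes, l < lp
            if clfs[(l, lp)].predict(x.reshape(1, -1))[0] == 1:
                candidates.pop(0)                      # vote for l': eliminate l
            else:
                candidates.pop()                       # vote for l: eliminate l'
        return candidates[0]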

Page 22


Error-Correcting Code Approach

(Dietterich and Bakiri, 1995)

Technique:

• assign an $F$-long binary code word to each class: $M = [M_{lj}] \in \{0, 1\}^{[1,k] \times [1,F]}$.

• learn a binary classifier $f_j \colon X \to \{0, 1\}$ for each column; an example $x$ in class $l$ is labeled with $M_{lj}$.

• classifier output: $\bar{f}(x) = \big(f_1(x), \ldots, f_F(x)\big)^\top$,
$$h \colon x \mapsto \operatorname{argmin}_{l \in Y} \, d_{\mathrm{Hamming}}\big(M_l, \bar{f}(x)\big).$$

Page 23


Illustration

8 classes, code length 6:

class   1 2 3 4 5 6
  1     0 0 0 1 0 0
  2     1 0 0 0 0 0
  3     0 1 1 0 1 0
  4     1 1 0 0 0 0
  5     1 1 0 0 1 0
  6     0 0 1 1 0 1
  7     0 0 1 0 0 0
  8     0 1 0 1 0 0

New example $x$: $(f_1(x), \ldots, f_6(x)) = (0, 1, 1, 0, 1, 1)$, closest to the code word of class 3 (Hamming distance 1).
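A small decoding sketch for this illustration; the code matrix is copied from the table above, and decoding returns the class whose row is Hamming-closest to the classifier outputs.

    import numpy as np

    M = np.array([                # rows: classes 1..8; columns: f_1 .. f_6
        [0, 0, 0, 1, 0, 0],
        [1, 0, 0, 0, 0, 0],
        [0, 1, 1, 0, 1, 0],
        [1, 1, 0, 0, 0, 0],
        [1, 1, 0, 0, 1, 0],
        [0, 0, 1, 1, 0, 1],
        [0, 0, 1, 0, 0, 0],
        [0, 1, 0, 1, 0, 0],
    ])

    def ecoc_decode(f_x):
        """argmin_l d_Hamming(M_l, f(x)); classes numbered 1..8."""
        dists = (M != np.asarray(f_x)).sum(axis=1)
        return int(dists.argmin()) + 1

    print(ecoc_decode([0, 1, 1, 0, 1, 1]))  # -> 3 (row 3 is one bit away)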

Page 24


Error-Correcting Codes - Design

Main ideas:

• independent columns: otherwise no effective discrimination.

• distance between rows: if the minimal Hamming distance between rows is $d$, then the multi-class code can correct up to $\big\lfloor \frac{d-1}{2} \big\rfloor$ errors.

• columns may correspond to features selected for the task.

• one-vs-all and one-vs-one (with ternary codes) are special cases.

Page 25


Extensions

(Allwein et al., 2000)

Matrix entries in $\{-1, 0, +1\}$:

• examples marked with $0$ are disregarded during training.

• one-vs-one also becomes a special case.

Margin loss $L$: a function of $y f(x)$, e.g., the hinge loss.

• Hamming loss decoding:
$$h(x) = \operatorname{argmin}_{l \in \{1, \ldots, k\}} \sum_{j=1}^{F} \frac{1 - \operatorname{sgn}\big(M_{lj} f_j(x)\big)}{2}.$$

• Margin loss decoding:
$$h(x) = \operatorname{argmin}_{l \in \{1, \ldots, k\}} \sum_{j=1}^{F} L\big(M_{lj} f_j(x)\big).$$
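A hedged sketch contrasting the two decodings for a ternary matrix $M \in \{-1, 0, +1\}^{k \times F}$ and real-valued outputs $f(x) \in \mathbb{R}^F$; the hinge loss stands in for $L$, and the function name decode is illustrative.

    import numpy as np

    def decode(M, f_x, rule="hamming"):
        """Hamming vs. margin-loss decoding over margins z_lj = M_lj * f_j(x)."""
        z = M * np.asarray(f_x, dtype=float)       # (k, F) margins
        if rule == "hamming":
            losses = (1 - np.sign(z)) / 2          # zero entries contribute 1/2
        else:                                      # margin decoding with hinge loss
            losses = np.maximum(0.0, 1.0 - z)
        return int(losses.sum(axis=1).argmin())    # class index in 0..k-1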

Page 26


Continuous Codes

(Crammer and Singer, 2000, 2002)

Optimization problem: ($M_l$ = $l$th row of $M$)
$$\min_{M, \xi} \; \|M\|_2^2 + C \sum_{i=1}^{m} \xi_i$$
$$\text{subject to: } K\big(f(x_i), M_{y_i}\big) \ge K\big(f(x_i), M_l\big) + 1 - \xi_i, \quad (i, l) \in [1, m] \times [1, k].$$

Decision function:
$$h \colon x \mapsto \operatorname{argmax}_{l \in \{1, \ldots, k\}} K\big(f(x), M_l\big).$$

Page 27


Ideas

Continuous codes: real-valued matrix $M$.

Learn the matrix code $M$.

Similar optimization problems with other matrix norms.

Kernel $K$ used for similarity between a matrix row and the prediction vector.

Page 28


Multiclass Margin

Definition: let $H \subseteq \mathbb{R}^{X \times Y}$. The margin of the hypothesis $h \in H$ at the training point $(x, y) \in X \times Y$ is
$$\gamma_h(x, y) = h(x, y) - \max_{y' \neq y} h(x, y').$$

Thus, $h$ misclassifies $(x, y)$ iff $\gamma_h(x, y) \le 0$.
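A one-line numpy check of this definition, where scores is the vector $(h(x, 1), \ldots, h(x, k))$ for a single point (the example values are made up):

    import numpy as np

    def multiclass_margin(scores, y):
        """gamma_h(x, y) = h(x, y) - max_{y' != y} h(x, y')."""
        scores = np.asarray(scores, dtype=float)
        return scores[y] - np.delete(scores, y).max()

    print(multiclass_margin([0.2, 1.5, 0.9], y=1))  # 0.6 > 0: correctly classified
    print(multiclass_margin([0.2, 1.5, 0.9], y=2))  # -0.6 <= 0: misclassified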

Page 29


Margin Bound

(Koltchinskii and Panchenko, 2002)

Theorem: let $H \subseteq \mathbb{R}^{X \times Y}$ and $H_1 = \{x \mapsto h(x, y) : h \in H, y \in Y\}$. Fix $\rho > 0$. Then, for any $\delta > 0$, with probability at least $1 - \delta$, the following holds:
$$\Pr\big[\gamma_h(x, y) \le 0\big] \le \widehat{\Pr}\big[\gamma_h(x, y) \le \rho\big] + c k^2 \, \frac{\mathfrak{R}_m(H_1)}{\rho} + c' \sqrt{\frac{\log \frac{1}{\delta}}{m}},$$
for some constants $c, c' > 0$.

Page 30


Applications

One-vs-all approach is the most widely used.

No clear empirical evidence of the superiority of other approaches (Rifkin and Klautau, 2004).

• except perhaps on small data sets with relatively large error rate.

Large structured multi-class problems: often treated as ranking problems (see next lecture).

Page 31


References

• Erin L. Allwein, Robert E. Schapire, and Yoram Singer. Reducing multiclass to binary: a unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113-141, 2000.

• Koby Crammer and Yoram Singer. Improved output coding for classification using continuous relaxation. In Proceedings of NIPS, 2000.

• Koby Crammer and Yoram Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265-292, 2001.

• Koby Crammer and Yoram Singer. On the learnability and design of output codes for multiclass problems. Machine Learning, 47, 2002.

• Thomas G. Dietterich and Ghulum Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263-286, 1995.

• John C. Platt, Nello Cristianini, and John Shawe-Taylor. Large margin DAGs for multiclass classification. In Advances in Neural Information Processing Systems 12 (NIPS 1999), pp. 547-553, 2000.


Page 32


References (continued)

• Ryan Rifkin. Everything old is new again: a fresh look at historical approaches in machine learning. Ph.D. thesis, MIT, 2002.

• Ryan Rifkin and Aldebaro Klautau. In defense of one-vs-all classification. Journal of Machine Learning Research, 5:101-141, 2004.

• Robert E. Schapire. The boosting approach to machine learning: an overview. In D. D. Denison, M. H. Hansen, C. Holmes, B. Mallick, and B. Yu, editors, Nonlinear Estimation and Classification. Springer, 2003.

• Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651-1686, 1998.

• Robert E. Schapire and Yoram Singer. BoosTexter: a boosting-based system for text categorization. Machine Learning, 39(2/3):135-168, 2000.

• Jason Weston and Chris Watkins. Support vector machines for multi-class pattern recognition. In Proceedings of the Seventh European Symposium on Artificial Neural Networks (ESANN 1999), 1999.
