Data Mining Classification
Transcript
Page 1

Data Mining Classification

This lecture note is modified based on Lecture Notes for Chapter 4/5 of Introduction to Data Mining by Tan, Steinbach, Kumar, and slides from Jiawei Han for the book Data Mining – Concepts and Techniques by Jiawei Han and Micheline Kamber.

© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 1

Page 2

Classification: Definition

Given a collection of records (training set):
– Each record contains a set of attributes; one of the attributes is the class.

Find a model for the class attribute as a function of the values of the other attributes.

Goal: previously unseen records should be assigned a class as accurately as possible.
– A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it (a short scikit-learn sketch of this workflow follows).
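A minimal sketch of this build-then-validate workflow in Python with scikit-learn; the iris data is only a stand-in so the snippet runs on its own, and the 70/30 split and default tree settings are illustrative assumptions, not part of the slides.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)        # stand-in records: attributes X, class labels y

# Divide the records into a training set (to build the model)
# and a test set (to check accuracy on previously unseen records).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0)   # learn class = f(other attributes)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```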

Page 3

Classification Techniques

– Decision Tree
– Naïve Bayes
– kNN Classification

Page 4

Decision Tree Classification Task

(Figure: the training set below is fed to a tree induction algorithm, which learns a model (induction); the learned decision tree is then applied to the test set to assign class labels (deduction).)

Training Set:

Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Test Set:

Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

Page 5

Example of a Decision Tree

Training Data (categorical attributes: Refund, Marital Status; continuous attribute: Taxable Income; class: Cheat):

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: Decision Tree (splitting attributes: Refund, MarSt, TaxInc):

Refund = Yes -> NO
Refund = No  -> MarSt = Married            -> NO
                MarSt = Single, Divorced   -> TaxInc < 80K -> NO
                                              TaxInc > 80K -> YES

Page 6

Another Example of Decision Tree

A second tree over the same attributes (categorical: Marital Status, Refund; continuous: Taxable Income; class: Cheat), this time splitting on MarSt first:

MarSt = Married            -> NO
MarSt = Single, Divorced   -> Refund = Yes -> NO
                              Refund = No  -> TaxInc < 80K -> NO
                                              TaxInc > 80K -> YES

There could be more than one tree that fits the same data!

(Training data: the same ten records as on Page 5.)

Page 7

Decision Tree Classification Task

(This slide repeats the induction/deduction diagram and the training and test tables from Page 4.)

Page 8

Apply Model to Test Data

(Figure: the decision tree from Page 5 (Refund, then MarSt, then TaxInc) is applied to the test record below.)

Test Data:

Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?

Start from the root of the tree.

Page 9

Apply Model to Test Data

(Same tree and test record as on Page 8; the Refund test at the root is evaluated: Refund = No.)

Page 10

Apply Model to Test Data

(Same tree and test record; following the Refund = No branch leads to the MarSt test.)

Page 11

Apply Model to Test Data

(Same tree and test record; the MarSt test is evaluated: Marital Status = Married.)

Page 12

Apply Model to Test Data

(Same tree and test record; following the Married branch leads to a leaf labeled NO.)

Page 13

Apply Model to Test Data

(Same tree and test record; since Refund = No and Marital Status = Married, the record reaches the leaf labeled NO.)

Assign Cheat to “No”

Page 14

Performance Metrics

Confusion matrix:

                      PREDICTED CLASS
                      Class=Yes   Class=No
ACTUAL    Class=Yes   a (TP)      b (FN)
CLASS     Class=No    c (FP)      d (TN)

Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)
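A small helper mirroring the accuracy formula above; the counts in the example call are made up for illustration.

```python
def accuracy(tp, fn, fp, tn):
    """Accuracy from the confusion-matrix cells a (TP), b (FN), c (FP), d (TN)."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts: 50 TP, 10 FN, 5 FP, 35 TN out of 100 records.
print(accuracy(tp=50, fn=10, fp=5, tn=35))   # 0.85
```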

Page 15

General Structure of Hunt’s Algorithm

Let Dt be the set of training records that reach a node t

General Procedure:

– If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt

– If Dt is an empty set, then t is a leaf node labeled by the default class, yd

– If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset (a minimal sketch of this recursion follows the data table below).

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

(Figure: a node t holding the subset Dt of these records, with its split still to be determined.)
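A minimal, illustrative sketch of this recursive procedure in Python. It is not the exact routine from the slides: records are plain dicts, and the attribute test is chosen naively (just the next attribute in the list) rather than by an impurity measure.

```python
from collections import Counter

def hunt(records, attrs, target, default):
    if not records:                          # Dt is empty -> leaf labeled with the default class yd
        return default
    labels = [r[target] for r in records]
    if len(set(labels)) == 1 or not attrs:   # a single class (or nothing left to split on) -> leaf yt
        return Counter(labels).most_common(1)[0][0]
    attr, rest = attrs[0], attrs[1:]         # attribute test condition (chosen naively here)
    majority = Counter(labels).most_common(1)[0][0]
    tree = {attr: {}}
    for value in {r[attr] for r in records}:
        subset = [r for r in records if r[attr] == value]
        tree[attr][value] = hunt(subset, rest, target, majority)   # recurse on each subset
    return tree

# Tiny usage example on three of the training records above:
data = [
    {"Refund": "Yes", "MarSt": "Single",  "Cheat": "No"},
    {"Refund": "No",  "MarSt": "Married", "Cheat": "No"},
    {"Refund": "No",  "MarSt": "Single",  "Cheat": "Yes"},
]
print(hunt(data, ["Refund", "MarSt"], target="Cheat", default="No"))
```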

Page 16

Hunt’s Algorithm

(Figure: four stages of growing the tree on the training data above.)

Step 1: a single node predicting the majority class: Don't Cheat.

Step 2: split on Refund.
  Refund = Yes -> Don't Cheat
  Refund = No  -> Don't Cheat

Step 3: split the Refund = No branch on Marital Status.
  Refund = Yes -> Don't Cheat
  Refund = No, Marital Status = Married           -> Don't Cheat
  Refund = No, Marital Status = Single, Divorced  -> Cheat

Step 4: split the Single/Divorced branch on Taxable Income.
  Refund = Yes -> Don't Cheat
  Refund = No, Marital Status = Married -> Don't Cheat
  Refund = No, Marital Status = Single, Divorced, Taxable Income < 80K  -> Don't Cheat
  Refund = No, Marital Status = Single, Divorced, Taxable Income >= 80K -> Cheat

Page 17

Tree Induction

Greedy strategy:
– Split the records based on an attribute test that optimizes a certain criterion.

Issues:
– Determine how to split the records:
  How to specify the attribute test condition?
  How to determine the best split?
– Determine when to stop splitting

Page 18

How to determine the Best Split

Before splitting: 10 records of class 0 and 10 records of class 1.

Three candidate test conditions (class counts C0 / C1 in each child):

Own Car?     Yes: C0=6, C1=4          No: C0=4, C1=6
Car Type?    Family: C0=1, C1=3       Sports: C0=8, C1=0       Luxury: C0=1, C1=7
Student ID?  c1: C0=1, C1=0  ...  c10: C0=1, C1=0   c11: C0=0, C1=1  ...  c20: C0=0, C1=1

Which test condition is the best?

Page 19

How to determine the Best Split

Greedy approach:
– Nodes with a homogeneous class distribution are preferred.

Need a measure of node impurity:

C0: 5, C1: 5   non-homogeneous, high degree of impurity
C0: 9, C1: 1   homogeneous, low degree of impurity

Page 20

How to Find the Best Split

(Figure: a parent node with class counts N00 (class C0) and N01 (class C1) and impurity M0 can be split either by test A? into nodes N1 and N2, with impurities M1 and M2, or by test B? into nodes N3 and N4, with impurities M3 and M4. M12 and M34 denote the weighted impurities of the two candidate splits.)

Gain = M0 – M12  vs.  M0 – M34

Page 21

Measure of Impurity: GINI

Gini Index for a given node t:

GINI(t) = 1 - \sum_{j} [p(j \mid t)]^2

(NOTE: p(j | t) is the relative frequency of class j at node t.)

– Maximum when records are equally distributed among all classes, implying the least interesting information
– Minimum (0.0) when all records belong to one class, implying the most interesting information

Examples:
C1: 0, C2: 6   Gini = 0.000
C1: 1, C2: 5   Gini = 0.278
C1: 2, C2: 4   Gini = 0.444
C1: 3, C2: 3   Gini = 0.500

Page 22

Examples for computing GINI

GINI(t) = 1 - \sum_{j} [p(j \mid t)]^2

C1: 0, C2: 6
P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
Gini = 1 – P(C1)² – P(C2)² = 1 – 0 – 1 = 0

C1: 1, C2: 5
P(C1) = 1/6, P(C2) = 5/6
Gini = 1 – (1/6)² – (5/6)² = 0.278

C1: 2, C2: 4
P(C1) = 2/6, P(C2) = 4/6
Gini = 1 – (2/6)² – (4/6)² = 0.444
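A small helper matching the GINI(t) definition, checked against the class counts in these examples.

```python
def gini(counts):
    """GINI(t) = 1 - sum_j p(j|t)^2, for a list of per-class record counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

for counts in ([0, 6], [1, 5], [2, 4], [3, 3]):
    print(counts, round(gini(counts), 3))   # 0.0, 0.278, 0.444, 0.5
```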

Page 23

Splitting Based on GINI

When a node p is split into k partitions (children), the quality of the split is computed as

GINI_{split} = \sum_{i=1}^{k} \frac{n_i}{n} GINI(i)

where n_i = number of records at child i, and n = number of records at node p.

Page 24

Binary Attributes: Computing GINI Index

Splits into two partitions. Effect of weighting the partitions: larger and purer partitions are sought.

Parent: C1 = 6, C2 = 6, Gini = 0.500

Split on B? (Yes -> Node N1, No -> Node N2):

      N1   N2
C1    5    1
C2    2    4

Gini(N1) = 1 – (5/7)² – (2/7)² = 0.408
Gini(N2) = 1 – (1/5)² – (4/5)² = 0.320
Gini(Children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
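A sketch of the weighted (children) Gini from the GINI_split formula, reproducing the 0.371 result above; gini() is the same helper as in the previous example.

```python
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children):
    """children: one class-count list per child, e.g. [[5, 2], [1, 4]] for N1 and N2."""
    n = sum(sum(c) for c in children)
    return sum(sum(c) / n * gini(c) for c in children)

print(round(gini_split([[5, 2], [1, 4]]), 3))   # 0.371, as computed above
```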

Page 25

Categorical Attributes: Computing Gini Index

For each distinct value, gather the counts for each class in the dataset, and use the count matrix to make decisions.

Multi-way split:

CarType   Family   Sports   Luxury
C1        1        2        1
C2        4        1        1
Gini = 0.393

Two-way split (find the best partition of values):

CarType   {Sports, Luxury}   {Family}
C1        3                  1
C2        2                  4
Gini = 0.400

CarType   {Sports}   {Family, Luxury}
C1        2          2
C2        1          5
Gini = 0.419

Page 26

Continuous Attributes: Computing Gini Index

Use binary decisions based on one value (e.g., Taxable Income > 80K?  Yes / No).

Several choices for the splitting value:
– Number of possible splitting values = number of distinct values.

Each splitting value v has a count matrix associated with it:
– Class counts in each of the partitions, A < v and A ≥ v.

Simple method to choose the best v (a brute-force sketch follows below):
– For each v, scan the database to gather the count matrix and compute its Gini index.
– Computationally inefficient! Repetition of work.
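An illustrative brute-force scan of this kind over the Taxable Income values from the training data on Page 5; trying every distinct value as the threshold v is exactly the inefficient method described above.

```python
def gini(counts):
    n = sum(counts)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in counts)

def class_counts(labels):
    return [labels.count("No"), labels.count("Yes")]

income = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]          # Taxable Income (in K)
cheat  = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]

best = None
for v in sorted(set(income)):                                   # each distinct value as candidate v
    left  = [c for x, c in zip(income, cheat) if x < v]
    right = [c for x, c in zip(income, cheat) if x >= v]
    g = (len(left) * gini(class_counts(left)) +
         len(right) * gini(class_counts(right))) / len(income)
    if best is None or g < best[1]:
        best = (v, g)

print("best v =", best[0], "with weighted Gini", round(best[1], 3))   # best v = 100, Gini 0.3
```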

Page 27

Alternative Splitting Criteria based on INFO

Entropy at a given node t:

Entropy(t) = -\sum_{j} p(j \mid t) \log_{2} p(j \mid t)

(NOTE: p(j | t) is the relative frequency of class j at node t.)

– Measures the homogeneity of a node.
  Maximum when records are equally distributed among all classes, implying the least information.
  Minimum (0.0) when all records belong to one class, implying the most information.
– Entropy-based computations are similar to the GINI index computations.

Page 28

Examples for computing Entropy

Entropy(t) = -\sum_{j} p(j \mid t) \log_{2} p(j \mid t)

C1: 0, C2: 6
P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
Entropy = – 0 log₂ 0 – 1 log₂ 1 = – 0 – 0 = 0

C1: 1, C2: 5
P(C1) = 1/6, P(C2) = 5/6
Entropy = – (1/6) log₂ (1/6) – (5/6) log₂ (5/6) = 0.65

C1: 2, C2: 4
P(C1) = 2/6, P(C2) = 4/6
Entropy = – (2/6) log₂ (2/6) – (4/6) log₂ (4/6) = 0.92
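A helper matching the entropy definition, checked against these three class-count examples.

```python
from math import log2

def entropy(counts):
    """Entropy(t) = -sum_j p(j|t) * log2 p(j|t); classes with zero count contribute 0."""
    n = sum(counts)
    return -sum((c / n) * log2(c / n) for c in counts if c > 0)

for counts in ([0, 6], [1, 5], [2, 4]):
    print(counts, round(entropy(counts), 2))   # 0.0, 0.65, 0.92
```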

Page 29

Splitting Based on INFO...

Information Gain:

GAIN_{split} = Entropy(p) - \sum_{i=1}^{k} \frac{n_i}{n} Entropy(i)

Parent node p is split into k partitions; n_i is the number of records in partition i.

– Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
– Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.

Page 30

Splitting Based on INFO...

Gain Ratio:

GainRATIO_{split} = \frac{GAIN_{split}}{SplitINFO}, \qquad SplitINFO = -\sum_{i=1}^{k} \frac{n_i}{n} \log_{2} \frac{n_i}{n}

Parent node p is split into k partitions; n_i is the number of records in partition i.

– Adjusts Information Gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized! (A small sketch of both measures follows.)
– Designed to overcome the disadvantage of Information Gain.
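A sketch of both measures (Information Gain from Page 29 and Gain Ratio above) on per-child class counts; the Own Car? split from Page 18 is used as the example input.

```python
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum((c / n) * log2(c / n) for c in counts if c > 0)

def info_gain(parent, children):
    """GAIN_split = Entropy(p) - sum_i (n_i / n) * Entropy(i)."""
    n = sum(parent)
    return entropy(parent) - sum(sum(c) / n * entropy(c) for c in children)

def gain_ratio(parent, children):
    n = sum(parent)
    split_info = -sum(sum(c) / n * log2(sum(c) / n) for c in children if sum(c) > 0)
    return info_gain(parent, children) / split_info

# Own Car? split: parent (10, 10) -> children (6, 4) and (4, 6).
print(round(info_gain([10, 10], [[6, 4], [4, 6]]), 3))   # 0.029
print(round(gain_ratio([10, 10], [[6, 4], [4, 6]]), 3))  # 0.029 (SplitINFO = 1 for a 50/50 split)
```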

Page 31

Splitting Criteria based on Classification Error

Classification error at a node t:

Error(t) = 1 - \max_{i} P(i \mid t)

Measures the misclassification error made by a node.
– Maximum when records are equally distributed among all classes, implying the least interesting information.
– Minimum (0.0) when all records belong to one class, implying the most interesting information.

Page 32

Examples for Computing Error

Error(t) = 1 - \max_{i} P(i \mid t)

C1: 0, C2: 6
P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
Error = 1 – max(0, 1) = 1 – 1 = 0

C1: 1, C2: 5
P(C1) = 1/6, P(C2) = 5/6
Error = 1 – max(1/6, 5/6) = 1 – 5/6 = 1/6

C1: 2, C2: 4
P(C1) = 2/6, P(C2) = 4/6
Error = 1 – max(2/6, 4/6) = 1 – 4/6 = 1/3

Page 33

Comparison among Splitting Criteria

For a 2-class problem: (figure comparing the Gini index, entropy, and misclassification error as the fraction p of records in one class varies from 0 to 1)
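A quick numeric version of that comparison, tabulating the three impurity measures for a 2-class node as p varies; all three are 0 at p = 0 or 1 and largest at p = 0.5.

```python
from math import log2

def gini2(p):    return 1 - p**2 - (1 - p)**2
def entropy2(p): return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))
def error2(p):   return 1 - max(p, 1 - p)

for p in [0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]:
    print(f"p={p:.1f}  gini={gini2(p):.3f}  entropy={entropy2(p):.3f}  error={error2(p):.3f}")
```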

Page 34

Stopping Criteria for Tree Induction

Stop expanding a node when all the records belong to the same class

Stop expanding a node when all the records have similar attribute values

Early termination

Page 35

Training Dataset

age      income   student   credit_rating   buys_computer
<=30     high     no        fair            no
<=30     high     no        excellent       no
31…40    high     no        fair            yes
>40      medium   no        fair            yes
>40      low      yes       fair            yes
>40      low      yes       excellent       no
31…40    low      yes       excellent       yes
<=30     medium   no        fair            no
<=30     low      yes       fair            yes
>40      medium   yes       fair            yes
<=30     medium   yes       excellent       yes
31…40    medium   no        excellent       yes
31…40    high     yes       fair            yes
>40      medium   no        excellent       no

This follows an example from Quinlan’s ID3

Page 36

More on Information Gain

Suppose the training set S contains s_i tuples of class C_i, for i = 1, …, m.

Information (entropy) required to classify an arbitrary tuple:

I(s_1, …, s_m) = -\sum_{i=1}^{m} \frac{s_i}{s} \log_{2} \frac{s_i}{s}

Entropy of attribute A with values {a_1, a_2, …, a_v}:

E(A) = \sum_{j=1}^{v} \frac{s_{1j} + \cdots + s_{mj}}{s} I(s_{1j}, …, s_{mj})

Information gained by branching on attribute A:

Gain(A) = I(s_1, …, s_m) - E(A)

Page 37

Attribute Selection by Information Gain

Class P: buys_computer = “yes” (9 tuples)
Class N: buys_computer = “no” (5 tuples)

I(p, n) = I(9, 5) = 0.940

Compute the entropy for age:

age      p_i   n_i   I(p_i, n_i)
<=30     2     3     0.971
31…40    4     0     0
>40      3     2     0.971

E(age) = (5/14) I(2, 3) + (4/14) I(4, 0) + (5/14) I(3, 2) = 0.694

(5/14) I(2, 3) means that “age <= 30” has 5 out of 14 samples, with 2 yes'es and 3 no's.

Gain(age) = I(p, n) - E(age) = 0.246

Similarly,
Gain(income) = 0.029
Gain(student) = 0.151
Gain(credit_rating) = 0.048

(Training data: the same 14 records as on Page 35.)
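These numbers can be reproduced directly from the Page 35 training data with the I(...), E(A), and Gain(A) definitions; a small sketch doing so is below (the third decimal differs slightly in places because the slide rounds intermediate values such as I(9, 5) ≈ 0.940 before subtracting).

```python
from math import log2
from collections import Counter, defaultdict

data = [  # (age, income, student, credit_rating, buys_computer)
    ("<=30", "high", "no", "fair", "no"),         ("<=30", "high", "no", "excellent", "no"),
    ("31…40", "high", "no", "fair", "yes"),       (">40", "medium", "no", "fair", "yes"),
    (">40", "low", "yes", "fair", "yes"),         (">40", "low", "yes", "excellent", "no"),
    ("31…40", "low", "yes", "excellent", "yes"),  ("<=30", "medium", "no", "fair", "no"),
    ("<=30", "low", "yes", "fair", "yes"),        (">40", "medium", "yes", "fair", "yes"),
    ("<=30", "medium", "yes", "excellent", "yes"),("31…40", "medium", "no", "excellent", "yes"),
    ("31…40", "high", "yes", "fair", "yes"),      (">40", "medium", "no", "excellent", "no"),
]
attrs = ["age", "income", "student", "credit_rating"]

def info(labels):                      # I(s1, ..., sm)
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def gain(col):                         # Gain(A) = I(s1, ..., sm) - E(A)
    labels = [row[-1] for row in data]
    groups = defaultdict(list)
    for row in data:
        groups[row[col]].append(row[-1])
    e_a = sum(len(g) / len(data) * info(g) for g in groups.values())
    return info(labels) - e_a

for i, name in enumerate(attrs):
    print(f"Gain({name}) = {gain(i):.3f}")   # ~0.247, 0.029, 0.152, 0.048 (slide: 0.246, 0.151)
```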

Page 38

Output: A Decision Tree for “buys_computer”

age?
  <=30   -> student?
              no  -> no
              yes -> yes
  31…40  -> yes
  >40    -> credit rating?
              excellent -> no
              fair      -> yes

Page 39

Classification Rules

Represent the knowledge in the form of IF-THEN rules:
– One rule is created for each path from the root to a leaf.
– Each attribute-value pair along a path forms a conjunction.
– The leaf node holds the class prediction.
– Rules are easier for humans to understand.

Example:

IF age = “<=30” AND student = “no” THEN buys_computer = “no”

IF age = “<=30” AND student = “yes” THEN buys_computer = “yes”

IF age = “31…40” THEN buys_computer = “yes”

IF age = “>40” AND credit_rating = “excellent” THEN buys_computer = “no”

IF age = “>40” AND credit_rating = “fair” THEN buys_computer = “yes”

Page 40

Decision Tree Based Classification

Advantages:

– Inexpensive to construct

– Extremely fast at classifying unknown records

– Easy to interpret for small-sized trees

– Accuracy is comparable to other classification techniques for many simple data sets

Page 41

Decision Boundary

(Figure: two-dimensional data over attributes x and y in [0, 1], classified by a tree with the tests x < 0.43?, y < 0.47?, and y < 0.33?; each leaf holds class counts such as 4 : 0 or 0 : 3, and the resulting class regions are axis-parallel rectangles.)

• Border line between two neighboring regions of different classes is known as decision boundary

• Decision boundary is parallel to axes because test condition involves a single attribute at-a-time

Page 42

Oblique Decision Trees

(Figure: a single oblique test, x + y < 1, separates the two classes; one leaf is labeled Class = + and the other leaf the opposite class.)

• Test condition may involve multiple attributes

• More expressive representation

• Finding optimal test condition is computationally expensive

Page 43

Model Underfitting/Overfitting

The errors of a classification model are divided into two types:

– Training errors: the number of misclassification errors committed on training records.

– Generalization errors: the expected error of the model on previously unseen records.

A good model must have both errors low.

Model underfitting: both types of error are large because the decision tree is too small.

Model overfitting: the training error is small but the generalization error is large because the decision tree is too large.

Page 44

Underfitting and Overfitting (Example)

500 circular and 500 triangular data points.

Circular points: 0.5 ≤ sqrt(x1² + x2²) ≤ 1

Triangular points: sqrt(x1² + x2²) > 1 or sqrt(x1² + x2²) < 0.5

Page 45

Underfitting and Overfitting

(Figure: training and test error as the tree grows; past a certain size the test error rises while the training error keeps falling, i.e., overfitting.)

Underfitting: when the model is too simple, both training and test errors are large.

Page 46

An Example: Training Dataset

Page 47

An Example: Testing Dataset

Page 48

An Example: Two Models, M1 and M2

0% training error, 30% testing error

20% training error, 10% testing error

(human, warm-blooded, yes, no, no)
(dolphin, warm-blooded, yes, no, no)
(spiny anteater, warm-blooded, no, yes, yes)

Page 49

Occam’s Razor

Given two models of similar generalization errors, one should prefer the simpler model over the more complex model

For complex models, there is a greater chance that they were fitted accidentally by errors in the data

Therefore, one should include model complexity when evaluating a model

Page 50

Incorporating Model Complexity (2)

Training errors: error on the training set, e(t). Generalization errors: error on the test set, e'(t).

Methods for estimating generalization errors:

– Optimistic approach: e'(t) = e(t)

– Pessimistic approach:
  For each leaf node: e'(t) = e(t) + 0.5
  Total errors: e'(T) = e(T) + N × 0.5 (N: number of leaf nodes)
  For a tree with 30 leaf nodes and 10 errors on training data (out of 1000 instances):
    Training error = 10/1000 = 1%
    Generalization error = (10 + 30 × 0.5)/1000 = 2.5%

Page 51

An Example: Pessimistic Error Estimate

The generalization error of TL is (4 + 7 * 0.5)/24 = 0.3125, and the generalization error of TR is (6 + 4 * 0.5)/24 = 0.3333, where the penalty term is 0.5.

Based on the pessimistic error estimate, TL should be used.

Page 52

How to Address Overfitting (1)

Pre-Pruning (Early Stopping Rule)

– Stop the algorithm before it becomes a fully-grown tree

– Typical stopping conditions for a node:
  Stop if all instances belong to the same class.
  Stop if all the attribute values are the same.

– More restrictive conditions:
  Stop if the number of instances is less than some user-specified threshold.
  Stop if the class distribution of the instances is independent of the available features (e.g., using a chi-squared test).
  Stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain).

Page 53

How to Address Overfitting (2)

Post-pruning

– Grow decision tree to its entirety

– Trim the nodes of the decision tree in a bottom-up fashion

– If generalization error improves after trimming, replace sub-tree by a leaf node.

– Class label of leaf node is determined from majority class of instances in the sub-tree

– Can use MDL (Minimum Description Length) for post-pruning

Page 54

Example of Post-Pruning

(Figure: an internal node with test A? and four children A1, A2, A3, A4.)

Class counts at the node: Class = Yes: 20, Class = No: 10   (Error = 10/30)

Class counts at the children:
A1: Class = Yes: 8, Class = No: 4
A2: Class = Yes: 3, Class = No: 4
A3: Class = Yes: 4, Class = No: 1
A4: Class = Yes: 5, Class = No: 1

Training error (before splitting) = 10/30
Pessimistic error (before splitting) = (10 + 0.5)/30 = 10.5/30

Training error (after splitting) = 9/30
Pessimistic error (after splitting) = (9 + 4 × 0.5)/30 = 11/30

The pessimistic error increases after splitting, so: PRUNE!
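A tiny sketch of this pessimistic pruning check, comparing e'(T) = e(T) + N × 0.5 before and after the split; the helper name is just illustrative.

```python
def pessimistic_error(training_errors, num_leaves, num_records, penalty=0.5):
    """e'(T) = (e(T) + N * penalty) / number of records."""
    return (training_errors + num_leaves * penalty) / num_records

before = pessimistic_error(training_errors=10, num_leaves=1, num_records=30)   # 10.5/30
after  = pessimistic_error(training_errors=9,  num_leaves=4, num_records=30)   # 11/30
print(f"before split: {before:.4f}, after split: {after:.4f}")
print("PRUNE!" if after >= before else "keep the split")
```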