Course on Data Mining (581550-4)

Transcript
Page 1: Course on Data Mining (581550-4)

[Course schedule diagram: topics Intro/Ass. Rules, Clustering, Episodes, KDD Process, Text Mining, Appl./Summary, and the Home Exam, spread over the lecture dates 24./26.10., 30.10., 7.11., 14.11., 21.11., and 28.11.]

Page 2: Course on Data Mining (581550-4)

Today 14.11.2001

• Today's subject:

o Classification, clustering

• Next week's program:

o Lecture: Data mining process

o Exercise: Classification, clustering

o Seminar: Classification, clustering

Page 3: Course on Data Mining (581550-4)

Classification and clustering

I. Classification and prediction

II. Clustering and similarity

Page 4: Course on Data Mining (581550-4)

• What is classification? What is prediction?

• Decision tree induction

• Bayesian classification

• Other classification methods

• Classification accuracy

• Summary

Classification and prediction

Overview

Page 5: Course on Data Mining (581550-4)

• Aim: to predict categorical class labels for new tuples/samples

• Input: a training set of tuples/samples, each with a class label

• Output: a model (a classifier) based on the training set and the class labels

What is classification?
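To make this input/output contract concrete, here is a minimal sketch (not part of the original slides) using scikit-learn as a modern illustration; the toy training tuples and feature encoding are invented for this example.

```python
# Minimal sketch of classification: a labeled training set goes in, a classifier comes out.
# Assumes scikit-learn is installed; the toy data below is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

X_train = [[0, 3], [0, 7], [2, 2], [1, 7], [0, 6], [1, 3]]   # tuples, e.g. (rank code, years)
y_train = ["no", "yes", "no", "yes", "no", "no"]             # class label attribute

model = DecisionTreeClassifier().fit(X_train, y_train)       # output: a model (classifier)
print(model.predict([[2, 4]]))                               # predict the label of a new tuple
```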

Page 6: Course on Data Mining (581550-4)

Typical classification applications

• Credit approval

• Target marketing

• Medical diagnosis

• Treatment effectiveness analysis

Applications

Page 7: Course on Data Mining (581550-4)

• Is similar to classification

o constructs a model

o uses the model to predict unknown or missing values

• Major method: regression

o linear and multiple regression

o non-linear regression

What is prediction?
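As a hedged sketch of prediction by regression (my own illustration, not from the slides; assumes NumPy), a linear model is fitted to numeric data and then used to predict an unknown value:

```python
# Sketch of prediction: fit a linear regression model and predict a missing value.
# Assumes NumPy is available; the data points are invented for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])        # roughly y = 2x

slope, intercept = np.polyfit(x, y, deg=1)      # linear model y = slope*x + intercept
print(slope * 6.0 + intercept)                  # predicted (continuous) value at x = 6
```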

Page 8: Course on Data Mining (581550-4)

• Classification:

o predicts categorical class labels

o builds a model from the training set and the values of a class label attribute, and uses the model to classify new data

• Prediction:

o models continuous-valued functions

o predicts unknown or missing values

Classification vs. prediction

Page 9: Course on Data Mining (581550-4)

• Classification = supervised learning

o training set of tuples/samples accompanied by class labels

o classify new data based on the training set

• Clustering = unsupervised learning

o class labels of training data are unknown

o aim is to find classes or clusters that may exist in the data

Terminology

Page 10: Course on Data Mining (581550-4)

Step 1:

Model construction, i.e., build the model from the training set

Step 2:

Model usage, i.e., check the accuracy of the model and use it for classifying new data

Classification - a two-step process

It's a 2-step process!

Page 11: Course on Data Mining (581550-4)

Model construction

• Each tuple/sample is assumed to belong to a predefined class

• The class of a tuple/sample is determined by the class label attribute

• The training set of tuples/samples is used for model construction

• The model is represented as classification rules, decision trees, or mathematical formulae

Step 1

Page 12: Course on Data Mining (581550-4)

• Classify future or unknown objects

• Estimate accuracy of the model

o the known class of a test tuple/sample is compared with the result given by the model

o accuracy rate = percentage of the test tuples/samples correctly classified by the model

Model usage

Step 2

Page 13: Course on Data Mining (581550-4)

An example: model construction

Training data

NAME   RANK            YEARS  TENURED
Mary   Assistant Prof  3      no
James  Assistant Prof  7      yes
Bill   Professor       2      no
John   Associate Prof  7      yes
Mark   Assistant Prof  6      no
Annie  Associate Prof  3      no

Classification algorithms

IF rank = 'professor' OR years > 6
THEN tenured = yes

Classifier (model)

Page 14: Course on Data Mining (581550-4)

An example: model usage

Testing data

Classifier

NAME  RANK            YEARS  TENURED
Tom   Assistant Prof  2      no
Lisa  Associate Prof  7      no
Jack  Professor       5      yes
Ann   Assistant Prof  7      yes

Unseen Data

(Jeff, Professor, 4)

Tenured?

Yes
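A small sketch tying the two example slides together (the rule and the data come from the slides; the Python encoding and the function name classify are my own):

```python
# Sketch of model usage: apply the learned rule from the model-construction slide
# ("IF rank = 'professor' OR years > 6 THEN tenured = yes") to test and unseen data.
def classify(rank, years):
    return "yes" if rank == "Professor" or years > 6 else "no"

# Test set from the slide, with known labels, used to estimate accuracy.
test_set = [("Tom", "Assistant Prof", 2, "no"),
            ("Lisa", "Associate Prof", 7, "no"),
            ("Jack", "Professor", 5, "yes"),
            ("Ann", "Assistant Prof", 7, "yes")]

correct = sum(classify(rank, years) == label for _, rank, years, label in test_set)
print(f"accuracy rate: {correct}/{len(test_set)}")   # Lisa is misclassified -> 3/4

# Unseen tuple (Jeff, Professor, 4): the rule fires on rank, so the answer is "yes".
print(classify("Professor", 4))
```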

Page 15: Course on Data Mining (581550-4)

• Data cleaning

o noise

o missing values

• Relevance analysis (feature selection)

• Data transformation

Data preparation

Page 16: Course on Data Mining (581550-4)

• Accuracy

• Speed

• Robustness

• Scalability

• Interpretability

• Simplicity

Evaluation of classification methods

Page 17: Course on Data Mining (581550-4)

Decision tree induction

A decision tree is a tree where

• internal node = a test on an attribute

• tree branch = an outcome of the test

• leaf node = class label or class distribution

[Figure: a small schematic decision tree with root test A?, internal tests B?, C? and D?, and leaves such as "Yes"]
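As an illustrative sketch (not from the slides), such a tree can be represented as nested dictionaries: an internal node holds an attribute test and one branch per outcome, and a leaf holds a class label. The tree used here is the play-tennis tree shown a few slides later (P = play, N = don't play); the encoding and the function name classify are my own assumptions.

```python
# Sketch of a decision tree as nested dicts: an internal node is
# {"attribute": ..., "branches": {outcome: subtree}}, a leaf is just a class label.
tennis_tree = {
    "attribute": "outlook",
    "branches": {
        "sunny":    {"attribute": "humidity",
                     "branches": {"high": "N", "normal": "P"}},
        "overcast": "P",
        "rain":     {"attribute": "windy",
                     "branches": {"true": "N", "false": "P"}},
    },
}

def classify(tree, sample):
    """Follow attribute tests from the root until a leaf (class label) is reached."""
    while isinstance(tree, dict):
        tree = tree["branches"][sample[tree["attribute"]]]
    return tree

print(classify(tennis_tree, {"outlook": "sunny", "humidity": "normal", "windy": "false"}))  # P
```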

Page 18: Course on Data Mining (581550-4)

Decision tree generation

Two phases of decision tree generation:

• tree construction

o at start, all the training examples at the root

o partition examples based on selected attributes

o test attributes are selected based on a heuristic or a statistical measure

• tree pruning

o identify and remove branches that reflect noise or outliers

Page 19: Course on Data Mining (581550-4)

Decision tree induction – Classical example: play tennis?

Outlook   Temperature  Humidity  Windy  Class
sunny     hot          high      false  N
sunny     hot          high      true   N
overcast  hot          high      false  P
rain      mild         high      false  P
rain      cool         normal    false  P
rain      cool         normal    true   N
overcast  cool         normal    true   P
sunny     mild         high      false  N
sunny     cool         normal    false  P
rain      mild         normal    false  P
sunny     mild         normal    true   P
overcast  mild         high      true   P
overcast  hot          normal    false  P
rain      mild         high      true   N

Training set from Quinlan's ID3

Page 20: Course on Data Mining (581550-4)

Decision tree obtained with ID3 (Quinlan 86)

[Figure: decision tree with root test "outlook"; the sunny branch leads to a humidity test (high → N, normal → P), the overcast branch leads directly to leaf P, and the rain branch leads to a windy test (true → N, false → P)]

Page 21: Course on Data Mining (581550-4)

From a decision tree to classification rules

• One rule is generated for each path in the tree from the root to a leaf

• Each attribute-value pair along a path forms a conjunction

• The leaf node holds the class prediction

• Rules are generally simpler to understand than trees

IF outlook = sunny AND humidity = normal
THEN play tennis

[Figure: the same play-tennis decision tree as on the previous slide]
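A hedged sketch of this rule extraction (my own illustration; the nested-dict tree encoding and the function name tree_to_rules match the earlier sketch and are assumptions):

```python
# Sketch of turning a decision tree into rules: one rule per root-to-leaf path,
# attribute-value pairs along the path joined by AND, the leaf giving the class.
tennis_tree = {
    "attribute": "outlook",
    "branches": {
        "sunny":    {"attribute": "humidity",
                     "branches": {"high": "N", "normal": "P"}},
        "overcast": "P",
        "rain":     {"attribute": "windy",
                     "branches": {"true": "N", "false": "P"}},
    },
}

def tree_to_rules(tree, conditions=()):
    if not isinstance(tree, dict):                      # leaf: emit one rule for this path
        yield "IF " + " AND ".join(conditions) + f" THEN class = {tree}"
        return
    for value, subtree in tree["branches"].items():
        yield from tree_to_rules(subtree, conditions + (f"{tree['attribute']} = {value}",))

for rule in tree_to_rules(tennis_tree):
    print(rule)   # e.g. IF outlook = sunny AND humidity = normal THEN class = P
```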

Page 22: Course on Data Mining (581550-4)

Decision tree algorithms

• Basic algorithm

o constructs a tree in a top-down recursive divide-and-conquer manner

o attributes are assumed to be categorical

o greedy (may get trapped in local maxima)

• Many variants: ID3, C4.5, CART, CHAID

o main difference: divide (split) criterion / attribute selection measure

Page 23: Course on Data Mining (581550-4)

Attribute selection measures

• Information gain

• Gini index

• χ² contingency table statistic

• G-statistic

Page 24: Course on Data Mining (581550-4)

Information gain (1)

• Select the attribute with the highest information gain

• Let P and N be two classes and S a dataset with p P-elements and n N-elements

• The amount of information needed to decide if an arbitrary example belongs to P or N is

I(p, n) = -\frac{p}{p+n} \log_2 \frac{p}{p+n} - \frac{n}{p+n} \log_2 \frac{n}{p+n}

Page 25: Course on Data Mining (581550-4)

Information gain (2)

• Let the sets {S_1, S_2, …, S_v} form a partition of the set S, when using the attribute A

• Let each S_i contain p_i examples of P and n_i examples of N

• The entropy, or the expected information needed to classify objects in all the subtrees S_i, is

E(A) = \sum_{i=1}^{v} \frac{p_i + n_i}{p + n} I(p_i, n_i)

• The information that would be gained by branching on A is

Gain(A) = I(p, n) - E(A)
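A minimal Python sketch of these three formulas (my own illustration, not from the slides; the function names info, entropy, and gain are assumptions):

```python
# Sketch of the information-gain measure defined above.
# info(p, n) = I(p, n); entropy(...) = E(A) over a partition given as (p_i, n_i) counts.
from math import log2

def info(p, n):
    """I(p, n): information needed to decide between classes P and N."""
    total = p + n
    return sum(-c / total * log2(c / total) for c in (p, n) if c > 0)

def entropy(partition):
    """E(A) for a partition [(p_1, n_1), ..., (p_v, n_v)] induced by attribute A."""
    total = sum(p + n for p, n in partition)
    return sum((p + n) / total * info(p, n) for p, n in partition)

def gain(partition):
    """Gain(A) = I(p, n) - E(A), with p and n summed over the whole partition."""
    p = sum(p_i for p_i, _ in partition)
    n = sum(n_i for _, n_i in partition)
    return info(p, n) - entropy(partition)
```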

Page 26: Course on Data Mining (581550-4)

Information gain – Example (1)

Assumptions:

• Class P: plays_tennis = "yes"

• Class N: plays_tennis = "no"

• Information needed to classify a given sample:

I(p, n) = I(9, 5) = 0.940

Page 27: Course on Data Mining (581550-4)

Information gain – Example (2)

Compute the entropy for the attribute outlook:

outlook   p_i  n_i  I(p_i, n_i)
sunny     2    3    0.971
overcast  4    0    0
rain      3    2    0.971

Now

E(outlook) = (5/14)·I(2, 3) + (4/14)·I(4, 0) + (5/14)·I(3, 2) = 0.694

Hence

Gain(outlook) = I(9, 5) - E(outlook) = 0.246

Similarly

Gain(temperature) = 0.029
Gain(humidity) = 0.151
Gain(windy) = 0.048
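The slide's numbers can be checked with a short script (my own sketch, not from the slides):

```python
# Sketch reproducing the numbers above with math.log2.
from math import log2

def info(p, n):
    total = p + n
    return sum(-c / total * log2(c / total) for c in (p, n) if c > 0)

i_all = info(9, 5)
e_outlook = (5/14) * info(2, 3) + (4/14) * info(4, 0) + (5/14) * info(3, 2)
print(f"I(9,5) = {i_all:.3f}, E(outlook) = {e_outlook:.3f}, Gain = {i_all - e_outlook:.3f}")
# -> I(9,5) = 0.940, E(outlook) = 0.694, Gain = 0.247 (the slide rounds 0.940 - 0.694 = 0.246)
```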

Page 28: Course on Data Mining (581550-4)

Other criteria used in decision tree construction

• Conditions for stopping partitioning

o all samples belong to the same class

o no attributes left for further partitioning => majority voting for classifying the leaf

o no samples left for classifying

• Branching scheme

o binary vs. k-ary splits

o categorical vs. continuous attributes

• Labeling rule: a leaf node is labeled with the class to which most samples at the node belong

Page 29: Course on Data Mining (581550-4)

Overfitting in decision tree classification

• The generated tree may overfit the training data

o too many branches

o poor accuracy for unseen samples

• Reasons for overfitting

o noise and outliers

o too little training data

o local maxima in the greedy search

Page 30: Course on Data Mining (581550-4)

How to avoid overfitting?

Two approaches:

• prepruning: halt tree construction early

• postpruning: remove branches from a "fully grown" tree

Page 31: Course on Data Mining (581550-4)

Classification in Large Databases

• Scalability: classifying data sets with millions of samples and hundreds of attributes with reasonable speed

• Why decision tree induction in data mining?

o relatively faster learning speed than other methods

o convertible to simple and understandable classification rules

o can use SQL queries for accessing databases

o comparable classification accuracy

Page 32: Course on Data Mining (581550-4)

Scalable decision tree induction methods in data mining studies

• SLIQ (EDBT'96, Mehta et al.)

• SPRINT (VLDB'96, J. Shafer et al.)

• PUBLIC (VLDB'98, Rastogi & Shim)

• RainForest (VLDB'98, Gehrke, Ramakrishnan & Ganti)

Page 33: Course on Data Mining (581550-4)

Bayesian Classification: Why? (1)

• Probabilistic learning:

o calculate explicit probabilities for hypotheses

o among the most practical approaches to certain types of learning problems

• Incremental:

o each training example can incrementally increase/decrease the probability that a hypothesis is correct

o prior knowledge can be combined with observed data

Page 34: Course on Data Mining (581550-4)

Bayesian Classification: Why? (2)

• Probabilistic prediction:

o predict multiple hypotheses, weighted by their probabilities

• Standard:

o even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured

Page 35: Course on Data Mining (581550-4)

Bayesian classification

• The classification problem may be formalized using a-posteriori probabilities:

P(C|X) = probability that the sample tuple X = <x_1, …, x_k> is of the class C

• For example:

P(class = N | outlook = sunny, windy = true, …)

• Idea: assign to sample X the class label C such that P(C|X) is maximal

Page 36: Course on Data Mining (581550-4)

Estimating a-posteriori probabilities

• Bayes' theorem:

P(C|X) = P(X|C)·P(C) / P(X)

• P(X) is constant for all classes

• P(C) = relative frequency of class C samples

• the C such that P(C|X) is maximum is the C such that P(X|C)·P(C) is maximum

• Problem: computing P(X|C) is infeasible!

Page 37: Course on Data Mining (581550-4)

Naïve Bayesian classification

• Naïve assumption: attribute independence

P(x_1, …, x_k|C) = P(x_1|C)·…·P(x_k|C)

• If the i-th attribute is categorical: P(x_i|C) is estimated as the relative frequency of samples having value x_i as their i-th attribute in class C (a sketch follows below)

• If the i-th attribute is continuous: P(x_i|C) is estimated through a Gaussian density function

• Computationally easy in both cases
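A hedged sketch of the categorical case (my own illustration, not from the slides; the function names train and classify are assumptions, and zero frequencies are not smoothed, just as on these slides):

```python
# Sketch of naïve Bayesian classification for categorical attributes:
# estimate P(C) and each P(x_i | C) as relative frequencies from the training set,
# then pick the class maximizing P(C) * product of P(x_i | C).
from collections import Counter, defaultdict

def train(samples, labels):
    class_counts = Counter(labels)
    value_counts = defaultdict(Counter)            # (attribute index, class) -> value counts
    for x, c in zip(samples, labels):
        for i, v in enumerate(x):
            value_counts[(i, c)][v] += 1
    return class_counts, value_counts

def classify(x, class_counts, value_counts):
    total = sum(class_counts.values())
    best_class, best_score = None, -1.0
    for c, n_c in class_counts.items():
        score = n_c / total                        # P(C)
        for i, v in enumerate(x):
            score *= value_counts[(i, c)][v] / n_c # P(x_i | C) as a relative frequency
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```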

Page 38: Course on Data Mining (581550-4)

Naïve Bayesian classification – Example (1)

• Estimating P(x_i|C):

P(p) = 9/14
P(n) = 5/14

Outlook:
P(sunny | p) = 2/9       P(sunny | n) = 3/5
P(overcast | p) = 4/9    P(overcast | n) = 0
P(rain | p) = 3/9        P(rain | n) = 2/5

Temperature:
P(hot | p) = 2/9         P(hot | n) = 2/5
P(mild | p) = 4/9        P(mild | n) = 2/5
P(cool | p) = 3/9        P(cool | n) = 1/5

Humidity:
P(high | p) = 3/9        P(high | n) = 4/5
P(normal | p) = 6/9      P(normal | n) = 1/5

Windy:
P(true | p) = 3/9        P(true | n) = 3/5
P(false | p) = 6/9       P(false | n) = 2/5

Page 39: Course on Data Mining (581550-4)

Naïve Bayesian classification – Example (2)

• Classifying X:

o an unseen sample X = <rain, hot, high, false>

o P(X|p)·P(p) = P(rain|p)·P(hot|p)·P(high|p)·P(false|p)·P(p) = 3/9 · 2/9 · 3/9 · 6/9 · 9/14 = 0.010582

o P(X|n)·P(n) = P(rain|n)·P(hot|n)·P(high|n)·P(false|n)·P(n) = 2/5 · 2/5 · 4/5 · 2/5 · 5/14 = 0.018286

o Sample X is classified in class n (don’t play)
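The arithmetic can be checked directly (a small sketch of my own, not from the slides):

```python
# Sketch checking the arithmetic on this slide for X = <rain, hot, high, false>.
score_p = (3/9) * (2/9) * (3/9) * (6/9) * (9/14)    # P(X|p) * P(p)
score_n = (2/5) * (2/5) * (4/5) * (2/5) * (5/14)    # P(X|n) * P(n)
print(round(score_p, 6), round(score_n, 6))         # 0.010582 0.018286
print("n" if score_n > score_p else "p")            # n -> don't play
```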

Page 40: Course on Data Mining (581550-4)

Naïve Bayesian classification – the independence hypothesis

• … makes computation possible

• … yields optimal classifiers when satisfied

• … but is seldom satisfied in practice, as attributes (variables) are often correlated.

• Attempts to overcome this limitation:

o Bayesian networks, which combine Bayesian reasoning with causal relationships between attributes

o Decision trees, which reason on one attribute at a time, considering the most important attributes first

Page 41: Course on Data Mining (581550-4)

Other classification methods (not covered)

• Neural networks

• k-nearest neighbor classifier

• Case-based reasoning

• Genetic algorithms

• Rough set approach

• Fuzzy set approaches

More methods

Page 42: Course on Data Mining (581550-4)

Classification accuracy

Estimating error rates:

• Partition: training-and-testing (large data sets)

o use two independent data sets, e.g., training set (2/3) and test set (1/3)

• Cross-validation (moderate data sets)

o divide the data set into k subsamples

o use k-1 subsamples as training data and one subsample as test data (k-fold cross-validation; see the sketch below)

• Bootstrapping: leave-one-out (small data sets)
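As a modern, hedged illustration of k-fold cross-validation (not from the slides; assumes scikit-learn is installed, and the toy data is invented):

```python
# Sketch of k-fold cross-validation: each of the k folds serves once as test data
# while the remaining k-1 folds are used for training.
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X = [[0, 3], [0, 7], [2, 2], [1, 7], [0, 6], [1, 3], [2, 8], [2, 1]]
y = ["no", "yes", "no", "yes", "no", "no", "yes", "yes"]

scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=4)   # 4-fold cross-validation
print(scores.mean())                                             # estimated accuracy rate
```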

Page 43: Course on Data Mining (581550-4)

• Classification is an extensively studied problem

• Classification is probably one of the most widely used data mining techniques, with a lot of extensions

Summary (1)

Page 44: Course on Data Mining (581550-4)

• Scalability is still an important issue for database applications

• Research directions: classification of non-relational data, e.g., text, spatial, and multimedia data

Summary (2)

Page 45: Course on Data Mining (581550-4)

Thanks to Jiawei Han from Simon Fraser University for his slides, which greatly helped in preparing this lecture!

Also thanks to Fosca Giannotti and Dino Pedreschi from Pisa for their slides on classification.

Course on Data Mining

Page 46: Course on Data Mining (581550-4)

References - classification

• C. Apte and S. Weiss. Data mining with decision trees and decision rules. Future Generation Computer Systems, 13, 1997.

• F. Bonchi, F. Giannotti, G. Mainetto, D. Pedreschi. Using Data Mining Techniques in Fiscal Fraud Detection. In Proc. DaWak'99, First Int. Conf. on Data Warehousing and Knowledge Discovery, Sept. 1999.

• F. Bonchi , F. Giannotti, G. Mainetto, D. Pedreschi. A Classification-based Methodology for Planning Audit Strategies in Fraud Detection. In Proc. KDD-99, ACM-SIGKDD Int. Conf. on Knowledge Discovery & Data Mining, Aug. 1999.

• J. Catlett. Megainduction: machine learning on very large databases. PhD Thesis, Univ. Sydney, 1991.

• P. K. Chan and S. J. Stolfo. Metalearning for multistrategy and parallel learning. In Proc. 2nd Int. Conf. on Information and Knowledge Management, p. 314-323, 1993.

• J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.

• J. R. Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986.

• L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth International Group, 1984.

• P. K. Chan and S. J. Stolfo. Learning arbiter and combiner trees from partitioned data for scaling machine learning. In Proc. KDD'95, August 1995.

Page 47: Course on Data Mining (581550-4)

References - classification

• J. Gehrke, R. Ramakrishnan, and V. Ganti. Rainforest: A framework for fast decision tree construction of large datasets. In Proc. 1998 Int. Conf. Very Large Data Bases, pages 416-427, New York, NY, August 1998.

• B. Liu, W. Hsu and Y. Ma. Integrating classification and association rule mining. In Proc. KDD’98, New York, 1998.

• J. Magidson. The CHAID approach to segmentation modeling: Chi-squared automatic interaction detection. In R. P. Bagozzi, editor, Advanced Methods of Marketing Research, pages 118-159. Blackwell Business, Cambridge, Massachusetts, 1994.

• M. Mehta, R. Agrawal, and J. Rissanen. SLIQ : A fast scalable classifier for data mining. In Proc. 1996 Int. Conf. Extending Database Technology (EDBT'96), Avignon, France, March 1996.

• S. K. Murthy. Automatic Construction of Decision Trees from Data: A Multi-Disciplinary Survey. Data Mining and Knowledge Discovery 2(4): 345-389, 1998.

• J. R. Quinlan. Bagging, boosting, and C4.5. In Proc. 13th Natl. Conf. on Artificial Intelligence (AAAI'96), 725-730, Portland, OR, Aug. 1996.

• R. Rastogi and K. Shim. PUBLIC: A decision tree classifier that integrates building and pruning. In Proc. 1998 Int. Conf. Very Large Data Bases, 404-415, New York, NY, August 1998.

Page 48: Course on Data Mining (581550-4)

References - classification

• J. Shafer, R. Agrawal, and M. Mehta. SPRINT : A scalable parallel classifier for data mining. In Proc. 1996 Int. Conf. Very Large Data Bases, 544-555, Bombay, India, Sept. 1996.

• S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann, 1991.

• D. E. Rumelhart, G. E. Hinton and R. J. Williams. Learning internal representation by error propagation. In D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing. The MIT Press, 1986.