Transcript
Page 1

Department of Computer Science
University of Wisconsin – Eau Claire
Eau Claire, WI 54701
[email protected]
715-836-2526

Introduction to Data Mining

Michael R. Wick
Professor and Chair

Page 2

Acknowledgements

Some of the material used in this talk is drawn from:
– Dr. Jiawei Han at University of Illinois at Urbana-Champaign
– Dr. Bhavani Thuraisingham (MITRE Corp. and UT Dallas)
– Dr. Chris Clifton, Indiana Center for Database Systems, Purdue University

Page 3

Road Map

• Definition and Need
• Applications
• Process
• Types
• Example: The Apriori Algorithm
• State of Practice
• Related Techniques
• Data Preprocessing

Page 4

What Is Data Mining?

• Data mining (knowledge discovery from data)
– Extraction of interesting (non-trivial, implicit, previously unknown, and potentially useful) patterns or knowledge from huge amounts of data
– Data mining: a misnomer

• Alternative names
– Knowledge discovery (mining) in databases (KDD), knowledge extraction, data/pattern analysis, data archeology, data dredging, information harvesting, business intelligence, etc.

• Watch out: Is everything “data mining”?
– (Deductive) query processing
– Expert systems or small learning programs

Page 5

What Is Data Mining? Real Example from the NBA

• Play-by-play information recorded by teams
– Who is on the court
– Who shoots
– Results

• Coaches want to know what works best
– Plays that work well against a given team
– Good/bad player matchups

• Advanced Scout (from IBM Research) is a data mining tool to answer these questions
http://www.nba.com/news_feat/beyond/0126.html

[Bar chart: overall shooting percentage (axis 0–60) with Starks + Houston + Ward playing]

Page 6

Necessity for Data Mining

• Large amounts of current and historical data are being stored
– Only a small portion (~5–10%) of the collected data is ever analyzed
– Data that may never be analyzed is collected for fear that something that may prove important will be missed

• As databases grow larger, decision-making directly from the data is not possible; knowledge must be derived from the stored data

• Data sources
– Health-related services, e.g., benefits, medical analyses
– Commercial, e.g., marketing and sales
– Financial
– Scientific, e.g., NASA, Genome
– DOD and Intelligence

• Desired analyses
– Support for planning (historical supply and demand trends)
– Yield management (scanning airline seat reservation data to maximize yield per seat)
– System performance (detecting abnormal behavior in a system)
– Mature database analysis (cleaning up the data sources)

Page 7

Potential Applications

• Data analysis and decision support
– Market analysis and management
  • Target marketing, customer relationship management (CRM), market basket analysis, cross-selling, market segmentation
– Risk analysis and management
  • Forecasting, customer retention, improved underwriting, quality control, competitive analysis
– Fraud detection
  • Finding outliers in credit card purchases

• Other applications
– Text mining (newsgroups, email, documents) and Web mining
– Stream data mining
– DNA and bio-data analysis

Page 8

Knowledge Discovery in Databases: Process

adapted from: U. Fayyad et al. (1996), “From Data Mining to Knowledge Discovery: An Overview,” Advances in Knowledge Discovery and Data Mining, U. Fayyad et al. (Eds.), AAAI/MIT Press

[Process diagram] Data → (Selection) → Target Data → (Preprocessing) → Preprocessed Data → (Data Mining) → Patterns → (Interpretation/Evaluation) → Knowledge

Page 9

Steps of a KDD Process

• Learning the application domain
– relevant prior knowledge and goals of the application
• Creating a target data set: data selection
• Data cleaning (may take 60% of the effort!)
• Data reduction and transformation
– Find useful features, dimensionality/variable reduction, invariant representation
• Choosing methods of data mining
– summarization, classification, regression, association, clustering
• Choosing the mining algorithm(s)
• Data mining: search for patterns of interest
• Pattern evaluation and knowledge presentation
– visualization, transformation, removing redundant patterns, etc.
• Use of discovered knowledge

Page 10

Data Mining and Business Intelligence

[Pyramid diagram: increasing potential to support business decisions, from bottom to top, with the role responsible for each layer]
– Data Sources: Paper, Files, Information Providers, Database Systems, OLTP
– Data Warehouses / Data Marts (DBA)
– Data Exploration: Statistical Analysis, Querying and Reporting; OLAP, MDA (Data Analyst)
– Data Mining: Information Discovery (Data Analyst)
– Data Presentation: Visualization Techniques (Business Analyst)
– Making Decisions (End User)

Page 11

Multiple Perspectives in Data Mining

• Data to be mined
– Relational, data warehouse, transactional, stream, object-oriented/relational, active, spatial, time-series, text, multimedia, heterogeneous, legacy, WWW
• Knowledge to be mined
– Characterization, discrimination, association, classification, clustering, trend/deviation, outlier analysis, etc.
– Multiple/integrated functions and mining at multiple levels
• Techniques utilized
– Database-oriented, data warehouse (OLAP), machine learning, statistics, visualization, etc.
• Applications adapted
– Retail, telecommunication, banking, fraud analysis, bio-data mining, stock market analysis, Web mining, etc.

Page 12

Ingredients of an Effective KDD Process

[Diagram: a discovery cycle connecting Background Knowledge, Goals for Learning, a Knowledge Base, and Database(s) through the steps Plan for Learning, Generate and Test Hypotheses, Discover Knowledge, Determine Knowledge Relevancy, and Evolve Knowledge/Data, supported by Discovery Algorithms and by Visualization and Human-Computer Interaction]

“In order to discover anything, you must be looking for something.” — Murphy’s 1st Law of Serendipity

Page 13

What Can Data Mining Do?

• Clustering
– Identify previously unknown groups
• Classification
– Give operational definitions to categories
• Association
– Find association rules
• Many others…

Page 14

Clustering

• Cluster: a collection of data objects
– Similar to one another within the same cluster
– Dissimilar to the objects in other clusters
• Cluster analysis
– Grouping a set of data objects into clusters
• Clustering is unsupervised classification: no predefined classes
• Typical applications
– As a stand-alone tool to get insight into data distribution
– As a preprocessing step for other algorithms

Page 15

Some Clustering Approaches

• Iterative Distance-based Clustering (a minimal sketch follows this list)
– Specify in advance the number of desired clusters (k)
– k random points chosen as cluster centers
– Instances assigned to the closest center
– Centroid (or mean) of all points in each cluster is calculated
– Repeat until clusters are stable

• Incremental Clustering
– Uses a tree to represent clusters
– Nodes represent clusters (or subclusters)
– Instances are added one by one and the tree is updated
– Updating can involve simple placement of the instance in a cluster or re-clustering
– Uses a category utility function to determine how well the instance fits each cluster
– Can result in merging or splitting of existing clusters

• Category Utility
– Uses a quadratic loss function of conditional probabilities
– Does the addition of a new instance help us better predict the values of attributes for other instances?
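A minimal Python sketch of the iterative distance-based approach (k-means) described above; the data, the choice of k, and the function names are illustrative assumptions, not material from the talk.

import math
import random

def kmeans(points, k, max_iters=100):
    # 1. Choose k random points as the initial cluster centers.
    centers = random.sample(points, k)
    for _ in range(max_iters):
        # 2. Assign each instance to the closest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[idx].append(p)
        # 3. Recompute each center as the centroid (mean) of its cluster.
        new_centers = []
        for i, cluster in enumerate(clusters):
            if cluster:
                new_centers.append(tuple(sum(c) / len(cluster) for c in zip(*cluster)))
            else:
                new_centers.append(centers[i])  # keep the old center for an empty cluster
        # 4. Repeat until the clusters are stable.
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

# Two obvious groups of 2-D points
data = [(1, 1), (1.5, 2), (1, 0.5), (8, 8), (9, 9), (8.5, 7.5)]
centers, clusters = kmeans(data, k=2)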

Page 16

General Applications of Clustering

• Pattern recognition
• Spatial data analysis
– create thematic maps in GIS by clustering feature spaces
– detect spatial clusters and explain them in spatial data mining
• Image processing
• Economic science (especially market research)
• WWW
– Document classification
– Cluster Weblog data to discover groups of similar access patterns

Page 17

Examples of Clustering Applications

• Marketing: Help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs
• Land use: Identification of areas of similar land use in an earth observation database
• Insurance: Identifying groups of motor insurance policy holders with a high average claim cost
• City planning: Identifying groups of houses according to their house type, value, and geographical location
• Earthquake studies: Observed earthquake epicenters should be clustered along continent faults

Page 18

Classification (vs. Prediction)

• Classification:
– predicts categorical class labels (discrete/nominal)
– classifies data (constructs a model) based on the training set and the values (class labels) of a classifying attribute, and uses the model to classify new data
– learns an operational definition
• Prediction:
– models continuous-valued functions, i.e., predicts unknown or missing values
• Typical applications
– credit approval
– target marketing
– medical diagnosis
– treatment effectiveness analysis

Page 19

Classification—A Two-Step Process

• Model construction: describing a set of predetermined classes
– Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
– The set of tuples used for model construction is the training set
– The model is represented as classification rules, decision trees, or mathematical formulae

• Model usage: classifying future or unknown objects
– Estimate the accuracy of the model
  • The known label of each test sample is compared with the classified result from the model
  • Accuracy rate is the percentage of test set samples that are correctly classified by the model
  • The test set must be independent of the training set, otherwise over-fitting will occur
– If the accuracy is acceptable, use the model to classify data tuples whose class labels are not known

Page 20

Classification Process (1): Model Construction

Training Data:

NAME   RANK             YEARS   TENURED
Mike   Assistant Prof   3       no
Mary   Assistant Prof   7       yes
Bill   Professor        2       yes
Jim    Associate Prof   7       yes
Dave   Assistant Prof   6       no
Anne   Associate Prof   3       no

Classification algorithms produce the Classifier (Model):

IF rank = ‘professor’ OR years > 6
THEN tenured = ‘yes’

Page 21

Classification Process (2): Use the Model in Prediction

Testing Data (run through the Classifier):

NAME      RANK             YEARS   TENURED
Tom       Assistant Prof   2       no
Merlisa   Associate Prof   7       no
George    Professor        5       yes
Joseph    Assistant Prof   7       yes

Unseen Data: (Jeff, Professor, 4) → Tenured?
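A minimal Python sketch (not the slides' own tool) of this two-step process: the rule learned on Page 20 is applied to the test set to estimate accuracy, then to the unseen tuple (Jeff, Professor, 4).

def classify(rank, years):
    # Model from Page 20: IF rank = 'professor' OR years > 6 THEN tenured = 'yes'
    return "yes" if rank == "Professor" or years > 6 else "no"

test_set = [
    ("Tom", "Assistant Prof", 2, "no"),
    ("Merlisa", "Associate Prof", 7, "no"),
    ("George", "Professor", 5, "yes"),
    ("Joseph", "Assistant Prof", 7, "yes"),
]

correct = sum(1 for _, rank, years, label in test_set if classify(rank, years) == label)
accuracy = correct / len(test_set)      # 3/4 = 75%: Merlisa (7 years) is misclassified

print("accuracy:", accuracy)
print("Jeff:", classify("Professor", 4))  # the rule predicts tenured = 'yes'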

Page 22

Classification Approaches

• Divide and Conquer
– Results in a decision tree
– Uses an “information gain” function

• Covering
– Select a category for which to learn a rule
– Add conditions to the rule until it is “good enough”

Page 23

Association

• Association rule mining:
– Finding frequent patterns, associations, correlations, or causal structures among sets of items or objects in transaction databases, relational databases, and other information repositories
– Frequent pattern: a pattern (set of items, sequence, etc.) that occurs frequently in a database [AIS93]

• Motivation: finding regularities in data
– What products were often purchased together? — Beer and diapers?!
– What are the subsequent purchases after buying a PC?
– What kinds of DNA are sensitive to this new drug?
– Can we automatically classify web documents?

Page 24

Why Is Association Mining Important?

• Foundation for many essential data mining tasks
– Association, correlation, causality
– Sequential patterns, temporal or cyclic association, partial periodicity, spatial and multimedia association
– Associative classification, cluster analysis, iceberg cube, fascicles (semantic data compression)

• Broad applications
– Basket data analysis, cross-marketing, catalog design, sale campaign analysis
– Web log (click stream) analysis, DNA sequence analysis, etc.

Page 25

Basic Concepts: Association Rules

Transaction-id   Items bought
10               A, B, C
20               A, C
30               A, D
40               B, E, F

• Itemset X = {x1, …, xk}
• Find all the rules X → Y with minimum confidence and support
– support, s: probability that a transaction contains X ∪ Y
– confidence, c: conditional probability that a transaction having X also contains Y

Let min_support = 50%, min_conf = 50%:
A → C (50%, 66.7%)
C → A (50%, 100%)

[Venn diagram: customers who buy beer, customers who buy diapers, and customers who buy both]

Page 26

Mining Association Rules: Example

Min. support 50%, Min. confidence 50%

Transaction-id   Items bought
10               A, B, C
20               A, C
30               A, D
40               B, E, F

Frequent pattern   Support
{A}                75%
{B}                50%
{C}                50%
{A, C}             50%

For rule A → C:
support = support({A} ∪ {C}) = 50%
confidence = support({A} ∪ {C}) / support({A}) = 66.7%
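A minimal Python sketch computing support and confidence directly from the four transactions above (an illustration, not part of the original slides).

transactions = [
    {"A", "B", "C"},   # 10
    {"A", "C"},        # 20
    {"A", "D"},        # 30
    {"B", "E", "F"},   # 40
]

def support(itemset):
    # Fraction of transactions containing every item in the itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    # support(lhs ∪ rhs) / support(lhs)
    return support(lhs | rhs) / support(lhs)

print(support({"A", "C"}))        # 0.5   -> 50%
print(confidence({"A"}, {"C"}))   # 0.666 -> 66.7%
print(confidence({"C"}, {"A"}))   # 1.0   -> 100%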

Page 27

Apriori: A Candidate Generation-and-Test Approach

• Any subset of a frequent itemset must be frequent
– if {beer, diaper, nuts} is frequent, so is {beer, diaper}
– Every transaction having {beer, diaper, nuts} also contains {beer, diaper}

• Apriori pruning principle: If any itemset is infrequent, its supersets should not be generated/tested!

• Method:
– generate length (k+1) candidate itemsets from length k frequent itemsets, and
– test the candidates against the DB

• Performance studies show its efficiency and scalability

Page 28

The Apriori Algorithm—A Mathematical Definition

Let I = {a, b, c, …} be the set of all items in the domain.
Let T = { S | S ⊆ I } be the set of all transaction records (item sets).
Let support(S) = |{ A | A ∈ T ∧ S ⊆ A }|.
Let L1 = { {a} | a ∈ I ∧ support({a}) ≥ minSupport }.

∀k (k > 1 ∧ Lk-1 ≠ ∅), let
Lk = { Si ∪ Sj | (Si ∈ Lk-1) ∧ (Sj ∈ Lk-1) ∧ (|Si – Sj| = 1) ∧ (|Sj – Si| = 1)
      ∧ (∀S [ ((S ⊂ Si ∪ Sj) ∧ (|S| = k–1)) → S ∈ Lk-1 ])
      ∧ (support(Si ∪ Sj) ≥ minSupport) }

Then the set of all frequent item sets is given by L = ∪k Lk,
and the set of all association rules is given by
R = { A → C | ∃Lk ∈ L: (A ⊂ Lk) ∧ (C = Lk – A) ∧ (A ≠ ∅) ∧ (C ≠ ∅)
      ∧ support(Lk) / support(A) ≥ minConfidence }

Page 29

The Apriori Algorithm—An Example

Example: minSupport = 2

I = {Table Saw, Router, Kreg Jig, Sander, Drill Press}

T = { {Table Saw, Router, Drill Press},
      {Router, Sander},
      {Router, Kreg Jig},
      {Table Saw, Router, Sander},
      {Table Saw, Kreg Jig},
      {Router, Kreg Jig},
      {Table Saw, Kreg Jig},
      {Table Saw, Router, Kreg Jig, Drill Press},
      {Table Saw, Router, Kreg Jig} }

L1 = { {T}, {R}, {K}, {S}, {D} }
L2 = { {R,T}, {K,T}, {D,T}, {K,R}, {R,S}, {D,R} }
L3 = { {K,R,T}, {D,R,T} }
L4 = ∅
Rules = ????

Page 30

The Apriori Algorithm

• Pseudo-code:
  Ck: candidate itemsets of size k
  Lk: frequent itemsets of size k

  L1 = {frequent items};
  for (k = 1; Lk ≠ ∅; k++) do begin
      Ck+1 = candidates generated from Lk;
      for each transaction t in database do
          increment the count of all candidates in Ck+1 that are contained in t;
      Lk+1 = candidates in Ck+1 with min_support;
  end
  return ∪k Lk;
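A minimal, runnable Python version of the pseudo-code above, applied to the woodworking example from Page 29 with minSupport = 2. This is an illustrative sketch only; the function name apriori and the frozenset representation are assumptions, not the tool used in the talk.

from itertools import combinations

def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(itemset <= t for t in transactions)

    # L1 = frequent single items
    items = {i for t in transactions for i in t}
    Lk = {frozenset([i]) for i in items if support(frozenset([i])) >= min_support}
    frequent = set(Lk)

    k = 1
    while Lk:
        # Generate C(k+1) by joining frequent k-itemsets, then prune any candidate
        # with an infrequent k-subset (the Apriori pruning principle).
        candidates = {a | b for a in Lk for b in Lk if len(a | b) == k + 1}
        candidates = {c for c in candidates
                      if all(frozenset(s) in Lk for s in combinations(c, k))}
        # Keep the candidates that meet min_support
        Lk = {c for c in candidates if support(c) >= min_support}
        frequent |= Lk
        k += 1
    return frequent

T = [
    {"Table Saw", "Router", "Drill Press"},
    {"Router", "Sander"},
    {"Router", "Kreg Jig"},
    {"Table Saw", "Router", "Sander"},
    {"Table Saw", "Kreg Jig"},
    {"Router", "Kreg Jig"},
    {"Table Saw", "Kreg Jig"},
    {"Table Saw", "Router", "Kreg Jig", "Drill Press"},
    {"Table Saw", "Router", "Kreg Jig"},
]
for itemset in sorted(apriori(T, 2), key=len):
    print(set(itemset))   # prints L1, L2, and L3 from the previous slide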

Page 31

Important Details of Apriori

• How to generate candidates?
– Step 1: self-joining Lk
– Step 2: pruning

• How to count supports of candidates?

• Example of candidate generation (see the sketch below)
– L3 = {abc, abd, acd, ace, bcd}
– Self-joining: L3 * L3
  • abcd from abc and abd
  • acde from acd and ace
– Pruning:
  • acde is removed because ade is not in L3
– C4 = {abcd}
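A minimal Python sketch of the two candidate-generation steps just described, assuming itemsets are kept as lexicographically sorted tuples (the representation is an assumption made for illustration).

from itertools import combinations

L3 = [tuple("abc"), tuple("abd"), tuple("acd"), tuple("ace"), tuple("bcd")]
k = 3

# Step 1: self-joining L3 * L3 — join itemsets that agree on their first k-1 items
joined = {a + (b[-1],) for a in L3 for b in L3
          if a[:-1] == b[:-1] and a[-1] < b[-1]}
# joined == {('a','b','c','d'), ('a','c','d','e')}, i.e. abcd and acde

# Step 2: pruning — drop candidates that have a k-subset not in L3
C4 = [c for c in joined if all(s in L3 for s in combinations(c, k))]
# acde is removed because ('a','d','e') is not in L3, so C4 == [('a','b','c','d')]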

Page 32

State of Commercial/Research Practice

• Increasing use of data mining systems in the financial community, marketing sectors, and retailing

• Still major problems with large, dynamic sets of data (need better integration with the databases)
– Off-the-shelf data mining packages perform specialized learning on a small subset of data

• Most research emphasizes machine learning; little emphasis on the database side (especially text)

• People achieving results are not likely to share knowledge

Page 33

Related Techniques: OLAP (On-Line Analytical Processing)

• On-Line Analytical Processing tools provide the ability to pose statistical and summary queries interactively
– Traditional On-Line Transaction Processing (OLTP) databases may take minutes or even hours to answer these queries

• Advantages relative to data mining
– Can obtain a wider variety of results
– Generally faster to obtain results

• Disadvantages relative to data mining
– User must “ask the right question”
– Generally used to determine high-level statistical summaries, rather than specific relationships among instances

Page 34

Integration of Data Mining and Data Warehousing

• Coupling of data mining systems, DBMSs, and data warehouse systems
– No coupling, loose coupling, semi-tight coupling, tight coupling

• On-line analytical mining of data
– integration of mining and OLAP technologies

• Interactive mining of multi-level knowledge
– Necessity of mining knowledge and patterns at different levels of abstraction by drilling/rolling, pivoting, slicing/dicing, etc.

• Integration of multiple mining functions
– e.g., characterized classification, or first clustering and then association

Page 35

Why Data Preprocessing?

• Data in the real world is dirty
– incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
  • e.g., occupation=“”
– noisy: containing errors or outliers
  • e.g., Salary=“-10”
– inconsistent: containing discrepancies in codes or names
  • e.g., Age=“42”, Birthday=“03/07/1997”
  • e.g., Was rating “1,2,3”, now rating “A, B, C”
  • e.g., discrepancy between duplicate records

Page 36

Why Is Data Dirty?

• Incomplete data comes from
– “n/a” data values when collected
– different considerations between the time the data was collected and the time it is analyzed
– human/hardware/software problems

• Noisy data comes from the process of data
– collection
– entry
– transmission

• Inconsistent data comes from
– Different data sources
– Functional dependency violations

Page 37

Why Is Data Preprocessing Important?

• No quality data, no quality mining results!
– Quality decisions must be based on quality data
  • e.g., duplicate or missing data may cause incorrect or even misleading statistics
– A data warehouse needs consistent integration of quality data

• “Data extraction, cleaning, and transformation comprise the majority of the work of building a data warehouse.” —Bill Inmon (father of the data warehouse)

Page 38

Major Tasks in Data Preprocessing

• Data cleaning
– Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
• Data integration
– Integration of multiple databases, data cubes, or files
• Data transformation
– Normalization and aggregation
• Data reduction
– Obtains a reduced representation in volume but produces the same or similar analytical results
• Data discretization
– Part of data reduction but of particular importance, especially for numerical data

Page 39

Data Cleaning

• Importance
– “Data cleaning is one of the three biggest problems in data warehousing” —Ralph Kimball
– “Data cleaning is the number one problem in data warehousing” —DCI survey

• Data cleaning tasks
– Fill in missing values
– Identify outliers and smooth out noisy data
– Correct inconsistent data
– Resolve redundancy caused by data integration

Page 40

Missing Data

• Data is not always available
– E.g., many tuples have no recorded value for several attributes, such as customer income in sales data

• Missing data may be due to
– equipment malfunction
– data inconsistent with other recorded data and thus deleted
– data not entered due to misunderstanding
– certain data not considered important at the time of entry
– history or changes of the data not registered

• Missing data may need to be inferred.

Page 41

How to Handle Missing Data?

• Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably

• Fill in the missing value manually: tedious + infeasible?

• Fill it in automatically with (see the sketch below)
– a global constant: e.g., “unknown”, a new class?!
– the attribute mean
– the attribute mean for all samples belonging to the same class: smarter
– the most probable value: inference-based, such as a Bayesian formula or a decision tree
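A minimal Python sketch of two of the automatic fill-in strategies above (overall attribute mean, and attribute mean per class); the tiny income/risk table is an invented illustration, not data from the talk.

rows = [
    {"income": 40000, "class": "low-risk"},
    {"income": None,  "class": "low-risk"},
    {"income": 90000, "class": "high-risk"},
    {"income": 70000, "class": "high-risk"},
    {"income": None,  "class": "high-risk"},
]

def mean(values):
    present = [v for v in values if v is not None]
    return sum(present) / len(present)

# Strategy 1: the overall attribute mean
overall = mean(r["income"] for r in rows)

# Strategy 2 (smarter): the mean over samples belonging to the same class
class_means = {c: mean(r["income"] for r in rows if r["class"] == c)
               for c in {r["class"] for r in rows}}

for r in rows:
    if r["income"] is None:
        r["income"] = class_means[r["class"]]   # or simply: overall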

Page 42

Noisy Data

• Noise: random error or variance in a measured variable

• Incorrect attribute values may be due to
– faulty data collection instruments
– data entry problems
– data transmission problems
– technology limitations
– inconsistency in naming conventions

• Other data problems that require data cleaning
– duplicate records
– incomplete data
– inconsistent data

Page 43

How to Handle Noisy Data?

• Binning method:
– first sort the data and partition it into (equi-depth) bins
– then smooth by bin means, by bin medians, by bin boundaries, etc.

• Clustering
– detect and remove outliers

• Combined computer and human inspection
– detect suspicious values and have a human check them (e.g., deal with possible outliers)

• Regression
– smooth by fitting the data to regression functions

Page 44

Simple Discretization Methods: Binning

• Equal-width (distance) partitioning (see the sketch below):
– Divides the range into N intervals of equal size: a uniform grid
– If A and B are the lowest and highest values of the attribute, the width of the intervals is W = (B – A)/N
– The most straightforward approach, but outliers may dominate the presentation
– Skewed data is not handled well

• Equal-depth (frequency) partitioning:
– Divides the range into N intervals, each containing approximately the same number of samples
– Good data scaling
– Managing categorical attributes can be tricky
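A minimal Python sketch contrasting the two partitioning schemes above; the sample values and the choice of N = 3 are illustrative assumptions.

def equal_width_bins(values, n):
    # Intervals of equal size W = (B - A) / N over the attribute's range
    lo, hi = min(values), max(values)
    width = (hi - lo) / n
    bins = [[] for _ in range(n)]
    for v in values:
        idx = min(int((v - lo) / width), n - 1)   # clamp the maximum into the last bin
        bins[idx].append(v)
    return bins

def equal_depth_bins(values, n):
    # Intervals containing approximately the same number of samples
    ordered = sorted(values)
    size = len(ordered) / n
    return [ordered[round(i * size):round((i + 1) * size)] for i in range(n)]

data = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
print(equal_width_bins(data, 3))   # widths of 10; the last bin collects most values
print(equal_depth_bins(data, 3))   # four values per bin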

Page 45

Thank you!

Department of Computer Science
University of Wisconsin – Eau Claire
Eau Claire, WI 54701
[email protected]
715-836-2526

Michael R. Wick
Professor and Chair