An Introduction to WEKA Explorer
In part from: Yizhou Sun, 2008
What is WEKA?
- Waikato Environment for Knowledge Analysis
- A data mining/machine learning tool developed by the Department of Computer Science, University of Waikato, New Zealand
- The weka is also a bird found only in New Zealand
22/01/16
Download and Install WEKA
- Website: http://www.cs.waikato.ac.nz/~ml/weka/index.html
- Supports multiple platforms (written in Java): Windows, Mac OS X, and Linux
Main Features
- 49 data preprocessing tools
- 76 classification/regression algorithms
- 8 clustering algorithms
- 3 algorithms for finding association rules
- 15 attribute/subset evaluators + 10 search algorithms for feature selection
Main GUI
- Three graphical user interfaces:
  - “The Explorer” (exploratory data analysis)
  - “The Experimenter” (experimental environment)
  - “The KnowledgeFlow” (new process-model-inspired interface)
- Simple CLI: lets users execute commands from a terminal window without a graphical interface
Explorer
- The Explorer supports:
  - Preprocess data
  - Classification
  - Clustering
  - Association Rules
  - Attribute Selection
  - Data Visualization
- References and Resources
Explorer: pre-processing the data
- Data can be imported from a file in various formats: ARFF, CSV, C4.5, binary
- Data can also be read from a URL or from an SQL database (using JDBC)
- Pre-processing tools in WEKA are called “filters”
- WEKA contains filters for: discretization, normalization, resampling, attribute selection, transforming and combining attributes, …
@relation heart-disease-simplified
@attribute age numeric
@attribute sex { female, male }
@attribute chest_pain_type { typ_angina, asympt, non_anginal, atyp_angina }
@attribute cholesterol numeric
@attribute exercise_induced_angina { no, yes }
@attribute class { present, not_present }
@data
63,male,typ_angina,233,no,not_present
67,male,asympt,286,yes,present
67,male,asympt,229,yes,present
38,female,non_anginal,?,no,not_present
...

WEKA only deals with “flat” files
22/01/16 University of Waikato 10
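As a sketch of what an ARFF loader must handle, the toy parser below reads a reduced version of the header and data shown above; in practice WEKA's own converters do this. The parser and its shortened dataset are illustrative only. Note that `?` marks a missing value:

```python
# Toy ARFF reader, sketching what a loader must handle (use WEKA's own
# converters in practice). The '?' token marks a missing value.

def parse_arff(text):
    attributes, rows, in_data = [], [], False
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('%'):       # blank lines and comments
            continue
        lower = line.lower()
        if lower.startswith('@attribute'):
            attributes.append(line.split()[1])     # attribute name
        elif lower.startswith('@data'):
            in_data = True
        elif in_data:
            rows.append([None if v.strip() == '?' else v.strip()
                         for v in line.split(',')])
    return attributes, rows

arff = """@relation heart-disease-simplified
@attribute age numeric
@attribute sex { female, male }
@attribute class { present, not_present }
@data
63,male,not_present
38,female,?
"""
attrs, data = parse_arff(arff)
# attrs == ['age', 'sex', 'class']; data[1] == ['38', 'female', None]
```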
IRIS dataset
- 5 attributes, one of which is the class
- 3 classes: setosa, versicolor, virginica
Attribute data
- Min, max, and average value of attributes
- Distribution of values: the number of items for which a_i = v_j, where a_i ∈ A and v_j ∈ V
- Class: distribution of attribute values within each class
Filtering attributes
- Once the initial data has been selected and loaded, the user can refine the experimental data.
- The preprocess window offers optional filters to apply, and the user can select or remove attributes of the data set as needed to identify specific information.
- The user can modify the attribute selection and change the relationships among attributes by deselecting choices from the original data set.
- Many filtering options are available in the preprocessing window; the user can choose among them based on need and the type of data present.
Discretizes in 10 bins of equal frequency
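Equal-frequency binning chooses cut points so each bin receives about the same number of values; WEKA's unsupervised Discretize filter provides this behaviour. The standalone sketch below only illustrates the binning idea (the function names are ours, not WEKA's API):

```python
# Sketch of equal-frequency discretization: pick cut points so that
# each of the n_bins bins receives about the same number of values.
# (Illustrative only; WEKA's Discretize filter is the real tool.)

def equal_frequency_cuts(values, n_bins=10):
    """Cut points that split the sorted values into n_bins equal-count bins."""
    ordered = sorted(values)
    n = len(ordered)
    return [ordered[i * n // n_bins] for i in range(1, n_bins)]

def to_bin(value, cuts):
    """Index of the bin a numeric value falls into."""
    for i, cut in enumerate(cuts):
        if value < cut:
            return i
    return len(cuts)

values = list(range(100))
cuts = equal_frequency_cuts(values, n_bins=10)
bins = [to_bin(v, cuts) for v in values]
# Each of the 10 bins receives exactly 10 of the 100 values here
```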
Explorer: building “classifiers”
- “Classifiers” in WEKA are machine learning algorithms for predicting nominal or numeric quantities
- Implemented learning algorithms include: conjunctive rules, decision trees and lists, instance-based classifiers, support vector machines, multi-layer perceptrons, logistic regression, Bayes nets, …

Exploring the ConjunctiveRule learner
- We need a simple dataset with few attributes, so let’s select the weather dataset
- Select a classifier
- Select a training method
- Right-click to select parameters
- numAntds = number of antecedents; -1 = empty rule
- Select numAntds = 10
- Results are shown in the right window (which can be scrolled)
- The right-hand-side variable can be changed
- Performance data
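The ConjunctiveRule learner itself is more elaborate, but the flavor of single-rule learning can be sketched with the idea behind OneR, another simple rule-based learner shipped with WEKA, computed here by hand on the standard weather.nominal dataset:

```python
# Sketch of single-attribute rule learning in the spirit of WEKA's
# OneR classifier (ConjunctiveRule is more elaborate than this).
# Dataset: the standard weather.nominal data distributed with WEKA.
from collections import Counter

ATTRS = ["outlook", "temperature", "humidity", "windy"]
DATA = [  # (outlook, temperature, humidity, windy) -> play
    (("sunny", "hot", "high", "false"), "no"),
    (("sunny", "hot", "high", "true"), "no"),
    (("overcast", "hot", "high", "false"), "yes"),
    (("rainy", "mild", "high", "false"), "yes"),
    (("rainy", "cool", "normal", "false"), "yes"),
    (("rainy", "cool", "normal", "true"), "no"),
    (("overcast", "cool", "normal", "true"), "yes"),
    (("sunny", "mild", "high", "false"), "no"),
    (("sunny", "cool", "normal", "false"), "yes"),
    (("rainy", "mild", "normal", "false"), "yes"),
    (("sunny", "mild", "normal", "true"), "yes"),
    (("overcast", "mild", "high", "true"), "yes"),
    (("overcast", "hot", "normal", "false"), "yes"),
    (("rainy", "mild", "high", "true"), "no"),
]

def one_r(attrs, data):
    """For each attribute, build the rule 'value -> majority class' and
    keep the attribute whose rule makes the fewest training errors."""
    best = None
    for i, attr in enumerate(attrs):
        by_value = {}
        for x, y in data:
            by_value.setdefault(x[i], Counter())[y] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        errors = sum(y != rule[x[i]] for x, y in data)
        if best is None or errors < best[2]:
            best = (attr, rule, errors)
    return best

attr, rule, errors = one_r(ATTRS, DATA)
# On this data the best single attribute is "outlook" (4 errors, 10/14 correct)
```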
Decision Trees with WEKA
Explorer: clustering data
- WEKA contains “clusterers” for finding groups of similar instances in a dataset
- Implemented schemes: k-Means, EM, Cobweb, X-means, FarthestFirst
- Clusters can be visualized and compared to “true” clusters (if given)
- Evaluation is based on log-likelihood if the clustering scheme produces a probability distribution
The K-Means Clustering Method
- Given k, the k-means algorithm proceeds in four steps:
  1. Partition the objects into k nonempty subsets
  2. Compute seed points as the centroids of the clusters of the current partition (the centroid is the center, i.e., the mean point, of the cluster)
  3. Assign each object to the cluster with the nearest seed point
  4. Go back to step 2; stop when no assignment changes
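The four steps above can be sketched as follows; this is a minimal 1-D illustration with a deterministic round-robin initial partition, not WEKA's SimpleKMeans:

```python
# Minimal 1-D sketch of the four k-means steps (WEKA's SimpleKMeans
# is the real implementation). Assumes no cluster ever becomes empty.

def k_means(points, k):
    # Step 1: partition the objects into k nonempty subsets (round-robin)
    clusters = [points[i::k] for i in range(k)]
    while True:
        # Step 2: compute seed points as the centroids (mean of each cluster)
        centroids = [sum(c) / len(c) for c in clusters]
        # Step 3: assign each object to the cluster with the nearest centroid
        new_clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centroids[j]))
            new_clusters[nearest].append(p)
        # Step 4: go back to step 2; stop when no assignment changes
        if new_clusters == clusters:
            return centroids, clusters
        clusters = new_clusters

points = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]
centroids, clusters = k_means(points, k=2)
# Converges to two groups, around 1.5 and 10.5
```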
Right-click: visualize cluster assignments
Explorer: finding associations
- WEKA contains an implementation of the Apriori algorithm for learning association rules
- It works only with discrete data
- It can identify statistical dependencies between groups of attributes:
  - milk, butter ⇒ bread, eggs (with confidence 0.9 and support 2000)
- Apriori can compute all rules that have a given minimum support and exceed a given confidence
Basic Concepts: Frequent Patterns
- Itemset: a set of one or more items
- k-itemset: X = {x1, …, xk}
- (Absolute) support, or support count, of X: the frequency of occurrence of the itemset X
- (Relative) support, s: the fraction of transactions that contain X (i.e., the probability that a transaction contains X)
- An itemset X is frequent if X’s support is no less than a minsup threshold

(Venn diagram: customers who buy beer, customers who buy diapers, and customers who buy both)

Tid | Items bought
----|-------------
10  | Beer, Nuts, Diaper
20  | Beer, Coffee, Diaper
30  | Beer, Diaper, Eggs
40  | Nuts, Eggs, Milk
50  | Nuts, Coffee, Diaper, Eggs, Milk
Basic Concepts: Association Rules
- Find all the rules X ⇒ Y with minimum support and confidence
  - Support, s: the probability that a transaction contains X ∪ Y
  - Confidence, c: the conditional probability that a transaction containing X also contains Y
- Let minsup = 50%, minconf = 50%
- Frequent patterns: Beer:3, Nuts:3, Diaper:4, Eggs:3, {Beer, Diaper}:3

Tid | Items bought
----|-------------
10  | Beer, Nuts, Diaper
20  | Beer, Coffee, Diaper
30  | Beer, Diaper, Eggs
40  | Nuts, Eggs, Milk
50  | Nuts, Coffee, Diaper, Eggs, Milk

- Association rules (among many more):
  - Beer ⇒ Diaper (support 60%, confidence 100%)
  - Diaper ⇒ Beer (support 60%, confidence 75%)
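These support and confidence figures can be recomputed directly from the transaction table:

```python
# Recomputing the support and confidence figures for the table above.
transactions = {
    10: {"Beer", "Nuts", "Diaper"},
    20: {"Beer", "Coffee", "Diaper"},
    30: {"Beer", "Diaper", "Eggs"},
    40: {"Nuts", "Eggs", "Milk"},
    50: {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"},
}

def support_count(itemset):
    """Absolute support: number of transactions containing the itemset."""
    return sum(itemset <= t for t in transactions.values())

def support(itemset):
    """Relative support: fraction of transactions containing the itemset."""
    return support_count(itemset) / len(transactions)

def confidence(lhs, rhs):
    """Conditional probability that a transaction with lhs also has rhs."""
    return support_count(lhs | rhs) / support_count(lhs)

# Beer => Diaper: support 60%, confidence 100%
# Diaper => Beer: support 60%, confidence 75%
s = support({"Beer", "Diaper"})
c1 = confidence({"Beer"}, {"Diaper"})
c2 = confidence({"Diaper"}, {"Beer"})
```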
1. adoption-of-the-budget-resolution=y physician-fee-freeze=n 219 ==> Class=democrat 219 conf:(1)
2. adoption-of-the-budget-resolution=y physician-fee-freeze=n aid-to-nicaraguan-contras=y 198 ==> Class=democrat 198 conf:(1)
3. physician-fee-freeze=n aid-to-nicaraguan-contras=y 211 ==> Class=democrat 210 conf:(1)
etc.
Explorer: attribute selection
- This panel can be used to investigate which (subsets of) attributes are the most predictive
- Attribute selection methods have two parts:
  - A search method: best-first, forward selection, random, exhaustive, genetic algorithm, ranking
  - An evaluation method: correlation-based, wrapper, information gain, chi-squared, …
- Very flexible: WEKA allows (almost) arbitrary combinations of these two
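As an illustration of one evaluation method from the list, information gain can be computed by hand; the sketch below ranks the attributes of WEKA's standard weather.nominal data:

```python
# Ranking attributes by information gain (one of the evaluation
# methods above), computed by hand on WEKA's weather.nominal data.
from math import log2
from collections import Counter

ATTRS = ["outlook", "temperature", "humidity", "windy"]
DATA = [  # (outlook, temperature, humidity, windy) -> play
    (("sunny", "hot", "high", "false"), "no"),
    (("sunny", "hot", "high", "true"), "no"),
    (("overcast", "hot", "high", "false"), "yes"),
    (("rainy", "mild", "high", "false"), "yes"),
    (("rainy", "cool", "normal", "false"), "yes"),
    (("rainy", "cool", "normal", "true"), "no"),
    (("overcast", "cool", "normal", "true"), "yes"),
    (("sunny", "mild", "high", "false"), "no"),
    (("sunny", "cool", "normal", "false"), "yes"),
    (("rainy", "mild", "normal", "false"), "yes"),
    (("sunny", "mild", "normal", "true"), "yes"),
    (("overcast", "mild", "high", "true"), "yes"),
    (("overcast", "hot", "normal", "false"), "yes"),
    (("rainy", "mild", "high", "true"), "no"),
]

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(i):
    """Entropy reduction obtained by splitting on attribute i."""
    base = entropy([y for _, y in DATA])
    split = 0.0
    for value in {x[i] for x, _ in DATA}:
        subset = [y for x, y in DATA if x[i] == value]
        split += len(subset) / len(DATA) * entropy(subset)
    return base - split

gains = {a: info_gain(i) for i, a in enumerate(ATTRS)}
ranking = sorted(gains, key=gains.get, reverse=True)
# "outlook" comes out on top (gain ~0.247 bits)
```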
Explorer: data visualization
- Visualization is very useful in practice: e.g., it helps to determine the difficulty of the learning problem
- WEKA can visualize single attributes (1-d) and pairs of attributes (2-d)
  - To do: rotating 3-d visualizations (Xgobi-style)
- Color-coded class values
- A “Jitter” option to deal with nominal attributes (and to detect “hidden” data points)
- A “Zoom-in” function
click on a cell
References and Resources
- WEKA website: http://www.cs.waikato.ac.nz/~ml/weka/index.html
- WEKA tutorials:
  - Machine Learning with WEKA: a presentation demonstrating all graphical user interfaces (GUIs) in Weka
  - A presentation explaining how to use Weka for exploratory data mining
- WEKA Data Mining Book: Ian H. Witten and Eibe Frank, Data Mining: Practical Machine Learning Tools and Techniques (Second Edition)
- WEKA Wiki: http://weka.sourceforge.net/wiki/index.php/Main_Page
- Others: Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques, 2nd ed.