Nominal quantities
Values are distinct symbols
Values themselves serve only as labels or names (“nominal” comes from the Latin word for “name”)
Example: attribute “outlook” from weather data; values: “sunny”, “overcast”, and “rainy”
No relation is implied among nominal values (no ordering or distance measure)
Only equality tests can be performed
Ordinal quantities
Impose order on values, but no distance between values is defined
Example: attribute “temperature” in weather data; values: “hot” > “mild” > “cool”
Note: addition and subtraction don’t make sense
Example rule: temperature < hot → play = yes
The distinction between nominal and ordinal is not always clear (e.g. attribute “outlook”)
1. Data Transformation
Numerical → Nominal. Two kinds:
Supervised. Example: weather data (temperature attribute) with the Play attribute as class
Unsupervised. Example: divide into three ranges (small, mid, large):
4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
Discretization
Three types of attributes:
Nominal — values from an unordered set
Ordinal — values from an ordered set
Continuous — real numbers
Discretization: divide the range of a continuous attribute into intervals
Some classification algorithms only accept categorical attributes
Reduce data size by discretization
Prepare for further analysis
Discretization and concept hierarchy generation for numeric data
Binning (see sections before)
Histogram analysis (see sections before)
Clustering analysis (see sections before)
Entropy-based discretization
Segmentation by natural partitioning
Simple Discretization Methods: Binning
Equal-width (distance) partitioning:
Divides the range into N intervals of equal size (a uniform grid)
If A and B are the lowest and highest values of the attribute, the interval width is W = (B − A)/N
The most straightforward method, but outliers may dominate the result: skewed data is not handled well
Equal-depth (frequency) partitioning:
Divides the range into N intervals, each containing approximately the same number of samples
Good data scaling, but managing categorical attributes can be tricky
A sketch of both schemes follows below.
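The following Python sketch (not from the original slides; function names are illustrative) contrasts the two schemes on the price data used on the next slide:

```python
def equal_width_bins(values, n):
    """Split the value range [A, B] into n intervals of width W = (B - A) / n."""
    a, b = min(values), max(values)
    w = (b - a) / n
    bins = [[] for _ in range(n)]
    for v in values:
        # Clamp the maximum value into the last bin.
        i = min(int((v - a) / w), n - 1)
        bins[i].append(v)
    return bins

def equal_depth_bins(values, n):
    """Split the sorted values into n bins with (approximately) the same count."""
    s = sorted(values)
    size = len(s) // n
    return [s[i * size:(i + 1) * size] for i in range(n)]

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
print(equal_width_bins(prices, 3))  # W = (34 - 4) / 3 = 10; last bin gets 6 values
print(equal_depth_bins(prices, 3))  # four values per bin
```

Note how equal-width bins end up unbalanced on this data (6 of 12 values land in the top bin), which is exactly the skew problem mentioned above.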
Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
Partition into 3 (equi-depth) bins:
Bin 1: 4, 8, 9, 15
Bin 2: 21, 21, 24, 25
Bin 3: 26, 28, 29, 34
Smoothing by bin means:
Bin 1: 9, 9, 9, 9
Bin 2: 23, 23, 23, 23
Bin 3: 29, 29, 29, 29
Smoothing by bin boundaries:
Bin 1: 4, 4, 4, 15
Bin 2: 21, 21, 25, 25
Bin 3: 26, 26, 26, 34
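A minimal Python sketch of the two smoothing schemes, assuming the equi-depth bins above (helper names are illustrative):

```python
def smooth_by_means(bins):
    """Replace each value by its bin's (rounded) mean."""
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    """Replace each value by the closer of its bin's min or max boundary."""
    out = []
    for b in bins:
        lo, hi = min(b), max(b)
        out.append([lo if v - lo <= hi - v else hi for v in b])
    return out

bins = [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(smooth_by_means(bins))       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins))  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```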
Histograms
A popular data reduction technique
Divide data into buckets and store average (sum) for each bucket
Can be constructed optimally in one dimension using dynamic programming
Related to quantization problems.
[Figure: equi-width histogram of prices (x-axis: values from 10,000 to 100,000; y-axis: counts from 0 to 40)]
Entropy-Based Discretization
Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is
$$E(S,T) = \frac{|S_1|}{|S|}\,Ent(S_1) + \frac{|S_2|}{|S|}\,Ent(S_2)$$
The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization.
The process is recursively applied to the partitions obtained until some stopping criterion is met, e.g. the information gain falls below a threshold:
$$Ent(S) - E(T,S) < \delta$$
Experiments show that it may reduce data size and improve classification accuracy
How to Calculate ent(S)?
Given two classes Yes and No in a set S:
Let p1 be the proportion of Yes and p2 the proportion of No, with p1 + p2 = 100%
Entropy is: ent(S) = −p1·log(p1) − p2·log(p2)
When p1 = 1 and p2 = 0, ent(S) = 0; when p1 = p2 = 50%, ent(S) is maximal
See the TA’s tutorial notes for an example; a sketch follows below.
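A minimal Python sketch, assuming base-2 logarithms and the boundary-selection rule from the previous slide (function names and the tiny data set are illustrative):

```python
import math

def ent(s):
    """ent(S) = -p1*log2(p1) - p2*log2(p2) for a list of class labels."""
    n = len(s)
    out = 0.0
    for c in set(s):
        p = s.count(c) / n
        out -= p * math.log2(p)
    return out

def best_boundary(values, labels):
    """Pick T minimizing E(S,T) = |S1|/|S|*ent(S1) + |S2|/|S|*ent(S2)."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_t, best_e = None, float("inf")
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # boundaries only between distinct values
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        s1 = [c for _, c in pairs[:i]]
        s2 = [c for _, c in pairs[i:]]
        e = len(s1) / n * ent(s1) + len(s2) / n * ent(s2)
        if e < best_e:
            best_t, best_e = t, e
    return best_t, best_e

print(ent(["yes"] * 4))    # p1 = 1: ent(S) = 0
print(ent(["yes", "no"]))  # p1 = p2 = 50%: ent(S) = 1, the maximum
print(best_boundary([64, 65, 70, 72], ["yes", "yes", "no", "no"]))  # (67.5, 0.0)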
Transformation: Normalization
min-max normalization:
$$v' = \frac{v - min_A}{max_A - min_A}\,(new\_max_A - new\_min_A) + new\_min_A$$
z-score normalization:
$$v' = \frac{v - mean_A}{stand\_dev_A}$$
normalization by decimal scaling:
$$v' = \frac{v}{10^j}$$ where j is the smallest integer such that $\max(|v'|) < 1$
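A minimal Python sketch of the three normalizations (the numeric examples are illustrative, not from the slides):

```python
def min_max(v, min_a, max_a, new_min, new_max):
    """v' = (v - min_A) / (max_A - min_A) * (new_max - new_min) + new_min"""
    return (v - min_a) / (max_a - min_a) * (new_max - new_min) + new_min

def z_score(v, mean_a, stand_dev_a):
    """v' = (v - mean_A) / stand_dev_A"""
    return (v - mean_a) / stand_dev_a

def decimal_scaling(values):
    """v' = v / 10^j for the smallest integer j with max(|v'|) < 1."""
    j = 0
    while max(abs(v) for v in values) / 10 ** j >= 1:
        j += 1
    return [v / 10 ** j for v in values]

print(min_max(73600, 12000, 98000, 0.0, 1.0))  # ~0.716
print(z_score(73600, 54000, 16000))            # 1.225
print(decimal_scaling([-986, 917]))            # j = 3: [-0.986, 0.917]
```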
Transforming Ordinal to Boolean
A simple transformation allows coding an ordinal attribute with n values using n − 1 boolean attributes
Example: attribute “temperature”
Why? This coding preserves the ordering without introducing a distance concept between values (for a nominal attribute such as color, “Red” vs. “Blue” vs. “Green”, there is no ordering to encode at all)

Original data    Transformed data
Temperature      Temperature > cold    Temperature > medium
Cold             False                 False
Medium           True                  False
Hot              True                  True
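A minimal Python sketch of the coding, assuming the three-value temperature domain above (helper name is illustrative):

```python
ORDER = ["cold", "medium", "hot"]  # ordered domain of "temperature"

def ordinal_to_booleans(value, order=ORDER):
    """Return (value > order[0], value > order[1], ...): n - 1 boolean attributes."""
    rank = order.index(value)
    return tuple(rank > i for i in range(len(order) - 1))

for v in ORDER:
    print(v, ordinal_to_booleans(v))
# cold (False, False)
# medium (True, False)
# hot (True, True)
```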
2. Feature Selection
Feature selection (i.e., attribute subset selection):
Select a minimum set of features such that the probability distribution of different classes given the values for those features is as close as possible to the original distribution given the values of all features
Reduces the number of attributes in the discovered patterns, making them easier to understand
Heuristic methods (due to the exponential number of choices):
step-wise forward selection
step-wise backward elimination
combining forward selection and backward elimination
decision-tree induction
Feature Selection (Example)
Which subset of features would you select?
ID  Outlook  Temperature  Humidity  Windy  Play
 1    100        40           90      0     -1
 2    100        40           90      1     -1
 3     50        40           90      0      1
 4     10        30           90      0      1
 5     10        15           70      0      1
 6     10        15           70      1     -1
 7     50        15           70      1      1
 8    100        30           90      0     -1
 9    100        15           70      0      1
10     10        30           70      0      1
11    100        30           70      1      1
12     50        30           90      1      1
13     50        40           70      0      1
14     10        30           90      1     -1
Some Essential Feature Selection Techniques
Decision Tree Induction
Chi-squared Tests
Principal Component Analysis
Linear Regression Analysis
Example of Decision Tree Induction
Initial attribute set: {A1, A2, A3, A4, A5, A6}

            A4?
          /     \
        A1?      A6?
       /    \   /    \
 Class 1 Class 2 Class 1 Class 2

=> Reduced attribute set: {A1, A4, A6}
Chi-Squared Test (http://helios.bto.ed.ac.uk/bto/statistics/tress9.html)
Suppose that the ratio of male to female students in the Science Faculty is exactly 1:1,
but in the Honours class over the past ten years there have been 80 females and 40 males.
Question: Is this a significant departure from the (1:1) expectation?
                      Female   Male   Total
Observed numbers (O)    80      40     120
Expected numbers (E)    60      60     120
O − E                   20     −20       0
(O − E)²               400     400
(O − E)² / E           6.67    6.67   13.34 = X²
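A minimal Python sketch reproducing the statistic (with 1 degree of freedom, X² values above 3.84 are significant at the 0.05 level, so this is indeed a significant departure from the 1:1 expectation):

```python
observed = {"female": 80, "male": 40}
expected = {"female": 60, "male": 60}  # 1:1 ratio over 120 students

# X^2 = sum over cells of (O - E)^2 / E
x2 = sum((observed[k] - expected[k]) ** 2 / expected[k] for k in observed)
print(round(x2, 2))  # 13.33 (the slide shows 13.34 from rounding each term to 6.67)
```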
Principal Component Analysis

[Figure: data points in the original axes X1, X2 with the principal component axes Y1, Y2 overlaid]
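Since the original figure cannot be reproduced here, a minimal sketch of the idea, assuming numpy is available (all names and the generated data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2)) @ np.array([[2.0, 0.5], [0.5, 1.0]])  # correlated 2-D data

centered = x - x.mean(axis=0)
# Rows of vt are the principal directions (eigenvectors of the covariance matrix).
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt.T  # coordinates in the new (Y1, Y2) system

print(np.cov(projected.T).round(2))  # ~diagonal: the components are uncorrelated
```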
Regression
[Figure: scatter plot of Height (y) vs. Age (x) with fitted line y = x + 1; the observed value Y1 at X1 is mapped to the predicted value Y1’ on the line]
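A minimal least-squares sketch in the spirit of the figure (the data points are illustrative, chosen so the fit lands near y = x + 1):

```python
ages =    [1, 2, 3, 4, 5]
heights = [2.1, 2.9, 4.2, 4.8, 6.0]

# Ordinary least squares for y = beta * x + alpha.
n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(heights) / n
beta = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, heights)) / \
       sum((x - mean_x) ** 2 for x in ages)
alpha = mean_y - beta * mean_x
print(f"y = {beta:.2f}x + {alpha:.2f}")  # y = 0.97x + 1.09, close to y = x + 1

x1 = 3.5
print("predicted Y1' at X1 =", x1, ":", beta * x1 + alpha)
```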
5. Data Reduction with Sampling
Allows a mining algorithm to run in complexity that is potentially sub-linear in the size of the data
Choose a representative subset of the data
Simple random sampling may have very poor performance in the presence of skew
Develop adaptive sampling methods, e.g. stratified sampling:
Approximate the percentage of each class (or subpopulation of interest) in the overall database
Used in conjunction with skewed data (see the sketch below)
Note: sampling may not reduce database I/Os (a page at a time)
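A minimal Python sketch of stratified sampling on skewed class data (function and field names are illustrative):

```python
import random

def stratified_sample(records, key, fraction, seed=42):
    """Sample `fraction` of the records within each stratum given by `key`."""
    random.seed(seed)
    strata = {}
    for r in records:
        strata.setdefault(key(r), []).append(r)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))  # keep rare strata represented
        sample.extend(random.sample(group, k))
    return sample

data = [{"cls": "yes"}] * 90 + [{"cls": "no"}] * 10  # skewed 9:1
s = stratified_sample(data, key=lambda r: r["cls"], fraction=0.2)
print(len(s), sum(r["cls"] == "no" for r in s))  # 20 records, 2 from the "no" class
```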
Sampling
[Figure: raw data reduced by SRSWOR (simple random sample without replacement) and by SRSWR (simple random sample with replacement)]
Sampling
[Figure: raw data vs. a cluster/stratified sample]
Summary
Data preparation is a big issue for both warehousing and mining
Data preparation includes:
Data cleaning and data integration
Data reduction and feature selection
Discretization
Many methods have been developed, but this is still an active area of research