Page 1: Lecture slides week14-15

Distance Measures

• Remember that K-Nearest Neighbors are determined on the basis of some kind of “distance” between points.

• Two major classes of distance measure:
  1. Euclidean: based on the position of points in some k-dimensional space.
  2. Non-Euclidean: not related to position or space.

Page 2: Lecture slides week14-15

Scales of Measurement

• Applying a distance measure largely depends on the type of input data

• Major scales of measurement:

1. Nominal Data (aka Nominal Scale Variables)
   • Typically classification data, e.g. m/f
   • No ordering, e.g. it makes no sense to state that M > F
   • Binary variables are a special case of nominal scale variables.

2. Ordinal Data (aka Ordinal Scale)
   • Ordered, but differences between values are not important
   • e.g., political parties on a left-to-right spectrum given labels 0, 1, 2
   • e.g., Likert scales: rank on a scale of 1..5 your degree of satisfaction
   • e.g., restaurant ratings

Page 3: Lecture slides week14-15

Scales of Measurement

• Applying a distance function largely depends on the type of input data

• Major scales of measurement:

3. Numeric-type Data (aka interval-scaled)
   • Ordered and equal intervals; measured on a linear scale
   • Differences make sense
   • e.g., temperature (C, F), height, weight, age, date

Page 4: Lecture slides week14-15

Scales of Measurement

• Only certain operations can be performed on certain scales of measurement:

  Nominal Scale:   1. Equality   2. Count
  Ordinal Scale:   3. Rank (cannot quantify the difference)
  Interval Scale:  4. Quantify the difference

Page 5: Lecture slides week14-15

Axioms of a Distance Measure

• d is a distance measure if it is a function from pairs of points to the reals such that:
  1. d(x,x) = 0
  2. d(x,y) = d(y,x)
  3. d(x,y) > 0 for x ≠ y

Page 6: Lecture slides week14-15

Some Euclidean Distances

• L2 norm (also called the Euclidean distance):
  – The most common notion of “distance.”

• L1 norm (also called the Manhattan distance):
  – The distance if you had to travel along coordinates only.

  L2: d(i, j) = sqrt( |x_i1 - x_j1|^2 + |x_i2 - x_j2|^2 + ... + |x_ip - x_jp|^2 )

  L1: d(i, j) = |x_i1 - x_j1| + |x_i2 - x_j2| + ... + |x_ip - x_jp|

Page 7: Lecture slides week14-15

Examples: L1 and L2 norms

x = (5,5), y = (9,8)

L2 norm: dist(x,y) = sqrt(4^2 + 3^2) = 5
L1 norm: dist(x,y) = 4 + 3 = 7

[Figure: the two points joined by a horizontal leg of 4, a vertical leg of 3, and an L2 distance of 5.]

Page 8: Lecture slides week14-15

Another Euclidean Distance

• L∞ norm: d(x,y) = the maximum of the absolute differences between x and y in any dimension.
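
A minimal Python sketch (my own, not part of the slides) that computes the L1, L2, and L∞ distances just defined, checked against the x = (5,5), y = (9,8) example above:

```python
import math

def l1_dist(x, y):
    """Manhattan distance: sum of absolute coordinate differences."""
    return sum(abs(a - b) for a, b in zip(x, y))

def l2_dist(x, y):
    """Euclidean distance: square root of the summed squared differences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def linf_dist(x, y):
    """L-infinity distance: the largest absolute coordinate difference."""
    return max(abs(a - b) for a, b in zip(x, y))

x, y = (5, 5), (9, 8)
print(l2_dist(x, y))    # 5.0
print(l1_dist(x, y))    # 7
print(linf_dist(x, y))  # 4
```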

Page 9: Lecture slides week14-15

Non-Euclidean Distances

• Jaccard measure for binary vectors

• Cosine measure = angle between vectors from the origin to the points in question.

• Edit distance = number of inserts and deletes to change one string into another.

Page 10: Lecture slides week14-15

Jaccard Measure

• A note about binary variables first:
  – Symmetric binary variable:
    • Both states are equally valuable and carry the same weight; there is no preference on which outcome should be coded as 0 or 1.
    • Like “gender” having the states male and female.
  – Asymmetric binary variable:
    • The outcomes of the states are not equally important, such as the positive and negative outcomes of a disease test.
    • We should code the rarer one as 1 (e.g., HIV positive) and the other as 0 (HIV negative).
  – Given two asymmetric binary variables, the agreement of two 1s (a positive match) is then considered more important than that of two 0s (a negative match).

Page 11: Lecture slides week14-15

Jaccard Measure

• A contingency table for binary data (each cell counts the attributes of objects i and j):

                     Object j
                     1        0       sum
  Object i    1      a        b       a+b
              0      c        d       c+d
             sum    a+c      b+d       p

• Simple matching coefficient (invariant, if the binary variable is symmetric):

  d(i, j) = (b + c) / (a + b + c + d)

• Jaccard coefficient (noninvariant if the binary variable is asymmetric):

  d(i, j) = (b + c) / (a + b + c)

Page 12: Lecture slides week14-15

Jaccard Measure Example

• Example:
  – All attributes are asymmetric binary.
  – Let the values Y and P be set to 1, and the value N be set to 0.

  d(i, j) = (b + c) / (a + b + c)

  Name   Fever   Cough   Test-1   Test-2   Test-3   Test-4
  Jack     Y       N       P        N        N        N
  Mary     Y       N       P        N        P        N
  Jim      Y       P       N        N        N        N

  d(jack, mary) = (0 + 1) / (2 + 0 + 1) = 0.33
  d(jack, jim)  = (1 + 1) / (1 + 1 + 1) = 0.67
  d(jim, mary)  = (1 + 2) / (1 + 1 + 2) = 0.75
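
A short Python check (my own illustration, assuming the Y/P → 1, N → 0 coding above) that reproduces the three dissimilarities:

```python
def jaccard_dissimilarity(x, y):
    """d(i,j) = (b + c) / (a + b + c) for asymmetric binary vectors."""
    a = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 1)   # positive matches
    b = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)
    c = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 1)
    return (b + c) / (a + b + c)

# Fever, Cough, Test-1 .. Test-4, with Y and P coded as 1 and N coded as 0.
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]

print(round(jaccard_dissimilarity(jack, mary), 2))   # 0.33
print(round(jaccard_dissimilarity(jack, jim), 2))    # 0.67
print(round(jaccard_dissimilarity(jim, mary), 2))    # 0.75
```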

Page 13: Lecture slides week14-15

Cosine Measure

• Think of a point as a vector from the origin (0,0,…,0) to its location.

• Two points’ vectors make an angle, whose cosine is the normalized dot-product of the vectors:

  dist(p1, p2) = θ = arccos( p1·p2 / (|p1| |p2|) )

  – Example: p1·p2 = 2; |p1| = |p2| = √3.
  – cos(θ) = 2/3; θ is about 48 degrees.
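
A small Python sketch of the cosine measure (my own; the binary vectors p1 and p2 are hypothetical, chosen only so that p1·p2 = 2 and |p1| = |p2| = √3, matching the numbers on the slide):

```python
import math

def cosine_distance(p1, p2):
    """Angle (in degrees) between the vectors from the origin to p1 and p2."""
    dot = sum(a * b for a, b in zip(p1, p2))
    norm1 = math.sqrt(sum(a * a for a in p1))
    norm2 = math.sqrt(sum(b * b for b in p2))
    return math.degrees(math.acos(dot / (norm1 * norm2)))

# Hypothetical binary vectors with p1·p2 = 2 and |p1| = |p2| = sqrt(3).
p1 = (0, 0, 1, 1, 1)
p2 = (1, 0, 0, 1, 1)
print(cosine_distance(p1, p2))   # about 48.19 degrees
```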

Page 14: Lecture slides week14-15

Edit Distance

• The edit distance of two strings is the number of inserts and deletes of characters needed to turn one into the other.

• Equivalently, d(x,y) = |x| + |y| - 2|LCS(x,y)|.
  – LCS = longest common subsequence = the longest string obtained both by deleting from x and by deleting from y.

Page 15: Lecture slides week14-15

Example

• x = abcde; y = bcduve.
• LCS(x,y) = bcde.
• d(x,y) = |x| + |y| - 2|LCS(x,y)| = 5 + 6 - 2*4 = 3.

• What is left?
• Normalize it to the range [0, 1]. We will study normalization formulas in the next lecture.
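
A compact Python sketch (my own) of the LCS-based edit distance just described, checked against this example:

```python
def lcs_length(x, y):
    """Length of the longest common subsequence, via dynamic programming."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def edit_distance(x, y):
    """Insert/delete-only edit distance: |x| + |y| - 2|LCS(x, y)|."""
    return len(x) + len(y) - 2 * lcs_length(x, y)

x, y = "abcde", "bcduve"
print(lcs_length(x, y))     # 4  (LCS = "bcde")
print(edit_distance(x, y))  # 3
```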

Page 16: Lecture slides week14-15

Back to k-Nearest Neighbor (Pseudo-code)

• Missing-value imputation using k-NN.
• Input: Dataset (D), size of K

• for each record (x) with at least one missing value in D:
  – for each data object (y) in D:
    • compute the distance(x, y)
    • save the distance and y in the similarity array S
  – sort the array S in ascending order of distance
  – pick the top K data objects from S
• Impute the missing attribute value(s) of x on the basis of the known values of the selected neighbors (use the mean/median or mode), as in the sketch below.
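
A minimal Python sketch of the pseudo-code above (my own reading of it; it assumes numeric attributes, Euclidean distance over the attributes both records have present, and mean imputation):

```python
import math

def knn_impute(data, k):
    """Fill missing values (None) in each record using its k nearest neighbors.

    data: list of equal-length lists of numbers, with None marking a missing value.
    Distance is Euclidean over the attributes both records have present.
    """
    for x in data:
        if None not in x:
            continue
        sims = []                                   # the similarity array S
        for y in data:
            if y is x:
                continue
            shared = [(a, b) for a, b in zip(x, y)
                      if a is not None and b is not None]
            if not shared:
                continue
            dist = math.sqrt(sum((a - b) ** 2 for a, b in shared))
            sims.append((dist, y))
        sims.sort(key=lambda pair: pair[0])         # nearest neighbors first
        neighbors = [y for _, y in sims[:k]]        # pick the top K
        for i, value in enumerate(x):
            if value is None:
                known = [y[i] for y in neighbors if y[i] is not None]
                if known:
                    x[i] = sum(known) / len(known)  # mean imputation
    return data

data = [[1.0, 2.0, None], [1.1, 1.9, 3.0], [0.9, 2.1, 3.2], [5.0, 5.0, 9.0]]
print(knn_impute(data, k=2))   # the missing value is filled with ~3.1
```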

Page 17: Lecture slides week14-15

K-Nearest Neighbor Drawbacks

• The major drawbacks of this approach are:
  – Choosing an appropriate distance function.
  – Considering all attributes when attempting to retrieve similar examples.
  – Searching through the entire dataset to find instances of the same type.
  – Algorithm cost: ?

Page 18: Lecture slides week14-15

Noisy Data

• Noise: random error; the data is present but not correct.
  – Data transmission errors
  – Data entry problems

• Removing noise:
  – Data smoothing (rounding, averaging within a window).
  – Clustering/merging and detecting outliers.

• Data smoothing:
  – First sort the data and partition it into (equi-depth) bins.
  – Then smooth the values in each bin by bin means, bin medians, bin boundaries, etc.

Page 19: Lecture slides week14-15

Noisy Data (Binning Methods)

Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34

* Partition into (equi-depth) bins:
  - Bin 1: 4, 8, 9, 15
  - Bin 2: 21, 21, 24, 25
  - Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
  - Bin 1: 9, 9, 9, 9
  - Bin 2: 23, 23, 23, 23
  - Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
  - Bin 1: 4, 4, 4, 15
  - Bin 2: 21, 21, 25, 25
  - Bin 3: 26, 26, 26, 34
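
A short Python sketch (my own, with bin means rounded to whole dollars as on the slide) that reproduces the three bins and both smoothings:

```python
def equi_depth_bins(values, n_bins):
    """Sort the values and split them into bins of (roughly) equal size."""
    values = sorted(values)
    size = len(values) // n_bins
    return [values[i * size:(i + 1) * size] for i in range(n_bins)]

def smooth_by_means(bins):
    """Replace every value in a bin with the (rounded) bin mean."""
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    """Replace every value with the nearer of the bin's minimum and maximum."""
    return [[b[0] if v - b[0] <= b[-1] - v else b[-1] for v in b] for b in bins]

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
bins = equi_depth_bins(prices, 3)
print(bins)                       # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(smooth_by_means(bins))      # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins)) # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```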

Page 20: Lecture slides week14-15

Noisy Data (Clustering)

• Outliers may be detected by clustering, where similar values are organized into groups or “clusters”.

• Values that fall outside of the set of clusters may be considered outliers.

Page 21: Lecture slides week14-15

Data Discretization

• The task of attribute (feature) discretization techniques is to discretize the values of continuous features into a small number of intervals, where each interval is mapped to a discrete symbol.

• Advantages:
  – Simplified data description and easy-to-understand data and final data-mining results.
  – Only small, interesting rules are mined.
  – Processing time of the end results is decreased.
  – Accuracy of the end results is improved.

Page 22: Lecture slides week14-15

Effect of Continuous Data on Results Accuracy

  Records to classify:

  age    income   age   buys_computer
  <=30   medium    9    ?
  <=30   medium   11    ?
  <=30   medium   13    ?

  Training data:

  age    income   age   buys_computer
  <=30   medium    9    no
  <=30   medium   10    no
  <=30   medium   11    no
  <=30   medium   12    no

Data Mining

• If ‘age <= 30’ and income = ‘medium’ and age = ‘9’ then buys_computer = ‘no’

• If ‘age <= 30’ and income = ‘medium’ and age = ‘10’ then buys_computer = ‘no’

• If ‘age <= 30’ and income = ‘medium’ and age = ‘11’ then buys_computer = ‘no’

• If ‘age <= 30’ and income = ‘medium’ and age = ‘12’ then buys_computer = ‘no’

Discover only those rules whose support (frequency) is >= 1.

Because age = 13 never appears in the training dataset, no rule covers that record, and the accuracy of prediction decreases to “66.7%”.

Page 23: Lecture slides week14-15

Entropy-Based Discretization

• Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is

  E(S, T) = (|S1| / |S|) Ent(S1) + (|S2| / |S|) Ent(S2),   where Ent(S1) = - Σ_i p_i log2(p_i)

• where p_i is the probability of class i in S1, determined by dividing the number of samples of class i in S1 by the total number of samples in S1 (and similarly for S2).

Page 24: Lecture slides week14-15

Example 1

  ID      1   2   3   4   5   6   7   8   9
  Age    21  22  24  25  27  27  27  35  41
  Grade   F   F   P   F   P   P   P   P   P

• Let Grade be the class attribute. Use entropy-based discretization to divide the range of ages into discrete intervals.

• There are 6 possible boundaries: 21.5, 23, 24.5, 26, 31, and 38, e.g.
  (21 + 22) / 2 = 21.5
  (22 + 24) / 2 = 23

• Let us consider the boundary at T = 21.5:
  Let S1 = {21}
  Let S2 = {22, 24, 25, 27, 27, 27, 35, 41}

Page 25: Lecture slides week14-15

Example 1 (cont’)

  ID      1   2   3   4   5   6   7   8   9
  Age    21  22  24  25  27  27  27  35  41
  Grade   F   F   P   F   P   P   P   P   P

• The number of elements in S1 and S2 are: |S1| = 1, |S2| = 8.

• The entropy of S1 is

  Ent(S1) = -P(Grade=F) log2 P(Grade=F) - P(Grade=P) log2 P(Grade=P)
          = -(1) log2(1) - (0) log2(0) = 0

• The entropy of S2 is

  Ent(S2) = -P(Grade=F) log2 P(Grade=F) - P(Grade=P) log2 P(Grade=P)
          = -(2/8) log2(2/8) - (6/8) log2(6/8) ≈ 0.811
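
A quick Python check of these two entropies (my own verification, using the class fractions 1/1 and 0/1 for S1, and 2/8 and 6/8 for S2):

```python
import math

def ent(p_f, p_p):
    """Ent(S) = -P(F)·log2 P(F) - P(P)·log2 P(P), with 0·log2(0) taken as 0."""
    return 0.0 - sum(p * math.log2(p) for p in (p_f, p_p) if p > 0)

print(ent(1/1, 0/1))    # Ent(S1) = 0.0
print(ent(2/8, 6/8))    # Ent(S2) ≈ 0.811
```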

Page 26: Lecture slides week14-15

Example 1 (cont’)

• Hence, the entropy after partitioning at T = 21.5 is

  E(S, 21.5) = (|S1| / |S|) Ent(S1) + (|S2| / |S|) Ent(S2)
             = (1/9) Ent(S1) + (8/9) Ent(S2) = …

Page 27: Lecture slides week14-15

Example 1 (cont’)

• The entropies after partitioning for all the boundaries are:
  T = 21.5: E(S, 21.5)
  T = 23:   E(S, 23)
  ...
  T = 38:   E(S, 38)

• Select the boundary with the smallest entropy. Suppose the best is T = 23.

  ID      1   2   3   4   5   6   7   8   9
  Age    21  22  24  25  27  27  27  35  41
  Grade   F   F   P   F   P   P   P   P   P

• Now recursively apply entropy discretization to both partitions, as sketched below.
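
A compact Python sketch of the whole procedure (my own illustration; the helper names entropy and best_split are not from the slides). It scores every candidate boundary by E(S, T) and returns the one with the smallest entropy; recursing on the two partitions would simply repeat the same call on each side:

```python
import math

def entropy(labels):
    """Shannon entropy (base 2) of a list of class labels."""
    total = len(labels)
    return 0.0 - sum((labels.count(c) / total) * math.log2(labels.count(c) / total)
                     for c in set(labels))

def best_split(ages, grades):
    """Return (T, E(S, T)) for the boundary T with the smallest weighted entropy."""
    pairs = sorted(zip(ages, grades))
    ages = [a for a, _ in pairs]
    grades = [g for _, g in pairs]
    # Candidate boundaries: midpoints between distinct adjacent values.
    candidates = sorted({(a + b) / 2 for a, b in zip(ages, ages[1:]) if a != b})
    best = None
    for t in candidates:
        s1 = [g for a, g in zip(ages, grades) if a <= t]
        s2 = [g for a, g in zip(ages, grades) if a > t]
        e = (len(s1) / len(grades)) * entropy(s1) + (len(s2) / len(grades)) * entropy(s2)
        if best is None or e < best[1]:
            best = (t, e)
    return best

ages   = [21, 22, 24, 25, 27, 27, 27, 35, 41]
grades = ['F', 'F', 'P', 'F', 'P', 'P', 'P', 'P', 'P']
print(best_split(ages, grades))   # the boundary with the smallest E(S, T)
```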

Page 28: Lecture slides week14-15

References

– G. Batista and M. Monard, “A Study of K-Nearest Neighbour as an Imputation Method”, 2002. (I will place it in the course folder.)

– “CS345 --- Lecture Notes”, by Jeff D. Ullman, Stanford University. http://www-db.stanford.edu/~ullman/cs345-notes.html

– Vipin Kumar’s data mining course offered at the University of Minnesota.

– Official textbook slides of Jiawei Han and Micheline Kamber, “Data Mining: Concepts and Techniques”, Morgan Kaufmann Publishers, August 2000.