Page 1

Entity Tables, Relationship Tables

We Classify using any Table (as the Training Table) on any one of its columns, designated the class label column.

Medical Expert System: Using entity Training Table

Patients(PID, Symptom1, Symptom2, Symptom3, Disease) and class label Disease, we can classify new patients based on their symptoms using Nearest Neighbor or model-based (Decision Tree, Neural Network, SVM, SVD, etc.) classification.

Netflix Contest: Using relationship TrainingTable,

Rents(UID,MID,Rating,Date) and class label Rating, classify new (UID,MID,Date) tuples.

How (since there are no feature columns really)?

[ER diagrams of four example databases, each a pair of entity tables joined by a relationship table:

Netflix:       User(UID, CNAME, AGE) --Rents(TranID, Rating, Date)-- Movie(MID, Mname, Date)
Market Basket: Customer(C#, CNAME, AGE) --buys(TranID, Count, Date)-- Item(I#, Iname, Suppl, Date, Price)
Educational:   Student(S#, SNAME, GEN) --Enrollments(S#, C#, GR)-- Course(C#, CNAME, ST, TERM)
Medical:       Patients(PID, PNAME, Symptom1, Symptom2, Symptom3, Disease) --is in-- Ward(WID, Wname, Capacity)]

We Cluster on a Table also, but usually just to "prepare" a classification TrainingTable: either to move up a semantic hierarchy (e.g., in Items, cluster Price into $100 PriceRanges), or to determine the classes in the first place (each cluster is then declared a separate class).
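As a minimal illustration of that preparation step (the item data here is hypothetical), binning Price into $100 ranges in Python:

# Minimal sketch: lift Price up a semantic hierarchy by binning into $100 ranges.
def price_range(price, width=100):
    lo = int(price // width) * width
    return f"[{lo},{lo + width})"

items = [("i1", 250.0), ("i2", 99.5), ("i3", 310.0)]   # hypothetical items
prepared = [(iid, price_range(p)) for iid, p in items]
print(prepared)  # [('i1', '[200,300)'), ('i2', '[0,100)'), ('i3', '[300,400)')]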

We do Association Rule Mining (ARM) on relationships (e.g., the Market Basket Research; ARM on "buys" relationship).

We can let Near Neighbor Movies vote. What makes a movie, mid, "near" to MID? That mid is rated similarly to MID by users uid1, ..., uidk.

Here we use correlation for "near" instead of an actual distance, once again.

We can let Near Neighbor Users vote. What makes a user, uid, "near" to UID? That uid rates similarly to UID on movies mid1, ..., midk.

So we use correlation for "near" instead of distance. This is typical when classifying on a relationship table.
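A minimal sketch of correlation-as-nearness for user-based voting; the ratings dict, the Pearson measure, and k are illustrative assumptions, not the contest-winning method:

# Minimal sketch: use Pearson correlation over co-rated movies as the notion
# of "near" between users, then let the k most-correlated users vote a rating.
from math import sqrt

ratings = {  # hypothetical user -> {movie: rating}
    "u1": {"m1": 5, "m2": 3, "m3": 4},
    "u2": {"m1": 4, "m2": 2, "m3": 5},
    "u3": {"m1": 1, "m2": 5, "m3": 2},
}

def pearson(ra, rb):
    common = sorted(set(ra) & set(rb))          # co-rated movies only
    if len(common) < 2:
        return 0.0
    xs, ys = [ra[m] for m in common], [rb[m] for m in common]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs)) * sqrt(sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def predict(uid, mid, k=2):
    # "near" = most-correlated users who actually rated mid
    nbrs = sorted(((pearson(ratings[uid], ratings[u]), u)
                   for u in ratings if u != uid and mid in ratings[u]),
                  reverse=True)[:k]
    votes = [ratings[u][mid] for _, u in nbrs]
    return sum(votes) / len(votes) if votes else None

print(predict("u1", "m2"))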

Page 2

Key  a1 a2 a3 a4 a5 a6 a7 a8 a9 a10=C a11 a12 a13 a14 a15 a16 a17 a18 a19 a20
t12   1  0  1  0  0  0  1  1  0   1    0   1   1   0   1   1   0   0   0   1
t13   1  0  1  0  0  0  1  1  0   1    0   1   0   0   1   0   0   0   1   1
t15   1  0  1  0  0  0  1  1  0   1    0   1   0   1   0   0   1   1   0   0
t16   1  0  1  0  0  0  1  1  0   1    1   0   1   0   1   0   0   0   1   0
t21   0  1  1  0  1  1  0  0  0   1    1   0   1   0   0   0   1   1   0   1
t27   0  1  1  0  1  1  0  0  0   1    0   0   1   1   0   0   1   1   0   0
t31   0  1  0  0  1  0  0  0  1   1    1   0   1   0   0   0   1   1   0   1
t32   0  1  0  0  1  0  0  0  1   1    0   1   1   0   1   1   0   0   0   1
t33   0  1  0  0  1  0  0  0  1   1    0   1   0   0   1   0   0   0   1   1
t35   0  1  0  0  1  0  0  0  1   1    0   1   0   1   0   0   1   1   0   0
t51   0  1  0  1  0  0  1  1  0   0    1   0   1   0   0   0   1   1   0   1
t53   0  1  0  1  0  0  1  1  0   0    0   1   0   0   1   0   0   0   1   1
t55   0  1  0  1  0  0  1  1  0   0    0   1   0   1   0   0   1   1   0   0
t57   0  1  0  1  0  0  1  1  0   0    0   0   1   1   0   0   1   1   0   0
t61   1  0  1  0  1  0  0  0  1   0    1   0   1   0   0   0   1   1   0   1
t72   0  0  1  1  0  0  1  1  0   0    0   1   1   0   1   1   0   0   0   1
t75   0  0  1  1  0  0  1  1  0   0    0   1   0   1   0   0   1   1   0   0

WALK THRU a kNN classification example: 3NN CLASSIFICATION of an unclassified sample, a = (a5, a6, a11, a12, a13, a14) = (0, 0, 0, 0, 0, 0). Only these six feature attributes are used (plus the class label a10=C); distances are Hamming distances on the six bits, measured from (0,0,0,0,0,0).

Workspace for the 3 nearest neighbors so far, seeded with the first three training tuples:

Key  a5 a6 a10=C a11 a12 a13 a14  distance
t12   0  0   1    0   1   1   0      2
t13   0  0   1    0   1   0   0      1
t15   0  0   1    0   1   0   1      2

Scan the remaining tuples, replacing a workspace entry whenever a strictly closer tuple is found:

t16: distance=2, don't replace
t21: distance=4, don't replace
t27: distance=4, don't replace
t31: distance=3, don't replace
t32: distance=3, don't replace
t33: distance=2, don't replace
t35: distance=3, don't replace
t51: distance=2, don't replace
t53: distance=1, replace t15 (t53 = 0 0 | C=0 | 0 1 0 0, distance 1)
t55: distance=2, don't replace
t57: distance=2, don't replace
t61: distance=3, don't replace
t72: distance=2, don't replace
t75: distance=2, don't replace

Vote of the final 3NN set {t12, t13, t53}: C=1 gets 2 votes, C=0 gets 1. C=1 wins!
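A minimal Python sketch of this one-scan 3NN workspace (the table literal below transcribes the six feature bits and the class label from the training table above):

# Minimal sketch of the 3NN scan: seed the workspace with the first 3 tuples,
# then replace the current farthest entry whenever a closer tuple appears.
train = {  # key -> (a5, a6, a11, a12, a13, a14, class a10=C)
    "t12": (0,0,0,1,1,0, 1), "t13": (0,0,0,1,0,0, 1), "t15": (0,0,0,1,0,1, 1),
    "t16": (0,0,1,0,1,0, 1), "t21": (1,1,1,0,1,0, 1), "t27": (1,1,0,0,1,1, 1),
    "t31": (1,0,1,0,1,0, 1), "t32": (1,0,0,1,1,0, 1), "t33": (1,0,0,1,0,0, 1),
    "t35": (1,0,0,1,0,1, 1), "t51": (0,0,1,0,1,0, 0), "t53": (0,0,0,1,0,0, 0),
    "t55": (0,0,0,1,0,1, 0), "t57": (0,0,0,0,1,1, 0), "t61": (1,0,1,0,1,0, 0),
    "t72": (0,0,0,1,1,0, 0), "t75": (0,0,0,1,0,1, 0),
}
sample = (0, 0, 0, 0, 0, 0)

def hamming(t):  # Hamming distance of the six feature bits from the sample
    return sum(b != s for b, s in zip(t[:6], sample))

keys = list(train)
workspace = [(hamming(train[k]), k) for k in keys[:3]]          # seed with first 3
for k in keys[3:]:
    d = hamming(train[k])
    far = max(range(3), key=lambda i: (workspace[i][0], i))     # current farthest
    if d < workspace[far][0]:
        workspace[far] = (d, k)                                 # replace it

votes = [train[k][6] for _, k in workspace]
print(workspace, "->", "C=1 wins" if sum(votes) > 1 else "C=0 wins")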

Page 3

(The training table from Page 2 is repeated here for reference.)

WALK THRU of the required 2nd scan to find the Closed 3NN set. Does it change the vote?

3NN set after the 1st scan (unclassified sample: 0 0 0 0 0 0):

Key  a5 a6 a10=C a11 a12 a13 a14  distance
t12   0  0   1    0   1   1   0      2
t13   0  0   1    0   1   0   0      1
t53   0  0   0    0   1   0   0      1

The 2nd scan includes every tuple within the 3rd-nearest distance (here 2) that has not already voted:

t12: d=2, already voted
t13: d=1, already voted
t15: d=2, include it also
t16: d=2, include it also
t21: d=4, don't include
t27: d=4, don't include
t31: d=3, don't include
t32: d=3, don't include
t33: d=2, include it also
t35: d=3, don't include
t51: d=2, include it also
t53: d=1, already voted
t55: d=2, include it also
t57: d=2, include it also
t61: d=3, don't include
t72: d=2, include it also
t75: d=2, include it also

Vote after the 2nd scan (eleven voters): C=0 now gets 6 votes to C=1's 5. YES! C=0 wins now!
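A minimal sketch of the closed-3NN second scan, reusing train, sample, and hamming from the previous sketch:

# Minimal sketch of the closed 3NN vote: after the first scan fixes the
# 3rd-nearest distance (here 2), let EVERY tuple within that distance vote.
third_nearest_d = sorted(hamming(t) for t in train.values())[2]   # = 2 here
closed_nn = [k for k, t in train.items() if hamming(t) <= third_nearest_d]
tally = {0: 0, 1: 0}
for k in closed_nn:
    tally[train[k][6]] += 1
print(closed_nn)   # eleven voters: two at d=1, nine at d=2
print(tally)       # {0: 6, 1: 5} -> C=0 wins now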

Page 4

WALK THRU: C3NN using P-trees

The training table is stored vertically, one bit column (P-tree) per attribute, in key order t12 t13 t15 t16 t21 t27 t31 t32 t33 t35 t51 t53 t55 t57 t61 t72 t75:

a1    11110000000000100
a2    00001111111111000
a3    11111100000000111
a4    00000000001111011
a5    00001111110000100
a6    00001100000000000
a7    11110000001111011
a8    11110000001111011
a9    00000011110000100
a10=C 11111111110000000    (complement C' = 00000000001111111)
a11   00011010001000100
a12   11100001110110011
a13   10011111001001110
a14   00100100010011001
a15   11010001100100010
a16   10000001000000010
a17   00101110011011101
a18   00101110011011101
a19   01010000100100000
a20   11001011101100110

First let all training points at distance=0 vote, then distance=1, then distance=2, ..., until 3 votes are in. For distance=0 (exact matches), construct the P-tree Ps by ANDing, over the six feature attributes, each attribute's P-tree where the sample bit is 1 and its complement where the sample bit is 0; then AND Ps with PC and PC' to compute the vote. (In the original slide, black denotes complement, red denotes uncomplemented.) Since s = (0,0,0,0,0,0), every factor is a complement:

Ps = a5' & a6' & a11' & a12' & a13' & a14'

a5'  11110000001111011
a6'  11110011111111111
a11' 11100101110111011
a12' 00011110001001100
a13' 01100000110110001
a14' 11011011101100110
Ps   00000000000000000

No neighbors at distance=0.
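A minimal sketch of the distance=0 step, representing each 17-bit P-tree as a Python integer (leftmost key, t12, is the most significant bit); the variable names are illustrative:

# Minimal sketch: the six feature P-trees and the class P-tree as 17-bit ints.
N = 17
MASK = (1 << N) - 1
P = {
    5:  int("00001111110000100", 2),
    6:  int("00001100000000000", 2),
    11: int("00011010001000100", 2),
    12: int("11100001110110011", 2),
    13: int("10011111001001110", 2),
    14: int("00100100010011001", 2),
}
PC = int("11111111110000000", 2)      # class P-tree, a10=C

sample = {5: 0, 6: 0, 11: 0, 12: 0, 13: 0, 14: 0}

# AND each P-tree where the sample bit is 1, its complement where it is 0.
Ps = MASK
for i, s_bit in sample.items():
    Ps &= P[i] if s_bit else (~P[i] & MASK)

print(f"{Ps:017b}")                   # 00000000000000000 -> no exact matches
print(bin(Ps & PC).count("1"),        # votes for C=1
      bin(Ps & ~PC & MASK).count("1"))  # votes for C=0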

Page 5

WALK THRU: C3NNC distance=1 nbrs:

(The attribute P-trees a1 through a20 and the class P-trees C = 11111111110000000, C' = 00000000001111111 are as listed on Page 4.)

Construct the P-tree of training points at distance exactly 1 from s:

PD(s,1) = OR over i in {5,6,11,12,13,14} of Pi,  where Pi = P(|si-ti|=1, and |sj-tj|=0 for all j in {5,6,11,12,13,14}-{i}).

Since every sample bit is 0, each Pi is attribute i's P-tree uncomplemented, ANDed with the complements of the other five feature P-trees. ORing the six results P5, P6, P11, P12, P13, P14:

PD(s,1) = 01000000000100000

i.e., exactly t13 and t53 are at distance 1. ANDing PD(s,1) with PC and PC' tallies one vote for C=1 (t13) and one vote for C=0 (t53).
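A minimal sketch generalizing the construction to distance exactly d, reusing P and MASK from the previous sketch; it covers both PD(s,1) here and PD(s,2) on the next page, and assumes the all-zero sample:

# Minimal sketch: P-tree of points at Hamming distance exactly d from the
# all-zero sample, as an OR over all d-subsets of "flipped" feature attributes.
from itertools import combinations

def PD(d):
    dims = list(P)                       # [5, 6, 11, 12, 13, 14]
    result = 0
    for flipped in combinations(dims, d):
        term = MASK
        for i in dims:
            # uncomplemented where |si - ti| = 1, complemented where it is 0
            term &= P[i] if i in flipped else (~P[i] & MASK)
        result |= term
    return result

print(f"{PD(1):017b}")   # 01000000000100000 -> t13, t53
print(f"{PD(2):017b}")   # the nine distance=2 neighbors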

Page 6

WALK THRU: C3NNC distance=2 nbrs:

(The keys and the relevant P-trees a5, a6, a10=C, a11, a12, a13, a14 are as listed on Page 4.)

Construct the distance=2 P-tree as the OR of all double-dim interval P-trees:

PD(s,2) = OR over pairs i,j in {5,6,11,12,13,14} of Pi,j,  where Pi,j = P(|si-ti|=1, |sj-tj|=1, and |sk-tk|=0 for all k in {5,6,11,12,13,14}-{i,j}).

There are 15 such pair P-trees, each the AND of two uncomplemented and four complemented feature P-trees:

P5,6 P5,11 P5,12 P5,13 P5,14 P6,11 P6,12 P6,13 P6,14 P11,12 P11,13 P11,14 P12,13 P12,14 P13,14

Partway through (after P12,13 and P12,14) we already have 3 nearest neighbors; we could quit and declare C=1 the winner. But completing P13,14 and taking the full OR gives PD(s,2) = 10110000101011011, all nine distance=2 neighbors. We now have the C3NN set and we can declare C=0 the winner!

Page 7

In this example, there were no exact matches (dis=0, or similarity=6, neighbors) for the sample.

There were two nbrs found at a distance of 1 (dis=1 or sim=5) and nine dis=2, sim=4 nbrs.

All 11 neighbors got an equal vote even though the two sim=5 neighbors are much closer than the nine sim=4 ones. Also, processing the nine is costly.

A better approach would be to weight each vote by the similarity of the voter to the sample. (We will use a vote weight function which is linear in the similarity; admittedly, a better choice would be a function which is Gaussian in the similarity, but, so far, it has been too hard to compute.)

As long as we are weighting votes by similarity, we might as well weight attributes by relevance too (assuming some attributes are more relevant than others; e.g., the relevance weight of a feature attribute could be its correlation with the class label).

P-trees accommodate this method very well. (In fact, a variation on this theme won the KDD-cup competition in 2002, http://www.biostat.wisc.edu/~craven/kddcup/ , and is published as the so-called Podium Classification methods.)

Notice though that the P-tree method (Horizontal Processing of Vertical Data, or HPVD) really relies on a bona fide distance, in this case Hamming distance or L1. However, the Vertical Processing of Horizontal Data (VPHD) doesn't.

VPHD therefore works even when we use a correlation as the notion of "near".

If you have been involved in the DataSURG Netflix Prize efforts, you will note that we used P-tree technology to calculate correlations, but not to find near neighbors through correlations.
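A minimal sketch of the similarity-weighted vote over the eleven voters above, reusing train and hamming from the earlier kNN sketch; the linear weight sim = 6 - distance is the choice described above:

# Minimal sketch: weight each neighbor's vote linearly in its similarity
# rather than giving all 11 voters equal weight. (The text notes a Gaussian
# in the similarity would be a better, if costlier, choice.)
weighted = {0: 0.0, 1: 0.0}
for key, t in train.items():
    d = hamming(t)
    if d <= 2:                  # the closed 3NN set found above
        sim = 6 - d             # similarity on the six feature bits
        weighted[t[6]] += sim   # vote weight linear in similarity
print(weighted)                 # {0: 25.0, 1: 21.0} -> C=0 still wins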

Page 8

[ER diagram (Netflix): User(UID, CNAME, AGE) --Rents(TID, Rating, Date)-- Movie(MID, Mname, Date)]

Again, Rents is also a Table, Rents(TID, UID, MID, Rating, Date), so using the class label column, Rating, we can classify potential new ratings. Using these predicted ratings, we can recommend Rating=5 rentals.

Does ARM require a binary relationship? YES! But since every Table gives a binary relationship between any two of its columns, we can do ARM on them.

E.g., we could do ARM on Patients(Symptom1, Disease) to try to determine which diseases Symptom1 alone implies (with high confidence). And in some cases, the columns of a Table are really instances of another entity.

E.g., Images, e.g., Landsat Satellite Images, LS(R, G, B, NIR, MIR, TIR), with wavelength intervals in micrometers: B=(.45, .52], G=(.52, .60], R=(.63, .69], NIR=(.76, .90], MIR=(2.08, 2.35], TIR=(10.4, 12.5]

Known recording instruments (that record the number of photons detected in a small space of time in a given wavelength range) seem to be very limited in capability (e.g., our eyes, CCD cameras...).

If we could record all consecutive bands (e.g., of radius .025 µm), then we would have a relationship between pixels (given by latitude-longitude) and wavelengths. Then we could do ARM on imagery!


Page 9

Intro to Association Rule Mining (ARM)

Given any relationship between two entities, T (e.g., a set of Transactions an enterprise performs) and I (e.g., a set of Items which are acted upon by those transactions):

e.g., in Market Basket Research (MBR) the transactions, T, are the checkout transactions (a customer going thru checkout) and the items, I, are the Items available for purchase in that store.

The itemset, T(I), associated with (or related to) a particular transaction, T, is the subset of the items found in the shopping cart or market basket that the customer is bringing through checkout at that time.

An Association Rule, A→C, associates two disjoint subsets of I (called Itemsets). (A is called the antecedent, C is called the consequent.)

The support [set] of itemset A, supp(A), is the set of t's that are related to every a ∈ A;

e.g., if A={i1,i2} and C={i4} then supp(A)={t2,t4} (support ratio = |{t2,t4}| / |{t1,t2,t3,t4,t5}| = 2/5).

Note: | | means set size or count of elements in the set.

The support [ratio] of rule A→C, supp(A→C), is the support ratio of A ∪ C = |{t2,t4}| / |{t1,t2,t3,t4,t5}| = 2/5.

The confidence of rule A→C, conf(A→C), is supp(A→C) / supp(A) = (2/5) / (2/5) = 1.

Data Miners typically want to find all STRONG RULES, A→C, with supp(A→C) ≥ minsupp and conf(A→C) ≥ minconf

(minsupp, minconf are threshold levels).

Note that conf(A→C) is also just the conditional probability of t being related to C, given that t is related to A.

e.g., Horizontal Trans Table:

T   T(I)
t1  i1
t2  i1, i2, i4
t3  i1, i3
t4  i1, i2, i4
t5  i3, i4

[Its graph: the bipartite relationship graph between T = {t1,...,t5} and I = {i1,...,i4}, with antecedent A = {i1,i2} and consequent C = {i4} marked.]
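A minimal sketch computing these support sets, support ratios, and confidences for the five-transaction example above:

# Minimal sketch: support sets, support ratios, and confidence for the
# 5-transaction example.
baskets = {
    "t1": {"i1"},
    "t2": {"i1", "i2", "i4"},
    "t3": {"i1", "i3"},
    "t4": {"i1", "i2", "i4"},
    "t5": {"i3", "i4"},
}

def supp_set(itemset):
    return {t for t, items in baskets.items() if itemset <= items}

def supp(itemset):                  # support ratio
    return len(supp_set(itemset)) / len(baskets)

A, C = {"i1", "i2"}, {"i4"}
print(supp_set(A))                  # {'t2', 't4'}
print(supp(A | C))                  # supp(A->C) = 2/5
print(supp(A | C) / supp(A))        # conf(A->C) = 1.0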

Page 10

APRIORI Association Rule Mining:

Given a Transaction-Item Relationship, the APRIORI algorithm for finding all Strong I-rules can be run two ways: 1. by vertical processing of a Horizontal Transaction Table (HTT), or 2. by horizontal processing of a Vertical Transaction Table (VTT).

In 1., a Horizontal Transaction Table (HTT) is processed through vertical scans to find all Frequent I-sets (I-sets with support ≥ minsupp, e.g., I-sets "frequently" found in transaction market baskets).

In 2., a Vertical Transaction Table (VTT) is processed through horizontal operations to find all Frequent I-sets.

Then each Frequent I-set found is analyzed to determine if it is the support set of a strong rule.

Finding all Frequent I-sets is the hard part. To do this efficiently, the APRIORI Algorithm takes advantage of the "downward closure" property of Frequent I-sets: if an I-set is frequent, then all its subsets are also frequent.

E.g., in the Market Basket example, if A is an I-subset of B and all of B is in a given transaction's basket, then certainly all of A is in that basket too. Therefore Supp(A) ⊇ Supp(B) whenever A ⊆ B.

First, APRIORI scans to determine all Frequent 1-item I-sets (they contain 1 item and are therefore called 1-Itemsets);
next, APRIORI uses downward closure to efficiently find candidates for Frequent 2-Itemsets;
next, APRIORI scans to determine which of those candidate 2-Itemsets are actually Frequent;
next, APRIORI uses downward closure to efficiently find candidates for Frequent 3-Itemsets;
next, APRIORI scans to determine which of those candidate 3-Itemsets are actually Frequent;
... until there are no candidates remaining. (On the next slide we walk through an example using both an HTT and a VTT; a code sketch follows below.)
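A minimal sketch of this levelwise search, run on the 4-transaction example from the next slide (minsupp = 2 as a count):

# Minimal sketch of APRIORI: candidate generation uses downward closure --
# a (k+1)-set is a candidate only if all of its k-subsets are frequent.
from itertools import combinations

D = {100: {1, 3, 4}, 200: {2, 3, 5}, 300: {1, 2, 3, 5}, 400: {2, 5}}
minsupp = 2

def count(itemset):
    return sum(itemset <= basket for basket in D.values())

# F1: frequent 1-itemsets
F = [{frozenset([i]) for b in D.values() for i in b if count({i}) >= minsupp}]
k = 1
while F[-1]:
    # generate C_{k+1}, pruning any candidate with an infrequent k-subset
    items = sorted({i for s in F[-1] for i in s})
    Ck = {frozenset(c) for c in combinations(items, k + 1)
          if all(frozenset(sub) in F[-1] for sub in combinations(c, k))}
    F.append({c for c in Ck if count(c) >= minsupp})   # scan D
    k += 1

for level, sets in enumerate(F[:-1], start=1):
    print(level, [sorted(s) for s in sets])
# 1 [[1], [2], [3], [5]]
# 2 [[1, 3], [2, 3], [2, 5], [3, 5]]
# 3 [[2, 3, 5]]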

Page 11

The relationship between Transactions and Items can be expressed in a Horizontal Transaction Table (HTT):

TID  Items
100  1 3 4
200  2 3 5
300  1 2 3 5
400  2 5

or a Vertical Transaction Table (VTT):

TID  1 2 3 4 5
100  1 0 1 1 0
200  0 1 1 0 1
300  1 1 1 0 1
400  0 1 0 0 1

minsupp is set by the querier at 1/2 and minconf at 3/4. (Note minsupp and minconf can be expressed as counts rather than as ratios; if so, since there are 4 transactions, then as counts minsupp=2 and minconf=3.)

(Downward closure property of "frequent": any subset of a frequent itemset is frequent.)

APRIORI METHOD: Iteratively find the Frequent k-itemsets, k=1,2,..., then find all strong association rules supported by each frequent itemset. (Ck will denote the candidate k-itemsets generated at each step; Fk will denote the frequent k-itemsets.)

Start by finding Frequent 1-ItemSets: the item counts are 2 1s, 3 2s, 3 3s, 1 4, 3 5s, so the 1-Iset supports are 2 3 3 1 3, and the frequent items (supp ≥ 2) are 1, 2, 3, 5 with supports 2 3 3 3.

Page 12

Example ARM walk-through using uncompressed P-trees (note: the 1-count is placed at the root of each P-tree).

With the HTT:

TID  Items
100  1 3 4
200  2 3 5
300  1 2 3 5
400  2 5

Scan D → C1:
itemset  sup
{1}      2
{2}      3
{3}      3
{4}      1
{5}      3

F1 = L1:
itemset  sup
{1}      2
{2}      3
{3}      3
{5}      3

Candidates C2: {1 2} {1 3} {1 5} {2 3} {2 5} {3 5}. Scan D → C2 supports:
itemset  sup
{1 2}    1
{1 3}    2
{1 5}    1
{2 3}    2
{2 5}    3
{3 5}    2

F2 = L2:
itemset  sup
{1 3}    2
{2 3}    2
{2 5}    3
{3 5}    2

Candidates C3: {2 3 5} ({1 2 3} pruned since {1 2} not frequent; {1 3 5} pruned since {1 5} not frequent). Scan D → F3 = L3:
itemset  sup
{2 3 5}  2

With the VTT (bit order: transactions 100, 200, 300, 400), build the P-trees by one scan of D:

P1 2 //\\ 1010
P2 3 //\\ 0111
P3 3 //\\ 1110
P4 1 //\\ 1000
P5 3 //\\ 0111
L1 = {1}{2}{3}{5}

P1^P2 1 //\\ 0010
P1^P3 2 //\\ 1010
P1^P5 1 //\\ 0010
P2^P3 2 //\\ 0110
P2^P5 3 //\\ 0111
P3^P5 2 //\\ 0110
L2 = {13}{23}{25}{35}

P1^P2^P3 1 //\\ 0010
P1^P3^P5 1 //\\ 0010
P2^P3^P5 2 //\\ 0110
L3 = {235}
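A minimal sketch of the VTT variant: each item P-tree as a 4-bit integer, with support computed as the root 1-count of the AND:

# Minimal sketch of the VTT / P-tree variant: one bit column per item
# (bit order: transactions 100, 200, 300, 400); the support of an itemset
# is the 1-count of the AND of its item P-trees.
P = {1: 0b1010, 2: 0b0111, 3: 0b1110, 4: 0b1000, 5: 0b0111}
minsupp = 2

def support(itemset):
    v = 0b1111
    for i in itemset:
        v &= P[i]               # AND the item P-trees
    return bin(v).count("1")    # root 1-count

print([i for i in P if support({i}) >= minsupp])             # L1: [1, 2, 3, 5]
print(support({1, 3}), support({2, 5}), support({2, 3, 5}))  # 2 3 2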

Page 13

Recall the frequent (= large) itemsets:

L1: {1} 2, {2} 3, {3} 3, {5} 3
L2: {1 3} 2, {2 3} 2, {2 5} 3, {3 5} 2
L3: {2 3 5} 2

1-ItemSets don’t support Association Rules (They will have no antecedent or no consequent).

Are there any Strong Rules supported by Frequent (= Large) 2-ItemSets (at minconf = .75)?

{1,3}: conf({1}→{3}) = supp{1,3}/supp{1} = 2/2 = 1 ≥ .75 STRONG!
       conf({3}→{1}) = supp{1,3}/supp{3} = 2/3 = .67 < .75
{2,3}: conf({2}→{3}) = supp{2,3}/supp{2} = 2/3 = .67 < .75
       conf({3}→{2}) = supp{2,3}/supp{3} = 2/3 = .67 < .75
{2,5}: conf({2}→{5}) = supp{2,5}/supp{2} = 3/3 = 1 ≥ .75 STRONG!
       conf({5}→{2}) = supp{2,5}/supp{5} = 3/3 = 1 ≥ .75 STRONG!
{3,5}: conf({3}→{5}) = supp{3,5}/supp{3} = 2/3 = .67 < .75
       conf({5}→{3}) = supp{3,5}/supp{5} = 2/3 = .67 < .75

Are there any Strong Rules supported by Frequent (= Large) 3-ItemSets?

{2,3,5}: conf({2,3}→{5}) = supp{2,3,5}/supp{2,3} = 2/2 = 1 ≥ .75 STRONG!
         conf({2,5}→{3}) = supp{2,3,5}/supp{2,5} = 2/3 = .67 < .75
         conf({3,5}→{2}) = supp{2,3,5}/supp{3,5} = 2/3 = .67 < .75

No subset antecedent can yield a strong rule either (i.e., no need to check conf({2}→{3,5}) or conf({5}→{2,3}), since both denominators will be at least as large and therefore both confidences will be at least as low). Likewise, no need to check conf({3}→{2,5}). DONE!

2-Itemsets do support ARs.
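A minimal sketch generating the strong rules above from the frequent itemsets, reusing D and count() from the APRIORI sketch (minconf = 3/4):

# Minimal sketch: derive all strong rules A -> C from each frequent itemset
# of size >= 2, checking conf = supp(A u C) / supp(A) >= minconf.
from itertools import combinations

minconf = 0.75
frequent = [{1, 3}, {2, 3}, {2, 5}, {3, 5}, {2, 3, 5}]

for s in frequent:
    for r in range(1, len(s)):                  # antecedent size
        for A in map(set, combinations(sorted(s), r)):
            C = s - A
            conf = count(s) / count(A)
            if conf >= minconf:
                print(f"{sorted(A)} -> {sorted(C)}  conf={conf:.2f}  STRONG")
# prints {1}->{3}, {2}->{5}, {5}->{2}, {2,3}->{5}, matching the analysis above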


Page 14

E.g., the $1,000,000 Netflix Contest was to develop a ratings prediction program that can beat the one Netflix currently uses (called Cinematch) by 10% in predicting what rating users gave to movies. I.e.,

predict rating(M,U), where (M,U) ∈ QUALIFYING(MovieID, UserID).

Netflix uses Cinematch to decide which movies a user will probably like next (based on all past rating history). All ratings are "5-star" ratings (5 is highest, 1 is lowest; caution: 0 means "did not rate").

Unfortunately rating=0 does not mean that the user "disliked" that movie, but that it wasn't rated at all. Most “ratings” are 0. Therefore, the ratings data sets are NOT vector spaces!

One can approach the Netflix contest problem as a data mining Classification/Prediction problem.

A "history of ratings given by users to movies“, TRAINING(MovieID, UserID, Rating, Date) is provided, with which to train your predictor, which will predict the ratings given to QUALIFYING movie-user pairs (Netflix knows the rating given to Qualifying pairs, but we don't.)

Since the TRAINING is very large, Netflix also provides a “smaller, but representative subset” of TRAINING,

PROBE(MovieID, UserID) (~2 orders of magnitude smaller than TRAINING).

Netflix gave 5 years to submit QUALIFYING predictions. That contest was won in the late summer of 2009, when the submission window was about 1/2 gone.

The Netflix Contest Problem is an example of the Collaborative Filtering Problem which is ubiquitous in the retail business world (How do you filter out what a customer will want to buy or rent next, based on similar customers?).

Collaborative Filtering is the prediction of likes and dislikes (retail or rental) from the history of previously expressed purchase or rental satisfactions (filtering new likes through the historical filter of "collaborator" likes).

Page 15

ARM in Netflix?

[ER diagram: User(UID, CNAME, AGE) --Rents(Rating, Date)-- Movie(MID, Mname, Date)]

ARM used for pattern mining in the Netflix data? In general we look for rating patterns (RP) that have the property: RP true ==> MID r (movie MID is rated r).

Singleton Homogeneous Rating Patterns: RP = (N r)

Multiple Homogeneous Rating Patterns: RP = (N1 r) & ... & (Nk r)

Multiple Heterogeneous Rating Patterns: RP = (N1 r1) & ... & (Nk rk)

(E.g., "users who rated movie N1 with r1 and movie N2 with r2 tend to rate MID with r" instantiates a multiple heterogeneous pattern.)