Apriori and Eclat Algorithms in Association Rule Mining
Fall 2004, CIS, Temple University
CIS527: Data Warehousing, Filtering, and Mining
Lecture 4
• Tutorial: Connecting SQL Server to Matlab using Database Matlab Toolbox
• Association Rule Mining
Lecture slides taken/modified from:
– Jiawei Han (http://www-sal.cs.uiuc.edu/~hanj/DM_Book.html)
– Vipin Kumar (http://www-users.cs.umn.edu/~kumar/csci5980/index.html)
Motivation: Association Rule Mining
• Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction
• * ⇒ Maintenance Agreement
  – What should the store do to boost Maintenance Agreement sales?
• Home Electronics ⇒ *
  – What other products should the store stock up on?
• Attached mailing in direct marketing
• Detecting “ping-ponging” of patients
• Marketing and Sales Promotion
• Supermarket shelf management
Definition: Frequent Itemset
• Itemset
  – A collection of one or more items
    • Example: {Milk, Bread, Diaper}
  – k-itemset: an itemset that contains k items
• Support count (σ)
  – Frequency of occurrence of an itemset
  – E.g. σ({Milk, Bread, Diaper}) = 2
• Support (s)
  – Fraction of transactions that contain an itemset
  – E.g. s({Milk, Bread, Diaper}) = 2/5
• Frequent Itemset
  – An itemset whose support is greater than or equal to a minsup threshold
TID Items
1 Bread, Milk
2 Bread, Diaper, Beer, Eggs
3 Milk, Diaper, Beer, Coke
4 Bread, Milk, Diaper, Beer
5 Bread, Milk, Diaper, Coke
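
As a quick illustration of these definitions (a minimal Python sketch added to this transcript, not part of the original slides), the support count σ and the support s of an itemset can be computed over the five example transactions:

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    # sigma(X): number of transactions that contain every item of X
    return sum(1 for t in transactions if set(itemset) <= t)

def support(itemset, transactions):
    # s(X): fraction of transactions that contain X
    return support_count(itemset, transactions) / len(transactions)

print(support_count({"Milk", "Bread", "Diaper"}, transactions))  # 2
print(support({"Milk", "Bread", "Diaper"}, transactions))        # 0.4 (= 2/5)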
Definition: Association Rule
• Association Rule
  – An implication expression of the form X ⇒ Y, where X and Y are itemsets
  – Example: {Milk, Diaper} ⇒ {Beer}
• Rule Evaluation Metrics
  – Support (s)
    • Fraction of transactions that contain both X and Y
  – Confidence (c)
    • Measures how often items in Y appear in transactions that contain X
• Example: {Milk, Diaper} ⇒ {Beer}
  s = σ(Milk, Diaper, Beer) / |T| = 2/5 = 0.4
  c = σ(Milk, Diaper, Beer) / σ(Milk, Diaper) = 2/3 = 0.67
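
The same two metrics can be checked for the example rule (again a small sketch added here, reusing the five transactions listed above; not from the slides):

def sigma(itemset, transactions):
    return sum(1 for t in transactions if itemset <= t)

transactions = [{"Bread", "Milk"},
                {"Bread", "Diaper", "Beer", "Eggs"},
                {"Milk", "Diaper", "Beer", "Coke"},
                {"Bread", "Milk", "Diaper", "Beer"},
                {"Bread", "Milk", "Diaper", "Coke"}]

X, Y = {"Milk", "Diaper"}, {"Beer"}
s = sigma(X | Y, transactions) / len(transactions)   # fraction containing both X and Y
c = sigma(X | Y, transactions) / sigma(X, transactions)  # of those with X, fraction with Y
print(s, c)                                           # 0.4 and 0.666...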
Association Rule Mining Task
• Given a set of transactions T, the goal of association rule mining is to find all rules having
  – support ≥ minsup threshold
  – confidence ≥ minconf threshold
• Brute-force approach:
  – List all possible association rules
  – Compute the support and confidence for each rule
  – Prune rules that fail the minsup and minconf thresholds
  ⇒ Computationally prohibitive!
Computational Complexity
• Given d unique items:
  – Total number of itemsets = 2^d
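
To see how quickly the brute-force search space grows, here is a short back-of-the-envelope sketch (added to this transcript, not from the slides): each of the d items can appear in the antecedent, in the consequent, or in neither, which gives 3^d - 2^(d+1) + 1 possible rules once the cases with an empty side are removed.

# Growth of the search space with the number of unique items d
for d in (6, 10, 20):
    itemsets = 2 ** d                   # all possible itemsets
    rules = 3 ** d - 2 ** (d + 1) + 1   # all rules X -> Y with X, Y non-empty and disjoint
    print(f"d={d}: {itemsets} itemsets, {rules} candidate rules")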
Pointers are used to assist frequent itemset generation
(Figure: transaction database, the FP-tree, and a header table with one item/pointer entry per item A–E)
FP-growth
(Figure: the full FP-tree rooted at null, with header-table pointers and node counts such as A:7, B:5, C:3, D:1, E:1)
Build conditional pattern base for E: P = {(A:1,C:1,D:1), (A:1,D:1), (B:1,C:1)}
Recursively apply FP-growth on P
FP-growth
(Figure: conditional tree for E, rooted at null with nodes A:2, B:1, C:1, C:1, D:1, D:1, E:1, E:1)
Conditional pattern base for E: P = {(A:1,C:1,D:1,E:1), (A:1,D:1,E:1), (B:1,C:1,E:1)}
Count for E is 3: {E} is a frequent itemset
Recursively apply FP-growth on P
FP-growth
(Figure: conditional tree for D within the conditional tree for E, rooted at null with nodes A:2, C:1, D:1, D:1)
Conditional pattern base for D within conditional base for E: P = {(A:1,C:1,D:1), (A:1,D:1)}
Count for D is 2: {D,E} is a frequent itemset
Recursively apply FP-growth on P
FP-growth
(Figure: conditional tree for C within D within E, rooted at null with nodes A:1, C:1)
Conditional pattern base for C within D within E: P = {(A:1,C:1)}
Count for C is 1: {C,D,E} is NOT a frequent itemset
FP-growth
(Figure: conditional tree for A within D within E, rooted at null with node A:2)
Count for A is 2: {A,D,E} is a frequent itemset
Next step: construct conditional tree for C within the conditional tree for E
Continue until exploring the conditional tree for A (which has only node A)
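
The walkthrough above can be condensed into a compact, tree-free Python sketch of the FP-growth recursion (an illustration added to this transcript, not code from the slides): each conditional pattern base is kept as a plain list of (items, count) pairs instead of an FP-tree with header-table pointers, and items are mined from least to most frequent so every frequent itemset is produced exactly once.

from collections import defaultdict

def fp_growth(pattern_base, minsup, suffix=()):
    # pattern_base: list of (set_of_items, count) pairs
    counts = defaultdict(int)
    for items, cnt in pattern_base:
        for item in items:
            counts[item] += cnt
    # frequent items in this base, least frequent first (as in the header table)
    freq = sorted((i for i, c in counts.items() if c >= minsup),
                  key=lambda i: counts[i])
    for pos, item in enumerate(freq):
        yield tuple(sorted(suffix + (item,))), counts[item]
        # conditional pattern base for `item`: entries containing it,
        # restricted to the more frequent items so nothing is produced twice
        higher = set(freq[pos + 1:])
        cond = [(items & higher, cnt) for items, cnt in pattern_base
                if item in items and items & higher]
        yield from fp_growth(cond, minsup, suffix + (item,))

# Usage on the five-transaction example database from the earlier slides:
transactions = [{"Bread", "Milk"},
                {"Bread", "Diaper", "Beer", "Eggs"},
                {"Milk", "Diaper", "Beer", "Coke"},
                {"Bread", "Milk", "Diaper", "Beer"},
                {"Bread", "Milk", "Diaper", "Coke"}]
for itemset, sup in fp_growth([(t, 1) for t in transactions], minsup=3):
    print(itemset, sup)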
Benefits of the FP-tree Structure
• Performance study shows
  – FP-growth is an order of magnitude faster than Apriori, and is also faster than tree-projection
• Reasoning
  – No candidate generation, no candidate test
  – Uses a compact data structure
  – Eliminates repeated database scans
  – Basic operations are counting and FP-tree building
(Figure: run time (sec.) vs. support threshold (%), comparing D1 FP-growth runtime and D1 Apriori runtime)
Complexity of Association Mining
• Choice of minimum support threshold
  – Lowering the support threshold results in more frequent itemsets
  – This may increase the number of candidates and the max length of frequent itemsets
• Dimensionality (number of items) of the data set
  – More space is needed to store the support count of each item
  – If the number of frequent items also increases, both computation and I/O costs may increase
• Size of database
  – Since Apriori makes multiple passes, the run time of the algorithm may increase with the number of transactions
• Average transaction width
  – Transaction width increases with denser data sets
  – This may increase the max length of frequent itemsets and traversals of the hash tree (the number of subsets in a transaction increases with its width)
Compact Representation of Frequent Itemsets
• Some itemsets are redundant because they have the same support as their supersets
Rule Generation
• Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f ⇒ L – f satisfies the minimum confidence requirement
• If {A,B,C,D} is a frequent itemset, candidate rules:
  ABC ⇒ D, ABD ⇒ C, ACD ⇒ B, BCD ⇒ A, A ⇒ BCD, B ⇒ ACD, C ⇒ ABD, D ⇒ ABC,
  AB ⇒ CD, AC ⇒ BD, AD ⇒ BC, BC ⇒ AD, BD ⇒ AC, CD ⇒ AB
• If |L| = k, then there are 2^k – 2 candidate association rules (ignoring L ⇒ ∅ and ∅ ⇒ L)
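
As a small illustration of this count (a sketch added to the transcript, with illustrative names, not code from the slides), the candidate rules of a frequent itemset are obtained by letting the antecedent range over all non-empty proper subsets:

from itertools import combinations

L = {"A", "B", "C", "D"}                  # a frequent itemset
candidate_rules = []
for r in range(1, len(L)):                # antecedent sizes 1 .. |L|-1
    for X in combinations(sorted(L), r):
        candidate_rules.append((set(X), L - set(X)))  # rule X -> L - X

print(len(candidate_rules))               # 2**len(L) - 2 = 14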
Rule Generation
• How to efficiently generate rules from frequent itemsets?
  – In general, confidence does not have an anti-monotone property: c(ABC ⇒ D) can be larger or smaller than c(AB ⇒ D)
  – But confidence of rules generated from the same itemset has an anti-monotone property
  – E.g., L = {A,B,C,D}: c(ABC ⇒ D) ≥ c(AB ⇒ CD) ≥ c(A ⇒ BCD)
  – Confidence is anti-monotone w.r.t. the number of items on the RHS of the rule: all three rules share the numerator σ(ABCD), while the denominators satisfy σ(ABC) ≤ σ(AB) ≤ σ(A)
Rule Generation
(Figure: lattice of rules generated from the frequent itemset {A,B,C,D}, from ABCD ⇒ {} at the top down to rules with single-item antecedents such as A ⇒ BCD; a low-confidence rule is marked and all rules below it in the lattice are pruned)
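
The pruning shown in this lattice can be sketched as follows (an illustrative Python sketch added to the transcript, not code from the slides; it assumes a dictionary `support` that maps every frequent itemset, stored as a frozenset, to its support count, as produced by Apriori or FP-growth): consequents are grown one item at a time, and once a rule falls below minconf, no consequent containing it is considered again.

from itertools import combinations

def rules_from_itemset(L, support, minconf):
    # Generate confident rules from one frequent itemset L, pruning the rule
    # lattice using the anti-monotonicity of confidence on the consequent.
    L = frozenset(L)
    rules = []
    level = {frozenset([i]) for i in L}          # 1-item consequents
    while level and len(next(iter(level))) < len(L):
        survivors = set()
        for H in level:
            conf = support[L] / support[L - H]   # c((L - H) => H)
            if conf >= minconf:
                rules.append((L - H, H, conf))
                survivors.add(H)
            # else: prune; every consequent containing H is also low-confidence
        # next level: consequents one item larger whose immediate
        # sub-consequents all survived
        level = {H1 | H2 for H1 in survivors for H2 in survivors
                 if len(H1 | H2) == len(H1) + 1
                 and all(frozenset(s) in survivors
                         for s in combinations(H1 | H2, len(H1)))}
    return rules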
Presentation of Association Rules (Table Form)
Visualization of Association Rule Using Plane Graph
Visualization of Association Rule Using Rule Graph