IRDM '15/16
Jilles Vreeken
Chapter 8: Graph Mining
1 Dec 2015
Revision 1, December 4th: typos fixed (edge order)
IRDM Chapter 8, overview
1. The basics
2. Properties of Graphs
3. Frequent Subgraphs
4. Community Detection
5. Graph Clustering
You'll find this covered in: Aggarwal, Ch. 17, 19; Zaki & Meira, Ch. 4, 11, 16
IRDM Chapter 8, today
1. The basics
2. Properties of Graphs
3. Frequent Subgraphs
4. Community Detection
5. Graph Clustering
You'll find this covered in: Aggarwal, Ch. 17, 19; Zaki & Meira, Ch. 4, 11, 16
Chapter 8.1: The Basics
Aggarwal Ch. 17.1
Networks are everywhere!
Human Disease Network [Barabasi 2007]
Gene Regulatory Network [Decourty 2008]
Facebook Network [2010]
The Internet [2005]
The Internet
Skewed Degrees Robustness
High school dating network
(Bearman et al., American Journal of Sociology, 2004. Image: Mark Newman)
Blue: Male Pink: Female
Interesting observations?
Karate club network
Friends
How many of you think that your friends have more friends than you? A recent Facebook study examined all of FB's users: 721 million people with 69 billion friendships, about 10% of the world's population. It found that
- 93 percent of the time, a user's friend count was less than the average friend count of his or her friends
- users had an average of 190 friends, while their friends averaged 635 friends of their own
Reasons?
You are a loner? Your friends are extraverts? There are more extraverts than introverts in the world?
Example
Average number of friends?
(1 + 3 + 2 + 2) / 4 = 2
Average number of friends of friends?
(3 + 1 + 2 + 2 + 3 + 2 + 3 + 2) / 8
= (1 × 1 + 3 × 3 + 2 × 2 + 2 × 2) / 8
= 2.25
(Strogatz, NYT 2012)
Always true (almost)! Proof?
E[X] = (Σ_i x_i) / n
Var[X] = E[(X - E[X])²] = E[X²] - E[X]²
average number of friends of friends: E[X²] / E[X] = E[X] + Var[X] / E[X]
Essentially, it's true whenever there is any spread in the number of friends (i.e., whenever the variance is non-zero).
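The identity above is easy to check numerically. Below is a minimal Python sketch (variable names are my own) that computes both averages on the four-person friendship graph from the example and verifies E[X²]/E[X] = E[X] + Var[X]/E[X]:

```python
from collections import defaultdict

# The small undirected friendship graph from the example: edges as pairs
edges = [(1, 2), (2, 3), (2, 4), (3, 4)]

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

degrees = {v: len(adj[v]) for v in adj}
n = len(degrees)

# Average number of friends: E[X]
avg_friends = sum(degrees.values()) / n

# Average number of friends-of-friends: each endpoint of each edge
# reports the degree of the vertex at the other end
reported = [degrees[u] for v in adj for u in adj[v]]
avg_fof = sum(reported) / len(reported)

# Identity from the proof: E[X^2]/E[X] = E[X] + Var[X]/E[X]
ex2 = sum(d * d for d in degrees.values()) / n
var = ex2 - avg_friends ** 2
assert abs(avg_fof - (avg_friends + var / avg_friends)) < 1e-9
```

On this graph the averages come out as 2 and 2.25, matching the slide.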
Why graphs?
Many real-world data sets come in the form of graphs:
- social networks
- hyperlinks
- protein–protein interaction
- XML parse trees
- …
Many of these graphs are enormous, and humans cannot understand them: a task for data mining!
What is a graph?
A graph G is a pair (V, E), with E ⊆ V × V
- elements of V are the vertices (or nodes) of the graph
- pairs (v, u) ∈ E are the edges (or arcs) of the graph
- for undirected graphs the pairs are unordered, for directed graphs they are ordered
Graphs can be labelled:
- vertices can have labels L(v)
- edges can have labels L(v, u)
A tree is a rooted, connected, and acyclic graph
Graphs can be represented by adjacency matrices: a |V| × |V| matrix A with A_ij = 1 iff (v_i, v_j) ∈ E
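As a quick illustration of the adjacency-matrix representation, here is a small Python sketch (the graph and its labels are made up for the example):

```python
# Hypothetical labelled undirected graph: vertex list, labels, edge list
V = ['v1', 'v2', 'v3', 'v4']
L = {'v1': 'a', 'v2': 'a', 'v3': 'b', 'v4': 'c'}          # vertex labels
E = [('v1', 'v2'), ('v2', 'v3'), ('v2', 'v4'), ('v3', 'v4')]

idx = {v: i for i, v in enumerate(V)}
n = len(V)

# |V| x |V| adjacency matrix; symmetric because the graph is undirected
A = [[0] * n for _ in range(n)]
for u, v in E:
    A[idx[u]][idx[v]] = 1
    A[idx[v]][idx[u]] = 1
```

For an undirected graph each edge sets two entries, so the matrix sums to 2|E|.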
Eccentricity, radius, and diameter
The distance d(v_i, v_j) between two vertices is the (weighted) length of the shortest path between them
The eccentricity of a vertex v_i, e(v_i), is its maximum distance to any other vertex: e(v_i) = max_j { d(v_i, v_j) }
The radius of a connected graph, r(G), is the minimum eccentricity over its vertices: r(G) = min_i { e(v_i) }
The diameter of a connected graph, d(G), is the maximum eccentricity over its vertices: d(G) = max_i { e(v_i) } = max_{i,j} { d(v_i, v_j) }
The effective diameter of a graph is the smallest number that is larger than the eccentricity of a large fraction of the vertices
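These definitions translate directly into code. A minimal Python sketch for unweighted graphs, using BFS for shortest paths (the example graph and helper names are my own):

```python
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src to every vertex."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

# Hypothetical connected graph: a path v1 - v2 - v3 - v4
adj = {'v1': ['v2'], 'v2': ['v1', 'v3'], 'v3': ['v2', 'v4'], 'v4': ['v3']}

ecc = {v: max(bfs_dist(adj, v).values()) for v in adj}  # eccentricity e(v)
radius = min(ecc.values())      # r(G) = min_i e(v_i)
diameter = max(ecc.values())    # d(G) = max_i e(v_i)
```

On the path graph the endpoints have eccentricity 3 and the middle vertices 2, so the radius is 2 and the diameter 3.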
Clustering coefficient
The clustering coefficient of a vertex v_i, C(v_i), tells how clique-like the neighbourhood of v_i is
- let n_i be the number of neighbours of v_i, and m_i the number of edges between the neighbours of v_i, excluding v_i itself
- C(v_i) = m_i / (n_i choose 2) = 2 m_i / (n_i (n_i − 1))
- this is well-defined only for v_i with at least two neighbours; for others, let C(v_i) = 0
The clustering coefficient of the graph is the average clustering coefficient of its vertices: C(G) = (1/n) Σ_i C(v_i)
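A short Python sketch of this definition (the example graph and the function name are my own):

```python
from itertools import combinations

def clustering_coefficient(adj, v):
    """C(v) = 2*m_v / (n_v*(n_v - 1)); 0 if v has fewer than 2 neighbours."""
    nbrs = adj[v]
    n_v = len(nbrs)
    if n_v < 2:
        return 0.0
    # count edges among the neighbours of v, excluding v itself
    m_v = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2 * m_v / (n_v * (n_v - 1))

# Hypothetical graph: a triangle v1, v2, v3 plus a pendant vertex v4
adj = {'v1': {'v2', 'v3'}, 'v2': {'v1', 'v3', 'v4'},
       'v3': {'v1', 'v2'}, 'v4': {'v2'}}

C = {v: clustering_coefficient(adj, v) for v in adj}
C_G = sum(C.values()) / len(C)   # C(G): average over all vertices
```

Here v1 and v3 sit in a closed triangle (C = 1), v2 has only one edge among its three neighbours (C = 1/3), and the pendant v4 gets C = 0.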
What to do with a graph?
There are many interesting things one can mine from graphs and sets of graphs:
- cliques of friends from social networks
- hubs and authorities from link graphs
- who is the centre of Hollywood
- subgraphs that appear frequently in (a set of) graph(s)
- areas with higher intra-connectivity than inter-connectivity
- …
Graph mining is perhaps the most popular topic in contemporary data mining research, though it is not necessarily called as such…
Chapter 8.2: Properties of Graphs
Aggarwal Ch. 17.1, 19.2; Zaki & Meira Ch 4
Centrality
Six degrees of Kevin Bacon
"Every actor is related to Kevin Bacon by no more than 6 hops"
Kevin Bacon has acted with many, who have acted with many others, who have acted with many others… this makes Kevin Bacon a centre of the co-acting graph
Kevin, however, is not the centre: the average distance to him is 2.998, but to Harvey Keitel it is only 2.848
(http://oracleofbacon.org)
Degree and eccentricity centrality
Centrality is a function c : V → ℝ inducing a total order on V
- the higher the centrality of a vertex, the more important it is
In degree centrality, c(v_i) = d(v_i), the degree of the vertex
In eccentricity centrality, the least eccentric vertex is the most central one: c(v_i) = 1 / e(v_i)
- the least eccentric vertex is the centre; the most eccentric vertex is peripheral
Closeness centrality
In closeness centrality, the vertex with the least total distance to all other vertices is the centre: c(v_i) = ( Σ_j d(v_i, v_j) )⁻¹
In eccentricity centrality we aim to minimize the maximum distance; in closeness centrality we aim to minimize the average distance
- this is the distance used to measure the centre of Hollywood
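A minimal Python sketch of closeness centrality on a hypothetical star graph, where the hub should come out as the centre (graph and names are my own):

```python
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

# Hypothetical star graph: centre 'c' with leaves l1..l3
adj = {'c': ['l1', 'l2', 'l3'], 'l1': ['c'], 'l2': ['c'], 'l3': ['c']}

# c(v) = ( sum_j d(v, v_j) )^{-1}: smallest total distance wins
closeness = {v: 1.0 / sum(bfs_dist(adj, v).values()) for v in adj}
centre = max(closeness, key=closeness.get)
```

The hub has total distance 3 (closeness 1/3) while each leaf has total distance 5 (closeness 1/5), so `centre` is `'c'`.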
Betweenness centrality
Betweenness centrality measures the number of shortest paths that travel through v_i
- it measures the "monitoring" role of the vertex: "all roads lead to Rome"
Let η_jk be the number of shortest paths between v_j and v_k, and let η_jk(v_i) be the number of those that include v_i
- let γ_jk(v_i) = η_jk(v_i) / η_jk
- betweenness centrality is then defined as c(v_i) = Σ_{j < k, j ≠ i, k ≠ i} γ_jk(v_i)
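For small graphs, betweenness can be computed straight from the definition: a vertex v lies on a shortest s–t path iff d(s,v) + d(v,t) = d(s,t), and then the number of such paths through v is σ(s,v)·σ(v,t). A Python sketch (function names are my own; for real use one would apply Brandes' algorithm):

```python
from collections import deque

def bfs(adj, src):
    """Distances and shortest-path counts (sigma) from src."""
    dist, sigma = {src: 0}, {src: 1}
    q = deque([src])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                sigma[u] = 0
                q.append(u)
            if dist[u] == dist[v] + 1:
                sigma[u] += sigma[v]
    return dist, sigma

def betweenness(adj):
    info = {v: bfs(adj, v) for v in adj}
    bc = {v: 0.0 for v in adj}
    vs = list(adj)
    for i in range(len(vs)):
        for j in range(i + 1, len(vs)):
            s, t = vs[i], vs[j]
            d_st, sig_st = info[s][0][t], info[s][1][t]
            for v in adj:
                if v in (s, t):
                    continue
                # v is on a shortest s-t path iff d(s,v) + d(v,t) = d(s,t)
                if info[s][0][v] + info[v][0][t] == d_st:
                    bc[v] += info[s][1][v] * info[v][1][t] / sig_st
    return bc

# Hypothetical path graph: all traffic passes through the middle vertices
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
bc = betweenness(adj)
```

On the path a–b–c–d the endpoints get betweenness 0 and the two inner vertices get 2 each.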
Prestige
In prestige, a vertex is more central if it has many incoming edges from other vertices of high prestige
- let A be the adjacency matrix of the directed graph G
- let p be the n-dimensional vector giving the prestige of the vertices: p = Aᵀ p
- starting from an initial prestige vector p_0, we get p_k = Aᵀ p_{k−1} = (Aᵀ)² p_{k−2} = (Aᵀ)³ p_{k−3} = … = (Aᵀ)ᵏ p_0
- the vector p_k converges to the dominant eigenvector of Aᵀ under some assumptions
(PageRank is based on (normalized) prestige)
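The fixed point p = Aᵀp can be approximated by power iteration, normalizing after each step so the vector does not blow up or vanish. A Python sketch on a hypothetical 3-vertex directed graph (the matrix is made up for the example):

```python
# Hypothetical directed graph as adjacency matrix A (A[i][j] = 1 for edge i -> j):
# edges 0->1, 0->2, 1->2, 2->0 (strongly connected, so iteration converges)
A = [
    [0, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
]
n = len(A)

p = [1.0] * n
for _ in range(100):
    # p <- A^T p, i.e. new_p[j] = sum_i A[i][j] * p[i]
    new_p = [sum(A[i][j] * p[i] for i in range(n)) for j in range(n)]
    norm = sum(x * x for x in new_p) ** 0.5
    p = [x / norm for x in new_p]
# p now approximates the dominant eigenvector of A^T
```

Vertex 2, which receives links from both 0 and 1, ends up with the highest prestige.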
Graph properties
Several real-world graphs exhibit certain characteristics
- studying what these are, and explaining why they appear, is an important area of network research
As data miners, we need to understand the consequences of these characteristics
- finding a result that can be explained merely by one of these characteristics is not interesting
We also want to model graphs with these characteristics
It's a small world after all
A graph G is said to exhibit the small-world property if its average path length scales logarithmically: μ_L ∝ log n
- six degrees of Kevin Bacon is based on this property
- similarly so for Erdős numbers: how far a mathematician is from the Hungarian combinatorialist Paul Erdős
- the radius of a large, connected mathematical co-authorship network (268K authors) is 12, and its diameter 23
Scale-free property
The degree distribution of a graph is the distribution of its vertex degrees
- how many vertices have degree 1, how many have degree 2, etc.
- f(k) is the number of vertices with degree k
A graph G is said to exhibit the scale-free property if f(k) ∝ k^(−γ), i.e. the degree distribution follows a power law
- the majority of vertices have low degree, a few have very high degree
- scale-free: f(ck) = α (ck)^(−γ) = (α c^(−γ)) k^(−γ) ∝ k^(−γ)
Example: WWW links (Broder et al., 2000)
[figure: log–log in-degree and out-degree distributions of WWW links; in-degree: γ = 2.09, out-degree: γ = 2.72]
Clustering effect
A graph exhibits the clustering effect if the distribution of the average clustering coefficient per degree follows a power law
- if C(k) is the average clustering coefficient over all vertices of degree k, then C(k) ∝ k^(−γ)
Vertices with small degrees are part of highly clustered areas (high clustering coefficient), while "hub vertices" have smaller clustering coefficients
Chapter 8.3: Frequent Subgraph Mining
Aggarwal Ch. 17.2, 17.4; Zaki & Meira Ch 11
Subgraphs
Graph (V′, E′) is a subgraph of graph (V, E) iff V′ ⊆ V and E′ ⊆ E
- note that subgraphs do not have to be connected; today we consider only connected subgraphs
Checking whether one given graph is a subgraph of another is trivial, but in most real-world applications there are no direct subgraphs
- two graphs might be similar even if their vertex sets are disjoint
Graph isomorphism
Graphs G = (V, E) and G′ = (V′, E′) are isomorphic if there exists a bijective function φ : V → V′ such that
- (u, v) ∈ E if and only if (φ(u), φ(v)) ∈ E′
- L(v) = L(φ(v)) for all v ∈ V
- L(u, v) = L(φ(u), φ(v)) for all (u, v) ∈ E
Graph G′ is subgraph isomorphic to G if there exists a subgraph of G that is isomorphic to G′
No polynomial-time algorithm is known for determining whether G and G′ are isomorphic
Determining whether G′ is subgraph isomorphic to G is NP-hard
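For intuition, here is a brute-force isomorphism check that simply tries every bijection between the vertex sets; it is exponential in |V|, which illustrates why the problem is hard. The sketch assumes simple, undirected, unlabelled graphs (names and example graphs are my own):

```python
from itertools import permutations

def isomorphic(V1, E1, V2, E2):
    """Brute-force graph isomorphism: try every bijection V1 -> V2.
    Assumes simple undirected graphs without self-loops or labels."""
    if len(V1) != len(V2) or len(E1) != len(E2):
        return False
    E2s = {frozenset(e) for e in E2}
    for perm in permutations(V2):
        phi = dict(zip(V1, perm))
        # since |E1| = |E2|, mapping every edge of E1 into E2 suffices
        if all(frozenset((phi[u], phi[v])) in E2s for u, v in E1):
            return True
    return False

# Hypothetical example: a triangle is isomorphic to any relabelled triangle
V1, E1 = [1, 2, 3], [(1, 2), (2, 3), (3, 1)]
V2, E2 = ['x', 'y', 'z'], [('y', 'x'), ('z', 'y'), ('x', 'z')]
```

The two triangles above are isomorphic, while a 3-vertex path is not isomorphic to a triangle (different edge counts already rule it out).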
Equivalence and canonical graphs
Isomorphism defines an equivalence class:
- the identity id : V → V, id(v) = v, shows that G is isomorphic to itself
- if G is isomorphic to G′ via φ, then G′ is isomorphic to G via φ⁻¹
- if G is isomorphic to H via φ, and H to I via ρ, then G is isomorphic to I via ρ ∘ φ
A canonization of a graph G, canon(G), produces another graph C such that if H is any graph isomorphic to G, then canon(G) = canon(H)
- two graphs are isomorphic if and only if their canonical versions are the same
Example of isomorphic graphs
[figure: two labelled graphs (vertex labels a, a, a, b, b, c) drawn side by side and matched up vertex by vertex to show they are isomorphic]
Frequent subgraph mining
Given a set D of n graphs and a minimum support σ, find all connected graphs that are subgraph isomorphic to at least σ graphs in D
- an enormously complex problem
- a graph with m vertices has on the order of 2^(m²) subgraphs (not all of them connected)
- with |Σ| labels for vertices and edges, the number of distinct labelings of these subgraphs grows exponentially as well
- counting support means solving multiple NP-hard problems
An example
[figure: a small database of three labelled graphs over vertex labels a, b, c]
Mining frequent subgraph patterns
As for itemsets, the subgraph definition of support is monotone
- we can employ level-wise search!
We can modify
- APRIORI to get AGM (Inokuchi, Washio & Motoda, 2000)
- ECLAT to get FFSM (Huan, Wang & Prins, 2003)
- FP-GROWTH to get GSPAN (Yan & Han, 2002)
GraphApriori
Algorithm GRAPHAPRIORI(graph db D, minsup σ)
begin
  k ← 1
  F_k ← { all frequent singleton graphs }
  while F_k is not empty do
    generate C_{k+1} by joining pairs of graphs in F_k that have a subgraph of size (k − 1) in common
    prune subgraphs from C_{k+1} that violate downward closure
    determine F_{k+1} by support counting on (C_{k+1}, D), retaining the subgraphs of C_{k+1} with support at least σ
    k ← k + 1
  end
  return ⋃_{i=1}^{k} F_i
end
(Inokuchi et al. 2000; Kuramochi & Karypis 2001)
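The level-wise control flow of GRAPHAPRIORI can be sketched in a few lines of Python if we cheat on the hard parts: below, a "graph" is just a frozenset of labelled edges and "subgraph" means edge-subset containment, so no isomorphism checking is needed. The structural join and isomorphism-based support counting of the real algorithm are deliberately left out; all names and the toy database are my own:

```python
from itertools import combinations

def levelwise_frequent(db, minsup):
    """Apriori-style level-wise search over edge sets.
    Simplification: 'subgraph' = edge-subset containment, not
    subgraph isomorphism as in the real GRAPHAPRIORI."""
    support = lambda g: sum(1 for t in db if g <= t)
    # level 1: frequent singleton edge sets
    items = {e for t in db for e in t}
    F = [{frozenset([e]) for e in items if support(frozenset([e])) >= minsup}]
    while F[-1]:
        # join: pairs of frequent k-sets sharing all but one element
        cands = {a | b for a, b in combinations(F[-1], 2)
                 if len(a | b) == len(a) + 1}
        # prune: every k-subset of a candidate must itself be frequent
        cands = {c for c in cands if all(c - {e} in F[-1] for e in c)}
        # support counting
        F.append({c for c in cands if support(c) >= minsup})
    return set().union(*F)

# Hypothetical database: three tiny 'graphs' given as labelled edge sets
db = [frozenset({'ab', 'bc', 'cd'}),
      frozenset({'ab', 'bc'}),
      frozenset({'ab', 'cd'})]
freq = levelwise_frequent(db, minsup=2)
```

With minsup 2 this yields the three frequent single edges plus {ab, bc} and {ab, cd}; {bc, cd} occurs in only one transaction and is dropped, and downward closure then prunes the size-3 candidate.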
[figure: candidate generation using edge-based join: two graphs in F_k sharing a subgraph of size (k − 1) are joined, possibly yielding multiple candidates in C_{k+1}]
[figure: candidate generation using node-based join: analogous, joining two graphs of F_k on a shared smaller subgraph, again possibly yielding multiple candidates]
Canonical codes
We can improve the running time of frequent subgraph mining by either
- speeding up the computation of support: much effort has gone into faster isomorphism checking, but with only little progress
- creating fewer candidates to check: level-wise algorithms generate huge numbers of candidates, all of which must be checked for isomorphism against each other
The GSPAN algorithm is the frequent subgraph mining equivalent of FP-growth; it uses a depth-first approach
(Zaki & Meira, Ch. 11; Yan & Han, 2002)
Depth-first spanning tree
A depth-first spanning (DFS) tree of a graph G
- is a connected tree that contains all the vertices of G
- is built in depth-first order; the order among siblings is based, e.g., on the vertex index
Edges in the DFS tree are forward edges; edges not in the DFS tree are backward edges
The rightmost path in the DFS tree is the path that travels from the root to the rightmost vertex by always taking the rightmost (last added) child
An example: DFS traversal and the DFS tree
[figure: a labelled graph on vertices v1…v8 (labels a, a, b, b, c, c, d, a) traversed depth-first; the resulting DFS tree with its forward and backward edges, and the rightmost path highlighted]
Candidates from the DFS tree
Given graph G, we extend it only from the vertices on the rightmost path:
- we can add a backward edge from the rightmost vertex to some other vertex on the rightmost path
- we can add a forward edge from any vertex on the rightmost path; this increases the number of vertices by 1
The order of generating the candidates is
- first backward extensions: first to the root, then to the root's child, …
- then forward extensions: first from the leaf, then from the leaf's parent, …
An example: extending the DFS tree
[figure: the DFS tree from the previous example, as the starting point for backward and forward extensions along the rightmost path]
DFS codes and their order
A DFS code is a sequence of tuples of type ⟨v_i, v_j, L(v_i), L(v_j), L(v_i, v_j)⟩
- tuples are given in DFS order
- backward edges are listed before forward edges
- vertices are numbered in DFS order
A DFS code is canonical if it is the smallest of the codes under the following ordering:
⟨v_i, v_j, L(v_i), L(v_j), L(v_i, v_j)⟩ < ⟨v_x, v_y, L(v_x), L(v_y), L(v_x, v_y)⟩ iff
- ⟨v_i, v_j⟩ <_e ⟨v_x, v_y⟩, or
- ⟨v_i, v_j⟩ = ⟨v_x, v_y⟩ and ⟨L(v_i), L(v_j), L(v_i, v_j)⟩ <_l ⟨L(v_x), L(v_y), L(v_x, v_y)⟩
where the ordering <_l on the label tuples is lexicographical
Ordering the edges
Let e_ij = (v_i, v_j) and e_xy = (v_x, v_y). Then e_ij <_e e_xy if
- e_ij and e_xy are both forward edges, and j < y, or (j = y and i > x)
- e_ij and e_xy are both backward edges, and i < x, or (i = x and j < y)
- e_ij is forward and e_xy is backward, and j ≤ x
- e_ij is backward and e_xy is forward, and i < y
(typo fixed, edge order now in sync with Zaki & Meira)
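These four rules translate directly into a comparator. A Python sketch (the function name is my own) that derives the forward/backward status from the DFS indices, since a forward edge points to a higher index and a backward edge to a lower one:

```python
def edge_lt(e1, e2):
    """e1 <_e e2 for DFS-code edges, following the four rules above.
    An edge (i, j) is forward if i < j in DFS order, backward if i > j."""
    (i, j), (x, y) = e1, e2
    f1, f2 = i < j, x < y
    if f1 and f2:                       # both forward
        return j < y or (j == y and i > x)
    if not f1 and not f2:               # both backward
        return i < x or (i == x and j < y)
    if f1:                              # e1 forward, e2 backward
        return j <= x
    return i < y                        # e1 backward, e2 forward
```

For instance, the forward edge (1, 2) precedes the backward edge (3, 1), since 2 ≤ 3.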
Example
[figure: three isomorphic graphs G1, G2, G3 on vertices v1…v4, each with vertex labels a, a, a, b and edge labels q, r, together with their four-tuple DFS codes t1…t4]
Comparing the three DFS codes tuple by tuple:
- the first tuples of all three codes are identical
- in the second tuple, G2 is bigger in label order
- the last two tuples are forward edges; 4 = 4 but 2 > 1, so G1's code is the smallest, i.e. canonical
gSPAN: graph-based substructure pattern mining
The general idea:
- use the DFS codes to create candidates
- extend only canonical and frequent candidates
There can be very, very many extensions
- we need to see them all, and all of their isomorphisms, to count their supports
(Yan & Han, 2002)
Constructing candidates
The candidates are built in a DFS code tree
- a DFS code a is an ancestor of a DFS code b iff a is a proper prefix of b
- the siblings in the tree follow the DFS code order
A graph can be frequent only if all the graphs representing its ancestors in the DFS code tree are frequent
The DFS code tree contains all the canonical codes for all subgraphs of the graphs in the data, but not every vertex in the code tree corresponds to a canonical code
We (implicitly) traverse this tree
The gSPAN algorithm (sketch)
GSPAN(graph db D, minsup σ):
  for each frequent 1-edge graph do
    call SUBGRM to grow all nodes of the code tree rooted at this 1-edge graph
    remove this edge from the graphs in D
SUBGRM(frequent subgraph s, minsup σ):
  if the code of s is not canonical then return
  add s to the set of frequent graphs
  create all supergraphs t ⊃ s that extend s with one more edge
  compute the frequencies of all t
  call SUBGRM for the canonical representation of every frequent t
Computing frequencies
gSPAN merges extension generation and support computation
For each graph in the database, gSPAN computes all isomorphisms of the current candidate
- this can mean solving NP-complete problems…
- for each isomorphism, it computes all backward and forward extensions
- each extension is stored together with the graph it appears in
The support of an extension is the number of graphs for which we have stored it
Checking canonicity
Given the DFS code of an extension, we need to check whether that code is canonical
This can be done by re-creating the code: at every step, choose the smallest rightmost-path extension of the current code in the graph corresponding to the extension
- if at some step we obtain a code that is smaller than the corresponding prefix of the extension's code, the extension's code is not canonical
- if after k steps we arrive at the extension's code, it is canonical
Easier problems
Much of the complexity of subgraph mining lies in (checking for) isomorphism
For some types of graphs isomorphism is easy:
- trees of various kinds: ordered and unordered, rooted and unrooted
- graphs where every node has a distinct label
Conclusions
Graphs are everywhere
- many interesting problems
- real graphs often exhibit power-law-like behaviour
Graphs generalise many data settings
- this makes it possible to create general algorithms
Many problems on graphs are very difficult
- subgraph isomorphism, for example
- frequent subgraph mining involves multiple NP-hard problems