TOWARDS HIERARCHICAL CLUSTERING
Mark Sh. Levin
Inst. for Inform. Transm. Problems, Russian Acad. of Sci.
Email: [email protected]
Http://www.iitp.ru/mslevin/
CSR'2007, Ural State University, Ekaterinburg, Russia, Sept. 4, 2007
PLAN:
1. Basic agglomerative algorithm for hierarchical clustering
2. Multicriteria decision making (DM) approach to proximity of objects
3. Integration of objects into several groups/clusters (clustering with intersection): algorithms & applications
4. Towards resultant performance (quality of results)
TOWARDS HIERARCHICAL CLUSTERING
Mark Sh. Levin, Inst. for Inform. Transm. Problems, Russian Acad. of Sci.
Stage 1. Compute the matrix D = |dil| of pairwise "distances".
Stage 2. Find the smallest pairwise "distance" (i.e., the minimal element of matrix D) and integrate the corresponding elements (objects) Ax, Ay into a new joint (integrated) object A = Ax*Ay.
Stage 3. Stop the process, or recompute the matrix D and go to Stage 2.
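The three stages above can be sketched in Python as follows. This is a minimal sketch, not the author's implementation: the single-linkage rule (minimum pairwise distance between clusters) and the `target_clusters` stopping condition are assumptions, since the slides leave the linkage and stopping rules open.

```python
# Minimal sketch of the agglomerative algorithm (Stages 1-3 above).
# Assumptions: single-linkage "distance" between clusters; stop when
# a target number of clusters remains.
import math

def agglomerate(points, target_clusters=1):
    """Repeatedly merge the closest pair of clusters."""
    # Initially each object is its own cluster.
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > target_clusters:
        # Stage 1: pairwise "distances" between current clusters.
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(math.dist(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        # Stage 2: integrate the closest pair Ax, Ay into A = Ax*Ay.
        d, i, j = best
        merges.append((clusters[i], clusters[j], d))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
        # Stage 3: loop (recompute D) until the stopping condition holds.
    return clusters, merges
```

The recorded `merges` list is exactly the tree-like structure (dendrogram) mentioned on the next slide.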
First, the complexity of the agglomerative algorithm:
1. Number of stages (each stage performs one integration): (n-1) stages.
2. Each stage: (a) computing "distances" (n^2 * m operations).
THUS: Operations: O(m n^3). Memory: O(n(n+m)).
Second, we obtain a TREE-LIKE STRUCTURE (a dendrogram).
Hierarchical clustering: IMPROVEMENTS (to do better)
Question 1: What can we do better in the algorithm?
Question 2: What is needed in practice (e.g., in applications)? What can we do for applications?
1. Computing the pairwise "distance" (pairwise proximity):
Use more "correct" approaches from multicriteria decision making, e.g., revelation of Pareto-layers and usage of an ordinal scale for pairwise proximity.
2. Complexity: decrease the number of stages:
Integrate several pairs of objects at each stage.
Usage of an ordinal scale: divide the interval [0, max{dxy}] into subintervals (interval 0, interval 1, ..., interval k) to obtain an ordinal scale for pairwise "distances".
Example: pairs of objects (a,b), (u,v), (p,q), (g,h) with "distances" dab, duv, dpq, dgh on [0, max{dxy}].
RESULT: dab = 0, duv = 0, dpq = 1, dgh = 1.
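The ordinal-scale construction can be sketched as follows. The function name, the interval count k, and the sample distance values are illustrative assumptions; with the sample values below the sketch reproduces the RESULT above (dab = 0, duv = 0, dpq = 1, dgh = 1). All pairs at ordinal level 0 can then be integrated in the same stage, which reduces the number of stages.

```python
# Sketch: divide [0, max{dxy}] into (k+1) subintervals and replace each
# pairwise "distance" by the index of its subinterval (ordinal scale).

def ordinal_scale(distances, k):
    """Map each distance in [0, max{dxy}] to an ordinal value 0..k."""
    d_max = max(distances.values())
    width = d_max / (k + 1)                  # width of one subinterval
    return {pair: min(int(d / width), k)     # clamp d_max into interval k
            for pair, d in distances.items()}

# Assumed sample distances for the four pairs from the slide.
distances = {("a", "b"): 0.4, ("u", "v"): 0.9,
             ("p", "q"): 1.4, ("g", "h"): 1.9}
levels = ordinal_scale(distances, 1)  # two subintervals: 0 and 1
```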
Space of the criteria vectors => ordinal scale & ordinal proximity.
[Figure: criteria plane (C1, C2); the ideal point corresponds to equal objects; the Pareto-effective layer (layer 1) and layer 2 define the levels of ordinal proximity.]
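The "revelation of Pareto-layers" above can be sketched via non-dominated sorting: each vector proximity (one component per criterion, smaller = closer to the ideal point) is assigned the index of its Pareto layer, and that index serves as the ordinal proximity. Function names are illustrative assumptions.

```python
# Sketch: split vector proximities into Pareto layers; the layer index
# is the ordinal proximity value. Smaller components = closer objects.

def dominates(u, v):
    """u dominates v if u is <= in every criterion and < in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_layers(vectors):
    """Layer 0 is the Pareto-effective front; peel layers repeatedly."""
    remaining = list(vectors)
    layers = []
    while remaining:
        front = [u for u in remaining
                 if not any(dominates(v, u) for v in remaining if v != u)]
        layers.append(front)
        remaining = [v for v in remaining if v not in front]
    return layers
```

For example, with proximity vectors (1,1), (2,3), (3,2), (4,4), vector (1,1) forms layer 0, the mutually non-dominated (2,3) and (3,2) form layer 1, and (4,4) forms layer 2.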
Hierarchical clustering: IMPROVEMENTS (practice)
Question 2: What is needed in practice (e.g., in applications)? What can we do for applications?
Integration of objects into several groups (clusters) to obtain a richer resultant structure (tree => hierarchy, i.e., clusters with intersection).
Examples of applied domains:
1. Engineering: structures of complex systems
2. CS: structures of software/hardware
3. Communication networks (topology)
4. Biology
5. Others
[Figure: clusters F1, F2, F3, F4, F5, F6 with intersections (clustering with intersection).]
Example (integrating several pairs at each stage yields clusters with intersection):
Stage 0: 1 2 3 4 5 6 7
Stage 1: 1 (2*3) (3*4) (5*6) (6*7)
Stage 2: 1 (2*3*4) (3*4*5*6) (6*7)
Stage 3: (1*2*3*4) (3*4*5*6*7)
Stage 4: (1*2*3*4*5*6*7)
Resultant structure: a hierarchy rather than a tree, since clusters intersect (e.g., object 3 belongs to both (2*3) and (3*4) at Stage 1).
Example from biology (evolution): the traditional evolution process is represented as a tree; clustering with intersection yields a richer hierarchical structure.
Algorithm 1 (the number of inclusions for each object is not limited):
(i) initial set of objects -> vertices;
(ii) "small" proximity -> edges.
Thus we obtain a graph. The problem is to reveal the cliques in this graph (an NP-hard problem).
Algorithm 2. The number of inclusions is limited by t (e.g., t = 2, 3, or 4). Here the complexity is polynomial.
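Algorithm 1 can be sketched as follows: objects become vertices, pairs with "small" proximity become edges, and the clusters (with intersection) are the maximal cliques of the resulting graph. Clique enumeration here uses the Bron-Kerbosch procedure, which is one standard choice (the slides do not name a clique algorithm, and the general problem remains NP-hard); the proximity function and threshold in the usage note are assumptions.

```python
# Sketch of Algorithm 1: proximity graph + maximal-clique enumeration.

def build_graph(objects, proximity, threshold):
    """Edge (x, y) whenever proximity(x, y) <= threshold ("small")."""
    return {x: {y for y in objects
                if y != x and proximity(x, y) <= threshold}
            for x in objects}

def cliques(graph, r=frozenset(), p=None, x=frozenset()):
    """Bron-Kerbosch enumeration of maximal cliques (no pivoting)."""
    if p is None:
        p = frozenset(graph)
    if not p and not x:
        yield set(r)          # r is maximal: no vertex can extend it
    for v in list(p):
        yield from cliques(graph, r | {v}, p & graph[v], x & graph[v])
        p = p - {v}
        x = x | {v}
```

For instance, taking objects 1..4 with proximity |a - b| and threshold 1 gives edges 1-2, 2-3, 3-4; the maximal cliques {1,2}, {2,3}, {3,4} are clusters with intersection (objects 2 and 3 each belong to two clusters).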
Performance (i.e., quality) of clustering procedures:
1. Issues of complexity.
2. Quality of results (still an open question). Some traditional approaches:
(a) computing a clustering quality index: InterCluster Distance / IntraCluster Distance;
(b) coverage, diversity.
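The traditional index in (a) can be sketched as a ratio of mean distances; using centroids for the inter-cluster part is an assumption, since the slides do not fix the exact formula. Higher values indicate better-separated, more compact clusters.

```python
# Sketch of the InterCluster / IntraCluster quality index from (a).
# Assumption: inter-cluster distance is measured between centroids.
import math
from itertools import combinations

def quality_index(clusters):
    """Mean inter-cluster (centroid) distance / mean intra-cluster distance."""
    centroids = [tuple(sum(c) / len(pts) for c in zip(*pts))
                 for pts in clusters]
    inter = [math.dist(a, b) for a, b in combinations(centroids, 2)]
    intra = [math.dist(a, b) for pts in clusters
             for a, b in combinations(pts, 2)]
    return (sum(inter) / len(inter)) / (sum(intra) / len(intra))
```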
Our case: a research procedure (for investigation and problem structuring).