$V(d) \equiv \mathrm{VarDPP}_d(X) = \overline{(X\circ d)^2} - \big(\overline{X\circ d}\big)^2$
$= \frac{1}{N}\sum_{i=1..N}\Big(\sum_{j=1..n} x_{i,j}\, d_j\Big)^2 - \Big(\sum_{j=1..n} \overline{X_j}\, d_j\Big)^2$
$= \frac{1}{N}\sum_{i}\Big(\sum_{j} x_{i,j}\, d_j\Big)\Big(\sum_{k} x_{i,k}\, d_k\Big) - \Big(\sum_{j} \overline{X_j}\, d_j\Big)\Big(\sum_{k} \overline{X_k}\, d_k\Big)$
$= \frac{1}{N}\sum_{i}\Big(\sum_{j} x_{i,j}^2\, d_j^2 + 2\sum_{j<k} x_{i,j} x_{i,k}\, d_j d_k\Big) - \Big(\sum_{j} \overline{X_j}^2 d_j^2 + 2\sum_{j<k} \overline{X_j}\,\overline{X_k}\, d_j d_k\Big)$
$= \sum_{j=1..n}\big(\overline{X_j^2} - \overline{X_j}^2\big)\, d_j^2 + 2\sum_{j=1..n,\, j<k}\big(\overline{X_jX_k} - \overline{X_j}\,\overline{X_k}\big)\, d_j d_k$
Writing $a_{jj} \equiv \overline{X_j^2} - \overline{X_j}^2$ and $a_{jk} \equiv \overline{X_jX_k} - \overline{X_j}\,\overline{X_k}$:
$V(d) = \sum_j a_{jj}\, d_j^2 + \sum_{j \ne k} a_{jk}\, d_j d_k$, subject to $\sum_{i=1..n} d_i^2 = 1$, with gradient
$\nabla V(d) = \Big(\, 2a_{11}d_1 + \sum_{j\ne 1} a_{1j}d_j,\;\; 2a_{22}d_2 + \sum_{j\ne 2} a_{2j}d_j,\;\; \dots,\;\; 2a_{nn}d_n + \sum_{j\ne n} a_{nj}d_j \,\Big)$
Heuristic 1: $d_0 = e_k$, $k$ s.t. $a_{kk}$ is max. Choose $d_1 \equiv \nabla(V(d_0))$ (normalized), choose $d_2 \equiv \nabla(V(d_1))$, ... until $F(d_k)$ is stable.
In matrix form, $d^T \circ V_X \circ d = \mathrm{VarDPP}_d X \equiv V$, where $V_X$ is the $n \times n$ matrix with $(i,j)$ entry $\overline{X_iX_j} - \overline{X_i}\,\overline{X_j}$. Thus $V(d) = \sum_{i,j} a_{ij}\, d_i d_j = (d_1 \dots d_n)\, V_X \,(d_1 \dots d_n)^T$, and the gradient can be written as $\nabla V(d) = A \circ d$ with
$A = \begin{pmatrix} 2a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & 2a_{22} & \dots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{n1} & \dots & & 2a_{nn} \end{pmatrix}$
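Heuristic 1 above amounts to power iteration on the covariance matrix. A minimal NumPy sketch (assuming X is an N×n array; the function name, iteration cap, and stopping test are mine, not from the slides):

import numpy as np

def variance_hill_climb(X, iters=20):
    # Heuristic 1: start at d0 = e_k for the column k with max variance a_kk,
    # then repeatedly replace d by its normalized gradient 2*A*d.
    A = np.cov(X, rowvar=False, bias=True)   # a_jk = mean(XjXk) - mean(Xj)*mean(Xk)
    d = np.zeros(X.shape[1])
    d[np.argmax(np.diag(A))] = 1.0           # d0 = e_k, k s.t. a_kk is max
    for _ in range(iters):
        g = 2 * A @ d                        # gradient of V(d) = d'Ad
        g /= np.linalg.norm(g)               # project back to the unit sphere
        if np.allclose(g, d):                # stop when d (hence F(d_k)) is stable
            break
        d = g
    return d                                 # VarDPP_d(X) = d @ A @ d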
[Figure: the $N \times n$ data matrix $X$ with rows $x_1, x_2, \dots, x_N$, multiplied by the unit vector $d = (d_1, \dots, d_n)^T$, yields the projection column $(x_1 \circ d,\; x_2 \circ d,\; \dots,\; x_N \circ d)^T = X \circ d = F_d(X) = \mathrm{DPP}_d(X)$.]
FAUST CLUSTER
Where are we at? Perfecting FAUST Clustering (using distance-dominated functional gap analysis).
Primary functional is DPPd(x).
Sequence through a grid of d's until a good gap is found? (Expensive?)
Heuristic for picking a "great" d?
Optimize the variance of F(X)? Why? Because if there is low dispersion, there can't be lots of large gaps. But just because there is high dispersion does not mean there IS a large gap (a simple example follows).
The best starting d would be one that maximizes the maximum consecutive difference within the [sorted] array, F(X).
A candidate "good" heuristic is to find the d that maximizes | Mean(F(X)) - Median(F(X)) | but the latter (so far) seems difficult to calculate? We can estimate it with F(VectorOfMedians)=F(VOM) which we can calculate.
Finding a good unit vector, d, for the Dot Product Projection functional, DPP, to maximize gaps.
$\mathrm{Mean}(\mathrm{DPP}_d X) = \frac{1}{N}\sum_{i=1..N}\sum_{j=1..n} x_{i,j}\, d_j$
[Figure: the $N \times n$ data matrix $X$, with columns $X_1 \dots X_j \dots X_n$ and rows $x_1, x_2, \dots, x_i, \dots, x_N$ (entry $x_{i,j}$), multiplied by $d = (d_1, \dots, d_n)^T$, yields the projection column $(x_1 \circ d,\; x_2 \circ d,\; \dots,\; x_i \circ d,\; \dots,\; x_N \circ d)^T = X \circ d = \mathrm{DPP}_d(X)$.]
subject to $\sum_{i=1..n} d_i^2 = 1$
Maximize, wrt d, $|\mathrm{Mean}(\mathrm{DPP}_d(X)) - \mathrm{Median}(\mathrm{DPP}_d(X))|$.
$\mathrm{Mean}(\mathrm{DPP}_d X) = \sum_{j=1..n}\Big(\frac{1}{N}\sum_{i=1..N} x_{i,j}\Big)\, d_j$
But how do we compute Median(DPP_d(X))? We want to use only pTree processing. We want to end up with a formula involving d and numbers only (like the one above for the mean, which involves only the vector d and the numbers $\overline{X_1}, \dots, \overline{X_n}$).
A heuristic is to substitute the Vector of Medians (VOM) for Median(DPP_d(X))?
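A sketch of that estimate (plain NumPy, names mine): taking d along Mean - VOM makes the estimated |Mean(F) - F(VOM)| equal to |Mean - VOM|, since Mean(F) - F(VOM) = (Mean - VOM)∘d:

import numpy as np

def mean_vom_d(X):
    # Estimate Median(DPP_d(X)) by F(VOM), the projection of the
    # componentwise Vector of Medians; d along Mean - VOM maximizes
    # the estimated Mean - Median spread.
    D = X.mean(axis=0) - np.median(X, axis=0)
    return D / np.linalg.norm(D)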
Should we maximize variance?
MEAN-MEDIAN picks out the last two sequences, which have the best gaps (discounting outlier gaps at the extremes).
d=VOM→MN DPP on IRIS_150_SEI_(SL,SW,PL,PW). CLUS1.1 F[0,39]: 44 Virginica with 2 Versicolor errors; CLUS1.3 F[45,76]: 42 Versicolor with 1 Virginica error; CLUS2 F[80,120]: 50 Setosa with 2 Virginica errors; and CLUS1.2 F[39,44], if classified as 5 Versicolor, has 3 Virginica errors. So the classification accuracy is 142/150 or 94.6%.
CLUS2.2 F[49,69] 47_Virginica with 4_Versicolor errors; CLUS2.1 F[25,48] 46_Versicolor with 2_Virginica errors; CLUS1 F[0, 25] 50_Setosa with 1 Virginica error. So the classification accuracy is 143/150 or 95.3%
STDs=(1.9, 9, 23, 1.2); maxSTD=23 for d=e_TS on WN150hl(FA,FS,TS,AL).
[0,70): 57 low, 80 high → CLUS_1.1.1    [70,78): 0 low, 2 high → CLUS_1.1.2
[0,58): 57 low, 75 high → CLUS_1.1.1.1    [58,70): 0 low, 5 high → CLUS_1.1.1.2
[0,31): 57 low, 47 high → CLUS_1.1.1.1.1    [31,58): 0 low, 28 high → CLUS_1.1.1.1.2
[0,16): 57 low, 12 high → CLUS_1.1.1.1.1    [16,31): 0 low, 35 high → CLUS_1.1.1.1.2
[0,10): 42 low, 0 high → CLUS_1.1.1.1.1.1    [10,16): 15 low, 12 high → CLUS_1.1.1.1.1.2
d=VOM→MEAN DPP on WINE_150_HL_(FA,FSO2,TSO2,ALCOHOL). Some agglomeration required: CLUS1.1.1.1.1.1 is LOW_Quality F[0,10]; else HIGH_Quality F[13,119] with 15 LOW errors. Classification accuracy = 90% (if it had been cut at 13, 99.3% accuracy!).
[0,80): 57 low, 89 high → CLUS1.1    [80,95]: 7 high → CLUS1.2
[0,72): 57 low, 89 high → CLUS1.1.1    [72,80]: 2 high → CLUS1.1.2
[0,60): 57 low, 75 high → CLUS_1.1.1.1    [60,72): 0 low, 5 high → CLUS_1.1.1.2
[0,33): 57 low, 44 high → CLUS_1.1.1.1.1    [33,60): 0 low, 31 high → CLUS_1.1.1.1.2
[0,22): 57 low, 12 high → CLUS_1.1.1.1.1    [22,33): 0 low, 32 high → CLUS_1.1.1.1.2
But that's the only thinning! Therefore, we are unable to separate Kama and Canada at all.
[0,19): 0 Kama, 0 Rosa, 33 Canada → CLUS_1.1    [19,62): 50 Kama, 16 Rosa, 17 Canada → CLUS_1.2
[19,23): 0 Kama, 0 Rosa, 11 Canada → CLUS_1.2.1    [23,62): 50 Kama, 16 Rosa, 6 Canada → CLUS_1.2.2
[23,30): 6 Kama, 0 Rosa, 4 Canada → CLUS_1.2.2.1    [30,62): 44 Kama, 16 Rosa, 2 Canada → CLUS_1.2.2.2
[30,33): 5 Kama, 0 Rosa, 1 Canada → CLUS_1.2.2.2.1    [33,62): 39 Kama, 16 Rosa, 1 Canada → CLUS_1.2.2.2.2
[33,36): 6 Kama, 0 Rosa, 1 Canada → CLUS_1.2.2.2.2.1    [36,62): 33 Kama, 16 Rosa, 0 Canada → CLUS_1.2.2.2.2.2
[36,45): 18 Kama, 2 Rosa, 0 Canada → CLUS_1.2.2.2.2.2.1    [45,62): 15 Kama, 14 Rosa, 0 Canada → CLUS_1.2.2.2.2.2.2
[45,50): 8 Kama, 1 Rosa, 0 Canada → CLUS_1.2.2.2.2.2.2.1    [50,52): 0 Kama, 3 Rosa, 0 Canada → CLUS_1.2.2.2.2.2.2.2.1
[52,55): 3 Kama, 2 Rosa, 0 Canada → CLUS_1.2.2.2.2.2.2.2.2.1
[55,58): 3 Kama, 3 Rosa, 0 Canada → CLUS_1.2.2.2.2.2.2.2.2.2.1    [58,62): 1 Kama, 5 Rosa, 0 Canada → CLUS_1.2.2.2.2.2.2.2.2.2.2
[0,17): 49 Kama, 8 Rosa, 50 Canada → CLUS_1    [17,22): 1 Kama, 42 Rosa, 0 Canada → CLUS_2
[0,14): 1 Kama, 42 Canada. But no algorithm would pick 14 as a cut!
[13,14): 10 Kama, 8 Canada. That's either 8 or 10 errors, and no algorithm would cut at 14.
[14,15): 18 Kama. But no algorithm would cut at 15.
[15,16): 13 Kama, 2 Rosa; no algorithm would cut. [16,17): 7 Kama, 6 Rosa.
[0,32): 4 Low, 24 Medium, 8 High → CLUS_1    [32,101): 39 Low, 28 Medium, 47 High → CLUS_2
[0,9): 3 Low, 16 Medium, 2 High → CLUS_1.1    [9,32): 1 Low, 8 Medium, 6 High → CLUS_1.2
[32,55): 21 Low, 12 Medium, 28 High → CLUS_2.1    [55,101): 1 Low, 8 Medium, 6 High → CLUS_2.2
Inconclusive on e2!
d=e_4 on Conc4150. F-value distribution (F: count): 0:17, 1:11, 3:12, 6:35, 13:25, 22:25, 24:8, 44:7, 67:4, 89:2, 91:4 (total 150).
F=67: 0 Lo, 4 Med, 0 Hi → CLUS_1    F=89: 0 Lo, 4 Med, 0 Hi → CLUS_2    F=91: 0 Lo, 4 Med, 0 Hi → CLUS_4
F=44: 0 Lo, 7 Med, 0 Hi → CLUS_3
F=22: 0 Lo, 6 Med, 19 Hi → CLUS_6    F=13: 3 Lo, 3 Med, 19 Hi → CLUS_7    F=6: 13 Lo, 5 Med, 17 Hi → CLUS_8    F=3: 12 Lo, 0 Med, 0 Hi → CLUS_9    F=1: 2 Lo, 9 Med, 0 Hi → CLUS_10    F=0: 13 Lo, 4 Med, 0 Hi → CLUS_11
Cut only at gaps ≥ 5 on the first round. Then we iteratively repeat on each subcluster. C1 and C2 accuracy = 100%, so we skip them and concentrate on C3, C4, C5 to see if a second round will purify them. Start with C5: (F-MN)/4.
Another issue is: How can we follow this with an agglomeration step which might glue the intra-class subclusters back together?
Agglomerate after FAUST Gap Clustering using "separation of the subcluster medians" [or means?] as the measure?!?!
So there is but 1 error (in the C3 step), for an accuracy of 149/150 = 99.3%. However, I realized I am still cheating ;-( How would I know to use VOM_{2,4} as the first round instead of MN_{2,4}?
Agglomerate (build a dendrogram) by iteratively gluing together clusters with minimum Median separation. Should I have normalized the rounds? Should I have used the same Fdivisor and made sure the range of values was the same in the 2nd round as it was in the 1st round (on CLUS 4)? Can I normalize after the fact, by multiplying 1st round values by 100/88=1.76? Agglomerate the 1st round clusters and then independently agglomerate 2nd round clusters?
[Dendrogram figure: first-round clusters C1, C2, C3, C4 and their subclusters, labeled with medians 62, 33, 17, 71, 23, 21, 9, 34, 57, 86, 71, 10, 56, 14, 61, 18, 40. At this level, FinalClus1 = {17M}, 0 errors.]
Let's review agglomerative clustering in general next (dendrograms).
CONCRETE
Hierarchical Clustering
Any maximal anti-chain (a maximal set of nodes in which no 2 are directly connected) is a clustering (a dendrogram offers many).
[Dendrogram figure: leaves A, B, C, D, E, F, G; B and C merge to BC, D and E to DE, F and G to FG, up to the root DEFGABC.]
Hierarchical Clustering
But the “horizontal” anti-chains are the clusterings resulting from the top-down (or bottom-up) method(s).
Suppose we know (or want) 3 clusters: Low, Medium and High Strength. We can use an anti-chain that gives us exactly 3 subclusters in two ways, one shown in brown and the other in purple. Which would we choose? The brown seems to give slightly more uniform subcluster sizes. Brown error count: Low (bottom) 11, Medium (middle) 0, High (top) 26, so 96/133 = 72% accurate. The purple error count: Low 2, Medium 22, High 35, so 74/133 = 56% accurate. What about agglomerating using single-link agglomeration (minimum pairwise distance)?
Agglomerating using single link (min pairwise distance) = min gap size! (Glue min-gap adjacent clusters first.)
The first thing we can notice is that outliers mess up agglomerations which are supervised by knowledge of the number of subclusters expected. Therefore we might remove outliers by backing away from all gap ≥ 5 agglomerations, then looking for 3-subcluster max anti-chains.
What we have done is to declare F<7 and F>84 as extreme tripleton outlier sets, and F=79, F=40 and F=47 as singleton outlier sets, because they are F-gapped by at least 5 (actually 10) on either side.
The brown gives more uniform sizes. Brown errors: Low (bottom) 8, Medium (middle) 12 and High (top) 6, so 107/133=80% accurate.
The one decision to agglomerate C4.7.1 to C4.7.2 (gap=3) instead of C4.3.2 to C4.7.2 (gap=3) caused lots of error. C4.7.1 and C4.7.2 are problematic since they separate out, but in increasing F order it's H M L M L, so if we suspected this pattern we would look for 5 subclusters.
The 5 orange errors in increasing F-order are: 6, 2, 0, 0, 8 so 127/133=95% accurate.
If you have ever studied concrete, you know it is a very complex material. The fact that it clusters out with an F-order pattern of HMLML is just bizarre! So we should expect errors. CONCRETE
This uncovers the fact that repeated applications of mean→VOM can be non-productive when each application basically removes sets of outliers at the extremes of the F-value array (because when outliers are removed, the VOM may move toward the mean).
[Figure: the 2-D example dataset X(x1,x2) plotted on a 0–f coordinate grid, points labeled 1–9 and a–f, with p and q marked.]
Gap Revealer, width 2^4 = 16: take d = M-p and compute all pTree combinations down to p_4 and p'_4 (so F-values are binned into intervals of width 2^4).
[Figure: the points z1–zf and the mean M plotted on a 0–f coordinate grid.]
[000 0000, 000 1111] = [0,16): has 1 point, z1. This is a 2^4 thinning. z1∘d = 11 is only 5 units from the right edge, so z1 is not declared an outlier yet. Next, we check the min distance from the right edge to the next interval's points, to see if z1's right-side gap is actually ≥ 2^4 (the calculation of the min is a pTree process; no x looping required!).
[001 0000, 001 1111] = [16,32): the minimum, z3∘d = 23, is 7 units from the left edge, 16, so z1 has only a 5+7 = 12 unit gap on its right (not a 2^4 gap). So z1 is not declared a 2^4 outlier (it is a 2^4 inlier).
[010 0000, 010 1111] = [32,48): z4∘d = 34 is within 2 of 32, so z4 is not declared an anomaly.
[011 0000, 011 1111] = [48,64): z5∘d = 53 is 19 from z4∘d = 34 (> 2^4), but only 11 from 64. However, the next interval, [64,80), is empty, so z5 is 27 from its right neighbor. z5 is declared an outlier and we put a subcluster cut through z5.
[100 0000, 100 1111] = [64,80): empty; this is clearly a 2^4 gap.
[101 0000, 101 1111] = [80,96): z6∘d = 80, zf∘d = 83.
[110 0000, 110 1111] = [96,112): zb∘d = 110, zd∘d = 109. So both {z6, zf} are declared outliers (a gap ≥ 16 on both sides).
[111 0000, 111 1111] = [112,128): z7∘d = 118, z8∘d = 114, z9∘d = 125, za∘d = 114, zc∘d = 121, ze∘d = 125. No 2^4 gaps here, but we can consult SpS(d^2(x,y)) for the actual pairwise distances, which reveals that there are no 2^4 gaps in this subcluster. And, incidentally, it reveals a 5.8 gap between {7,8,9,a} and {b,c,d,e}, but that analysis is messy and the gap would be revealed by the next round on this sub-cluster anyway.
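A horizontal (looping, non-pTree) sketch of the same interval bookkeeping, assuming integer F-values; in the pTree version the counts and the per-interval min/max come from AND/OR operations on the bit slices down to p_4, not from a loop (names mine):

import numpy as np

def interval_report(F):
    # Bucket integer F-values by their bits above p3 (i.e., intervals
    # [b*16, (b+1)*16) of width 2^4). Empty buckets are 2^4 gaps; the
    # min/max inside each bucket give the edge distances checked above.
    F = np.asarray(F, dtype=int)
    for b in range(F.min() >> 4, (F.max() >> 4) + 1):
        pts = F[(F >> 4) == b]
        if pts.size:
            print(f"[{b << 4},{(b + 1) << 4}): {pts.size} points, "
                  f"min={pts.min()}, max={pts.max()}")
        else:
            print(f"[{b << 4},{(b + 1) << 4}): empty (a 2^4 gap)")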
Barrel Clustering (this method attempts to build barrel-shaped gaps around clusters): allows for a better fit around convex clusters that are elongated in one direction (not round).
[Figure: points y projected onto the line through p and q; gaps in dot product lengths [projections] on the line; barrel-cap gap width and barrel-radius gap width marked.]
Exhaustive search for all barrel gaps: it takes two parameters for a pseudo-exhaustive search (exhaustive modulo a grid width):
1. A StartPoint, p (an n-vector, so n-dimensional).
2. A UnitVector, d (an n-direction, so (n-1)-dimensional: a grid on the surface of the sphere in R^n).
Then for every choice of (p,d) (e.g., in a grid of points in R^{2n-1}), two functionals are used to enclose subclusters in barrel-shaped gaps:
a. SquareBarrelRadius functional, SBR(y) = (y-p)∘(y-p) - ((y-p)∘d)^2
b. BarrelLength functional, BL(y) = (y-p)∘d
Given a p, do we need a full grid of d's (directions)? No! d and -d give the same BL-gaps.
Given d, do we need a full grid of p starting points? No! All p' s.t. p' = p + cd give the same gaps. Hill-climb gap width from a good starting point and direction.
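A minimal sketch of the two functionals (NumPy; assumes d is a unit vector and Y, p are arrays; names mine):

import numpy as np

def barrel_functionals(Y, p, d):
    # BL(y)  = (y-p) o d              : length of y's projection on the d-axis
    # SBR(y) = (y-p)o(y-p) - BL(y)^2  : squared radial distance from the axis
    # Gaps in BL cut barrel caps; gaps in SBR cut barrel radii.
    Yp = Y - p
    BL = Yp @ d
    SBR = (Yp * Yp).sum(axis=1) - BL ** 2
    return BL, SBR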
MATH: Need dot product projection length and dot product projection distance (in red).
For the dot product length projections (caps) we already needed:
$(y-p)\circ\frac{M-p}{|M-p|} = \big(\, y\circ(M-p) - p\circ(M-p) \,\big)\,/\,|M-p|$
That is, we needed to compute the green constants and the blue and red dot product functionals in an optimal way (and then do the PTreeSet additions/subtractions/multiplications). What is optimal? (minimizing PTreeSet functional creations and PTreeSet operations.)
F=(y-M)o(x-M)/|x-M|-mn restricted to a cosine cone on IRIS
F: count (outliers listed): 8:2 (i22, i50), 10:2, 11:2 (i28), 12:4 (i24, i27, i34), 13:2, 14:4, 15:3, 16:8, 17:4, 18:7, 19:3, 20:5, 21:1, 22:1, 23:1, 34:1 (i39). 43/50 e, so picks out e.
w naaa-xaaa, cone=.95
F: count: 12:1, 13:2, 14:1, 15:2, 16:1, 17:1, 18:4, 19:3, 20:2, 21:3, 22:5, 23:6 (i21), 24:5, 25:1, 27:1, 28:1, 29:2, 30:2 (i7). 41/43 e, so picks e.
w aaan-aaax, cone=.54
F: count: 7:3 (i27, i28), 8:1, 9:3, 10:12 (i20, i34), 11:7, 12:13, 13:5, 14:3, 15:7, 19:1, 20:1, 21:7, 22:7, 23:28, 24:6. 100/104 s or e, so it picks out s/e (0 picks i).
Corner points
Gap in dot product projections onto the cornerpoints line.
Cosine cone gap (over some angle)
Cosine conical gapping seems quick and easy (cosine = dot product divided by both lengths).
The length of the fixed vector, x-M, is a one-time calculation. The length |y-M| changes with y, so build the PTreeSet.
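A sketch of the cosine-cone mask under those definitions (NumPy; assumes no row y equals M; names mine):

import numpy as np

def cone_mask(Y, M, x, min_cos=0.95):
    # cos of the angle at M between each y-M and the fixed vector x-M.
    # |x-M| is a one-time scalar; the |y-M| column plays the role of the
    # PTreeSet built in the text.
    v = x - M
    W = Y - M
    cos = (W @ v) / (np.linalg.norm(W, axis=1) * np.linalg.norm(v))
    return cos >= min_cos                    # mask of points inside the cone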
Cone Clustering: (finding cone-shaped clusters)
[Figure: scatter of r's and v's with the class means mR and mV marked.]
FAUST Classifier. Separate class_r and class_v using midpoints of means:
Set the cut point $a = (m_R + (m_V - m_R)/2)\circ d = ((m_R + m_V)/2) \circ d$ on the d-line?
Training amounts to choosing the Cut hyperplane, an (n-1)-dimensional hyperplane (which thus cuts the space in two). Classify with one horizontal program (AND/OR) across the pTrees to get a mask pTree for each class (bulk classification). Improve accuracy? E.g., by considering the dispersion within classes. Use:
1. vector_of_medians (vom_v ≡ (median(v1), median(v2), ...)) instead of means; then use the stdev ratio to place the cut.
2. Cut at the midpoint of Max{r∘d}, Min{v∘d}. If there is no gap, move the Cut until r_errors + v_errors is minimized.
3. Hill-climb d to maximize the gap (or minimize errors when applied to the training set).
4. Replace mr, mv with the average of the margin points?
5. Round classes expected? Use SD_mr < |D|/2 for the r-class and SD_mv < |D|/2 for the v-class.
[Figure: classes r and v in the dim 1 × dim 2 plane, with vomR, vomV and the d-line shown.]
$P_r = P_{x\circ d < a}$, $P_v = P_{x\circ d \ge a}$, where $D \equiv m_R{\to}m_V$ (the vector from $m_R$ to $m_V$) and $d = D/|D|$.
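A minimal sketch of this midpoint-of-means rule (plain NumPy rather than pTree bulk classification; names mine):

import numpy as np

def faust_midpoint(R, V):
    # d points from mean(R) to mean(V); the cut a is the projected midpoint
    # of the two means. Returns a classifier: 'r' if x o d < a, else 'v'.
    mR, mV = R.mean(axis=0), V.mean(axis=0)
    d = (mV - mR) / np.linalg.norm(mV - mR)
    a = ((mR + mV) / 2) @ d
    return lambda X: np.where(X @ d < a, "r", "v")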
Data mining Big Data. Big data: up to trillions of rows (or more) and, possibly, thousands of columns (or many more). I structure data vertically (pTrees) and process it horizontally. Looping across thousands of columns can be orders of magnitude faster than looping down trillions of rows. So sometimes that means a task can be done in human time only if the data is vertically organized. Data mining is [largely] CLASSIFICATION or PREDICTION (assigning a class label to a row based on a training set of classified rows). What about clustering and ARM? They are important and related! Roughly, clustering creates/improves training sets, and ARM is used to data mine more complex data (e.g., relationship matrixes, etc.).
2/2/13
CLASSIFICATION is [largely] case-based reasoning. To make a decision we typically search our memory for similar situations (near neighbor cases) and base our decision on the decisions we made in those cases (we do what worked before for us or others). We let near neighbors vote. "The Magical Number Seven, Plus or Minus Two" [2] is cited to argue that the number of objects (contexts) an average human can hold in working memory is 7 ± 2. We can think of classification as providing a better 7 (so it's decision support, not decision making). One can say that all classification methods (even model-based ones) are a form of Near Neighbor Classification. E.g., in Decision Tree Induction (DTI), the classes at the bottom of a decision branch ARE the near neighbor set, due to the fact that the sample arrived at that leaf.

Rows of an entity table (e.g., Iris(SL,SW,PL,PW) or Image(R,G,B)) describe instances of the entity (irises or image pixels). Columns are descriptive information on the row instances (e.g., Sepal Length, Sepal Width, Pedal Length, Pedal Width, or Red, Green, Blue photon counts). If the table consists entirely of real numbers, then the row set can be viewed [as a subset of] a real vector space with dimension = # of columns. Then the notion of "near" [in classification and clustering] can be defined using a dissimilarity (~distance) or a similarity: two rows are near if the distance between them is low or their similarity is high. Near for columns can be defined using a correlation (e.g., Pearson's, Spearman's...).

If the columns also describe instances of an entity, then the table is really a matrix or relationship between instances of the row entity and the column entity. Each matrix cell measures some attribute of that relationship pair (the simplest: 1 if that row is related to that column, else 0; the most complex: an entire structure of data describing that pair, i.e., that row instance and that column instance). In Market Basket Research (MBR), the row entity is customers and the column entity is items; each cell: 1 iff that customer has that item in the basket. In Netflix Cinematch, the row entity is customers, the column entity is movies, and each cell has the 5-star rating that customer gave to that movie. In Bioinformatics, the row entity might be experiments and the column entity might be genes, and each cell has the expression level of that gene in that experiment; or the row and column entities might both be proteins, and each cell has a 1-bit iff the two proteins interact in some way. In Facebook, the rows might be people and the columns might also be people (and a cell has a 1-bit iff the row and column persons are friends).

Even when the table appears to be a simple entity table with descriptive feature columns, it may be viewable as a relationship between 2 entities. E.g., Image(R,G,B) is a table of pixel instances with columns R,G,B. The R-values count the photons in a "red" frequency range detected at that pixel over an interval of time. That red frequency range is determined more by the camera technology than by any scientific definition. If we had separate CCD cameras that could count photons in each of a million very thin adjacent frequency intervals, we could view the column values of that image as instances of a frequency entity; then the image would be a relationship matrix between the pixel and the frequency entities. So an entity table can often be usefully viewed as a relationship matrix.
If so, it can also be rotated so that the former column entity is now viewed as the new row entity and the former row entity is now viewed as the new set of descriptive columns. The bottom line is that we can often data mine a table of data in many ways: as an entity table (classification and clustering), as a relationship matrix (ARM), or, upon rotation of that matrix, as another entity table. For a rotated entity table, the concepts of nearness that can be used also rotate (e.g., the cosine correlation of two columns morphs into the cosine of the angle between 2 vectors as a row similarity measure).
Enabling Real-Time Computing SAP® In-Memory enables real-time computing by bringing together online transaction proc. OLTP (DB) and online analytical proc. OLAP (DW).
Combining advances in hardware technology with SAP In-Memory Computing empowers business, from shop floor to boardroom, by giving real-time business processes instantaneous access to data, eliminating today's information lag for your business.
In-memory computing is already under way. The question isn't if this revolution will impact businesses, but when and how.
In-memory computing won't be introduced because a company can afford the technology. It will be because a business cannot afford to allow its competitors to adopt it first.
Here is a sample of what in-memory computing can do for you:
• Enable mixed workloads of analytics, operations, and performance management in a single software landscape.
• Support smarter business decisions by providing increased visibility of very large volumes of business information.
• Enable users to react to business events more quickly through real-time analysis and reporting of operational data.
• Deliver innovative real-time analysis and reporting.
• Streamline the IT landscape and reduce total cost of ownership.
In manufacturing enterprises, in-memory computing tech will connect the shop floor to the boardroom, and the shop floor associate will have instant access to the same data as the board [[shop floor = daily transaction processing. Boardroom = executive data mining]]. The shop floor will then see the results of their actions reflected immediately in the relevant Key Performance Indicators (KPI).
SAP BusinessObjects Event Insight software is key. In what used to be called exception reporting, the software deals with huge amounts of realtime data to determine immediate and appropriate action for a real-time situation.
Product managers will still look at inventory and point-of-sale data, but in the future they will also receive, e.g., customers' broadcast dissatisfaction with a product over Twitter. Or they might be alerted to a negative product review released online that highlights some unpleasant product features requiring immediate action.
From the other side, small businesses running real-time inventory reports will be able to announce to their Facebook and Twitter communities that a high demand product is available, how to order, and where to pick up.
Bad movies have been able to enjoy a great opening weekend before crashing 2nd weekend when negative word-of-mouth feedback cools enthusiasm. That week-long grace period is about to disappear for silver screen flops.
Consumer feedback won’t take a week, a day, or an hour.
The very second showing of a movie could suffer from a noticeable falloff in attendance due to consumer criticism piped instantaneously through the new technologies.
It will no longer be good enough to have weekend numbers ready for executives on Monday morning. Executives will run their own reports on revenue, Twitter their reviews, and by Monday morning have acted on their decisions.
The final example is from the utilities industry: The most expensive energy a utilities provides is energy to meet unexpected demand during peak periods of consumption. If the company could analyze trends in power consumption based on real-time meter reads, it could offer – in real time – extra low rates for the week or month if they reduce their consumption during the following few hours.
This advantage will become much more dramatic when we switch to electric cars; predictably, those cars are recharged the minute the owners return home from work. Hardware: blade servers and multicore CPUs, with memory capacities measured in terabytes. Software: an in-memory database with highly compressible row/column storage designed to maximize in-memory computing technology. [[Both row and column storage! They convert to column-wise storage only for long-lived, high-value data?]] Parallel processing takes place in the database layer rather than in the app layer, as it does in the client-server architecture.
Total cost is 30% lower than traditional RDBMSs due to:
• Leaner hardware and less system capacity required, as mixed workloads of analytics, operations, and performance management run in a single system, which also reduces redundant data storage. [[Back to a single DB rather than a DB for TP and a DW for boardroom decision support.]]
• Less extract-transform-load (ETL) between systems and fewer prebuilt reports, reducing the support required to run the software.
Report runtime improvements of up to 1000 times. Compression rates of up to 10 times. Performance improvements are expected to be even higher in SAP apps natively developed for in-memory DBs. Initial results: a reduction of computing time from hours to seconds. However, in-memory computing will not eliminate the need for data warehousing. Real-time reporting will solve old challenges and create new opportunities, but new challenges will arise. SAP HANA 1.0 software supports real-time database access to data from the SAP apps that support OLTP. Formerly, operational reporting functionality was transferred from OLTP applications to a data warehouse. With in-memory computing technology, this functionality is integrated back into the transaction system.
Adopting in-memory computing results in an uncluttered arch based on a few, tightly aligned core systems enabled by service-oriented architecture (SOA) to provide harmonized, valid metadata and master data across business processes. Some of the most salient shifts and trends in future enterprise architectures will be:• A shift to BI self-service apps like data exploration, instead of static report solutions.• Central metadata and masterdata repositories that define the data architecture, allowing data stewards to work across all business units and all platforms
Real-time in-memory computing technology will cause a decline in Structured Query Language (SQL) satellite databases. The purpose of those databases as flexible, ad hoc, more business-oriented, less IT-static tools might still be required, but their offline status will be a disadvantage and will delay data updates. Some might argue that satellite systems with in-memory computing technology will take over from satellite SQL DBs. SAP Business Explorer tools that use in-memory computing technology represent a paradigm shift: instead of waiting for IT to work on a long queue of support tickets to create new reports, business users can explore large data sets and define reports on the fly.
To illustrate the DPP algorithm, we use IRIS to see how close it comes to separating into the 3 known classes (s=setosa, e=versicolor, i=virginica). We require a DPP-gap of at least 4. We also check any sparse ends of the DPP-range to find outliers (using a table of pairwise distances). We start with p = MinVector of the 4 column minimums and q = MaxVector of the 4 column maximums; then we replace some of those with the average.
So s16, i39, e49, e11 are outliers, and {e8, e44} is a doubleton outlier set. Separate at 17 and 23, giving: CLUS1, F<17 (CLUS1 = 50 Setosa, with s16, s42 declared as outliers); CLUS2, 17<F<23 (e8, e11, e44, e49, i39 are all already declared outliers); CLUS3, 23<F (46 vers, 49 virg, with i6, i10, i18, i19, i23, i32 declared as outliers).
CLUS3.2 = 39 virg, 2 vers (unable to separate the 2 vers from the 39 virg)
"Gap Hill Climbing": mathematical analysisOne way to increase the size of the functional gaps is to hill climb the standard deviation of the functional, F (hoping that a "rotation" of d toward a higher STDev would increase the likelihood that gaps would be larger ( more dispersion allows for more and/or larger gaps).
We can also try to grow one particular gap or thinning using support pairs as follows:
F-slices are hyperplanes (assuming F = DPP_d), so it would make sense to try to "re-orient" d so that the gap grows. Instead of taking the "improved" p and q to be the means of the entire n-dimensional half-spaces cut by the gap (or thinning), take p and q to be the means of the F-slice (n-1)-dimensional hyperplanes defining the gap or thinning. This is easy since our method produces the pTree mask of each F-slice ordered by increasing F-value (in fact, it is the sequence of F-values and the sequence of counts of points giving those values that we use to find large gaps in the first place).
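A sketch of that re-orientation (NumPy, names mine), approximating the two bounding F-slices by the nearest occupied slice on each side of the cut:

import numpy as np

def reorient(X, d, cut):
    # p, q = means of the occupied F-slices bounding the gap at `cut`
    # (the (n-1)-dimensional hyperplane slices, not whole half-spaces);
    # the new d points from p to q. Assumes points exist on both sides.
    F = X @ d
    p = X[F == F[F < cut].max()].mean(axis=0)
    q = X[F == F[F >= cut].min()].mean(axis=0)
    D = q - p
    return D / np.linalg.norm(D)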
[Figure: an example point set on a 0–f grid; direction d1 yields a small d1-gap, while the re-oriented direction d2 (through p and q) yields a larger d2-gap.]
The d2-gap is much larger than the d1-gap. It is still not the optimal gap, though. Would it be better to use a weighted mean (weighted by the distance from the gap, that is, by the d-barrel radius from the center of the gap on which each point lies)?
[Figure: the same example with the outer points removed; the weighted re-orientation gives a still larger d2-gap.]
In this example it seems to make for a larger gap, but what weightings should be used? (E.g., 1/radius²; zero weighting after the first gap is identical to the previous.) Also, we really want to identify the support-vector pair of the gap (the pair, one from each side, which are closest together) as p and q (in this case 9 and a, but we were just lucky to draw our vector through them). We could check the d-barrel radius of just these gap slice pairs and select the closest pair as p and q?
There is a thinning at 22, and it is the same one, but it is not more prominent. Next we attempt to hill-climb the gap at 16 using the mean of the half-space boundary (i.e., p is avg=14; q is avg=17).
Here, the gap between CLUS1 and CLUS2 is made more pronounced? (Why?) But the thinning between CLUS2 and CLUS3 seems even more obscure.
Although this doesn't prove anything, it is not good news for the method!
It did not grow the gap we wanted to grow (between CLUSTER2 and CLUSTER3).
CAINE 2013 Call for Papers: 26th International Conference on Computer Applications in Industry and Engineering, September 25-27, 2013, Omni Hotel, Los Angeles, California, USA. Sponsored by the International Society for Computers and Their Applications (ISCA). CAINE-2013 will feature contributed papers as well as workshops and special sessions. Papers will be accepted into oral presentation sessions. The topics will include, but are not limited to, the following areas: Agent-Based Systems; Image/Signal Processing; Autonomous Systems; Information Assurance; Big Data Analytics; Information Systems/Databases; Bioinformatics, Biomedical Systems/Engineering; Internet and Web-Based Systems; Computer-Aided Design/Manufacturing; Knowledge-based Systems; Computer Architecture/VLSI; Mobile Computing; Computer Graphics and Animation; Multimedia Applications; Computer Modeling/Simulation; Neural Networks; Computer Security; Pattern Recognition/Computer Vision; Computers in Education; Rough Set and Fuzzy Logic; Computers in Healthcare; Robotics; Computer Networks; Fuzzy Logic Control Systems; Sensor Networks; Data Communication; Scientific Computing; Data Mining; Software Engineering/CASE; Distributed Systems; Visualization; Embedded Systems; Wireless Networks and Communication. Important Dates: Workshop/special session proposal May 25, 2013; Full Paper Submission June 5, 2013; Notice of Acceptance July 5, 2013; Pre-registration & Camera-Ready Paper Due August 5, 2013; Event Dates Sept 25-27, 2013.
SEDE Conf is interested in gathering researchers and professionals in the domains of SE and DE to present and discuss high-quality research results and outcomes in their fields. SEDE 2013 aims at facilitating cross-fertilization of ideas in Software and Data Engineering. The conference topics include, but are not limited to: Requirements Engineering for Data Intensive Software Systems; Software Verification and Model Checking; Model-Based Methodologies; Software Quality and Software Metrics; Architecture and Design of Data Intensive Software Systems; Software Testing; Service- and Aspect-Oriented Techniques; Adaptive Software Systems; Information System Development; Software and Data Visualization; Development Tools for Data Intensive Software Systems; Software Processes; Software Project Mgmt; Applications and Case Studies; Engineering Distributed, Parallel, and Peer-to-Peer Databases; Cloud Infrastructure, Mobile, Distributed, and Peer-to-Peer Data Management; Semi-Structured Data and XML Databases; Data Integration, Interoperability, and Metadata; Data Mining: Traditional, Large-Scale, and Parallel; Ubiquitous Data Management and Mobile Databases; Data Privacy and Security; Scientific and Biological Databases and Bioinformatics; Social Networks, Web, and Personal Information Management; Data Grids, Data Warehousing, OLAP; Temporal, Spatial, Sensor, and Multimedia Databases; Taxonomy and Categorization; Pattern Recognition, Clustering, and Classification; Knowledge Management and Ontologies; Query Processing and Optimization; Database Applications and Experiences; Web Data Mgmt and Deep Web. May 23, 2013: Paper Submission Deadline; June 30, 2013: Notification of Acceptance; July 20, 2013: Registration and Camera-Ready Manuscript. Conference Website: http://theory.utdallas.edu/SEDE2013/
ACC-2013 provides an international forum for presentation and discussion of research on a variety of aspects of advanced computing and its applications, and communication and networking systems. Important Dates May 5, 2013 - Special Sessions Proposal June 5, 2013 - Full Paper Submission July 5, 2013 - Author Notification Aug. 5, 2013 - Advance Registration & Camera Ready Paper Due
CBR International Workshop Case-Based Reasoning CBR-MD 2013 July 19, 2013, New York/USA Topics of interest include (but are not limited to): CBR for signals, images, video, audio and text Similarity assessment Case representation and case mining Retrieval and indexing Conversational CBR Meta-learning for model improvement and parameter setting for processing with CBR Incremental model improvement by CBR Case base maintenance for systems Case authoring Life-time of a CBR system Measuring coverage of case bases Ontology learning with CBR Submission Deadline: March 20th, 2013 Notification Date: April 30th, 2013 Camera-Ready Deadline: May 12th, 2013
Workshop on Data Mining in Life Sciences DMLS: Discovery of high-level structures, including e.g. association networks; Text mining from biomedical literature; Medical images mining; Biomedical signals mining; Temporal and sequential data mining; Mining heterogeneous data; Mining data from molecular biology, genomics, proteomics, phylogenetic classification. With regard to different methodologies and case studies: Data mining project development methodology for biomedicine; Integration of data mining in the clinic; Ontology-driven data mining in life sciences; Methodology for mining complex data, e.g. a combination of laboratory test results, images, signals, genomic and proteomic samples; Data mining for personal disease management; Utility considerations in DMLS, including e.g. cost-sensitive learning. Submission Deadline: March 20th, 2013; Notification Date: April 30th, 2013; Camera-Ready Deadline: May 12th, 2013; Workshop date: July 19th, 2013.
Workshop on Data Mining in Marketing DMM'2013: In the business environment, data warehousing (the practice of creating huge, central stores of customer data that can be used throughout the enterprise) is becoming more and more common practice and, as a consequence, the importance of data mining is growing stronger. Applications in Marketing; Methods for User Profiling; Mining Insurance Data; E-Marketing with Data Mining; Logfile Analysis; Churn Management; Association Rules for Marketing Applications; Online Targeting and Controlling; Behavioral Targeting; Juridical Conditions of E-Marketing, Online Targeting and so on; Control of Online-Marketing Activities; New Trends in Online Marketing; Aspects of E-Mailing Activities and Newsletter Mailing. Submission Deadline: March 20th, 2013; Notification Date: April 30th, 2013; Camera-Ready Deadline: May 12th, 2013; Workshop date: July 19th, 2013.
Workshop on Data Mining in Agriculture DMA 2013: Data Mining on Sensor and Spatial Data from Agricultural Applications; Analysis of Remote Sensor Data; Feature Selection on Agricultural Data; Evaluation of Data Mining Experiments; Spatial Autocorrelation in Agricultural Data. Submission Deadline: March 20th, 2013; Notification Date: April 30th, 2013; Camera-Ready Deadline: May 12th, 2013; Workshop date: July 19th, 2013.
$V_X \circ (dd) = \mathrm{VarDPP}_d X \equiv V$, where $V_X$ is the $n \times n$ matrix with $(i,j)$ entry $\overline{X_iX_j} - \overline{X_i}\,\overline{X_j}$, $dd$ is the $n \times n$ matrix with $(i,j)$ entry $d_i d_j$, and $\circ$ treats both as $n^2$-vectors.
$|d| = 1$ iff $|dd| = 1$ (so $dd$ is a unit vector iff $d$ is a unit vector).
The "if": if $|d| = 1$, then
$|dd| = \sqrt{\; \sum_{i=1..n} d_i^2 d_1^2 \;+\; \sum_{i=1..n} d_i^2 d_2^2 \;+\; \dots \;+\; \sum_{i=1..n} d_i^2 d_n^2 \;} = \sqrt{\; \sum_{j=1..n}\Big(\sum_{i=1..n} d_i^2\Big) d_j^2 \;} = \sqrt{\; \sum_{j=1..n} 1 \cdot d_j^2 \;} = 1.$
The "only if": if $|dd| = 1$, then
$1 = |dd| = \sqrt{\; \sum_i d_i^2 d_1^2 + \dots + \sum_i d_i^2 d_n^2 \;} = \sqrt{\; \Big(\sum_i d_i^2\Big)\Big(\sum_j d_j^2\Big) \;} = \sqrt{\; \Big(\sum_i d_i^2\Big)^2 \;} = \sum_i d_i^2,$
so $|d| = 1$.
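A quick numeric check of the "if" direction (NumPy; dd flattened to an n²-vector):

import numpy as np

d = np.random.randn(5)
d /= np.linalg.norm(d)            # make |d| = 1
dd = np.outer(d, d).ravel()       # dd as an n^2-vector
print(np.linalg.norm(dd))         # prints 1.0 (up to rounding), per the proof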
Dot Product Projection: $\mathrm{DPP}_d(y) \equiv (y-p)\circ d$, where the unit vector d can be obtained as $d = (p-q)/|p-q|$ for points p and q. Square Distance Functional: $\mathrm{SD}_p(y) \equiv (y-p)\circ(y-p)$.
FAUST Functional-Gap clustering (FAUST = Functional Analytic Unsupervised and Supervised machine Teaching) relies on choosing a distance-dominating functional (a map to $R^1$ s.t. $|F(x)-F(y)| \le \mathrm{Dis}(x,y)\ \forall x,y$), so that any F-gap implies a linear cluster break.
Coordinate Projection is the simplest DPP: $e_j(y) \equiv y_j$.
Note: the same DPP_d gaps are revealed by $\mathrm{DP}_d(y) \equiv y \circ d$, since $(y-p)\circ d = y\circ d - p\circ d$, and thus DP just shifts all DPP values by $p\circ d$. Finding a good unit vector, d, for the Dot Product Projection functional, DPP, to maximize gaps:
$\mathrm{VarDPP}_d(X) = \overline{(X\circ d)^2} - \big(\overline{X\circ d}\big)^2$
$= \frac{1}{N}\sum_{i=1..N}\Big(\sum_{j=1..n} x_{i,j}\, d_j\Big)^2 - \Big(\sum_{j=1..n} \overline{X_j}\, d_j\Big)^2$
$= \frac{1}{N}\sum_{i}\Big(\sum_{j} x_{i,j}\, d_j\Big)\Big(\sum_{k=1..n} x_{i,k}\, d_k\Big) - \Big(\sum_{j} \overline{X_j}\, d_j\Big)\Big(\sum_{k} \overline{X_k}\, d_k\Big)$
$= \frac{1}{N}\sum_{i}\Big(\sum_{j} x_{i,j}^2\, d_j^2 + 2\sum_{j<k} x_{i,j} x_{i,k}\, d_j d_k\Big) - \Big(\sum_{j} \overline{X_j}^2 d_j^2 + 2\sum_{j<k} \overline{X_j}\,\overline{X_k}\, d_j d_k\Big)$
$= \sum_{j=1..n} \overline{X_j^2}\, d_j^2 + 2\sum_{j<k} \overline{X_jX_k}\, d_j d_k - \sum_{j} \overline{X_j}^2 d_j^2 - 2\sum_{j<k} \overline{X_j}\,\overline{X_k}\, d_j d_k$
$= \sum_{j=1..n}\big(\overline{X_j^2} - \overline{X_j}^2\big)\, d_j^2 + 2\sum_{j=1..n,\, j<k}\big(\overline{X_jX_k} - \overline{X_j}\,\overline{X_k}\big)\, d_j d_k$
Method-1: Maximize $\mathrm{VarDPP}_d(X)$ wrt d, subject to $\sum_{i=1..n} d_i^2 = 1$. Let the upper bar denote column average: $\mathrm{VarDPP}_d(X) = \overline{(X\circ d)^2} - \big(\overline{X\circ d}\big)^2$.
Algorithm-1 (a heuristic): Compute the vector $Y \equiv \big(\overline{X_1^2}-\overline{X_1}^2,\ \dots,\ \overline{X_n^2}-\overline{X_n}^2\big)$. The unit vector $A \equiv (a_1 \dots a_n)$ maximizing $Y\circ A$ is $A = Y/|Y|$. So let $D \equiv \big(\sqrt{\overline{X_1^2}-\overline{X_1}^2},\ \dots,\ \sqrt{\overline{X_n^2}-\overline{X_n}^2}\big)$ and $d \equiv D/|D|$. Remove outliers first?
Algorithm-2 (a heuristic): Find k s.t. $\overline{X_k^2} - \overline{X_k}^2$ is max. Set $d_k = 1$, $d_h = 0\ \forall h \ne k$. (We've already done this: using $e_k$ with max stdev.)
Algorithm-3 (an optimum): Find the d producing maximum $\mathrm{VarDPP}_d(X)$. View the $n \times n$ matrices $V_X$ and $dd$ as $n^2$-vectors. Then $V = V_X \circ dd$ as $n^2$-vectors, and the $dd$ giving the maximum V is $V_X/|V_X|$. So we want d such that $dd$ forms the minimum angle (angle = 0) with $V_X$, i.e., maximize $F(d) = V_X \circ dd\,/\,|V_X|$. Since $|V_X|$ is constant, maximize $F(d) = V_X \circ dd$.
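Algorithm-3's optimum can be computed directly: maximizing $V_X \circ dd$ over unit d is maximizing $d^T V_X d$, whose optimum is the principal eigenvector of $V_X$. A minimal NumPy sketch (function name mine):

import numpy as np

def algorithm3_d(X):
    # The unit d whose dd has minimum angle with VX (max VX o dd = d'VXd)
    # is the eigenvector of VX for its largest eigenvalue.
    VX = np.cov(X, rowvar=False, bias=True)
    _, U = np.linalg.eigh(VX)     # eigenvalues in ascending order
    return U[:, -1]               # principal eigenvector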
This Satlog dataset is 150 rows (pixels) and 4 feature columns (R, G, IR1, IR2). There are 6 row-classes with row counts as follows:
Count  Class#  Class Description
19     c=1     red soil
32     c=2     cotton crop
50     c=3     grey soil
12     c=4     damp grey soil
10     c=5     soil with vegetation stubble
27     c=7     very damp grey soil
There are no significant gaps.
There is some localization of classes with respect to F, but in a strictly unsupervised setting, that would be impossible to detect.
This is somewhat expected, since the changes in ground cover class are gradual and smooth (in general), so that classes butt up against one another (no gaps between them).