The Polar Tensor and Hybrid Linear Modeling
Gilad Lerman
Mathematics, UMN
Joint work with Guangliang Chen
The Polar Tensor and Hybrid Linear Modeling – p. 1/23
Outline
- Background
  - Hybrid linear modeling
  - Spectral and multi-way clustering
- Spectral Curvature Clustering (SCC)
  - Theory and analysis
  - Practical techniques
- To Infinity and Beyond
Hybrid Linear Modeling
Given: N points sampled from K flats in R^D
Example
[figure: a sample data set and its segmentation into flats]

Goal: Segment the data into the K flats and model each flat
HLM (continued)
Two simplifying assumptions:
- Number of clusters K is known
- Dimensions of flats are equal and known (d)
Increasing difficulty of HLM:
[figures: example data sets of increasing difficulty, shown before and after segmentation]
Proximity Clustering
Example:
[figures: a sample data set and its two proximity-based clusters]

Common approach: Spectral Clustering
Spectral Clustering (Sketch)
Idea: embed the data smartly (so it is easy to cluster)
Embedding:
1. Construct weights based on proximity:
W_ij = exp(−‖x_i − x_j‖² / σ) for i ≠ j, and 0 otherwise
2. Process the matrix W to obtain the embedding
[figures: the original data, its spectral embedding, and the resulting clusters]
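The two embedding steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the exact processing used in the talk; the symmetric normalization and the choice of k eigenvectors are common conventions assumed here, and the function name is mine:

```python
import numpy as np

def spectral_embed(X, sigma, k):
    """Embed the N points (rows of X) using the top k eigenvectors of a
    normalized Gaussian affinity matrix."""
    # Step 1: proximity weights W_ij = exp(-||x_i - x_j||^2 / sigma), zero diagonal
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / sigma)
    np.fill_diagonal(W, 0.0)
    # Step 2: one common way to "process" W: symmetric normalization
    # D^{-1/2} W D^{-1/2}, then embed each point by the top k eigenvectors
    d = W.sum(axis=1)
    inv_sqrt_d = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    W_norm = inv_sqrt_d[:, None] * W * inv_sqrt_d[None, :]
    eigvals, eigvecs = np.linalg.eigh(W_norm)   # eigenvalues in ascending order
    return eigvecs[:, -k:]                      # N x k embedded coordinates
```

Running k-means on the rows of the returned matrix then yields the clusters.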
Spectral Clustering for HLM

Consider the 2-lines clustering problem:

[figure: points sampled from two lines]
Clusters found by Spectral Clustering:

[figures: the two clusters found by spectral clustering]

Two points, i.e., proximity alone, are not enough for line clustering
Multi-way Clustering
If d = 1:
- Use 3 points instead of 2
- Affinities instead of proximities
- Process a 3-way affinity tensor in order to cluster

For general d ≥ 0: use (d + 2)-point affinities

Previous work:
- (Shashua '06) Factor the tensor into probability vectors
- (Govindu '05, Agarwal '05) Approximate the tensor with a matrix and apply spectral clustering
Core Questions
What are good multiwise affinities?
How to rigorously justify such an algorithm?
Can we make it practical? (N^{d+2} affinities!)
Polar Sines
For a (d + 1)-simplex Z = {z_1, ..., z_{d+2}} in R^D:

psin_{z_i}(Z) = (d + 1)! · Vol(Z) / ∏_{j ≠ i} ‖z_j − z_i‖
Example: d = 1 and Z = (z_1, z_2, z_3)

psin_{z_1}(Z) = 2 · Area(Z) / (‖z_2 − z_1‖ · ‖z_3 − z_1‖)

[figure: triangle with vertices z_1, z_2, z_3]
Example: d = 2 and Z = (z_1, z_2, z_3, z_4)

psin_{z_1}(Z) = 6 · Vol(Z) / (‖z_2 − z_1‖ · ‖z_3 − z_1‖ · ‖z_4 − z_1‖)

[figure: tetrahedron with vertices z_1, z_2, z_3, z_4]
Polar sine = measure of flatness at a vertex, independent of scale
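As a sanity check on the definition, the numerator (d + 1)! · Vol(Z) equals the square root of det(E Eᵀ), where the rows of E are the edge vectors out of the chosen vertex. A short NumPy sketch using that identity (function name is mine):

```python
import numpy as np

def polar_sine(Z, i):
    """Polar sine of the simplex with vertex rows Z, taken at vertex Z[i]."""
    Z = np.asarray(Z, dtype=float)
    E = np.delete(Z, i, axis=0) - Z[i]          # edge vectors out of z_i
    # (d+1)! * Vol(Z) = sqrt(det(E E^T)), the volume of the spanned parallelotope
    vol_factor = np.sqrt(max(np.linalg.det(E @ E.T), 0.0))
    return vol_factor / np.prod(np.linalg.norm(E, axis=1))
```

For d = 1 this reduces to the ordinary sine of the angle of the triangle at z_i.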
Polar Curvature
Polar curvature of the (d + 1)-simplex Z = {z_1, ..., z_{d+2}}:

c_p(Z) = diam(Z) · sqrt( (1/(d + 2)) · ∑_{i=1}^{d+2} psin²_{z_i}(Z) )

Polar curvature = measure of flatness of the simplex; it scales like the diameter of the simplex
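Combining the diameter with the root-mean-square of the polar sines gives the curvature. A self-contained sketch, computing each polar sine via the same Gram-determinant identity as before (function name is mine):

```python
import numpy as np

def polar_curvature(Z):
    """Polar curvature c_p of the simplex with vertex rows Z (d + 2 of them)."""
    Z = np.asarray(Z, dtype=float)
    n = Z.shape[0]                              # n = d + 2 vertices
    diam = max(np.linalg.norm(Z[i] - Z[j]) for i in range(n) for j in range(i))
    psin_sq = []
    for i in range(n):
        E = np.delete(Z, i, axis=0) - Z[i]      # edge vectors out of z_i
        vol_factor = np.sqrt(max(np.linalg.det(E @ E.T), 0.0))
        psin_sq.append((vol_factor / np.prod(np.linalg.norm(E, axis=1))) ** 2)
    return diam * np.sqrt(np.mean(psin_sq))     # mean = (1/(d+2)) * sum
```

Vertices lying on a common d-flat (e.g. collinear points for d = 1) give c_p = 0.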
Why Polar Curvatures?
Theorem (L, Whitehouse):

If µ is a probability measure on R^D concentrated around a d-dimensional ball of diameter 1, then

LS error of µ ≈ sqrt( ∫_{L_E(λ)} c_p²(Z) dµ^{d+2}(Z) )

where L_E(λ) is the set of simplices with edge lengths between λ and 1
Interpretation of Theorem
Two ways to calculate/approximate the LS error:
- Find the LS d-flat and integrate squared distances
- Or, average c_p²(Z) over large simplices
The Polar Tensor
Order-(d + 2) tensor A_p ∈ R^{N×···×N}:

A_p(i_1, ..., i_{d+2}) = exp(−c_p²(x_{i_1}, ..., x_{i_{d+2}}) / σ) if the indices are distinct, and 0 otherwise
Larger affinity ⇒ the points more likely lie on a common d-flat
TSCC for any Affinity Tensor
Given an affinity tensor A ∈ R^{N×···×N}:

Matricize A to get an affinity matrix A ∈ R^{N×N^{d+1}}:

A(i, :) = {A(i, j_1, ..., j_{d+1}) | ∀ j_1, ..., j_{d+1}}

Construct pairwise weights W = AA′
Apply spectral clustering with weights W
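The matricization and the product W = AA′ are mechanical. A minimal sketch for a generic affinity tensor (shown for a 3-way tensor, i.e. d = 1; function name is mine):

```python
import numpy as np

def tscc_weights(T):
    """Pairwise weight matrix W = A A' from an affinity tensor T.
    Row i of the unfolded matrix A collects T(i, j1, ..., j_{d+1})."""
    N = T.shape[0]
    A = T.reshape(N, -1)        # matricize: N x N^{d+1}
    return A @ A.T              # N x N; spectral clustering is then run on W
```

The resulting W is symmetric and nonnegative whenever the affinities are, so it can be fed directly to a spectral clustering routine.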
Justification
Ideal case: affinities = 1 for points of the same cluster, 0 otherwise

TSCC works perfectly for the ideal tensor A_I

More general cases: view the polar tensor as a perturbation of the ideal tensor

TSCC works well for the polar tensor A_p with high probability
Ideal vs. Perturbed
[figures: sample data sets compared under the ideal tensor and the perturbed (polar) tensor]
Probabilistic Analysis
Theorem (Chen, L): If N points are sampled from an HLM of K d-flats, and TSCC is applied with σ and the polar tensor, then

dist(A_p, A_I) ≲ α

with probability at least 1 − exp(−2N·α² / (d + 2)²), where

α = (1/σ) · ∑ (within-cluster errors) + "between-cluster interaction"
TSCC is not practical
Almost impossible to compute/store A ∈ R^{N×N^{d+1}}

Cannot multiply W = AA′

Idea 1: Randomly sample only c columns of A

Drawback: poor results for large N and moderate d; e.g., c ≈ N gives a sampling fraction c/N^{d+1} ≈ 1/N^d

Idea 2: Iterate; sample the d + 1 points (columns of A) from the same clusters found in the previous iteration
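Idea 1 amounts to replacing A by c of its columns before forming W. A toy sketch, with a dense A small enough to store (purely for illustration; function name is mine):

```python
import numpy as np

def sampled_weights(A, c, rng=None):
    """Approximate W = A A' using only c uniformly sampled columns of A."""
    rng = np.random.default_rng(0) if rng is None else rng
    cols = rng.choice(A.shape[1], size=c, replace=False)
    A_c = A[:, cols]            # keep only the sampled columns
    return A_c @ A_c.T          # N x N approximation of A A'
```

Idea 2 keeps c = N but, at each iteration, draws the (d + 1)-tuples indexing the columns from within the clusters found at the previous iteration, so the sampled columns are far more informative.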
Uniform vs. Iterative Sampling
Experiment with K = 3, N = 300, model error = 0.05

Uniform sampling: c = 1·N, ..., 10·N

Iterative sampling: c = N, fixed at each iteration

Empirical error (averaged over 500 experiments):

[plots: empirical error e_d vs. time (seconds) for d = 1, D = 2 through d = 4, D = 5, under uniform and iterative sampling]
Summary
Presented SCC for solving HLM:
- Polar curvature
- Theoretical justification
- Making it practical

Other advantages (not shown in talk):
- Can deal with heavy noise
- Robust to outliers
- Good simulation results
- Successful applications
Future Projects

- Mixed dimensions
- d-flats detection
- General shapes
- General geometries
- Justifying robustness to outliers
- Further exploration of sampling
Thanks
Joint work with Guangliang Chen ([email protected])
Related work (curvatures) with J. Tyler Whitehouse
Related work (other geometries) with Teng Zhang
Contact: [email protected]