On the Interpolation Algorithm Ranking
Carlos López-Vázquez
LatinGEO – Lab, SGM + Universidad ORT del Uruguay
10th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, 10–13 July 2012, Florianópolis, SC, Brazil.
Transcript
What is algorithm ranking?
There exist many interpolation algorithms
Which is the best? Is there a general answer?
Is there an answer for my particular dataset?
How to define the better-than relation between two given methods?
How confident should I be regarding such an answer?
What has been done?
N points sampled somewhere
Subdivide the N points into two sets: Training Set {A} and Test Set {B}
A∩B=Ø; N=#{A}+#{B}
Repeat for all available algorithms:
Define interpolant using {A};
Compare how? Typically through RMSE/MAD
Better-Than is equivalent to lower-RMSE
Many papers so far
Permanent interest
What does a typical paper look like? It takes a dataset as an example
[Diagram: the sample split into {A} and {B}]
Blindly interpolate at the locations of {B}
Compare the known values at {B} with the interpolated ones
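The hold-out procedure of the last two slides can be sketched as follows. This is a toy illustration, not the paper's own code: the data, the split sizes, and the two hand-rolled interpolators (nearest-neighbour and inverse-distance weighting) are all hypothetical stand-ins for whatever methods are being ranked.

```python
import numpy as np

def nearest(train_xy, train_z, query_xy):
    # nearest-neighbour prediction at the query locations
    d = np.linalg.norm(query_xy[:, None, :] - train_xy[None, :, :], axis=2)
    return train_z[np.argmin(d, axis=1)]

def idw(train_xy, train_z, query_xy, p=2.0):
    # inverse-distance-weighted prediction
    d = np.linalg.norm(query_xy[:, None, :] - train_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** p
    return (w @ train_z) / w.sum(axis=1)

rng = np.random.default_rng(0)
N = 200
xy = rng.uniform(size=(N, 2))                     # N sampled locations
z = np.sin(3 * xy[:, 0]) + np.cos(2 * xy[:, 1])   # toy observed field

# Disjoint split: A ∩ B = Ø, N = #{A} + #{B}
idx = rng.permutation(N)
ia, ib = idx[:150], idx[150:]

rmse = {}
for name, predict in {"nearest": nearest, "idw": idw}.items():
    zhat = predict(xy[ia], z[ia], xy[ib])   # interpolant from {A}, evaluated at {B}
    rmse[name] = float(np.sqrt(np.mean((zhat - z[ib]) ** 2)))

ranking = sorted(rmse, key=rmse.get)        # "better-than" == lower RMSE
```

Any method that fits on {A} and predicts at {B} plugs into the same loop; only the `predict` callable changes.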
Is RMSE/MAD/etc. suitable as a metric?
Different interpolation algorithms lead to a different look
RMSE might not be representative. Why?
Images from www.spatialanalysisonline.com
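A toy illustration of the point (not from the slides): two error fields tuned to exactly the same RMSE can have radically different spectral content, which is precisely what a scalar RMSE cannot see.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024
rough = rng.standard_normal(n)
rough *= 0.1 / np.sqrt(np.mean(rough ** 2))    # white noise rescaled to RMSE 0.1
smooth = np.full(n, 0.1)                        # constant offset, RMSE 0.1 as well

rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
spec_rough = np.abs(np.fft.rfft(rough))         # energy spread over all frequencies
spec_smooth = np.abs(np.fft.rfft(smooth))       # all energy at frequency 0
```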
Let’s consider spectral properties
Some spectral metric of agreement
For example, ESAM metric
U = |fft2d(measured error field)|, so U(i,j) ≥ 0
V = |fft2d(interpolated error field)|, so V(i,j) ≥ 0
Ideally, U = V
ESAM(U,V) = 1 − (2/π) · arccos( Σᵢ uᵢvᵢ / ( √(Σᵢ uᵢ²) · √(Σᵢ vᵢ²) ) )
0≤ESAM(U,V)≤1
ESAM(W,W)=1
Hint: there might be better options than ESAM
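A minimal transcription of the formula above into code, under the stated properties (0 ≤ ESAM ≤ 1, ESAM(W,W) = 1). The helper name `esam` is ours, and, as the slide itself hints, better agreement metrics may exist.

```python
import numpy as np

def esam(U, V):
    """ESAM agreement between two non-negative amplitude spectra U and V."""
    u = np.asarray(U, dtype=float).ravel()
    v = np.asarray(V, dtype=float).ravel()
    cos_angle = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against round-off pushing cos_angle slightly past 1
    return 1.0 - (2.0 / np.pi) * np.arccos(np.clip(cos_angle, -1.0, 1.0))
```

Identical spectra give 1, orthogonal spectra give 0, so higher ESAM means closer spectral agreement.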
How confident should I be regarding such an answer?
Given {A} and {B}, the answer is deterministic
How to attach a confidence level, or at least some uncertainty? Perform cross-validation (Falivene et al., 2010)
Set #{B} = 1, and leave the rest in {A}
N possible choices (events) to select B
Evaluate RMSE for each method and event
Average for each method over N cases
Better-than is now Average-run-better-than
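The leave-one-out scheme above can be sketched as follows (toy data; `nearest_predict` is a hypothetical stand-in for any interpolation method being ranked).

```python
import numpy as np

def nearest_predict(train_xy, train_z, query_xy):
    # nearest-neighbour stand-in for an interpolation method
    d = np.linalg.norm(query_xy[:, None, :] - train_xy[None, :, :], axis=2)
    return train_z[np.argmin(d, axis=1)]

def loo_rmse(xy, z, predict):
    """RMSE averaged over the N leave-one-out events (#{B} = 1 each time)."""
    n = len(z)
    sq = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i                      # {A} = all points but i
        sq[i] = (predict(xy[keep], z[keep], xy[i:i + 1])[0] - z[i]) ** 2
    return float(np.sqrt(sq.mean()))

rng = np.random.default_rng(1)
xy = rng.uniform(size=(50, 2))
z = xy[:, 0] + xy[:, 1]                               # smooth toy field
score = loo_rmse(xy, z, nearest_predict)              # one number per method
```

Running `loo_rmse` once per candidate method yields the averages whose ordering defines "average-run-better-than".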
Simulate: sample {A} from the N points, with #{A} = m, m < N
Evaluate RMSE for each method and event, and create rank(i)
Select a confidence level, and apply Friedman’s Test to all rank(i)
(Analogy: n judges each rank k different wines)
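With scipy, Friedman's test takes one sample per method, aligned across the n events, just like n judges each ranking the same k wines. The per-event RMSE values below are synthetic, with method "a" deliberately made worse so the test has something to detect.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(2)
n_events = 30
# hypothetical per-event RMSE for three methods over the simulated events;
# method "a" is constructed to be clearly worse than "b" and "c"
rmse_a = rng.normal(1.5, 0.1, n_events)
rmse_b = rng.normal(1.0, 0.1, n_events)
rmse_c = rng.normal(1.0, 0.1, n_events)

stat, p = friedmanchisquare(rmse_a, rmse_b, rmse_c)
# a small p-value means the methods' rank distributions differ beyond chance
significant = p < 0.05
```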