Inequality Constrained Spline Interpolation
Scott Kersey
Workshop on Spline Approximation and Applications on Carl de Boor's 80th Birthday
Institute for Mathematical Sciences, National University of Singapore
December 4–6, 2017
Overview of Talk
1 Some Variational Spline Problems
2 Quadratic Programming
3 Minimal Properties and Optimality for Inequality Splines
The variational problem of best inequality constrained spline interpolation offers a generalization of spline interpolation that smooths rough data while keeping precise control on tolerances (error) at data points. While first studied in the 1960s, this problem has received only modest attention compared to its contemporaries (best spline interpolation, the smoothing spline, least squares splines, splines in tension and quasi-interpolation), resulting in just a handful of papers and specialized implementations. While no commercial implementations seem readily available (such as in IMSL), the problem is genuinely nonlinear and easily handled by methods of optimization. However, we think that by exploiting the specific structure of splines one may produce a simpler and/or more efficient implementation. It is the aim of this work to attempt this.

In this talk we describe an implementation based on the active set method in optimization, combined with solutions to the problem of best spline interpolation. We also show how inequality constrained splines can be used to produce good or optimal knots, and we describe applications to parametric curves and surfaces in CAGD.
Immediate Benefits of the Inequality-Constrained Spline

Benefits:
The inequality spline can smooth rough data (like a smoothing spline and least squares spline).
The error is precisely controlled at the data sites (unlike smoothing and least squares splines). This may be useful when designing parts with given tolerances, or when data is inaccurate.
The spline is determined by the "active" knots. Other knots fall away naturally, leaving a sparser set of optimal knots.
The Matlab function csapne() is very fast with linear growth. It can handle 100,000 interpolation points without a problem.

The function csapni() is faster than a standard active set method, even after doing tricks with the factorizations. However, the growth rate is still greater than linear.

Since there are approximately n/2 active constraints, there are going to be that many iterations of the outer loop. Since csape() has linear growth, we expect csapni() to be order n².

Claim
This problem can be solved in O(n) time.

Our only hope to decrease the growth rate is to handle the interpolation problem in constant time. As we remove or add a single knot (constraint), we can interpolate on a fixed window (say of 100 points) rather than all n points. Due to the exponential decay in spline interpolation, we expect that we can achieve this without loss of precision, giving us an order n implementation.
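The windowed-update idea can be illustrated numerically. The sketch below (not the speaker's csapni implementation; a minimal Python stand-in using a natural cubic spline) solves the standard tridiagonal system for the spline's second derivatives on all n points and again on a small window, and checks that the two agree at the window's center, which is the exponential-decay effect relied on above.

```python
import numpy as np

def natural_spline_second_derivs(x, y):
    """Solve the tridiagonal system for the second derivatives M_i of the
    natural cubic spline interpolant (M_0 = M_n = 0)."""
    n = len(x) - 1
    h = np.diff(x)
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for i in range(1, n):
        if i > 1:
            A[i - 1, i - 2] = h[i - 1]
        A[i - 1, i - 1] = 2.0 * (h[i - 1] + h[i])
        if i < n - 1:
            A[i - 1, i] = h[i]
        rhs[i - 1] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.zeros(n + 1)
    M[1:n] = np.linalg.solve(A, rhs)
    return M

# Full-data solve vs. a windowed solve around the midpoint: because the
# influence of a data value decays exponentially with distance (in knots),
# the two agree to high accuracy at the window's center.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 1001)
y = np.sin(6 * np.pi * x) + 0.01 * rng.standard_normal(x.size)

M_full = natural_spline_second_derivs(x, y)

c, w = 500, 50                      # window of 101 points centered at index 500
M_win = natural_spline_second_derivs(x[c - w:c + w + 1], y[c - w:c + w + 1])

err = abs(M_full[c] - M_win[w])
print(err)                          # tiny: windowed solve matches at the center
assert err < 1e-6
```

This locality is what makes a constant-time per-constraint update plausible: when one constraint is added or removed, only spline coefficients within a fixed window of the affected knot change beyond working precision.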
Typically we have more knots than we want in interpolation or near-interpolation. Hence, we want to remove those that are "less important". The question is, how do we decide this?

What we do is choose the m most "influential" knots, and remove the rest. How do we decide who survives?

Based on experience with near-interpolation and the smoothing spline, we guess that the knots with larger Lagrange multipliers (i.e., larger jumps jmp_ti D^(2k−1) f in the (2k−1)st derivative) should be more valuable, so we eliminate those with smaller multipliers. Indeed, knots with zero multipliers can be eliminated automatically without effect on the spline.

Keeping knots corresponding to large Lagrange multipliers leads to a high concentration of knots in areas where just a couple would suffice. Hence, we need to consider a better rank function.
Theorem (Kahane 1961, Best Approximation by Piecewise Constants)
Let f ∈ C([0, 1]). Then
σ_{n,1}(f)_∞ ≤ M/(2n)
for n = 1, 2, 3, . . . iff f ∈ BV[0, 1], with M := Var_[0,1](f), where
Var_[a,b](f) := sup{ Σ_{i=1}^{|T|} |f(t_{i+1}) − f(t_i)| : partitions T of [a, b] }.
Proof of one direction of Kahane's theorem, from [D98].
Suppose that f ∈ BV[0, 1] with M := Var_[0,1](f). Since f is continuous, we can find a partition T = [0 = t_0, t_1, . . . , t_n = 1] such that Var_[t_{i−1}, t_i](f) = M/n for each i. Let s = Σ_i α_i χ_[t_{i−1}, t_i) with α_i = (f(t_{i−1}) + f(t_i))/2. Then, for t ∈ [t_{i−1}, t_i),
|f(t) − α_i| ≤ (|f(t) − f(t_{i−1})| + |f(t_i) − f(t)|)/2 ≤ (1/2) Var_[t_{i−1}, t_i](f) = M/(2n),
so ‖f − s‖_∞ ≤ M/(2n).
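The balanced-variation construction in the proof can be checked numerically. A small sketch using f(x) = x², which is increasing on [0, 1] so its variation is computable in closed form (M = 1); the knots t_i = sqrt(i/n) split the variation equally:

```python
import numpy as np

n = 10
# f(x) = x^2 is increasing on [0, 1], so Var_[a,b](f) = f(b) - f(a) and M = 1.
# Balance the variation: choose knots with f(t_i) - f(t_{i-1}) = M/n,
# i.e. t_i = sqrt(i/n).
t = np.sqrt(np.arange(n + 1) / n)
alpha = (t[:-1] ** 2 + t[1:] ** 2) / 2          # midpoint of endpoint values

# Evaluate the sup-norm error of the piecewise constant s on a fine grid.
x = np.linspace(0.0, 1.0, 100001)
idx = np.clip(np.searchsorted(t, x, side="right") - 1, 0, n - 1)
err = np.max(np.abs(x ** 2 - alpha[idx]))
print(err)                                      # about M/(2n) = 0.05
assert err <= 1 / (2 * n) + 1e-12
```

The observed error sits right at the theorem's bound M/(2n), confirming that equal-variation intervals are the right balancing for piecewise constants.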
Let Σ_{n,r} be the space of all splines of order r with n + 1 knots (at most n intervals):
Σ_{n,r} := ∪ { S_r(T) : T = [0 = t_0 < t_1 ≤ t_2 ≤ · · · ≤ t_n = 1] }.
Remarks:
This space is nonlinear! (I.e., the space is not closed under addition, since adding two splines in Σ_{n,r} with different knots typically results in a spline with 2n knots.)
We want to find the "best" knots, if possible, but usually settle for "good" knots, which is usually good enough.
The best (good) knots depend on the function.
How to choose the knots for piecewise constants?
We want bounds on approximation, and theorems of existence and uniqueness.
We can often get the same approximation error with far fewer knots.
The following theorem generalizes Kahane's theorem. The statement of the theorem is taken from [D93], Chap. 12, Theorem 4.5.

Note that while in fixed-knot approximation we usually see the (maximal) mesh spacing h in the estimates, for free-knot splines we don't know the mesh spacing, but we do have a term 1/n, which is h for a uniform parametrization.
Theorem (Freud and Popov (1969), Subbotin and Chernykh (1970))
If r = 1, 2, . . . and f^(r−1) is of bounded variation on [0, 1], then
σ_{n,r}(f)_∞ ≤ C_r Var_[0,1](f^(r−1)) / n^r
for a constant C_r depending only on r.
The key aspect from the previous discussion relevant to the remainder of this talk is the balancing of intervals. In particular, we have shown for the simplest case of piecewise constant approximation that balanced intervals provide the best bound for approximation.
3 For the third corollary, we note that since s0 is a piecewise cubic spline, s0′′′ is piecewise constant. Hence, the result follows since λ_i is the jump in the third derivative across the knot.
1 Solve the inequality constrained program to find a good initial spline approximation s0 to f such that ‖s0 − f‖_∞ ≤ ε/2, with active knots t_i and Lagrange multipliers λ_i.
2 Set n, the number of intervals (n + 1 knots).
3 Rank the interior knots according to:
r_i = |λ_i| (t_{i+1} − t_{i−1})³.
4 Using the n − 1 highest-ranked interior knots and the two end knots, find the least squares spline fit s_f.
5 If ‖s0 − s_f‖_∞ ≥ ε/2, increase n and go to step 3.
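The ranking step can be sketched in a few lines. This is a hedged Python illustration, not the talk's implementation: rank_knots, and the Gaussian "multipliers" lam used to exercise it, are hypothetical names invented here.

```python
import numpy as np

def rank_knots(t, lam, n_keep):
    """Rank interior knots by r_i = |lambda_i| * (t_{i+1} - t_{i-1})**3 and
    return indices of the n_keep highest-ranked interior knots plus the two
    end knots. t[0] and t[-1] are end knots; lam[i] is the multiplier at t[i]."""
    interior = np.arange(1, len(t) - 1)
    r = np.abs(lam[interior]) * (t[interior + 1] - t[interior - 1]) ** 3
    keep = interior[np.argsort(r)[::-1][:n_keep]]   # top-ranked interior knots
    return np.sort(np.concatenate(([0], keep, [len(t) - 1])))

# Toy data: 11 uniform knots, hypothetical multipliers peaked near the middle.
t = np.linspace(0.0, 1.0, 11)
lam = np.exp(-50 * (t - 0.5) ** 2)
idx = rank_knots(t, lam, n_keep=3)
print(t[idx])                                       # end knots + 3 mid knots
assert idx[0] == 0 and idx[-1] == 10
assert all(0.3 <= t[i] <= 0.7 for i in idx[1:-1])
```

The (t_{i+1} − t_{i−1})³ factor is what counters the clustering problem noted earlier: a knot with a large multiplier but tightly packed neighbors is ranked down, spreading the surviving knots out.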
The proposed variational approach begins with a very large (dense) sampling of data, and hence a very large, dense set of knots. Our method is to remove knots.

The first set of knots falls out easily where the constraints are inactive. This does not change the spline fit.

We then delete more knots, those ranked lower according to our rank measure.

After choosing the most influential knots, we find a least squares spline fit. In practice, it maintains the good error estimate, as well as the near-interpolant would.

We don't feel it is necessary to work harder to move the final knots to "optimal" positions, because the original set of knots was already taken from a dense subset. This may not always be acceptable, such as for the square root function, where optimal knots are on the order of 1e−5 in magnitude.
Example: Bad Parametrizations for Spline Curve Interpolation
Best spline interpolation with free data sites (or knots) is a difficult problem. In the example, we have badly chosen sites. We choose balls with centers at the data points.
While the interpolant is a bad fit, the near-interpolant improves things.
where f is a spline curve that solves the problem of best near-interpolation. Hence, f(t_i) is constrained to lie in a closed ball of radius ε_i. Optimal data sites satisfy the condition

f(t_i) − z_i ⊥ f′(t_i).

Hence, we want a zero of

F(t_i) := (f(t_i) − z_i) · f′(t_i).

By Newton's method, we update t_i to t_i + ∆t_i with

∆t_i := −F(t_i)/F′(t_i) = −[(f(t_i) − z_i) · f′(t_i)] / [(f(t_i) − z_i) · f′′(t_i) + f′(t_i) · f′(t_i)].   (0.1)
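A minimal sketch of the Newton update (0.1) in Python. The spline curve is replaced here by a hypothetical parametric curve (the unit circle) whose derivatives are known in closed form; the orthogonality condition then picks out the parameter of the point closest to z.

```python
import numpy as np

def newton_site_update(f, df, ddf, t, z, iters=10):
    """Newton iteration for a data site t so that f(t) - z is orthogonal
    to f'(t), i.e. a zero of F(t) = (f(t) - z) . f'(t), using the update
    dt = -F / ((f - z) . f'' + f' . f')  as in (0.1)."""
    for _ in range(iters):
        e = f(t) - z
        F = e @ df(t)
        dF = e @ ddf(t) + df(t) @ df(t)
        t -= F / dF
    return t

# Hypothetical parametric curve (unit circle) standing in for the spline f.
f   = lambda t: np.array([np.cos(t), np.sin(t)])
df  = lambda t: np.array([-np.sin(t), np.cos(t)])
ddf = lambda t: np.array([-np.cos(t), -np.sin(t)])

z = np.array([2.0, 2.0])            # off-curve data point
t = newton_site_update(f, df, ddf, 0.5, z)
print(t)                            # converges to pi/4, the closest point
assert abs(t - np.pi / 4) < 1e-10
```

In the talk's setting f would be the near-interpolating spline curve and z_i the data point, with one such update applied per data site before re-solving.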
Example: Best Spline Interpolation with Free Data Sites
Repeating the Process: Solve the problem of near-interpolation, update the knots, shrink the constraints K_i, and repeat until close to interpolation.

Now we have a simple way to solve the problem of best spline curve interpolation with free data sites for basic configurations. If the data sites and knots coincide, we also get the optimal knots.