Talk:Monotone cubic interpolation

Clarification request
In the formula $$f_{\text{interpolated}}(x) = y_{\text{lower}} h_{00}(t) + h m_{\text{lower}} h_{10}(t) + y_{\text{upper}} h_{01}(t) + h m_{\text{upper}} h_{11}(t)$$, is $$y_{\text{lower}} < y_{\text{upper}}$$, or is $$y_{\text{lower}}$$ the y value corresponding to $$x_{\text{lower}}$$ and vice versa?

Also, is it possible to impose the conditions $$f''(x_1)=0$$ and $$f''(x_n)=0$$ (second derivative equal to zero at the first and last points), as in the natural cubic spline? In the current state, beyond the end points the curve goes haywire. Tapas ks (talk) 10:10, 10 January 2009 (UTC)
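On the first question, the basis functions themselves answer it: since $$h_{00}(0)=1$$ and $$h_{01}(1)=1$$, $$y_{\text{lower}}$$ must be the value at $$x_{\text{lower}}$$ (no ordering of the y values is implied). A minimal sketch of the evaluation formula; the function name and argument order are illustrative only:

```javascript
// Sketch of the cubic Hermite evaluation in the formula above.
// Assumes xLower <= x <= xUpper, with yLower = f(xLower), yUpper = f(xUpper),
// and mLower, mUpper the chosen tangents at those endpoints.
function hermite(x, xLower, xUpper, yLower, yUpper, mLower, mUpper) {
  const h = xUpper - xLower;
  const t = (x - xLower) / h;             // normalized position in [0, 1]
  const h00 = 2*t*t*t - 3*t*t + 1;        // equals 1 at t = 0, 0 at t = 1
  const h10 = t*t*t - 2*t*t + t;
  const h01 = -2*t*t*t + 3*t*t;           // equals 0 at t = 0, 1 at t = 1
  const h11 = t*t*t - t*t;
  return yLower*h00 + h*mLower*h10 + yUpper*h01 + h*mUpper*h11;
}
```

Evaluating at $$x = x_{\text{lower}}$$ returns $$y_{\text{lower}}$$ regardless of which value is larger.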

Can I get an f(x,y) that can cover a square of 1×1 size? 77.237.172.1 (talk) 20:35, 10 February 2008 (UTC)
 * Look at Multivariate interpolation. --Berland (talk) 07:20, 22 February 2008 (UTC)

Correctness of step 5
I reverted this edit, because it contradicts the referenced paper by Fritsch and Carlson, and because the claims put forward by the edit were unreferenced. The introduced comment may be correct, but if so it has to be referenced. --Berland (talk) 13:58, 20 February 2008 (UTC)

Re: Correctness of step 5
Step 5, as it was originally written down, was misinterpreted. The original article by Fritsch & Carlson (1980) states the following Step 2A: find such a subset $$L^*$$ of the set $$L$$ of all feasible parameters $$(\alpha,\beta)$$, which is defined by the conditions in Step 5, such that for any $$(\alpha,\beta)\in L^*$$ and $$0\leq\alpha^*\leq\alpha$$, $$0\leq\beta^*\leq\beta$$, we have $$(\alpha^*,\beta^*)\in L^*$$. To preserve monotonicity of the previous interval while updating the values $$(\alpha,\beta)$$ in the current interval, we need to check that the set $$L^*$$, in our case defined by $$\alpha_k^2 + \beta_k^2 \leq 9$$, satisfies this condition. Note that we need to check the conditions of this subset in Step 5, as well as normalize the $$(\alpha,\beta)$$ when they fall outside of it. The original Wiki article showed an inconsistent application of this argument. 76.194.82.134 (talk) 01:57, 22 February 2008 (UTC)
 * Glad you cared to elaborate here. By rereading the paper, I see your point, and as far as I can see, in order for the wiki article to be correct with respect to the paper, we should replace step 5 by:
 * 5. Now, if $$\alpha_k^2 + \beta_k^2 > 9$$, then set $$m_k = \tau_k \alpha_k \Delta_k$$ and $$m_{k+1} = \tau_k \beta_k \Delta_k$$ where $$\tau_k = 3(\alpha_k^2 + \beta_k^2)^{-1/2}$$.
 * --Berland (talk) 07:17, 22 February 2008 (UTC)
 * Thanks, Berland. Just corrected the article. 159.53.110.143 (talk) 15:43, 22 February 2008 (UTC)
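For reference, the corrected step 5 can be sketched in code; the array names (`ms` for tangents, `deltas` for secant slopes) are chosen for illustration, with indices as in the article:

```javascript
// Sketch of the corrected step 5: rescale BOTH tangents of interval k
// when (alpha_k, beta_k) falls outside the circle of radius 3.
function enforceStep5(ms, deltas, k) {
  if (deltas[k] === 0) return;            // flat interval, handled by step 3
  const alpha = ms[k] / deltas[k];
  const beta = ms[k + 1] / deltas[k];
  const s = alpha * alpha + beta * beta;
  if (s > 9) {
    const tau = 3 / Math.sqrt(s);         // tau_k = 3 (alpha^2 + beta^2)^(-1/2)
    ms[k] = tau * alpha * deltas[k];
    ms[k + 1] = tau * beta * deltas[k];
  }
}
```

After rescaling, $$(\alpha_k,\beta_k)$$ lands exactly on the boundary circle $$\alpha^2+\beta^2=9$$, which is the key point of the discussion above: both tangents of the interval are scaled together.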

Higham 1992 suggests a possibly better alternative
Higham, D. J. (1992) Monotonic piecewise cubic interpolation, with applications to ODE plotting. Journal of Computational and Applied Mathematics, 39 (3). pp. 287-294. ISSN 0377-0427

Higham points out that if you *know* your derivatives and data at every point, there is a better way to ensure monotonicity than forcing the interpolant to have the wrong derivative at a data point: he introduces extra knots.

Might be worth mentioning. — Preceding unsigned comment added by 132.206.92.211 (talk) 02:58, 19 April 2008 (UTC)

Step 4
In step 4 it says

In such cases, piecewise monotone curves can still be generated by choosing $$m_{k}=m_{k+1}=0$$, although global strict monotonicity is not possible.

I implemented this algorithm, and it gives much better results if I only set $$m_{k}=0$$. I don't have access to the paper, so I can't check, but it doesn't seem reasonable to set the $$m$$ of any point other than the actual local minimum/maximum to zero.

Tdieb (talk) 15:22, 20 June 2012 (UTC)

Be careful with step 3
Step 3 says
 * For $$k=1,\dots,n-1$$, if $$\Delta_k = 0$$ (if two successive $$y_k=y_{k+1}$$ are equal), then set $$m_k = m_{k+1} = 0$$, as the spline connecting these points must be flat to preserve monotonicity. Ignore steps 4 and 5 for those $$k$$.

However, comparing real (floating point) numbers for exact equality makes no sense in an applied algorithm, and makes little sense mathematically unless the equality is proved. I would suggest some wording suggesting the idea, instead of the current wording saying it is computed exactly like this. --Hibou57 (talk) 20:45, 27 June 2013 (UTC)
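As one possible reading of this suggestion in code form, the exact test $$\Delta_k = 0$$ could be replaced by a tolerance test; the threshold below is an assumption made for the sketch, not something taken from Fritsch & Carlson:

```javascript
// Sketch: tolerance-based version of the step-3 flatness test.
// The relative tolerance is illustrative; a production choice would be
// scaled to the magnitude of the data (yScale) rather than hard-coded.
function isFlat(deltaK, yScale) {
  const EPS = 1e-12;                      // assumed relative tolerance
  return Math.abs(deltaK) <= EPS * Math.max(1, Math.abs(yScale));
}
```

Note the counterpoint: when input values are duplicated exactly (e.g. read from the same record twice), $$\Delta_k$$ is exactly zero in floating point too, so the exact comparison is not meaningless in every setting.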

JavaScript Implementation
The sample JavaScript implementation would be much more readable and portable if it used array indices for assignment instead of 'push'. For example, instead of writing "dxs.push(dx)" use "dxs[i] = dx". — Preceding unsigned comment added by 109.145.139.99 (talk) 10:20, 31 October 2013 (UTC)

I went ahead and did this for all but one of them. There is one case where an array element is assigned outside of an initializer loop, where I felt it would be more immediately readable to leave the .push as is than to use "c1s[c1s.length] = ms[ms.length - 1];", at least to me. Jmortiger (talk) 17:37, 25 January 2023 (UTC)
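For readers comparing the two styles, a standalone illustration on a made-up slope loop (variable names echo the article's implementation, but the snippet is self-contained):

```javascript
// Index assignment instead of push, on a hypothetical secant-slope loop.
const xs = [0, 1, 2, 4], ys = [0, 1, 4, 16];
const dxs = [], dys = [], ms = [];
for (let i = 0; i < xs.length - 1; i++) {
  dxs[i] = xs[i + 1] - xs[i];   // instead of dxs.push(xs[i + 1] - xs[i])
  dys[i] = ys[i + 1] - ys[i];
  ms[i] = dys[i] / dxs[i];      // secant slope of interval i
}
```

The index form makes the correspondence between `i` and the interval explicit, which is the readability argument above.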

JavaScript Implementation is not Fritsch-Carlson method
The sample JavaScript implementation is a useful resource for monotone cubic interpolation, and it is appropriate to include it in this article, but it does not implement the Fritsch-Carlson method as claimed: "The following JavaScript implementation takes a data set and produces a Fritsch-Carlson cubic spline interpolant function". The usage example (extended by one point) produces this output:

 0 squared is about 0
 0.5 squared is about 0.4375
 1 squared is about 1
 1.5 squared is about 2.21875
 2 squared is about 4
 2.5 squared is about 6.239583333333333
 3 squared is about 9
 3.5 squared is about 12.354166666666666
 4 squared is about 16

The Fritsch-Carlson method (which in this case is a no-op, as the constraints are always satisfied) produces this output:

 0 squared is about 0
 0.5 squared is about 0.375
 1 squared is about 1
 1.5 squared is about 2.25
 2 squared is about 4
 2.5 squared is about 6.25
 3 squared is about 9
 3.5 squared is about 12.375
 4 squared is about 16

Note in particular that for x=1.5 and x=2.5 the interpolant is exact, which is not true of the JavaScript implementation.

I suggest that the article be edited to change the quoted line to: "The following JavaScript implementation takes a data set and produces a monotone cubic spline interpolant function"

Tuello (talk) 14:08, 15 August 2014 (UTC)
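The no-op claim above is easy to spot-check; the sketch below assumes the article's three-point averages for interior tangents and one-sided differences at the ends:

```javascript
// Check that on (k, k^2) data the Fritsch-Carlson constraint is inactive
// and the resulting Hermite spline reproduces x^2 at an interior midpoint.
const xs = [0, 1, 2, 3, 4], ys = xs.map(x => x * x);
const n = xs.length;
const d = [], m = new Array(n);
for (let i = 0; i < n - 1; i++) d[i] = (ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]);
m[0] = d[0];                              // one-sided differences at the ends
m[n - 1] = d[n - 2];
for (let i = 1; i < n - 1; i++) m[i] = (d[i - 1] + d[i]) / 2;
// Fritsch-Carlson constraint alpha^2 + beta^2 <= 9 on every interval:
const ok = d.every((dk, k) => (m[k] / dk) ** 2 + (m[k + 1] / dk) ** 2 <= 9);
// Hermite evaluation on [xs[1], xs[2]] at x = 1.5:
const h = xs[2] - xs[1], t = (1.5 - xs[1]) / h;
const value = ys[1] * (2*t**3 - 3*t**2 + 1) + h * m[1] * (t**3 - 2*t**2 + t)
            + ys[2] * (-2*t**3 + 3*t**2) + h * m[2] * (t**3 - t**2);
```

With these tangents every interval satisfies $$\alpha_k^2+\beta_k^2\le 9$$, and `value` comes out exactly 2.25: interior central differences of $$x^2$$ equal the true derivative, so the Hermite pieces reproduce the parabola, matching the second output listing quoted above.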

Redundant Step in Checking for Local Extrema
In my eyes, step 4 is currently redundant. From step 2: "If $$\Delta_{k-1}$$ and $$\Delta_{k}$$ have different signs, set $$m_{k}=0$$."

In step 4, we check whether $$\alpha _{k}$$ or $$\beta _{k-1}$$ is less than zero. Therefore:

$$\alpha _{k} < 0 \Leftrightarrow m _{k} / \Delta _{k} < 0 \Leftrightarrow \frac{\Delta_{k-1}+\Delta_k}{2\Delta _{k} } < 0 \Leftrightarrow \frac{\Delta_{k-1}}{2\Delta _{k} } + \frac{1}{2} < 0 \Leftrightarrow \frac{\Delta_{k-1}}{\Delta _{k}} + 1 < 0 \Leftrightarrow \frac{\Delta_{k-1}}{\Delta _{k}} < -1$$

For $$\beta _{k-1}$$, the result is $$\frac{\Delta_{k}}{\Delta _{k-1}} < -1$$

Therefore, this step also just checks whether $$\Delta_{k-1}$$ and $$\Delta_{k}$$ have different signs; since if they do, either $$\frac{\Delta_{k-1}}{\Delta _{k}}$$ or $$\frac{\Delta_{k}}{\Delta _{k-1}}$$ will be less than $$-1$$ (and the other one will be bigger).

One of these checks is therefore redundant; I would suggest removing the check in step 4, since then $$\alpha _{k}$$ and $$\beta _{k}$$ are not used twice, but only for enforcing monotonicity.
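The equivalence used in the derivation above can be spot-checked numerically; `agree` is a hypothetical helper comparing the two conditions for a pair of neighboring secant slopes, with $$m_k$$ taken as the step-1 average:

```javascript
// Spot-check: with m_k = (d[k-1] + d[k]) / 2, the step-4 condition
// alpha_k < 0 is equivalent to d[k-1] / d[k] < -1, as derived above.
function agree(dPrev, dNext) {
  const m = (dPrev + dNext) / 2;          // step-1 tangent
  const alphaNeg = m / dNext < 0;         // step-4 condition: alpha_k < 0
  const ratioTest = dPrev / dNext < -1;   // equivalent form from the derivation
  return alphaNeg === ratioTest;
}
```

Both conditions hold or fail together for any nonzero slopes, including the boundary case of equal magnitudes and opposite signs (there the ratio is exactly $$-1$$ and $$m_k = 0$$, so neither strict inequality fires).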

--MariusLambacher (talk) 15:29, 17 June 2017 (UTC)