Talk:Whittaker–Shannon interpolation formula

Boasting claim / suggestive phrasing
"Since a random process does not have a Fourier transform, the condition under which the sum converges to the original function must also be different."

This sentence is both incorrect and, on top of that, unclear. "must also be different": different from what?

It is even more striking that, in the next sentence, the autocorrelation is invoked.

To me this is yet again an example of scientific urban legends that infect the minds of our students.

Both the Fourier transform and the autocorrelation are just straightforward transforms, subject to a fairly limited set of assumptions. Of course a signal sampled from a random process can be represented in the Fourier domain. Whether that transform yields a meaningful representation is a completely different issue. Note that a process cannot have a transform, but a signal sampled from it can. I am not changing the original text. If there are folks who would agree with me, please take my suggestion as a careful exhortation. — Preceding unsigned comment added by 129.125.178.72 (talk) 15:56, 11 April 2013 (UTC)
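The point that a finite sampled realization has a perfectly ordinary Fourier representation is easy to demonstrate numerically. As an illustrative sketch (my own, not from the thread): the DFT of a white-noise realization exists with no special assumptions and satisfies Parseval's relation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)   # one finite realization of a "random process"
X = np.fft.fft(x)               # its DFT exists without any special assumptions

# Parseval's relation: sum |x[n]|^2 = (1/N) sum |X[k]|^2
energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)
print(energy_time, energy_freq)
```

Whether this DFT is a meaningful estimate of anything about the underlying process is, as the commenter says, a separate question; that is what power-spectrum estimation is about.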

Excessive dependent parameters

 * 1) $$\frac{1}{T} = f_s = 2W\,$$     Less is more.  E.g., the article could easily do without $$W\,$$.
 * 2) Also, the figure introduces $$f_N\,$$, which is never mentioned in the text.
 * 3) The formula is actually derived (two different ways) at Nyquist–Shannon sampling theorem.
 * --Bob K 08:44, 22 March 2006 (UTC)

Incorrect formula
At [Interpolation_as_convolution_sum], the result:


 * $${T \over 2} \mathrm{rect}(fT)\,$$

should be:


 * $$T \mathrm{rect}(fT)\,$$

--Bob K 18:56, 22 March 2006 (UTC)
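A quick numerical sanity check of the corrected value (a sketch, using numpy's normalized sinc, sinc(x) = sin(πx)/(πx)): the transform of sinc(t/T) evaluated at f = 0 is just the plain integral of sinc(t/T), which should equal T·rect(0) = T, not T/2:

```python
import numpy as np

T = 2.0
# Riemann-sum approximation of the integral of sinc(t/T) over a wide
# window; the truncated tails contribute only O(1/t).
dt = 0.001
t = np.arange(-2000.0, 2000.0, dt)
val = np.sum(np.sinc(t / T)) * dt   # transform of sinc(t/T) at f = 0
print(val)                          # close to T = 2.0, not T/2 = 1.0
```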

What's it called?
Nobody calls it the Nyquist–Shannon interpolation formula, and Nyquist had nothing to do with it. Since Whittaker published it before Shannon, he should share the credit. People DO actually call it Whittaker's interpolation formula and the Whittaker–Shannon interpolation formula, though just using Shannon is most common. So, I moved it. Dicklyon 04:48, 3 July 2006 (UTC)


 * References? — Omegatron 06:18, 3 July 2006 (UTC)
 * C. E. Shannon, "Communication in the presence of noise", Proc. Institute of Radio Engineers, vol. 37, no. 1, pp. 10–21, Jan. 1949. Reprint in: Proc. IEEE, vol. 86, no. 2, Feb. 1998
 * At the time, Shannon described this formula as "common knowledge in the communication art" and specifically mentioned J. M. Whittaker (1935) among other unnamed mathematicians. One of them should be G. H. Hardy. On the other hand, Nyquist knew about and published on the critical frequency, but never gave this reconstruction formula.
 * J. R. Higgins: Five short stories about the cardinal series, Bulletin of the AMS 12(1985)
 * Among other things, such as generalizations of the sampling formula and applications to harmonic analysis and approximation theory, Higgins traces the roots of the sampling formula as far back as Gauss, Cauchy, Poisson and Borel. He explains that the sampling theorem, in both its characterization and reconstruction aspects, was first formulated by E. T. Whittaker (1915) and first formally proven by G. H. Hardy (1941). He called the space of bandlimited functions the "Paley–Wiener space". Since Shannon was a student and collaborator of Wiener, he certainly knew about it.
 * Michael Unser: Sampling-50 Years after Shannon, Proc. IEEE, vol. 88, no. 4, pp. 569-587, April 2000
 * A modern account of sampling and reconstruction in approximation theory.
 * --LutzL 12:05, 3 July 2006 (UTC)
 * As to what people call it, do a search for various terms on . I see 27 "Shannon interpolation formula", and only a few of anything else beyond "interpolation formula".  The "tie breaker" for the "also-rans" should be who actually published the formula.  See references above.  As far as I can tell, Nyquist only used a sinc-like formula for the frequency-domain description of a square pulse or sampling aperture.  He was not concerned with reconstructing an analog waveform, since the problem he was tackling was recovering the discrete code pulses that were input to a bandwidth-limited channel. Dicklyon 16:13, 4 July 2006 (UTC)


 * Here's another proposal: let's change it to just the Shannon interpolation formula, since it is called that in books about 10X more than anything else.  I've just been reading up and found an interesting tidbit about E. T. Whittaker's use of it as his "Cardinal function", in Advances in Shannon's Sampling Theory, by Ahmed I. Zayed.  They point out that Whittaker did NOT show this formula or function converges to an original f(t), but rather that it converges to a simplest function consistent with the samples.  They say "nowhere in his paper did E. T. Whittaker mention that the cardinal function...was equal to f(t)."  To put it more bluntly, he did NOT use it to "reconstruct" a function from its samples, which is exactly what Shannon did.  So, since Whittaker's name is also rare as a prefix to "interpolation formula" or "reconstruction formula", maybe we should move the article again, to just Shannon's interpolation formula.  Or, since it says interpolation, not reconstruction, leave Whittaker's name on it?  Opinions? Dicklyon 23:06, 4 July 2006 (UTC)

Too technical?
The section entitled "Stationary Random Processes" is too technical, and should be updated. Definition of (or link to) $$\ell^p$$ should be included. --Chrismurf 19:58, 3 December 2006 (UTC)


 * I removed the "technical" tag after adding the requested link and a few words of explanation. More suggestions for things to try to clarify are welcome. Dicklyon 03:08, 4 December 2006 (UTC)


 * Thanks - definitely better. It's still pretty technical/dense, but such is the field.  I'm afraid I'm not qualified/capable of making it any better than it is now (if in fact there's even anything wrong with how it is now).  The added link helps.

Chrismurf 04:52, 8 December 2006 (UTC)

If I remember correctly, sampling at the Nyquist rate results in uncorrelated samples. Could anyone find a proof for it and add it as a property?--Royi A (talk) 17:46, 29 October 2009 (UTC)


 * Not true. Dicklyon (talk) 21:48, 29 October 2009 (UTC)
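A concrete numerical illustration of why this is not true in general (my own sketch, not from the thread): by the Wiener–Khinchin theorem, the autocorrelation is the transform of the power spectrum. For a process with a triangular power spectrum on [-1/2, 1/2], adjacent Nyquist-rate samples (spacing T = 1) are clearly correlated; only the special case of a flat (ideal lowpass) spectrum gives a sinc autocorrelation with zeros at the sample spacing:

```python
import numpy as np

df = 1e-5
f = np.arange(-0.5, 0.5, df)
S_tri = 1.0 - 2.0 * np.abs(f)   # triangular PSD, bandlimited to 1/2
S_flat = np.ones_like(f)        # flat (ideal lowpass) PSD

def autocorr(S, tau):
    # Wiener-Khinchin for a real, even PSD: R(tau) = integral S(f) cos(2 pi f tau) df
    return np.sum(S * np.cos(2 * np.pi * f * tau)) * df

# Nyquist-rate sample spacing is T = 1 for bandwidth 1/2.
R1_tri = autocorr(S_tri, 1.0)    # ~2/pi^2, nonzero: adjacent samples correlated
R1_flat = autocorr(S_flat, 1.0)  # ~0: the flat spectrum is the special case
print(R1_tri, R1_flat)
```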

Revised the lead
The lead was confusing the interpolation formula with the sampling theorem. The interpolation formula stands on its own, and will always give a bandlimited function from a set of samples. The sampling theorem is a major application of the formula. Let's not mix them up. Dicklyon (talk) 19:46, 7 March 2008 (UTC)

Calculating The Coefficients
Would anyone like to show another approach for it? Using sinc orthogonality properties and showing X(nT) as their inner product result? --Drazick (talk) 08:16, 21 May 2009 (UTC)
 * Do it. The orthogonality is briefly mentioned at sinc function, but not the consequence of this formula being an orthogonal development of bandlimited functions, or of the shifted sincs being a Hilbert basis of this subspace of bandlimited functions.--LutzL (talk) 08:53, 23 May 2009 (UTC)
 * I would happily do it. I know the process well. Yet I'm not a mathematician, so it would be better if someone went over my work to make sure everything is as accurate as it should be.--Royi A (talk) 16:44, 4 June 2009 (UTC)

Mathematical Basis
The Whittaker–Shannon interpolation formula is, similar to the Fourier series, a function series over an orthogonal function family. In the case of the Fourier series, the family is a Hilbert basis of the space of periodic functions. In the same manner, the family of shifted sinc functions is a Hilbert basis of the (sub)space of band-limited functions.

The basis functions, using the sinc function with integer zeros:
 * $$f_n(t)=sinc( \tfrac{t}{T}-n)$$

The inner product is:
 * $$\langle f(t), g(t) \rangle= \frac{1}{T} \int_{- \infty}^{\infty}f(t)g(t)dt$$

where $$T$$ is defined as:
 * $$T= \frac{2\pi}{2{\omega}_{max}}$$ - the sampling interval corresponding to the Nyquist rate.


 * It can easily be shown that these basis functions form an orthonormal set under the defined inner product:
 * $$\langle sinc( \frac{t-nT}{T}), sinc( \frac{t-mT}{T}) \rangle= \frac{1}{T} \int_{- \infty}^{\infty}sinc( \frac{t-nT}{T})sinc( \frac{t-mT}{T})dt$$
 * $$= \frac{1}{T} \int_{- \infty}^{\infty} \left\{  \frac{1}{2\pi}\int_{- \infty}^{\infty}T \cdot rect\left(  \frac{ \omega}{{ 2\omega}_{max}}\right){e}^{-j \omega nT} \cdot{e}^{j \omega t}d \omega\right\}sinc( \frac{t-mT}{T})dt$$
 * $$= \frac{1}{2\pi} \int_{- \infty}^{\infty}rect\left( \frac{ \omega}{{ 2\omega}_{max}}\right){e}^{-j \omega nT}  \left\{  \int_{- \infty}^{\infty} sinc( \frac{t-mT}{T})\cdot{e}^{j \omega t}dt\right\}d \omega$$
 * $$= \frac{T}{2\pi} \int_{- \infty}^{\infty}rect\left( \frac{ \omega}{{ 2\omega}_{max}}\right){e}^{-j \omega nT} \cdot rect\left(  \frac{ -\omega}{{ 2\omega}_{max}}\right){e}^{j \omega mT} d \omega$$
 * $$= \frac{T}{2\pi} \int_{- {\omega}_{max}}^{{\omega}_{max}}{e}^{j \omega \left( m-n\right)T} d \omega$$
 * $$= \begin{cases} 1 & \text{ if } m=n \\ 0 & \text{ if } m \neq n \end{cases}$$
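The claimed orthonormality is easy to spot-check numerically. A sketch using numpy's normalized sinc, sinc(x) = sin(πx)/(πx), with T = 1 so that sinc((t − nT)/T) = np.sinc(t − n):

```python
import numpy as np

T = 1.0
dt = 0.001
t = np.arange(-500.0, 500.0, dt)   # wide window; the integrand decays like 1/t^2

def inner(n, m):
    # (1/T) * integral of sinc((t - nT)/T) sinc((t - mT)/T) dt, as a Riemann sum
    return np.sum(np.sinc((t - n * T) / T) * np.sinc((t - m * T) / T)) * dt / T

print(inner(0, 0), inner(0, 3), inner(2, 5))   # ~1, ~0, ~0
```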

One should recall that the basic reconstruction process can be defined as:
 * The Projection Process:
 * $$g(t)= \sum_{n=-\infty}^{\infty} {a}_{n} {f}_{n}(t)$$

Where $${a}_{n}$$ is the nth projection of $$g(t)$$ onto the nth orthonormal basis vector $${f}_{n}(t)$$, i.e. the inner product of the two. To calculate a generic $${a}_{n}$$ in our basis of band-limited functions, let $$g(t)$$ be a band-limited function whose Fourier transform $$G(\omega)$$ vanishes at and beyond the frequency $${\omega}_{max}$$.

The inner product
 * $$\langle f(t), g(t) \rangle= \frac{1}{T} \int_{- \infty}^{\infty}f(t)g(t)dt$$
 * $$=\frac{1}{T} \int_{- \infty}^{\infty} \left( \frac{1}{2 \pi} \int_{- \infty}^{\infty} G( \omega) {e}^{j \omega t}d \omega\right)sinc( \frac{t-nT}{T})dt$$
 * $$=\frac{1}{2 \pi T} \int_{- \infty}^{\infty} \left( \int_{- \infty}^{\infty} sinc( \frac{t-nT}{T}){e}^{j \omega t}dt\right)G( \omega) d \omega$$
 * $$=\frac{1}{2 \pi T} \int_{- \infty}^{\infty} \left( T \cdot rect( \frac{- \omega}{2 { \omega}_{max}}){e}^{j \omega nT}\right)G( \omega) d \omega$$
 * $$=\frac{1}{2 \pi } \int_{- { \omega}_{max}}^{{ \omega}_{max}}G( \omega){e}^{j \omega nT} d \omega=g(nT)$$

This completes the projection computation. All that is left is to apply it to obtain the formula. --Royi A (talk) 18:21, 4 October 2009 (UTC)
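The conclusion ⟨f_n, g⟩ = g(nT) can also be spot-checked numerically. A sketch (my own illustration): a sum of sincs shifted by non-integer amounts is still band-limited to the same band, so its projections onto the basis should reproduce its samples. Using numpy's normalized sinc and T = 1:

```python
import numpy as np

T = 1.0
dt = 0.001
t = np.arange(-800.0, 800.0, dt)

def g(u):
    # band-limited test function: sincs shifted by NON-integer amounts
    return np.sinc(u - 0.5) + 0.3 * np.sinc(u + 1.2)

def a(n):
    # a_n = (1/T) * integral of sinc(t/T - n) g(t) dt, as a Riemann sum
    return np.sum(np.sinc(t / T - n) * g(t)) * dt / T

for n in (-1, 0, 2):
    print(a(n), g(n * T))   # the projection reproduces the sample g(nT)
```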

This is a Draft. --Royi A (talk) 19:53, 4 June 2009 (UTC)


 * Certain sinc functions are orthogonal. So what? Certain rect functions are orthogonal too.
 * --Bob K (talk) 23:41, 4 June 2009 (UTC)
 * It's something which will end up as a development of the interpolation formula.
 * It's only the beginning; I'm new to LaTeX, which makes things a little slow.
 * --Royi A (talk) 07:15, 5 June 2009 (UTC) —Preceding unsigned comment added by Drazick (talk • contribs) 07:13, 5 June 2009 (UTC)

There's really no need in this article to derive the orthonormality; it can just be stated. Dicklyon (talk) 13:10, 5 June 2009 (UTC)
 * I will add the proof in the sinc function article. Here I'll just use it.--Royi A (talk) 12:16, 6 June 2009 (UTC)
 * The question remains: What is it exactly that you want to express with this section? Because the projection calculation is the 1001st proof of the sampling theorem, which has no place here. The only reasonable remark is that the function family in the series is orthogonal and that therefore the coefficients of the series are recoverable via scalar products, as is true for all generalized Fourier coefficients.--LutzL (talk) 09:37, 5 October 2009 (UTC)

Conditions were flaky
The "conditions" were stated as a pair of two necessary conditions, in a form that didn't make any sense. I changed it to one sufficient condition (expressed as the AND of two parts), which is what the sampling theorem implies. If more than this is provable, we can change it. I think there is a case where no such bandlimit B exists, yet the reconstruction is exact. For example, if the spectrum is a triangle that goes to zero at W, and the function is sampled at fs = 2W, I believe the reconstruction will be exact, even though there's no bandlimit B less than W. Dicklyon (talk) 05:41, 25 May 2010 (UTC)
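The triangle-spectrum example can be checked numerically. The inverse transform of a triangular spectrum supported on [-W, W] is proportional to sinc²(Wt); with numpy's normalized sinc and W = 1, x(t) = sinc(t)² has a spectrum that tapers linearly to zero at ±W, so the replicas created by sampling at fs = 2W touch only where the spectrum is zero. A sketch, where the truncation of the interpolation sum is the only approximation:

```python
import numpy as np

W = 1.0           # spectrum is a triangle on [-W, W], zero at the edges
fs = 2.0 * W      # sample at exactly the critical rate
T = 1.0 / fs

def x(t):
    # inverse transform of the triangular spectrum (up to scale)
    return np.sinc(W * t) ** 2

n = np.arange(-3000, 3001)
samples = x(n * T)

def reconstruct(t):
    # truncated Whittaker-Shannon sum: sum_n x(nT) sinc((t - nT)/T)
    return np.sum(samples * np.sinc((t - n * T) / T))

for t0 in (0.3, 1.7, -2.4):
    print(reconstruct(t0), x(t0))   # agree to within truncation error
```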

Examples
Someone should provide some specific examples of using this process. —Preceding unsigned comment added by 68.255.77.61 (talk) 15:47, 5 November 2010 (UTC)

form of the formula
User Mathnerd314159 claims this version of the formula:


 * $$x(t) = \sum_{n=-\infty}^{\infty} x[n] \, {\rm sinc}\left(\frac{t}{T} - n\right)\,$$

"corresponds better to the convolution definition and is used in a number of websites and books".

The convolution definition is:


 * $$ x(t) = \left( \sum_{n=-\infty}^{\infty} x[n]\cdot \delta \left( t - nT \right) \right) * {\rm sinc}\left(\frac{t}{T}\right). $$

which leads directly to


 * $$x(t) = \sum_{n=-\infty}^{\infty} x[n] \, {\rm sinc}\left(\frac{t - nT}{T}\right)\,$$

This form also makes it clear that the sinc functions are all centered on the original sampling times.

And for website examples:


 * https://dsp.stackexchange.com/questions/37480/formulating-a-function-on-matlab-for-the-shannon-interpolation-formula
 * https://www.sciencedirect.com/topics/computer-science/sinc-interpolation

--Bob K (talk) 14:23, 23 August 2020 (UTC)


 * My sources:
 * http://links.uwaterloo.ca/amath391w13docs/set7.pdf (bottom of page 6 / 217)
 * https://dsp.stackexchange.com/questions/31709/whittaker-shannon-mathrmsinc-interpolation-for-a-finite-number-of-samples (question)
 * https://marksmannet.com/RobertMarks/REPRINTS/1999_IntroductionToShannonSamplingAndInterpolationTheory.pdf (page 41 / beginning of chapter 3)
 * I'd say the two stack exchange links cancel out, and I don't see any usages of sinc on the ScienceDirect page. Regarding the convolution, I simply meant that isolating out t/T visually is easier than trying to match up (t-nT)/T and t/T. The authors of the course notes and textbook agreed.
 * As a compromise, would you accept listing both forms? --Mathnerd314159 (talk) 17:40, 23 August 2020 (UTC)

Thank you for the sources. Regarding mine, the ScienceDirect page, I assume your point is that eqs 7.19 and 8.19 say


 * $$\frac{\sin(\pi(t - nT)/T)}{\pi(t - nT)/T},$$ instead of  $${\rm sinc}\left(\frac{t - nT}{T}\right),$$  and my point is that what they do not say is  $$\frac{\sin(\pi(t/T - n))}{\pi(t/T - n)}$$.

Regarding yours, https://dsp.stackexchange.com/questions/31709/whittaker-shannon-mathrmsinc-interpolation-for-a-finite-number-of-samples, scroll down to the last post (Mar 11, 2017 4:48), and you will see three instances of $${\rm sinc}\left(\frac{t - nT}{T}\right).$$

Regarding "isolating out t/T visually is easier than trying to match up (t-nT)/T and t/T", does that seem important to you because it makes the width of the sinc function slightly more apparent? And regarding "The authors of the course notes and textbook agreed", I don't know what they're thinking, because they don't offer any rationale. My rationale is that (t-nT) immediately tells me the locations of the sinc functions, whereas (t/T - n) only immediately tells me the sample number, which I don't even care about.

Regarding "corresponds better to the convolution definition and is used in a number of websites and books", the latter part applies to both forms of the formula. But the form that "corresponds better" is $${\rm sinc}\left(\frac{t - nT}{T}\right),$$ as I've already said.

Regarding "As a compromise, would you accept listing both forms?", I would rather not. I just don't think it helps. But if you feel strongly, I would recommend adding it as a footnote.

--Bob K (talk) 16:54, 24 August 2020 (UTC)
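For what it's worth, the two forms debated above are pointwise identical, since sinc((t − nT)/T) = sinc(t/T − n) for every t and n; the disagreement is purely about which way of writing the kernel reads better. A quick numeric check (a sketch using numpy's normalized sinc):

```python
import numpy as np

T = 0.25
t = np.linspace(-3, 3, 1001)
n = np.arange(-40, 41)

# evaluate both forms of the interpolation kernel on a grid of (t, n)
form_a = np.sinc((t[:, None] - n * T) / T)   # sinc((t - nT)/T)
form_b = np.sinc(t[:, None] / T - n)         # sinc(t/T - n)

print(np.max(np.abs(form_a - form_b)))       # identical up to rounding
```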