Talk:Hilbert transform

Practical Uses
The article should say what the Hilbert transform is used for; currently it does not, and this is a significant omission. I suggest a section be added entitled 'Applications of the Hilbert transform' which gives practical examples of what the transform can be used for. —Preceding unsigned comment added by 81.168.113.121 (talk) 08:53, 27 February 2008 (UTC)

Introduction
It is not true that $$\mathcal{F}\{h\}(\omega) = -i\cdot \sgn(\omega)$$: the integral defining the Fourier transform of $$ h $$ diverges. It is not even possible to consider $$ h $$ as a tempered distribution and thereby get the result. On the other hand, it is possible to define a tempered distribution out of $$ h $$ by cutting off near 0 and taking a limit, but this is closely related to the fact that we need a principal value. It is correct to say that $$ \mathcal{H} $$, as an operator, is the multiplier operator with multiplier $$ m(\omega) = -i\cdot \sgn(\omega)$$. We may want to find a way to rephrase this part.
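One possible rephrasing, for concreteness (this is only a sketch of the standard formulation; the multiplier identity below holds, e.g., for $$f \in L^2(\mathbb{R})$$, and makes no claim about a pointwise Fourier transform of $$h$$ itself):

```latex
% Principal-value definition of the Hilbert transform, together with
% the multiplier identity (valid e.g. on L^2), stated without claiming
% a pointwise Fourier transform of h(t) = 1/(pi t):
\mathcal{H}\{f\}(t) = \frac{1}{\pi}\,\operatorname{p.v.}\!\int_{-\infty}^{\infty}
    \frac{f(\tau)}{t-\tau}\,d\tau,
\qquad
\mathcal{F}\{\mathcal{H}\{f\}\}(\omega)
    = -i\cdot\operatorname{sgn}(\omega)\cdot\mathcal{F}\{f\}(\omega).
```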

Hat notation
The notation $$\hat{s}$$ should be removed. It is not standard across mathematics and signal processing, and it conflicts with the more standard notation for the Fourier transform. Seeing how the Hilbert transform almost always involves a discussion of the Fourier transform, I believe this to be a poor choice of notation.

 * M. Pinsky, "Introduction to Fourier Analysis and Wavelets"
 * B. Hubbard, "The World According to Wavelets"
 * C. Blatter, "Wavelets: A Primer"
 * W. Rudin, "Real and Complex Analysis"
 * Conte and de Boor, "Elementary Numerical Analysis: An Algorithmic Approach"
 * E. Stein and R. Shakarchi, "Fourier Analysis: An Introduction"

And the list continues; see also papers published by R. and C. Fefferman, T. Tao, E. Stein, etc. Now I don't doubt that there is plenty of evidence for the hat notation for the Hilbert transform. But the notation definitely conflicts, and we can agree that H(f) is a reasonable way to denote it. We can definitely add comments about different notations in different fields.
 * Please sign your entries with "~".
 * Regarding your suggestion, the argument that it is "not standard across mathematics and signal processing" is not a strong argument, since very few notations of any kind enjoy that unique status. The hat notation is quite common in signal processing, at least.  It's no accident that it appears here.  Furthermore, I for one have seen quite a few Fourier transforms, and I don't recall any that used the hat notation.  I am not saying it doesn't exist, but it is certainly far from being the standard you claim it to be.  --Bob K 05:21, 14 July 2006 (UTC)
 * Regarding my suggestion: here is a list of references I might mention that use a hat to denote the Fourier transform.

Thenub314 20:24, 17 July 2006 (UTC)


 * That's a good list. (Thanks.)  It's not an encyclopedia's job to suppress any of the common notations.  On the contrary, its job is to try to describe the real world in all its ugly complexity and inconsistency.  The really hard part, of course, is trying to maximize its own internal consistency at the same time.  Maybe some day each reader will have a "profile", of his own choosing... and articles will be automatically translated through that profile to create the view he wants to see.  --Bob K 03:41, 18 July 2006 (UTC)

MathWorld & Matlab
MathWorld has the plus and minus reversed in $${1 \over \pi} \ln \left | {t+{1 \over 2} \over t-{1 \over 2}} \right |$$

The way it is now coincides with the MATLAB function when plotted, and also coincides with the diagram in my book (going from −∞ to +∞ during a positive side of the square wave), and the equation in my book $${A \over \pi} \ln \left | {t \over t-\tau} \right |$$ for a pulse of width τ delayed by τ/2, so I'm leaving it this way. - Omegatron July 2, 2005 17:03 (UTC)

Very strange. On MathWorld we see that the Hilbert transform is defined as convolution with the function −1/(πt), not 1/(πt), yet its table of Hilbert transforms is nearly the same. Somebody must have written it down without thinking.

I will correct it as soon as I create an account.

--83.25.155.136 16:41, 18 September 2005 (UTC)

Discrete HT
I'm trying to figure out what is going on with the discrete HT. That section now says that there is an ideal discrete HT, so and so, but this operation cannot be realized in the signal domain. Then, it presents a filter which seems to do the job, derived from the DFT. This seems contradictory. --KYN 21:41, 10 November 2005 (UTC)


 * I think it's not contradictory, because his $$h[n]\,$$ is non-realizable, as written. But maybe what you are saying is that n does not go to ±∞, because it is just a DFT.  Therefore it can be made realizable, with sufficient delay.  And the "ideal" filter would not have that characteristic.  Is that the problem? --Bob K 09:30, 7 December 2005 (UTC)

I guess what is said is that from the ideal filter in the Z-domain it is not possible to derive a filter in the signal domain by means of the inverse Z-transform? Maybe then it is better not to involve the Z-domain in the discussion? Is it possible to present it in the following way? First define a "Hilbert filter" in the Fourier domain as

H(u) = −i for even integer < u < odd integer
H(u) = +i for odd integer < u < even integer
 * Please clarify this definition, if you can. I'm not getting it yet.  --Bob K 20:36, 7 December 2005 (UTC)
 * Hopefully it's all OBE now, since I just re-wrote that section of the article. --Bob K 01:52, 8 December 2005 (UTC)

i.e., an oscillating square wave. The inverse DFT of this function will be precisely the discrete filter presented in the article. Then maybe continue to say that there is an ideal version of this filter in the Z-domain, with the given expression, but that there is no formal relation to the discrete filter via the Z-transform or its inverse.

KYN 21:41, 10 November 2005 (UTC)

About the discrete algorithm, I read about a trick: it works better with a 0 for the first point. So, thanks to the fast Fourier transform, the algorithm may be written

X(f) = FFT( x(t) )
if f == 0 : H(f) := 0
if f > 0 : H(f) := -i * X(f)
if f < 0 : H(f) := +i * X(f)
h(t) = iFFT( H(f) )

but with almost all implementations of the FFT, the spectrum unfolding (the negative part is stored after the positive one) implies one more 0, as you can see in this quick-and-dirty MATLAB script:

len = length(wave_in);
fft_in = fft(wave_in);
fft_quad = [ 0 ; - 1i * fft_in(2 : len / 2); 0 ; 1i * fft_in(len / 2 + 2 : len)];
wave_out = real(ifft(fft_quad));

btw, very useful to get the envelope: envelope(t) = sqrt( x(t)^2 + h(t)^2 ) for all t.
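As a sanity check of the recipe above, here is a minimal self-contained Python sketch. The function names (`discrete_hilbert`, `envelope`) are invented for the illustration, and a naive O(N²) DFT stands in for the FFT so the snippet needs nothing beyond the standard library:

```python
import cmath
import math

def dft(x):
    # Naive O(N^2) DFT; a real FFT would be used in practice.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # Inverse DFT with the 1/N normalization.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def discrete_hilbert(x):
    # Multiply positive-frequency bins by -i and negative-frequency bins
    # by +i; the DC bin and (after unfolding) the Nyquist bin are zeroed,
    # as in the MATLAB fragment above.  N is assumed even.
    N = len(x)
    X = dft(x)
    H = [0j] * N
    for k in range(1, N // 2):
        H[k] = -1j * X[k]
    for k in range(N // 2 + 1, N):
        H[k] = 1j * X[k]
    return [z.real for z in idft(H)]

def envelope(x):
    # Envelope of a real signal: sqrt(x^2 + h^2), pointwise.
    h = discrete_hilbert(x)
    return [math.sqrt(a * a + b * b) for a, b in zip(x, h)]
```

For a pure tone x[n] = cos(2π·3n/32) this returns (numerically) sin(2π·3n/32), and the envelope comes out flat at 1, which is exactly what the envelope trick relies on.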

Practical implementations
Forgot to write something in the "comment" field, but I noticed the comment about the discrete ideal filter being non-causal, and realized that this is a general property of the HT, both continuous and discrete. I tried to formulate something about this.

Discrete HT again
I don't really have any experience with the DHT, and therefore I must ask the following questions, some of which hopefully can work their way into the article to make it more understandable.


 * Exactly how is the DHT related to the continuous HT? I can see that there are similarities, but not that the DHT follows straightforwardly from the CHT.  The CHT is defined as it is simply because it makes sense in a number of applications.  I guess that it can be shown that the DHT also has this property.  Examples?
 * >>>Probably its main use is to convert real-valued discrete signals into the analytic representation. --Bob K 06:21, 4 January 2006 (UTC)


 * I understand that there is a problem in defining the discrete equivalent of the filter h(t) since it would have to be infinite at n=0. OK, let's set h[0]=0.  But why should all the other even samples of h[n] vanish too?  Why is it not a valid approximation to define h[n] = 1/(πn) for n ≠ 0 and h[0] = 0?  For example, the CHT of a bandlimited signal is again bandlimited.  Consequently, the computation of the CHT could be made in terms of an operation on a discrete version of the signal, if appropriately sampled.
 * >>>But the discrete version of the signal is no longer band-limited. The DTFT is periodically extended (to infinity).  A CHT would shift all the positive freq components by -90 deg (and all the negative freq components by +90 deg).  But (as the article states) what's actually needed is to shift (0,π] by -90, (π,2π] by +90, (2π,3π] by -90, etc.  The derivation of the corresponding h[n] sequence is indicated at slide 21.  Note that the $$\sin^2\,$$ function takes on alternating 1,0,1,0,1,0,... values for odd and even values of n. --Bob K 06:21, 4 January 2006 (UTC)
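Bob K's point about the $$\sin^2\,$$ factor can be made concrete. This sketch assumes the usual ideal-filter formula h[n] = (2/(πn))·sin²(πn/2) with h[0] = 0 (the formula his description points at; the name `ideal_dht_coeff` is invented here):

```python
import math

def ideal_dht_coeff(n):
    # h[n] = (2 / (pi n)) * sin^2(pi n / 2):
    # equals 2/(pi n) for odd n and 0 for even n (taking h[0] = 0),
    # because sin^2(pi n / 2) alternates 1, 0, 1, 0, ... with n.
    if n == 0:
        return 0.0
    return (2.0 / (math.pi * n)) * math.sin(math.pi * n / 2) ** 2

# First few taps around n = 0:
taps = [ideal_dht_coeff(n) for n in range(-5, 6)]
```

Every even-indexed tap vanishes, matching the filter in the article, while the odd taps fall off like 1/n, which is why a finite, delayed truncation is needed in practice.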


 * Would the corresponding operation be a discrete convolution by the h[n]-filter as it is defined in this section?
 * >>>Yes, except for the fact that it has to be approximated by something with finite duration. --Bob K 06:21, 4 January 2006 (UTC)


 * If so, this fact is worth mentioning since it makes the definition of h[n] much clearer. If not, I am lost.


 * What exactly does "usual filter design tradeoffs" refer to? I can guess, but some readers will not understand this.  Either be a little more specific or expand the subject on a page of its own.
 * >>>Filter-order vs. frequency-response and latency is always a tradeoff. Another issue is truncation vs. numerous methods of gradual tapering off to zero-valued coefficients.  These topics are of general enough interest that they should be treated elsewhere and only referenced here.  --Bob K 06:21, 4 January 2006 (UTC)


 * There is a filter design article. Are the tradeoffs described sufficiently well there so that it is reasonable to use a link?
 * >>>Sorry, I haven't investigated that myself. --Bob K 06:21, 4 January 2006 (UTC)
 * >>>I just took a peek, and the treatment is indeed brief. But filter-design is a well-studied and dauntingly large subject to cover in detail.  And excellent tools are widely available so that most practitioners don't really have to understand all the issues.  So I doubt that wikipedia will take on that challenge anytime soon, if ever.  All most readers will need to know is that there are tradeoffs, so they should seek the help of a decent tool if they want to do a good job.  It's like telling people that their car needs its oil changed, rather than detailing how to change the oil in every kind of car.  I think that is valid.  --Bob K 06:55, 4 January 2006 (UTC)


 * What exactly does "DFT approximation of it" mean? "it" seems to refer to h[n]. In what sense are we dealing with an approximation?  Of what?
 * >>>Perhaps a second reading would help. But here is the short version:  Instead of the convolution, people often do a DFT, modify the coefficients in the obvious way, and then do an inverse DFT.  That is equivalent to a circular convolution with the approximation shown in the article.  --Bob K 06:21, 4 January 2006 (UTC)


 * The operation which is referred to as "fast convolution" is not really described in sufficient detail to make it understandable. I don't see before me how this operation is implemented in practice.  What is the relation between fast convolution and cyclic convolution?  Maybe fast convolution needs a page of its own, rather than having to be described in detail in this article?
 * >>>Yes. Due to the efficiency possible with the FFT algorithm, the DFT/IDFT approach is often faster than simple convolution, even when actual multiplications (by other than ±1 and 0) are required.  Again, that topic is of sufficiently general interest that this is not the right place for it.  And again, I have not searched for the reference.  If you have looked already, then you might try looking for the names overlap-save and overlap-add, which describe specific techniques for piecing together the outputs of block-processing, an essential part of the FFT approach with streaming data.  --Bob K 06:21, 4 January 2006 (UTC)
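The equivalence mentioned above (modify the DFT coefficients bin by bin, invert, and you have circularly convolved with the inverse DFT of the multiplier) can be sketched as follows; the helper names are invented for the illustration, and a naive DFT keeps it self-contained:

```python
import cmath
import math

def dft(x):
    # Naive O(N^2) DFT, enough for a small demonstration.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circular_convolve(x, h):
    # Cyclic (circular) convolution of two length-N sequences.
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

def via_dft(x, multiplier):
    # Multiply the DFT coefficients bin by bin, then invert.
    X = dft(x)
    return idft([m * c for m, c in zip(multiplier, X)])
```

With the Hilbert multiplier (0, −i on positive bins, 0 at Nyquist, +i on negative bins), `via_dft(x, mult)` and `circular_convolve(x, idft(mult))` agree to machine precision, which is Bob K's point: skipping the filter-design step makes circular convolution unavoidable.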

--KYN 23:51, 3 January 2006 (UTC)


 * I have summarized the first part of this discussion in the intro of the DHT section. Please have a look to see if it came out the right way.  --KYN 13:48, 4 January 2006 (UTC)
 * Looks fine. --Bob K 20:06, 4 January 2006 (UTC)

Now, I am still a bit concerned about the rest of that section. To me, it appears to discuss rather general principles of filter design, how to truncate and shift an infinitely extended filter, and how to implement the convolution operation in the frequency domain instead of the signal domain. The first part of this is more or less identical to the discussion for the continuous HT, isn't it? The latter is something that relates to any discrete convolution operation, not just the DHT. In that case, it could be moved somewhere where signal-domain convolution can be compared to "fast convolution" on a more general level. --KYN 13:48, 4 January 2006 (UTC)
 * The article does contain a link to circular convolution, which goes into more depth about fast convolution. But normally, one uses design techniques to choose a finite filter order.  Then one transforms that design into the frequency domain.  For some reason, when it comes to the DHT, there seems to be a propensity to skip that step, and just use the idealized $$H(e^{j \omega})\,$$, which makes circular convolution unavoidable.  Seems to me that it's like using an ideal rectangle for a lowpass filter, which nobody does.  Since this seems to be endemic to the DHT, I thought it was worth a little space at the expense of some redundancy.  --Bob K 23:28, 4 January 2006 (UTC)


 * Ok, I will look at the fast convolution section in the circular convolution articles which you pointed out, and try to make sense of all this. For some reason this page does not appear when I try to search the wikipedia for "fast convolution".  Any idea why?--KYN 00:42, 5 January 2006 (UTC)
 * Sorry, no. It fails for me also.  --Bob K 01:47, 5 January 2006 (UTC)
 * I just tried it again. It still failed, but now it knows something is wrong, and it recommends I use Google or Yahoo, with the "Wikipedia" option selected.  Those work just fine. --Bob K 02:34, 24 April 2006 (UTC)

Why named after David Hilbert?
Did he develop and define this concept? Whaa? 21:31, 23 April 2006 (UTC)
 * Excellent question. I don't exactly know myself; I suspect it came up during his study of integral equations.  But none of the biographies I have read explain exactly what he was thinking about in this respect.  As far as the history of the Hilbert transform goes, the first real theorems I know of that were proven about it go back to M. Riesz.  It is also the prototypical example of a singular integral operator, and the Hilbert transform was what motivated Zygmund to study such operators. 75.3.32.251 12:14, 18 July 2006 (UTC)

(h*s)(t) or h(t)*s(t) ?
This has undoubtedly been discussed somewhere else, so here we go again. If it comes to a vote, I prefer this convention:


 * $$\mathcal{H}\{s(t)\} = h(t)*s(t)\,$$

to this one:


 * $$\mathcal{H}\{s\}(t) = (h*s)(t)\,$$

--Bob K 14:50, 17 October 2006 (UTC)

If it comes to a vote I prefer $$\mathcal{H}\{s\}(t) = (h*s)(t)\,$$ Thenub314 16:15, 17 October 2006 (UTC)


 * I agree with Bob K. First Harmonic 19:39, 17 October 2006 (UTC)

One thought on the subject is that the current notation is consistent with the Convolution article. Thenub314 23:36, 18 October 2006 (UTC)


 * Clearly, both notations are common in the literature, and on WP. The notation that I prefer is commonly used in signal processing and communication systems engineering, whereas the notation preferred by Thenub314 is more common (I think) in mathematics.  So you could argue that either notation (or both) is acceptable.  That being said, it comes down to a question of taste and familiarity.  I prefer the one notation over the other not only because it is what I am familiar with, but also because I think it is easier to understand, clearly represents what is what, and is less ambiguous than the other.  First Harmonic 16:01, 26 October 2006 (UTC)


 * I'm with Bob K on the $$\mathcal{H}\{s(t)\} = h(t)*s(t)\,$$ notation. I've seen both on Wikipedia, but I find this one more clear. -Roger (talk) 17:03, 15 May 2009 (UTC)

PLEASE DEFINE YOUR VARIABLES!!!!!
I searched in vain in each of several articles linked from this article, including convolution, Fourier transform, signal processing, etc.; the one thing all of these articles have in common is that not one defines ALL of the variables used. That is perfectly fine for those who are already experts, but it makes the articles basically useless to those who are unfamiliar with the subject. If the unfamiliar reader isn't the targeted audience, then what is the point of an encyclopedia anyway? Surely not just a bookmark for all those equations you regularly use?

You don't need to break into the text, just add a section below where each variable is defined such as t =, omega =, theta=, etc.. I think that is not too much to ask. Drillerguy 14:42, 3 September 2007 (UTC)

Too much to ask?
What may be too much to ask is to describe the subject understandably to the general public in the introductory paragraph, before diving into the spaghetti of equations. These readers may be very interested and very intelligent but still not familiar with the subject. Only the best technical writing achieves that, and it may be too much to ask of Wiki, but I hope it is the goal. In the articles I have contributed to, I approach it as trying to explain the subject to my neighbor. Drillerguy 14:44, 3 September 2007 (UTC)


 * The way it makes sense to me is to start with the frequency domain and ask the question: "What happens to x(t) if I eliminate the negative frequency components of a symmetrical X(f); i.e. what is the inverse transform of X(f)·U(f)?"
 * X(f) = symmetrical implies x(t) = real. The inverse transform is x(t) plus an imaginary part, which is the Hilbert transform of x(t).  That makes it interesting enough to look at the mathematics, IMO.
 * If I had written the article, that's what I would have done. So "no", I do not think it is too much to ask.
 * --Bob K 16:45, 3 September 2007 (UTC)
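Bob K's "inverse transform of X(f)·U(f)" picture has a simple discrete analogue, sketched here with invented names and a naive DFT so the snippet is self-contained (a sketch only, not the article's method):

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def analytic_signal(x):
    # Discrete analogue of "eliminate the negative frequencies":
    # keep the DC and Nyquist bins, double the positive-frequency bins,
    # zero the negative-frequency bins.  N is assumed even.
    N = len(x)
    X = dft(x)
    Z = [0j] * N
    Z[0] = X[0]
    Z[N // 2] = X[N // 2]
    for k in range(1, N // 2):
        Z[k] = 2 * X[k]
    return idft(Z)
```

For a real input the real part reproduces x, and the imaginary part is (a discrete version of) its Hilbert transform, which is the construction being described: x(t) plus an imaginary part.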


 * Actually Analytic signal already does what I suggested. No need to do that again.  I think it is sufficient to mention that article in the intro.
 * --Bob K 13:07, 5 September 2007 (UTC)


 * That clarified it a lot to me. I may add your explanation to this page. —Ben FrantzDale (talk) 15:58, 16 January 2008 (UTC)

Issues with the Definition
The Hilbert transform is not defined by convolution. The integral stated does not converge for reasonable functions $$s(t)$$ (say $$ s(t) \in L^2 $$). I invite comments before I make any changes. Thenub314 (talk) 17:27, 15 April 2008 (UTC)


 * FYI, both Mathworld and "Digital communications" by J.G.Proakis introduce the Hilbert transform using the aforementioned convolution.  What would you say the definition should be?  Oli Filth(talk) 19:54, 15 April 2008 (UTC)


 * Well, it is a very tempting definition; it contains the right intuitive idea. This article used to mention the Cauchy principal value as part of the definition.  I missed the footnote, which I just noticed now, so I feel slightly better.  Nonetheless, I think this is not a footnote to be added to the definition, but in fact the key part of the definition. While it has little impact on its practical applications in signal processing, it is very important mathematically.  I do not mean this just because that is the rigorous definition.  According to his colleagues, it was Riesz's proof of the boundedness of the Hilbert transform that fascinated Zygmund, leading him and Calderón to study other singular integral operators, which has been some of the most fundamental work in analysis in the past 100 years. To comment on your references: MathWorld defines it correctly, with PV appearing in front of the integral.  I don't have the other reference to look at, but like I said, for practical applications it is probably harmless to overlook it. Thenub314 (talk) 02:49, 17 April 2008 (UTC)

As a PS to my comments about definitions, why does the introductory sentence demand the function be real-valued? Thenub314 (talk) 02:54, 17 April 2008 (UTC)


 * "demand" is an exaggeration. It is simply a valid statement of an important fact.  It is not every possible fact or the most general statement possible.  Nor does it claim to be.  Why not add a complementary statement regarding complex functions and what their value might be under Hilbert transform?
 * --Bob K (talk) 16:39, 17 April 2008 (UTC)


 * Well, maybe "demand" is too strong a word. But neither the definition given here nor the changes to the definition I suggest require the function to be real-valued. I was just a bit surprised the very first sentence made an issue of it.  How do people feel about the sentence: "In mathematics and in signal processing, the Hilbert transform is an operator that takes a function, $$s(t),\,$$ to another function, $$\widehat s(t)\,$$, on the same domain."?  Is it possible "real-valued" was referring to the parameter $$ t$$?  It would make sense in this article, as we do not mention Hilbert transforms on the circle, or Riesz transforms in higher dimensions. Thenub314 (talk) 00:26, 18 April 2008 (UTC)


 * I can authoritatively (since I was the author) attest that "real-valued" does not refer to t. My thinking is this:


 * 1) The only thing I have ever seen the Hilbert transform used for is to create a function, $$i\cdot \widehat s(t),$$ that can be added to $$s(t)\,$$ to cancel all its negative frequencies while preserving all its positive ones.
 * 2) The only time I have seen people do that is when it is a reversible operation... no "information" is lost.
 * 3) One class of functions whose negative frequency components can be reconstructed from its positive frequency ones is the class of real valued functions.
 * 4) We can invent other classes, but AFAIK they don't occur naturally in practice.


 * If there is a whole nuther realm of Hilbert transform applications, we should of course include it in the article. Until I know what it is, I can't confidently judge whether the article should start with one general intro paragraph or with two more specific ones.
 * --Bob K (talk) 01:43, 18 April 2008 (UTC)


 * Those are good reasons. I hope I did not offend.  The Hilbert transform is also an important object to pure mathematics, where one does not necessarily assume the functions it operates on are real-valued.   For example it is useful when describing Hardy spaces on the real line.  It is the first basic example that motivated the study of Singular Integrals.  I would be happy to add a few paragraphs about this part of the theory. Thenub314 (talk) 02:25, 18 April 2008 (UTC)


 * No problem. Happy to have your contributions.  When I wrote the intro, I expected something like this, sooner or later.  As I recall, I am also the one who demoted your Cauchy Principal Value to a footnote.  It was all in response to complaints from other readers... the usual encyclopedia vs. math textbook debate.  My own bias is to divide and conquer, rather than try to please everyone in one all-encompassing article.  The exciting thing about the Wikipedia format is the ease with which several articles can be linked together, either vertically (hierarchically) or horizontally (web-like).
 * --Bob K (talk) 10:58, 18 April 2008 (UTC)

Last sentence in the intro
I have some issues with the sentence "Except for the $$\omega = 0\,$$  component, which is lost..." For the discrete Hilbert transform it is certainly true. But in the case we are considering it doesn't follow. Thenub314 (talk) 13:53, 20 April 2008 (UTC)


 * I don't agree. Can you explain why you think it doesn't follow?
 * --Bob K (talk) 14:19, 20 April 2008 (UTC)

Sure, let's take for example $$f(t)=\frac{1}{1+t^2}$$; then let's denote the Fourier transform by $$\widehat{f}(\omega)$$. We have $$\widehat{f}(0)=\pi$$ and $$H\{f\}(t)=\frac{t}{1+t^2}$$, and if we calculate, $$H^2\{f\}(t)=\frac{-1}{1+t^2}$$. But then $$\widehat{H^2\{f\}}(0)=-\pi$$.

More generally, if $$f(\omega)=g(\omega)$$ for all $$\omega\neq 0$$ then $$\int_{-\infty}^{\infty}f(\omega)e^{2\pi i x\omega}\,d\omega=\int_{-\infty}^{\infty}g(\omega)e^{2\pi i x\omega}\,d\omega$$ for all $$x$$ because changing the integrand at one point doesn't change the value of the integral. Thenub314 (talk) 16:29, 20 April 2008 (UTC)


 * Now I see what you mean. But on the other hand  $$1/(\pi t) * \left[a + \frac{1}{1+t^2}\right]$$  does not preserve the value of the DC component $$(a)\,.$$
 * --Bob K (talk) 18:21, 20 April 2008 (UTC)

True, but you only run into this trouble when the DC component is infinite. (In fact, that is not really enough; you really need to be dealing with something like the delta function.) It is a theorem that the square of the Hilbert transform is the negative of the identity on the space of $$L^2$$ functions. (Yuck, what an ugly sentence. I just mean $$\mathcal{H}^2=-I$$ on $$L^2(\mathbb{R})$$.) So it is not the case that the DC component is necessarily lost. Thenub314 (talk) 19:34, 20 April 2008 (UTC)
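For what it's worth, a discrete numerical check of $$\mathcal{H}^2=-I$$ is easy to run on a signal with no energy in the DC or Nyquist bins, which sidesteps exactly the bins under discussion. This sketch uses the frequency-domain recipe with invented names and a naive DFT, purely as an illustration of the theorem, not as a proof:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def discrete_hilbert(x):
    # Frequency-domain recipe: -i on positive bins, +i on negative bins,
    # DC and Nyquist bins zeroed.  N is assumed even.
    N = len(x)
    X = dft(x)
    H = [0j] * N
    for k in range(1, N // 2):
        H[k] = -1j * X[k]
    for k in range(N // 2 + 1, N):
        H[k] = 1j * X[k]
    return [z.real for z in idft(H)]

# Applying the transform twice to a tone with no DC or Nyquist energy
# negates it, mirroring H^2 = -I on L^2.
```

On such a signal the double transform returns the negated input to machine precision; only components living in the zeroed bins fail to come back.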


 * I think you make a good point, regardless of what happens to the DC component. I attempted to fix the offending statement.
 * --Bob K (talk) 19:55, 20 April 2008 (UTC)

I like the change. Just what I would have done. Thenub314 (talk) 20:29, 20 April 2008 (UTC)

Notation.
The recent edits by 69.247.68.82 are good information to have in the article. Unfortunately, they call attention to a bit of a conflict in notation. In this article (as is common in many applications) the Hilbert transform of s(t) is denoted by $$\widehat{s}(t)$$. This conflicts with the notation used for the Fourier transform by most professional mathematicians. I suggest we denote the Hilbert transform by $$\mathcal{H}$$, which is recognized by both groups and appears in the definition. Thenub314 (talk) 15:12, 29 April 2008 (UTC)


 * FWIW, other than:


 * $$\widehat{Hf}(\xi) = -i \cdot \mbox{sgn}(\xi) \hat{f}(\xi),$$


 * which I don't even understand, I like the current usage. I would not like to see $$\widehat{s}(t)$$ represent the Fourier transform.  So it sounds like we should simply avoid using it for anything of such a general nature as a Hilbert or Fourier transform.  Just define it locally, as needed, for sundry things.
 * --Bob K (talk) 18:54, 29 April 2008 (UTC)


 * I am not familiar with the abbreviation FWIW. I am not suggesting that we start using $$\widehat{s}$$ to mean the Fourier transform of s in this article.  I am rather suggesting that we do not extensively use it for the Hilbert transform.  Thenub314 (talk) 20:03, 29 April 2008 (UTC)


 * Sorry, it means "For What It's Worth". It's the opinion of a person who does not claim to be a "professional mathematician".
 * --Bob K (talk) 23:00, 29 April 2008 (UTC)


 * :) FWIW, I don't claim to be a professional mathematician. I am just familiar with their notation.  I would just like the article to be readable to everyone.  I know notation issues are small, but they can be confusing when you're starting out.  That is why I thought the neutral ground of using $$\mathcal{H}$$ would be good. Thenub314 (talk) 23:38, 29 April 2008 (UTC)


 * I have noticed that Bob K has switched the notation back to $$\widehat{u}$$ for the Hilbert transform. I have two objections to this.  First of all, it cannot be typeset inline, so the text becomes littered with ugly forced-PNG renderings of mathematical expressions.  More distressing, however, is the fact that this is more commonly used to denote the Fourier transform of a function.  I submit that the article should standardize on the ambiguity-free notation H(u).  There is ample precedent for this in the literature.   siℓℓy rabbit  (  talk  ) 13:40, 3 June 2008 (UTC)


 * Hold on a sec... I only switched in the "signal processing" section, and the article previously acknowledges: "In signal processing the Hilbert transform of u(t) is commonly denoted by $$\widehat u(t).\,$$"  I also defined what I was doing, for clarity.  We can use $$\tilde{u}(t)$$, if you insist.  My reason for shortening the notation is to avoid obfuscating the main points of the signal processing application of Hilbert transform.
 * --Bob K (talk) 15:08, 3 June 2008 (UTC)


 * Ok. I've looked at it again, and it's not as bad as I thought.  I would still prefer to keep the notation consistent in the article, but I don't feel so strongly about it anymore.  siℓℓy rabbit  (  talk  ) 15:19, 3 June 2008 (UTC)

Major Revision.
I did some reading about the history and tried to include some references. I hope people like this version; there are admittedly some omissions (generalizations of the H.T., for example) and places that are very awkward (the discussion of other types of discrete Hilbert transforms). But I think the info should make a nice addition. Thenub314 (talk) 15:36, 12 May 2008 (UTC)


 * Sorry... I like the previous version much better.
 * --Bob K (talk) 13:43, 13 May 2008 (UTC)


 * No need to apologize. I am sorry to hear it. What about the previous article did you like better than this version? Thenub314 (talk) 14:33, 13 May 2008 (UTC)


 * Your notation is strange to me, as perhaps mine is to you. And I like starting off with the frequency domain definition, because it is easy to understand, and IMO, it is the whole reason most of us care about the Hilbert transform at all.
 * --Bob K (talk) 15:01, 13 May 2008 (UTC)


 * I'll have to display my ignorance for a moment. What does IMO stand for? I understand issues with notation; I tried to be careful to avoid using a $$\widehat{\ \ }$$ anywhere for a Fourier transform because I recognize this would make the article very difficult to read for anyone who is used to that notation for the Hilbert transform.  As far as the frequency response definition, I do think it is important, but I think starting with the definition via integration has two advantages.  First, it requires slightly less background.  Second, it is often how the transform is introduced. Thenub314 (talk) 16:09, 13 May 2008 (UTC)


 * IMO = "in my opinion". I do appreciate that we're not using $$\widehat{\ \ }$$ for a Fourier transform.  That is very helpful.  Why introduce $$\xi \,$$ for frequency?  Everywhere else we use $$f \,$$ and $$\omega \,$$, which is bad enough.  $$f \,$$ is best, because there is only one common form for the transform definition.  (not to mention that everyone understands "cycles per second")  With $$\omega \,$$ there are two common forms to worry about.  And with $$\xi \,$$, all three are in play.


 * And I think this is worth writing explicitly:


 * $$ i\cdot H(f) = \begin{cases} -i^2 = +1, & \mbox{for } f > 0,\\ \ \ 0, & \mbox{for } f = 0,\\ +i^2 = -1, & \mbox{for } f < 0. \end{cases}$$


 * And we should stick to $$\mathcal{H}\,$$ for the transform operator and $$H\,$$ for the transfer function.


 * --Bob K (talk) 20:35, 13 May 2008 (UTC)


 * I changed the notation of the transform $$\mathcal{H}$$ to H for typographical reasons, since the page suffered from far too much inline PNG. I'm not married to any particular system of notation, but I think we should use one that is at the very least easy to typeset.  For the symbol of the operator, I have used the notation σH, which is standard in pseudodifferential operators, although admittedly there is no universal standard for this.   I did not have a chance to visit the section on the discrete transform until just now.  The ξ's should, I agree, be changed into ω's (or vice versa, I don't really care).  silly rabbit  (  talk  ) 20:50, 13 May 2008 (UTC)


 * I was actually referring to other articles (that use $$f\,$$ and $$\omega\,$$), not to the discrete transform section. And you are confusing me.  $$\mathcal{H}$$ is (or was) the "symbol of the operator".  So according to the paragraph above, you now have H and σH representing the same thing.  I don't like either one of them, and I'm surprised others aren't complaining with me.
 * --Bob K (talk) 22:53, 13 May 2008 (UTC)


 * Sorry, I meant "symbol" in the technical sense of Fourier multiplier (in analysis), or transfer function in signal processing. Hope that clarifies things.   silly rabbit  (  talk  ) 23:00, 13 May 2008 (UTC)


 * I have changed ξ to ω for the moment. I do not like to denote frequency variables by f.  The letter f is so commonly used as a function that it can be distracting if you're not used to using it to denote frequency. Thenub314 (talk) 21:31, 13 May 2008 (UTC)


 * I avoid using $$f\,$$ for a function for the same reason.
 * --Bob K (talk) 22:53, 13 May 2008 (UTC)


 * Please don't take this as criticism, but I don't understand the attachment to f. I worked in signal processing for a few years, and the rule of thumb about notation was that nothing was consistent.  I was just checking some of my more engineering-slanted books that I still own, just to make sure grad school hasn't warped my memory.  In none of the books I have come across does the author use this notation.  I suppose in the end this part of the discussion belongs on the Fourier transform page, though.  The reason I don't like it (beyond the fact that I am not used to it) is that the students I teach, and the lay person in general, are probably more accustomed to f as a function than f as a variable.  It is those readers I hope not to distract. Thenub314 (talk) 02:28, 14 May 2008 (UTC)


 * Lots of signal processing books represent frequency (in hertz) with the variable $$f.\,$$ One that I happen to have handy at the moment is:




 * In particular, I would draw your attention to chapter 6 (Fourier Transforms), chapter 11 (Sampling and the DTFT), and chapter 12 (The DFT and FFT). And in chapter 14 (Digital Filters), he uses $$F\,$$ to represent the ratio $$f/f_s,\,$$ where $$f_s\,$$ is the sample-rate.  No doubt you (or I) could also go web-surfing and find lots of examples.  There is no Wikipedia standard, and Wikipedia is not interested in choosing one, as far as I can tell.  As an encyclopedia, its job (ideally) is to document our world the way it really is, not the way we wish it was.  As evidenced by the Fourier transform article, people here seem to agree, in principle, with the concept of documenting multiple standards, when no single one exists.  However, no consistent way of doing that has been established.  It's a hard job, and I don't think the will exists.  So we are left to figure it out for ourselves, knowing that someone will probably come along later and rewrite the whole thing anyway.
 * --Bob K (talk) 04:06, 14 May 2008 (UTC)


 * Another book with plenty of examples of variable $$f\,$$ (for frequency) is:


 * --Bob K (talk) 04:47, 14 May 2008 (UTC)


 * I didn't mean to imply there are no books that use this notation. I have seen it before; it just didn't seem to me any more common than any other system.  My experience is that some authors use ω, but it is measured in hertz.  Others use f, but it is an angular frequency; many authors use neither of these letters.  (If you'd like specific references I would be happy to find some.)  Your comment about not using f for a function is correct.  If we want to use f as a frequency we should avoid using it as a function.  But that struck me as odd for the purposes of this article. Thenub314 (talk) 12:25, 14 May 2008 (UTC)


 * I cannot claim that $$f\,$$ is the most common notation for frequency. What I do claim is that whenever I have seen it, I could safely infer that the underlying transform definition is:


 * $$S(f) = \int_{-\infty}^{\infty} s(t)\cdot e^{-i2\pi f t} dt$$
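(A quick way to see the appeal of this convention: under it the Gaussian e^{−πt²} is its own transform, with no 2π bookkeeping at all. The sketch below, a plain Riemann-sum approximation using the illustrative name `fourier_ordinary`, checks this numerically; it is only a demonstration of the ordinary-frequency convention, not anything from the article.)

```python
import cmath, math

def fourier_ordinary(s, f, T=8.0, dt=1e-3):
    """Midpoint-rule approximation of S(f) = integral of s(t) e^{-i 2 pi f t} dt."""
    n = int(2 * T / dt)
    total = 0j
    for k in range(n):
        t = -T + (k + 0.5) * dt
        total += s(t) * cmath.exp(-2j * math.pi * f * t) * dt
    return total

# With the ordinary-frequency convention, e^{-pi t^2} is its own
# transform: S(f) = e^{-pi f^2}, with no stray 2*pi factors to track.
s = lambda t: math.exp(-math.pi * t * t)
for f in (0.0, 0.5, 1.0):
    assert abs(fourier_ordinary(s, f) - math.exp(-math.pi * f * f)) < 1e-6
```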


 * Of course I didn't read your book where "Others use f but it is an angular frequency", which makes no sense to me.


 * When I see $$\omega$$ or $$\xi$$, then I have to look for the associated FT definition, which you guys aren't very conscientious about providing. If you want to be lazy, just use $$f.\,$$   (That's not an attack, I'm actually trying to make a serious point.)
 * --Bob K (talk) 14:51, 14 May 2008 (UTC)


 * It is an interesting point that this article doesn't make clear which choice of the Fourier transform it is using. I don't feel confident that just putting f for the variable would clarify the matter.  I have grown to realize that I always need to check what notation the author is using when reading a book or article that invokes the Fourier transform.  To clarify I included a note. Thenub314 (talk) 00:38, 15 May 2008 (UTC)


 * I'm ambivalent about the note. First of all, it's the "wrong" Fourier transform (i.e., the non-unitary one), but that is more of a personal preference.  Secondly, it doesn't actually matter which transform we use, since the Hilbert transform takes place entirely in the time domain.  So I'm not sure if it's a good thing or a bad thing to indicate explicitly which transform is used.  It could be a potential source of confusion.  silly rabbit  (  talk  ) 12:35, 15 May 2008 (UTC)


 * I thought the same thing about a note being unnecessary until I started calculating symbols. If the Fourier transform is defined as $$\hat f (\omega)=\frac{1}{\sqrt{2\pi}}\int f(x)e^{-i\omega x}\,dx$$, then the symbol of the Hilbert transform should change by a factor of $$1/\sqrt{2\pi}$$.  The fastest way to see this must be true is that with this definition we get $$\widehat{f*g}=\sqrt{2\pi}\hat f \cdot \hat g$$, instead of just $$\hat{f}\cdot\hat{g}$$.  Since we are claiming something specific about the symbol, we are at least not choosing this transform.  The other common choices would have been fine.  For Bob's sanity (and perhaps many others) I decided I should put the non-unitary version, or use the unitary one and change the frequency variable to ξ.  It was faster to just put the non-unitary version, but I also agree that it is the "wrong" one. Thenub314 (talk) 13:33, 15 May 2008 (UTC)


 * I don't think the symbol changes. If we are writing
 * $$H(u) = \mathcal{F}^{-1}(\sigma\mathcal{F}(u))$$
 * then surely &sigma; doesn't notice if we multiply $$\mathcal{F}$$ by a constant and divide $$\mathcal{F}^{-1}$$ by the same constant. To see this with convolution, suppose that $$\mathcal{F}_1$$ is the unitary transform and $$\mathcal{F}_2 = k\mathcal{F}_1$$ is some other transform (k is constant).  Then
 * $$\mathcal{F}_2(f*g) = \frac{\sqrt{2\pi}}{k}\mathcal{F}_2(f)\mathcal{F}_2(g)$$
 * so that the symbol of $$g\mapsto f*g$$ is
 * $$\frac{\sqrt{2\pi}}{k}\mathcal{F}_2(f) = \sqrt{2\pi}\mathcal{F}_1(f)$$
 * which is independent of the normalizing constant k. (The symbol will of course depend on how the convolution is defined, but this is a separate matter.)  silly rabbit  (  talk  ) 13:54, 16 May 2008 (UTC)
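(The cancellation of the normalizing constant can also be seen numerically. The sketch below assumes NumPy and uses a discrete analogue: it applies F₂ = k·F and F₂⁻¹ = F⁻¹/k around the multiplier −i·sgn for two different values of k and gets the same answer. The name `apply_multiplier` is illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(128)
u -= u.mean()                     # zero-mean, so the f = 0 bin is irrelevant

sigma = -1j * np.sign(np.fft.fftfreq(len(u)))   # multiplier of H

def apply_multiplier(u, k):
    """Compute F2^{-1}(sigma * F2(u)) with F2 = k * F and F2^{-1} = F^{-1} / k."""
    F2 = lambda x: k * np.fft.fft(x)
    F2inv = lambda x: np.fft.ifft(x) / k
    return F2inv(sigma * F2(u)).real

# The result is independent of the normalizing constant k, as argued above.
assert np.allclose(apply_multiplier(u, 1.0),
                   apply_multiplier(u, 1 / np.sqrt(2 * np.pi)))
```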


 * You're absolutely correct. My own comment should have shown me the mistake. The Fourier transform of $$\frac{1}{x}$$ changes by a factor of $$\frac{1}{\sqrt{2\pi}}$$, but exactly because of the convolution theorem, the symbol of a convolution operator under this definition of the Fourier transform is not just the function you convolve with. Thanks for staying sharp. Thenub314 (talk) 16:10, 16 May 2008 (UTC)


 * Of course the "symbol" changes, depending on the inverse transform, because $$\frac{1}{2 \pi} \int_{-\infty}^{\infty} \sigma_H(\omega) \ e^{i \omega t} \ d \omega \ne \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \sigma_H(\omega) \ e^{i \omega t} \ d \omega.$$
 * This unnecessary confusion is a great illustration of why I stick to ordinary frequency. Thanks for making my point.
 * --Bob K (talk) 14:59, 17 May 2008 (UTC)


 * The point being made above is that the symbol does not depend on which Fourier transform is being used. You seem to have neglected the fact that in defining the symbol both the inverse and forward transforms are used.  When they both come in, any arbitrary normalizations will cancel.   silly rabbit  (  talk  ) 15:09, 17 May 2008 (UTC)


 * OK, I agree with that. Another way to look at it is that $$\sigma_H(\omega)\,$$ includes the $$\sqrt{2\pi}\,$$  factor in:


 * $$ (f*g)(t) \quad \stackrel{\mathcal{F}}{\Longleftrightarrow}\quad \sqrt{2\pi}\cdot F(\omega)\cdot G(\omega) \,$$


 * (from Fourier_transform). But many people will make the mistake of applying it again.  It is an unnecessary opportunity for mistakes, which is bad design.  And two of us have fallen into the trap right here.  We wouldn't even be having this conversation if someone hadn't changed ordinary frequency to radian frequency in this article.  And I don't agree with the minimalist approach of leaving out transform definitions.  If a formula applies to all three conventions, just say so.  Another minimalist example was when this:


 * "the negative and positive frequency components of $$s(t)\,$$  are shifted by +180° and −180°, respectively.  The result is  $$-s(t).\,$$  Or in other words:


 * $$H^2(f) = e^{\pm i \pi} = -1, \quad \mbox{for } f\ne 0.\,$$"


 * was truncated to just this:


 * "the phase of the negative and positive frequency components of u(t) are shifted by +180° and −180°, respectively."


 * --Bob K (talk) 21:23, 17 May 2008 (UTC)


 * Well, I don't like the characterization that I fell into a trap. I did a calculation with a form of the Fourier transform I was not used to, and I forgot a factor of 2π.  It is perhaps notable that it is not a general property of multiplier operators that the symbol doesn't depend on which Fourier transform you're using (though the comments about multiplying and dividing still apply); it has to do with the fact that the kernel is homogeneous of degree −1.  For example, if we take the "ordinary frequency" definition then the symbol of Δ is −4π²ν², whereas if we take one of the "angular frequency" definitions the symbol would be −ω².  But I can't decide if this thought deserves comment. Thenub314 (talk) 00:49, 18 May 2008 (UTC)


 * I apologize for offending you. Call it what you wish.  My point is that "forgetting a factor" of $$2\pi\,$$ or $$\sqrt{2\pi}\,$$ is a fairly common error when working with angular frequency definitions.  You did it, and so did I.  Others will do it too.
 * I have no idea what your "symbol of Δ" is, but I doubt that it will help the article.
 * --Bob K (talk) 02:22, 18 May 2008 (UTC)

Iterated Hilbert transform.
The exact same text seems to have been added in the page Gianfelici Transform. This article has been nominated for deletion because it is not clear if this transform is notable. Maybe we should make sure this section belongs here. There doesn't seem to be a lot that comes up about it on Google besides the reference given. Thenub314 (talk) 17:33, 12 May 2008 (UTC)


 * I have once again removed reference to the Gianfelici transform. Also, the editor who added the Gianfelici reference (which is not used in the text) removed a reference which is used in the text (by Charles Fefferman, one of the most influential harmonic analysts in the history of the subject).  I would ask that any case to be made for inclusion of this material be made on the talk page, preferably with multiple independent references supporting the assertion that there is such a thing as a Gianfelici transform.  Google Scholar doesn't turn up anything.   siℓℓy rabbit  (  talk  ) 22:32, 6 August 2008 (UTC)

Improper Integration
Does the phrase "Improper Integration" imply that we are dealing with something other than the Lebesgue integral? Being an encyclopedia article, I don't want to get too technical. Though if you follow the link to the Improper integration page it says something like: "For the Lebesgue integral, deals differently with unbounded domains and unbounded functions, and one does not distinguish an 'improper Lebesgue integral'...", but then it follows this with something that (I think) is false, so I don't know what to make of that statement. Anyway, it is a fine expression to have; maybe we could write the limit directly? Thenub314 (talk) 12:36, 16 May 2008 (UTC)


 * Whether it is a Lebesgue integral or not, it still needs to be improper. I have corrected the example at Improper integral: it now reads
 * $$\int_0^\infty \frac{\sin x}{x}\,dx$$
 * Obviously, by the divergence of the harmonic series, in order to make sense of this integral as a Lebesgue integral, it needs to be regarded as an improper one. I'll add the limits into the article (in particular, it is the limit at zero which is important, not the one at infinity).  silly rabbit  (  talk  ) 12:51, 16 May 2008 (UTC)
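(For what it's worth, the conditional convergence is easy to watch numerically: the partial integrals $$\int_0^T \frac{\sin x}{x}\,dx$$ oscillate around π/2 with amplitude on the order of 1/T. A midpoint-rule sketch, with an illustrative function name:)

```python
import math

def si(T, n=200000):
    """Midpoint-rule approximation of the integral of sin(x)/x over [0, T]."""
    h = T / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += math.sin(x) / x * h
    return total

# The partial integrals oscillate around pi/2 with amplitude ~ 1/T,
# illustrating conditional (not absolute) convergence.
for T in (10.0, 100.0, 1000.0):
    assert abs(si(T) - math.pi / 2) < 3.0 / T
```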

Thanks for fixing that example, I didn't quite have enough time before I left for work. It was mostly the phrase "improper integral" I was asking about. I thought it simply wasn't used when discussing the Lebesgue integral. I am not sure why; it seems natural enough. This is what I thought the page improper integral meant by "...one does not distinguish...". Thenub314 (talk) 16:42, 16 May 2008 (UTC)

Comments about Hilbert transform table.
It is not quite true that "The Hilbert transform of the sin and cos functions are defined using the periodic form of the transform, since the integral defining them otherwise fails to converge absolutely." At some point I was bothered by these two being in the table as well, but when I start with the definition (instead of thinking about it as an operator) this is what I get; if there are any mistakes please let me know.


 * $$\text{p.v.} \frac{1}{\pi}\int \frac{\sin(\tau-t)}{t}\,dt=\frac{1}{\pi}\lim_{\epsilon\to 0}\left(\int_\epsilon^{\frac{1}{\epsilon}} \frac{\sin(\tau-t)}{t}\,dt+\int_{\frac{-1}{\epsilon}}^{-\epsilon} \frac{\sin(\tau-t)}{t}\,dt\right)$$
 * $$=\frac{1}{\pi}\lim_{\epsilon\to 0}\int_{\epsilon}^{\frac{1}{\epsilon}}\frac{\sin(\tau-t)-\sin(\tau+t)}{t}\,dt$$
 * $$=\frac{1}{\pi}\lim_{\epsilon\to 0}\int_{\epsilon}^{\frac{1}{\epsilon}} \frac{\sin(\tau)\cos(-t)+\cos(\tau)\sin(-t)-\sin(\tau)\cos(t)-\cos(\tau)\sin(t)}{t}\,dt$$
 * $$=\frac{-2\cos(\tau)}{\pi}\lim_{\epsilon\to 0}\int_{\epsilon}^{\frac{1}{\epsilon}} \frac{\sin(t)}{t}\,dt =\frac{-2\cos(\tau)}{\pi}\cdot\frac{\pi}{2}=-\cos(\tau)$$.

As an operator it does map L&infin; to BMO, so the statement could also make sense in that way. But I feel the above calculation justifies the entries in the table. Thenub314 (talk) 00:22, 18 May 2008 (UTC)
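(The derivation above can also be checked numerically by truncating the symmetric principal-value integral at ε and a large upper limit. The sketch below, a midpoint-rule approximation with the illustrative name `pv_hilbert_sin`, reproduces −cos(τ) to a few decimal places.)

```python
import math

def pv_hilbert_sin(tau, eps=1e-6, T=1000.0, n=200000):
    """Truncated symmetric principal-value integral from the derivation:
    (1/pi) * integral over [eps, T] of (sin(tau - t) - sin(tau + t)) / t dt,
    which should approach -cos(tau) as eps -> 0 and T -> infinity."""
    h = (T - eps) / n
    total = 0.0
    for k in range(n):
        t = eps + (k + 0.5) * h
        total += (math.sin(tau - t) - math.sin(tau + t)) / t * h
    return total / math.pi

# Matches the closed form -cos(tau) up to the ~1/T truncation error.
for tau in (0.0, 1.0, 2.5):
    assert abs(pv_hilbert_sin(tau) - (-math.cos(tau))) < 1e-2
```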


 * I think it depends on whether you take the principal value at infinity as well. I'm inclined to think of the integral as an ordinary Lebesgue integral "at infinity", with the improper part at zero.  This is certainly how things are done by the old folks like Riesz, Titchmarsh, and Zygmund.  (If an integral was improper at infinity, they indicated this explicitly.)  Anyway, I am more comfortable indicating that the integral may be problematic, rather than taking for granted that the calculation works.   silly rabbit  (  talk  ) 00:34, 18 May 2008 (UTC)


 * L∞ and BMO would be a nice addition. But this is even more problematic from a practical point of view since BMO is not (exactly) a function space, so one needs to modify the definition of the transform.   silly rabbit  (  talk  ) 01:09, 18 May 2008 (UTC)


 * Well, I think the general point of view back then regarding Cauchy principal values was to cut symmetrically around whatever singularities there were, but it was a little before my time so it is tough to say. I can say this: Zygmund and Calderón in "Singular integrals and periodic functions" cut off basically the way I did in order to calculate the Fourier transform of the kernel.  (Of course, this paper was dealing with a whole class of kernels.)  They also have a very nice result that the Fourier series of the periodic kernel is the Fourier transform of the corresponding kernel sampled at the integers.  Maybe I will add something about L∞ and BMO, but I would like to get the Riesz transforms mentioned here first. Thenub314 (talk) 02:19, 18 May 2008 (UTC)


 * I've changed the wording slightly. In case the conditional convergence bothers someone (like me), the periodic kernel is available to do it properly.  I have also added a section on BMO in one-dimension.  The Hilbert transform in several variables is probably a better addition than the Riesz transform, which is easily important enough to have its own article.   silly rabbit  (  talk  ) 13:22, 18 May 2008 (UTC)


 * I will take a look at the wording. I am ok with it, it would bother me too, except that it occasionally comes up, particularly in calculating the Fourier transform, so I suppose I have gotten used to it.  What definition of the Hilbert transform in several variables did you have in mind?   The only definitions I am familiar with are the directional Hilbert transform and the Riesz transforms. Thenub314 (talk) 13:45, 18 May 2008 (UTC)


 * I think the standard generalization is the naive one, which basically takes the Hilbert transform with respect to each variable separately:
 * $$Hf(\mathbf{x}) = \lim_{\epsilon\to 0}\frac{1}{\pi^n}\int_{\mathbb{R}^n\backslash B_\epsilon(0)} \frac{f(\mathbf{y})}{\prod_{i=1}^n(x_i-y_i)}d^n\mathbf{y}.$$
 * silly rabbit (  talk  ) 15:05, 18 May 2008 (UTC)
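(On a periodic grid this product transform is just the one-variable multiplier applied in each variable, so it is easy to experiment with. The sketch below assumes NumPy, uses the illustrative name `hilbert_2d`, and checks that the double transform of sin(x)sin(y) is (−cos x)(−cos y) = cos(x)cos(y).)

```python
import numpy as np

def hilbert_2d(u):
    """Product (double) Hilbert transform on a periodic grid: the 1-D
    multiplier -i*sgn applied in each variable separately."""
    n0, n1 = u.shape
    m0 = -1j * np.sign(np.fft.fftfreq(n0))[:, None]
    m1 = -1j * np.sign(np.fft.fftfreq(n1))[None, :]
    return np.fft.ifft2(np.fft.fft2(u) * m0 * m1).real

# H(sin) = -cos in each variable, so the double transform of
# sin(x)sin(y) is (-cos x)(-cos y) = cos(x)cos(y).
x = 2 * np.pi * np.arange(64) / 64
u = np.outer(np.sin(x), np.sin(x))
assert np.allclose(hilbert_2d(u), np.outer(np.cos(x), np.cos(x)), atol=1e-10)
```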


 * I see, I have not had an opportunity to give any deep thought to this operator. It seems it is a product of Hilbert transforms acting in each variable separately.  I mentioned the Riesz transforms because they are the operators that generalize the connection between the Hilbert transform  and conjugate functions. Thenub314 (talk) 19:28, 21 May 2008 (UTC)


 * It's basically the "Cauchy kernel" in several variables, making it seem likely that it might arise in studying edges of wedges. (Though I don't know much about the role, if any, it plays here.)  I think the article should address in more detail the connection with the Riemann-Hilbert problem (1D edge-of-the-wedge), as well as the connection with analytic functions.  For my own part, I'm still considering the proper way to go about this.   siℓℓy rabbit  (  talk  ) 03:50, 22 May 2008 (UTC)

Convolutions.
Just to make sure I am correct, consider the sentence... "However, a priori this may only be defined for u a distribution of compact support. It is possible to work somewhat rigorously with this since compactly supported distributions are dense in Lp."

Shouldn't this be functions of compact support? Distributions of compact support do not sit inside Lp, if they are dense you'd first want to intersect with Lp. Thenub314 (talk) 13:53, 18 May 2008 (UTC)


 * Yes, of course that is what I mean. The discussion is somewhat informal, but I will change it to make it more clear.  silly rabbit  (  talk  ) 14:01, 18 May 2008 (UTC)

Ok, cool, you can allow Schwartz functions if you really want, as you point out in the line above that $$\text{p.v.}\frac{1}{t}$$ is a tempered distribution. Thenub314 (talk) 14:06, 18 May 2008 (UTC)


 * I think compact support is needed in the next paragraph where I use the commutativity of convolution. The Schwartz space is not stable under convolution with tempered distributions, whereas compactly supported distributions are.   silly rabbit  (  talk  ) 14:15, 18 May 2008 (UTC)


 * Comment: I struck out the obviously false statement above.   silly rabbit  (  talk  ) 14:30, 18 May 2008 (UTC)


 * Correction: What I mean is that as long as all but one of the distributions in the convolution are compactly supported, then the convolution is a commutative and associative operation, so that formal manipulations can be performed with it.  silly rabbit  (  talk  ) 14:33, 18 May 2008 (UTC)

Changing notation
Would anyone object to me changing the current notation for the Hilbert transform of a function from $$H(u)(t)$$ to $$H\{u(t)\}$$? This is more consistent with the way Fourier transforms are defined in signal processing (in my experience), and less confusing since it's currently easy to confuse these expressions for a multiplication. The only potential downside is that the Hilbert domain variable isn't immediately obvious, but that isn't really a big deal. -Roger (talk) 16:54, 15 May 2009 (UTC)


 * Well, it's been a year (!) and since I haven't heard any objections I plan to change the transform notation to $$\mathcal{H}\{u(t)\}$$. It'll be in line with the Laplace transform article (though the Fourier transform article uses the notation currently in this article) and the explicit Hilbert domain variable (t) will be dropped. -Roger (talk) 03:38, 1 June 2010 (UTC)

Hilbert spectroscopy
I have added a sentence on the development of Hilbert spectroscopy to the History section, as I could not see a better place. I have added the redlinked topic at Requested articles/Natural sciences because I know next to nothing about it outside of reading the topic on BBC news, a couple of paper abstracts and the Terahertz article. My experience ended at narrowband spectral analysis for specific chemical bonds using infrared. If this is the wrong place, please move it to where it fits better. -84user (talk) 17:00, 20 October 2009 (UTC)

Relation to holomorphic functions
The introduction says: "[..] if f(z) is analytic in the plane Im z > 0 and u(t) = Re f(t + 0·i ) then Im f(t + 0·i ) = H(u)(t) up to an additive constant, provided this Hilbert transform exists." That can't be right, as it implies that both $$0=\Im(\cos(t))$$ and $$\sin(t)=\Im(e^{it})$$ are Hilbert transforms of cos(t) (since cos(z) and e^iz are both holomorphic on the plane). Basically the same claim is repeated under "Conjugate functions". I'm not sure what would be a correct version of the statement since I don't know much about Hilbert transforms (that's why I came here..), and don't have easy access to the cited source. I guess some condition like $$\lim_{y\rightarrow \infty} f(x+iy) < \infty$$ (or perhaps $$=0$$) is missing? --132.68.56.67 (talk) 13:42, 5 January 2012 (UTC)
 * The assumption in the article is that f is in L^p for p<infinity. Sławomir Biały  (talk) 17:29, 5 January 2012 (UTC)

Invariance
In sect 6.6 it says H = i (2P - I), but P has not been mentioned as far as I can see. Presumably the projection onto one component of the splitting mentioned above? Billlion (talk) 12:31, 12 April 2013 (UTC)


 * Yes, but that is already given in that same sentence: "...with P being the orthogonal projection from L2(R) onto (the Hardy space) H2(R)...". Mct mht (talk) 00:08, 13 April 2013 (UTC)
 * Thanks, I must have been working while too tired and missed that! Billlion (talk) 08:27, 14 April 2013 (UTC)
 * No! I added it after your post! Bdmy (talk) 08:35, 14 April 2013 (UTC)


 * Sorry, Billlion, should have checked first. Mct mht (talk) 17:47, 14 April 2013 (UTC)

History
In the history section it says the Hilbert transform may be defined for Lp with 1 ≤ p ≤ infinity. I think this is true, but the following sentence says that the Hilbert transform is bounded on the same range of p; I think the actual statement should exclude p=1 and p=infinity. — Preceding unsigned comment added by Mdn33 (talk • contribs) 22:04, 15 July 2014 (UTC)

TeX vs HTML
Hello Sławomir,

Regarding your 2 reverts, with these annotations:
 * Let's not change to a style that looks worse in most browsers. This edit also has problematic things like scriptstyle being used for inappropriate formatting. Use

instead, if necessary.
 * See WP:MSM. Under default settings (i.e., no mathjax), Wikipedia LaTeX code generates PNG images that do not align correctly with the surrounding text. This is also a problem with mobile interfaces.

I was unable to find your reference to "Under default settings (i.e., no mathjax)", but after consulting WP:MSM, my conclusion is that we are talking about a difference of personal preferences, not policy. And that conclusion is supported at Help:Displaying_a_formula, as you probably know. I in particular dislike the use of H(u)  vs  $$\mathcal{H}(u)$$  or (even better)  $$\mathcal{H}\{u\}$$  to denote the Hilbert transform operator (on function u).

Therefore, I would like to hear some other readers' comments regarding which "looks worse":
 * HTML version
 * LaTex version

That is why I am posting this new section. Thanks in advance for any ensuing comments.

--Bob K (talk) 21:09, 1 March 2015 (UTC)


 * Me, I prefer the consistent use of &lt;math&gt; tags for all math, both inline and display, because consistent display of math symbols is more important than alignment to surrounding text. Mixing LaTeX-derived PNG and HTML italic is an abomination. But that is just a personal preference. The general principle is that the style of formatting should be consistent throughout the article, and because &lt;math&gt;, {{math}}, and HTML styles are all acceptable for inline math, one shouldn't go around converting inline content to one's preferred style without a compelling reason. It looks like the rest of the article is inline HTML format, so we should stick with that in the lead. --Mark viking (talk) 13:24, 2 March 2015 (UTC)
 * I completely disagree, especially with the statement that &lt;math&gt; is acceptable for inline; it is not. By default, &lt;math&gt; generates PNG images which are twice as big as the surrounding text (at default text sizes) and it is unsightly. Since we cannot force MathJax, {{math}} is the only suitable alternative for inline math. It was crafted for this very purpose (as far as simple formulas and variables are concerned), and designed to match the MathJax font (Times-based). HTML also scales with the text, where &lt;math&gt; does not. So my rule of thumb has always been: &lt;math&gt; for display, {{math}} for inline. So let me add a third option:
 * [//en.wikipedia.org/w/index.php?title=Hilbert_transform&oldid=649550431 HTML version with {math}]
 * 16:18, 2 March 2015 (UTC)
 * (Here from a pointer in WT:WPM) My own preference is that in an article such as this one where there are big displayed formulas that can only be adequately rendered in LaTeX, the inline formulas must also be rendered in LaTeX. Otherwise things that are supposed to be the same variable name look different in the two formats, and it confuses the reader who has to figure out whether the typographic difference indicates a semantic difference or whether it's just bad typography. In my opinion "which looks worse" is the wrong question. The meaning of the formula comes first. Only after that is settled should you consider aesthetics. —David Eppstein (talk) 16:30, 2 March 2015 (UTC)
 * Is there a genuine risk of semantic ambiguity when using the math template? As I understand it, the purpose of this template is to make inline math appear in the same font as the displayed &lt;math&gt; equations.  Is that problematic for some reason?   Sławomir Biały  (talk) 00:38, 3 March 2015 (UTC)
 * Bob K's proposed html version does not use the math template and looks very different inline vs displayed. I agree that the math template gets a lot closer, although there are still some minor variations (e.g. pi produces an upright pi symbol, but the LaTeX formatter I'm viewing this with, Chrome/OS X/MathJax, makes the pi italic). —David Eppstein (talk) 00:54, 3 March 2015 (UTC)


 * The glyphs for a math variable "a" in the three versions are : $$a$$,

All three are different fonts in my Chrome browser with the PNG option. In particular, the PNG version is a proper italic a, whereas the template and HTML versions are upright serif and slanted sans-serif roman a's, respectively. Guessing whether an italic "a" is the same object as a roman "a" is one example of the problem. --Mark viking (talk) 01:00, 3 March 2015 (UTC)
 * The math template still requires that you italicize the variable name (or use mvar, which italicizes it for you). So the proper template formatting is $a$ or $a$ (both should look the same). At least on my browser, both the templated version and the LaTeX version are drawn as a glyph with a large loop and a serif, in contrast to the html (and un-italicized math template) which have a small loop and an open curve above it, so they look very different from each other. —David Eppstein (talk) 01:17, 3 March 2015 (UTC)
 * Thank you for the correction. Nonetheless, $a$ and $a$ look very different on my browser, with $a$ : $a$ at least being an italic "a", but with $a$ : $a$ being a slanted roman "a" that is larger than and a different font than the HTML: a. Five different markups for math variable "a" (albeit, one incorrect) result in five different fonts--it's a typographic disaster. --Mark viking (talk) 01:56, 3 March 2015 (UTC)
 * That should not happen... unless you have a different font assigned to . (Mvar uses CSS font-style to display italics, math uses .)  21:42, 5 March 2015 (UTC)


 * To reply to the OP, while it is true that there is a certain degree of personal preference, the MOS does say that "One should not change formatting boldly from LaTeX to HTML, nor from non-LaTeX to LaTeX without a clear improvement." As with all such stylistic preferences on Wikipedia, unless there is a clear reason in policy, we shouldn't change the style.  Regarding the edit summaries, see Rendering math for more details on the documented issues with PNG rendering.   Sławomir Biały  (talk) 14:32, 5 March 2015 (UTC)


 * I'll chip in here, not as an editor, nor as a Wikipedian, but as a general reader – one who expects some standards. Whenever a presentation of something, whether on screen, printed, or something else, doesn't even meet the basic demand of looking reasonably good, this becomes my primary focus – unwillingly – but it is true: hell, these guys don't even get the simplest things right – it looks foul. That foul look then sticks in the head for the rest of the presentation, even though I know intellectually that I should focus on the substance of the content.
 * I don't think I'm particularly unusual in this respect. I may be unusual in the respect that even though I know what the problem is (now as an experienced Wikipedian), I find it hard to read math articles with even the slightest bit of inline LaTeX. All I see is that LaTeX. YohanN7 (talk) 13:38, 5 March 2015 (UTC)

Starting point
In my browsers, $$\mathcal{H}\{u\}$$ (and $$\scriptstyle \mathcal{H}\{u\}$$) is more clearly an operation on a function than is H(u), which looks like a function of variable u. Does anyone agree? If so, perhaps that should be the starting point for this discussion. --Bob K (talk) 06:35, 3 March 2015 (UTC)


 * Using scriptstyle for non-subscript mathematics is forbidden by the MOS. On mobile devices this is especially problematic.  So  $$\scriptstyle \mathcal{H}\{u\}$$ is not a good starting point for discussion.   Sławomir Biały  (talk) 14:32, 5 March 2015 (UTC)


 * I don't think this is really related; the above discussion is primarily concerned with typesetting.  16:11, 5 March 2015 (UTC)


 * I'm sorry, but I don't understand the "non-subscript mathematics" problem. But I am willing to forego scriptstyle, because that is just a distraction.  (I actually viewed it as a concession to the non-Tex people.) So let's focus on Tex vs HTML.  And (alas) I also don't understand the problems with "mobile devices", because I don't use them to read technical Wikipedia articles.  And is that really more important than the fact that $H(u)$ looks like a function of variable u?  It's difficult/impossible to have this discussion when we can't all see what the others are seeing on their screens.  I will happily submit my screen shots... is there a logical place for that? --Bob K (talk) 20:23, 5 March 2015 (UTC)
 * What is wrong with using $H(u)$ ?  21:03, 5 March 2015 (UTC)
 * Functionals are usually written using a calligraphic font, at least in physics. --Mark viking (talk) 21:08, 5 March 2015 (UTC)
 * OK, {math} cannot handle that. But Bob K seemed to focus on the brackets, ie. why $H{u}$ vs. $H(u)$?  21:39, 5 March 2015 (UTC)
 * Why do you say that? My original posting was this:
 * I in particular dislike the use of H(u)  vs  $$\mathcal{H}(u)$$  or (even better)  $$\mathcal{H}\{u\}$$  to denote the Hilbert transform operator (on function u).
 * It indicates that I'd be happy with $$\mathcal{H}(u)$$ and $$\mathcal{H}\{u\}$$ (but not $$H\{u\}$$).
 * --Bob K (talk) 02:27, 6 March 2015 (UTC)
 * I was focused on your above statement: In my browsers, $$\mathcal{H}\{u\}$$ (and $$\scriptstyle \mathcal{H}\{u\}$$) is more clearly an operation on a function than is H(u).   09:17, 6 March 2015 (UTC)
 * Yes, that comment was moved from its original location (in the previous section) during the 11:11, 5 March good-faith edit.
 * --Bob K (talk) 13:13, 6 March 2015 (UTC)

Some of you are missing the point. The issue here isn't the exact correct typesetting of $H(u)$ or whatever. It is the fact that inline, compromises have to be made that clash with perfection. The presence of a single $$\mathcal{H}\{u\}$$ ruins the appearance of the whole article. I am not "pro-HTML". I'd love to see TeX supported 100% and could happily accept whole articles being written in TeX. But as it stands now, we have to mix. Mixing with inline TeX simply ruins the appearance of the whole article on some systems. Mixing with inline HTML does not. It's as simple as that. YohanN7 (talk) 11:39, 6 March 2015 (UTC)

Two broken references
In the Introduction/Notation section, we have references to Brandwood 2003 and Bracewell 2000. No such reference is listed in the references section. Does anyone have any idea what these references are supposed to refer to? There is a Bracewell 1986, possibly a different edition of the same book, but if so that would need to be checked to make sure the page numbers are still correct. —David Eppstein (talk) 21:16, 5 March 2015 (UTC)


 * Google books shows the Bracewell book to have a 2000 edition; the same for the 2003 Brandwood book. Unfortunately I have neither book. --Mark viking (talk) 00:36, 6 March 2015 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified 2 external links on Hilbert transform. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20120205214945/http://w3.msi.vxu.se/exarb/mj_ex.pdf to http://w3.msi.vxu.se/exarb/mj_ex.pdf
 * Added archive https://web.archive.org/web/20120227061333/http://www.geol.ucsb.edu/faculty/toshiro/GS256_Lecture3.pdf to http://www.geol.ucsb.edu/faculty/toshiro/GS256_Lecture3.pdf

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at ).

Cheers.—cyberbot II  Talk to my owner :Online 01:13, 2 July 2016 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Hilbert transform. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20090226231356/http://ccrma-www.stanford.edu/~jos/r320/Analytic_Signals_Hilbert_Transform.html to http://ccrma-www.stanford.edu/~jos/r320/Analytic_Signals_Hilbert_Transform.html

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 22:34, 3 November 2017 (UTC)

Introduction?
It seems to me that the introductory paragraph of this article is not adequate. The first paragraph ought to be almost a dictionary definition, but this is only a very generic description:

In mathematics and in signal processing, the Hilbert transform is a linear operator that takes a function, u(t) of a real variable and produces another function of a real variable H(u)(t).

I'm not qualified to do so, but isn't there some two-sentence summary that describes what the transform actually is? Even if we changed it to "...another function of a real variable H(u)(t), given by the improper integral X," or ", essentially the convolution of u with 1/πt" it would be a better summary.

Maybe that's just not possible. In almost every online citation where people try to define the transform, they actually say what it's like, or what it can be used for. Does it simply defy definition?

Timrprobocom (talk) 19:20, 30 January 2018 (UTC)
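For concreteness, the "convolution of u with 1/πt" description in the suggested wording can be illustrated numerically: sampled on a uniform periodic grid, the transform reduces to the Fourier multiplier −i·sgn(ω) and sends cos to sin. A minimal sketch (assuming NumPy; the discrete grid is only a stand-in for the continuous-time definition, not an exact implementation of it):

```python
import numpy as np

def hilbert_transform(u):
    """Hilbert transform via the Fourier multiplier -i*sgn(omega).

    Assumes u is a real periodic sequence on a uniform grid; this is a
    discrete stand-in for H(u)(t) = (1/pi) p.v. integral u(tau)/(t - tau) dtau.
    """
    N = len(u)
    U = np.fft.fft(u)
    omega = np.fft.fftfreq(N)          # signed frequencies
    U *= -1j * np.sign(omega)          # multiplier -i*sgn(omega)
    return np.fft.ifft(U).real

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.cos(t)
v = hilbert_transform(u)               # expect approximately sin(t)
print(np.max(np.abs(v - np.sin(t))))   # small numerical error
```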


 * Very good point... new eyes on an old problem. I just took a shot at an improvement, though I'm not sure it meets the high bar you set.--Bob K (talk) 22:01, 31 January 2018 (UTC)


 * I don't think the new lead addresses the problem. The Hilbert transform can be "intuitively" understood as multiplying positive-frequency functions by i and negative frequency functions by $$-i$$.  Perhaps this is a suitable basis for a first paragraph.   Sławomir Biały  (talk) 22:28, 31 January 2018 (UTC)


 * I don't dispute that my attempt falls short of a perfect "2-sentence summary". But I would dispute that your suggestion is an improvement.  Not an easy challenge!  Best of luck to y'all.  --Bob K (talk) 00:41, 1 February 2018 (UTC)
 * The new lead moves even further away from saying what it is, and towards saying something else about it. It's like we started our article about bicycles by saying "a bicycle is something you can ride to work" instead of that it's a two-wheeled self-powered vehicle. Yes, you can use the Hilbert transform to create analytic representations, but that's not what it is. —David Eppstein (talk) 00:52, 1 February 2018 (UTC)

I've tried to tweak the latest to include both the integral expression and the phase shift. Sławomir Biały (talk) 02:19, 4 February 2018 (UTC)


 * This might have been asked and answered before, but I'm wondering about the phrase "function, u(t) of a real variable". Why do we need to point out that "t" is real?  And why don't we point out that u is real-valued (which seems more important to me in this context)?
 * --Bob K (talk) 12:41, 4 February 2018 (UTC)


 * Why does u need to be real-valued? The eigenfunctions of the Hilbert transform are complex.   Sławomir Biały  (talk) 14:58, 11 February 2018 (UTC)

What purpose is served by including $$ \exp(i t) $$ and $$ \exp(-i t) $$ in the Hilbert_transform? First of all, the transform of a complex function is obtained by transforming its real and imaginary parts separately and recombining them as a complex function. But also, what is the application we have in mind? --Bob K (talk) 13:58, 11 February 2018 (UTC)


 * These are the eigenfunctions of the transform. As for applications, I'd say it's fundamental for boundary-value problems.  Functions of positive frequency extend to the upper half-plane.  Those of negative frequency extend to the lower half-plane.   Sławomir Biały  (talk) 14:58, 11 February 2018 (UTC)
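Under the multiplier convention −i·sgn(ω) used elsewhere in the article, the eigenfunction claim is easy to check numerically; the opposite sign convention simply swaps the two eigenvalues. A small sketch, assuming NumPy:

```python
import numpy as np

def hilbert_multiplier(u):
    # Apply the multiplier -i*sgn(omega) to a (possibly complex) periodic sequence.
    U = np.fft.fft(u)
    return np.fft.ifft(-1j * np.sign(np.fft.fftfreq(len(u))) * U)

t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
pos = np.exp(1j * t)    # positive-frequency eigenfunction
neg = np.exp(-1j * t)   # negative-frequency eigenfunction
print(np.allclose(hilbert_multiplier(pos), -1j * pos))  # eigenvalue -i
print(np.allclose(hilbert_multiplier(neg),  1j * neg))  # eigenvalue +i
```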

Titchmarsh's theorem section
The following comment was inserted after the section heading of the Titchmarsh's theorem section of this article, by User:24.233.245.156 (talk • contribs) 08:23, 4 May 2018 (UTC): Warning: This entry needs corrections. The results below are due to Paley-Wiener (Fourier transform part) and Marcel Riesz (Hilbert transform part). See e.g. https://link.springer.com/article/10.1140/epjh/e2014-50021-1. --Pipetricker (talk) 09:45, 4 May 2018 (UTC)

Here is a partial list of problems with the section ‘Titchmarsh Theorem’

1.	As it stands now, the anonymous author(s) XX of this Wikipedia entry refer(s) to some results, attributing them to Titchmarsh in a 1948 book. In that case, a reference to the original 1937 edition should have been used. The 2nd edition is the same as the first, save some added (mostly irrelevant today) items in the bibliography.

2.	The review of the 1937 edition in Zentralblatt fuer Mathematik is by Hille. In the recently (added by XX after my intervention) sentence with a citation to King’s book, Hille is (particularly! according to King) the author of some results in ‘Titchmarsh Theorem Section’ (Chapter V in the book). Here is the beginning of Hille’s review of the book: “A couple of introductory chapters bring the formal theory of Fourier, Laplace and Mellin transforms and the convergence and summability theory of the Fourier simple and double integrals. This is followed by a thorough discussion of the theory of Fourier transforms in the class L, L_2 (Plancherel), and L_p (Titchmarsh). In Chapter V we find the theory of conjugate functions and Hilbert transforms for the classes L_p, p=1 or 1< p, where the central theorems are due to M. Riesz.” The rest of the review follows but is here irrelevant.

3.	In the so-called Titchmarsh Theorem, as written by XX, there are three dotted sentences (1) (2) (3) concerning H_2 Hardy space, and the equivalence of all three is claimed. Sentence (1) has no meaning, because it does not say in what convergence the limit is meant to be. The intended meaning is almost everywhere convergence, but then (1), as written, becomes false. The sentence (2) is obviously false.

4.	Both (1) and (2) can, of course, be corrected. Let us assume we did it. Then (1), after a minimal rephrasing, contains what is called ‘the upper half-plane analogue of Fatou’s theorem on radial limits’. Knowing that we are in H_2 and that (1) holds, the equivalence of (1) and (2) is due to Marcel Riesz and is attributed to Marcel Riesz in every book on the subject written by a mathematician. Similarly, the equivalence of (1) and (3) is then a theorem due to Paley and Wiener (Theorem 5 in their 1934 book) and, strictly speaking, has nothing to do with the Hilbert transform.

5.	The purpose of citation to books (as opposed to papers) in the Wikipedia articles is (or should be) to facilitate further study by the reader, who might be, for instance, interested in a proof. Here is Titchmarsh’s proof of the equivalence of (1) and (2): “The equivalence of (1) and (2) follows at once from the above theorems”. We are on the page 128 of the book!

6.	This is not the end of Titchmarsh’s pedagogical ‘achievements’. The title of Chapter V is “Conjugate integrals, Hilbert Transforms”. He never defines Hilbert transform. There are no ‘conjugate integrals’ in the text. He speaks about ‘conjugate functions’ instead, but there is no definition either. He never says what ‘regular analytic function’ (a term he uses for holomorphic function) is. There is quite a bit more, but I think it is already clear that this is not a book to be used for citations.

7.	So, corrections are needed. But before doing anything, I am awaiting further comments. Math45-oxford (talk) 15:20, 10 June 2018 (UTC)


 * I agree with the above. (I may even be the mysterious person XXX, for all I know ;-)  Sławomir Biały  (talk) 17:34, 10 June 2018 (UTC)

 Riemann-Hilbert problem. I also have a question concerning the Riemann-Hilbert problem section. What is it supposed to mean that $$F_+$$ and $$F_-$$ solve the problem ‘formally’? Further, let us assume the function f indeed solves the problem (i.e. really, as opposed to formally). Where precisely does Pandey state the quoted result in Chapter 2? I’d like a precise reference because I just read that chapter twice and cannot find the result. Math45-oxford (talk) 21:17, 14 June 2018 (UTC)


 * Pandey is not a very good book. I think a better reference could be found, and precise conditions given.   Sławomir Biały  (talk) 11:37, 15 June 2018 (UTC)

Precise conditions would only be about a solution of the Riemann-Hilbert problem. But I already assumed that $$F_\pm$$ form the solution. Where is the statement about the Hilbert transform? I do not believe it is true, so if I am right it might be rather difficult to find a reference (if I am wrong, the author XXX must have seen it in Pandey – so, again, where is it in Pandey?). By the way, (Titchmarsh’s version of) the Kolmogorov theorem is not correctly copied from Titchmarsh’s book and, as stated, is wrong. The previous theorem, the one for $$p>1$$, is also not correct with our definition of the Hilbert transform. Titchmarsh had the minus sign, because his Hilbert transform (that he never openly defined) is what for us is the ‘inverse Hilbert transform’. Math45-oxford (talk) 16:24, 15 June 2018 (UTC)

Some more remarks about '''Section 8.2. Riemann-Hilbert problem'''. It is stated that if $$ F_+ - F_- = f $$, where $$ F_+ $$  is holomorphic in the upper half-plane and $$F_- $$ is holomorphic in the lower half-plane, then FORMALLY  $$ Hf(x) = (1/i) [F_+(x)+  F_-(x)] $$. The reference is to Pandey’s book Chapter 2. Chapter 2 in Pandey’s book gives a sketch of the ‘classical’ (i.e. no distributions) solution of the HP. This is done before Hilbert transform was even introduced. So it looks like the author of this entry might have thought about a reference to Section 6.2 in Pandey’s book. This, however, would not be very helpful, because Pandey’s discussion there is very imprecise (with wrong references to his own papers and to non-existing formulas, in particular). But the main issue is that "FORMALLY" presumably refers to some distributional equality. This would need to be stated in a precise way, otherwise the claimed equality is meaningless. Finally, even assuming things are corrected: Hilbert Problem is not about Hilbert Transform and has its own Wikipedia entry. Is this Section 8.2 needed at all? Math45-oxford (talk) — Preceding unsigned comment added by Math45-oxford (talk • contribs) 19:47, 22 July 2018 (UTC)

'''Section 5. Domain of definition'''.

1. In the two formulas there, the minus sign in front of the integrals is an effect of Zygmund’s use of –H as the definition of the Hilbert transform and should be corrected to +. Further, what would be the reason for using this formula, instead of the defining one, at all? By the way, the formula is also used (and should be corrected) in the Introduction. There is no clear reason to state it in the Introduction either, because that formula is not more explicit than the defining one, once one understands what the principal value is.

2. The almost everywhere convergence happens to be true. As for the norm convergence, I do not know. At any rate, I did not see it stated for the norm convergence this way anywhere. Now, why would the claimed convergences be a consequence of Titchmarsh Theorem? What Titchmarsh Theorem? A few subsections later a so-called Titchmarsh Theorem is stated. It refers to p=2 only and does not provide the claimed implications even in the case of p=2. Math45-oxford (talk) 20:41, 22 July 2018 (UTC)

Incorrect Fourier multiplier given the current definition of the transform
The Hilbert transform can be defined in various ways, one of which leads to a change in sign of $$ H(u)(t)$$. The current definition with $$ \frac{u(\tau)}{t-\tau}\,d\tau $$ actually leads to a Fourier multiplier of $$ i\,\text{sgn}(\omega)$$, not $$ -i\,\text{sgn}(\omega)$$. At least this is my understanding from Simon's "A comprehensive course in analysis." I do not know if it is easier to change the original definition or the Fourier multiplier later, since I think the original definition is actually the nonstandard one and the Fourier multiplier given here is the standard multiplier.

Jupsal (talk) 00:36, 26 December 2018 (UTC)
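One convention-independent way to probe the sign question is to evaluate the principal-value integral itself numerically: with the kernel u(τ)/(t−τ), the transform of cos comes out as sin (not −sin), whatever sign one attaches to the Fourier multiplier. A crude quadrature sketch, assuming NumPy; the grid spacing and truncation length are arbitrary illustrative choices:

```python
import numpy as np

# Evaluate (1/pi) * p.v. integral of cos(tau)/(t0 - tau) d tau at t0 = pi/2
# by quadrature on a grid symmetric about the singularity: the k = 0 sample
# is the singular point and is skipped, and the symmetry cancels the
# leading-order error of that omission.
t0 = np.pi / 2
h = 0.01                                 # grid spacing (arbitrary choice)
k = np.arange(-100_000, 100_001)         # truncation length (arbitrary choice)
k = k[k != 0]
tau = t0 + h * k
val = np.sum(np.cos(tau) / (t0 - tau)) * h / np.pi
print(val)                               # close to sin(pi/2) = 1
```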

delete a sentence from lede paragraph?
Maybe it's just my ignorance, but the sentence that doesn't seem very helpful is:

Specifically, the Hilbert transform of u is its harmonic conjugate $v$, a function of the real variable t such that the complex-valued function $H(u)$ admits an extension to the complex upper half-plane satisfying the Cauchy–Riemann equations.

Perhaps it can be re-phrased or re-located to a more mathematical section.

--Bob K (talk) 20:42, 22 January 2021 (UTC)

Discrete Hilbert Transform
Maybe someone knowledgable in the Discrete Hilbert Transform could rework the section?

- There is one reference to MATLAB and two references to the same master thesis and nothing else

- FIR filter types are used and partially defined but no reference is given as to where these definitions come from or why this type classification is used or relevant

- a FIR approximation is mentioned and shown in Figure 1 but no reference is given where this is used or who uses this

- hilb(65) in Figure 2 is not defined

- the however bulletpoints seem to be blanket statements or specific to certain types of FIR, but no further sources or discussion is provided

- half of the section is about implementation details in MATLAB? 31.10.205.218 (talk) 04:31, 5 June 2024 (UTC)
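On the FIR points above: the approximation in question is presumably the standard windowed truncation of the ideal discrete Hilbert transformer, whose impulse response is h[n] = 2/(πn) for odd n and 0 for even n. A minimal sketch in Python rather than MATLAB (the 31-tap length and Hamming window are arbitrary illustrative choices, not the article's specific design):

```python
import numpy as np

def fir_hilbert(num_taps):
    """Windowed ideal impulse response of a discrete Hilbert transformer.

    Ideal response: h[n] = 2/(pi*n) for odd n, 0 for even n (a type-III
    FIR when num_taps is odd). A Hamming window tapers the truncation.
    """
    assert num_taps % 2 == 1
    n = np.arange(num_taps) - num_taps // 2   # offsets from the center tap
    h = np.zeros(num_taps)
    odd = n % 2 != 0
    h[odd] = 2.0 / (np.pi * n[odd])
    return h * np.hamming(num_taps)

h = fir_hilbert(31)
# Filtering a cosine should yield (approximately) a sine, away from the edges.
t = np.arange(400) * 1.0                      # 1 rad/sample test tone
y = np.convolve(np.cos(t), h, mode="same")
```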