Talk:Spectral theorem

I think the "Fundamental theorem of Linear Algebra" is normally taken to be the statement that connects kernel, row and column spaces of a matrix. AxelBoldt 02:25 Jan 24, 2003 (UTC)

Added a statement of the spectral theorem for bounded self-adjoint operators. Hopefully this should leave this article in a more-or-less definitive state. CSTAR 21:09, 13 May 2004 (UTC)

Request: The section on the Spectral theorem for unbounded operators is very vague - it more or less just mentions its existence. Since for many applications it is precisely unbounded (differential) operators that are of interest, it would be very useful to at the very least state the Spectral theorem for unbounded self-adjoint operators.

Examples
The section "functional analysis" takes some pains to point out that the shift operator and a scaling operator on L^2[0,1] have no eigenvalues, and then immediately states a theorem that, nonetheless, these are unitarily equivalent to some multiplication operator on some measure space. It sure would be nice to have a detailed example for these. linas 05:48, 19 March 2006 (UTC)
 * A good example would be kinda hard to come by. In the proofs I've seen for this theorem, the multiplication operator and the unitary equivalence are defined quite abstractly, so the constructions in the proofs don't lend themselves to practical calculations. I'm not aware of any method for finding "nice" multiplication operators and unitary equivalences that works in any generality.


 * However, for the scaling operator you mention (I presume you mean the operator A from the article, with $$[A \varphi ](t) = t \varphi (t)$$), the multiplication and unitary operators are quite simple: The multiplication operator is just the scaling operator itself, and the unitary operator is just the identity operator. This operator has a spectrum, but no eigenvalues. The spectrum is just $$[0,\ 1]$$, the range of f(t)=t. However, none of these spectral values are eigenvalues, as the eigenvectors they'd lead to would have to be functions that were zero except at one point (for an eigenvalue $$\lambda$$, that point would be $$f^{-1}(\lambda ) = \lambda$$). However, such functions would be zero almost everywhere, so are equivalent to the zero function, so can't be eigenvalues.


 * I hope this explains it. James pic 10:02, 3 July 2007 (UTC)
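To spell out the calculation for later readers, here is a condensed sketch of the argument above, using the operator A from the thread:

```latex
% [A\varphi](t) = t\,\varphi(t) on L^2[0,1]: spectrum [0,1], no eigenvalues.
A\varphi = \lambda\varphi
  \iff (t - \lambda)\,\varphi(t) = 0 \ \text{for a.e. } t \in [0,1]
  \iff \varphi(t) = 0 \ \text{for a.e. } t \neq \lambda
  \iff \varphi = 0 \ \text{in } L^2[0,1],
% so no \lambda has an eigenvector. Yet for \lambda \in [0,1],
% (A - \lambda)^{-1} would be multiplication by 1/(t - \lambda),
% which is unbounded, so every such \lambda lies in the spectrum.
```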

Recent edit
The point is that the sentence


 * Then&mdash;using the fact that $$A x = \lambda x$$ iff $$\overline{A} \overline{x} = \overline{\lambda} \overline{x}$$

is completely irrelevant. Please remove it.--CSTAR 21:12, 25 April 2006 (UTC)

the following was written immediately prior to the appearance of the above comment:

To CSTAR and any others who are interested: Archelon does not wish to argue about the notation for adjointness of linear operators (or, more accurately, Hermitian conjugation) (the notation was changed for the sake of the appearance when rendered, but this[*] is a notoriously volatile issue), but has returned the other things CSTAR removed from the proof that the eigenvalues are real (only some of which were originally added by Archelon, incidentally). Archelon 21:19, 25 April 2006 (UTC)

the following was written immediately thereafter:

The sentence is not irrelevant; it is explanatory. Also, a perusal of the page history will reveal that Archelon is not responsible for it (merely in favour of its retention). Archelon's advice to CSTAR: If you find the sentence intolerable, remove it yourself.

[*] (i.e., the latter) Archelon 21:25, 25 April 2006 (UTC)


 * Message to archelon (who seems to refer to her/himself in the third person). I did. --CSTAR 21:41, 25 April 2006 (UTC)


 * Message from Archelon (which does indeed refer to itself in the third person). As you like.  Archelon 22:55, 25 April 2006 (UTC)

Proof
Can anyone explain this line, it wasn't entirely clear for me:

This is finite-dimensional, and A has the property that it maps every vector w in K into K

How do we know that anything in K is mapped into K? Why can't it be mapped into Span(e)? Can anyone clarify, please? 216.7.201.43 13:47, 10 August 2006 (UTC)


 * Is that paragraph clearer now? --CSTAR 14:59, 10 August 2006 (UTC)


 * Perfect, that makes sense now. It was hard to follow before. 216.7.201.43 15:32, 10 August 2006 (UTC)
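For later readers, the step in question can be spelled out (a sketch; e denotes the eigenvector with Ae = λe, and K = span(e)^⊥):

```latex
% A self-adjoint, A e = \lambda e, K = \operatorname{span}(e)^{\perp}.
% For any w \in K:
\langle A w, e \rangle
  = \langle w, A e \rangle
  = \langle w, \lambda e \rangle
  = \overline{\lambda}\,\langle w, e \rangle = 0,
% so A w \perp e, i.e. A maps K into K and never into \operatorname{span}(e).
```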

Target audience
This article is not easy to read for someone who's just interested in ordinary real or complex matrices and has never heard of bra-ket notation (why use it here, this article is not about quantum physics?) or Hilbert spaces. While generality is a good thing, wouldn't it be good to also give something comprehensible to the layperson? Even the singular value decomposition article is more readable, and it deals with a generalization, at least if you're only interested in real or complex matrices. -- Coffee2theorems 00:15, 18 July 2007 (UTC)

I agree. The bra-ket notation makes this page unnecessarily difficult to understand.


 * Certainly, the statement of the theorem should be possible without bracket notation (and incidentally, I'm not sure that the $$\langle \cdot \vert \cdot \rangle$$ notation from physics is appropriate in a maths article - in maths literature $$\langle \cdot, \cdot \rangle$$ is more common), so I've given an alternative characterisation of the theorem in more elementary terms.


 * However, the proof really does need bracket notation - bracket notation is the best means we have of working with orthonormal bases. It would be possible to rewrite the proof substituting $$\langle x \vert y \rangle$$ for x*y, but that would make the proof messier, and most likely make the article less comprehensible to more experienced mathematicians.


 * I guess you could put a note at the beginning of the proof for the mathematical layman explaining that this is all the bracket notation means, but the proof is somewhat of a detour from the article as it is - it might make sense to move the proof to Spectral theorem/Proofs, as per WikiProject_Mathematics/Proofs. This also has the added benefit of moving the more technical aspects of the article out of the way - so people don't have to see them if they don't want to. James pic 09:54, 17 August 2007 (UTC)

James said: "and most likely make the article less comprehensible to more experienced mathematicians"

Ummm, is that who the article is for?

At least give the example from basic linear algebra with real matrices. I just took linear algebra and this page was useless when we covered the spectral decomposition. I don't have time now to change it but just for future reference: this page needs heavy work. —Preceding unsigned comment added by 152.16.225.228 (talk) 15:54, 20 February 2008 (UTC)
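Regarding the request above for a basic example: the real-symmetric case can be illustrated numerically (a sketch using NumPy; the particular matrix is just an illustrative choice):

```python
import numpy as np

# A real symmetric matrix (illustrative choice).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns real eigenvalues and an orthonormal basis of eigenvectors.
eigenvalues, Q = np.linalg.eigh(A)

# Spectral theorem: A = Q diag(lambda) Q^T with Q orthogonal.
reconstructed = Q @ np.diag(eigenvalues) @ Q.T

assert np.allclose(reconstructed, A)    # A is recovered exactly
assert np.allclose(Q.T @ Q, np.eye(2))  # the eigenvectors are orthonormal
```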

Generalization to non-symmetric matrices
I'm looking at this section and it seems to have some shortcomings. For one, it talks about non-symmetric matrices, when non-Hermitian or non-normal matrices would be more relevant. It also talks about orthonormal systems of eigenvectors, which would only be possible for normal matrices, such as Hermitian or real-symmetric matrices - the fact that this doesn't hold for non-normal matrices is what makes this theory interesting, and differentiates it from the more general operator-valued theory. This also makes much of the section wrong.

I noticed a recently created article called Eigendecomposition (matrix), which seems to address these issues in greater detail, and with fewer of these fairly elementary mistakes. I'm tempted to replace this section with a See:Eigendecomposition (matrix), or at least replace the section with a condensed version of the article, and a See main article. Thoughts? James pic 09:58, 11 October 2007 (UTC)


 * I agree that that section has problems; so does the article Eigendecomposition (matrix). It's not clear what the section is saying. For instance, I don't see where it talks about orthonormal systems of eigenvectors, as you do. It seems to me it states that eigenvectors of a matrix and its transpose are orthogonal in general (which is not true). I suggest it be removed altogether.


 * in any case, it's not really a generalization of the spectral theorem, which should be concerned with decomposing an operator or matrix into parts. Mct mht 13:44, 11 October 2007 (UTC)


 * Ahh, yes. I should really have said orthonormal bases rather than orthonormal systems.


 * You're probably right about removing it altogether. If I'm reading the section right, it's trying to say roughly the same things the Eigendecomposition article says (although not succeeding). This is a generalisation of the matrix results discussed at the beginning of this article, but neither a special case nor a generalisation of the spectral theorem itself.


 * I feel I ought to stick up for the Eigendecomposition article here though. The article has some style and structure issues, certainly, but it seems to be fairly accurate for a newly created article, and it deals with a notable topic that's not covered elsewhere. James pic 15:22, 12 October 2007 (UTC)
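To illustrate the point of this thread numerically (that a non-normal matrix can be diagonalizable without an orthonormal eigenbasis), here is a sketch; the matrix is an arbitrary non-normal example:

```python
import numpy as np

# A non-normal matrix: A A^T != A^T A.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
assert not np.allclose(A @ A.T, A.T @ A)

# It is still diagonalizable (distinct eigenvalues 1 and 2) ...
eigenvalues, V = np.linalg.eig(A)
assert np.allclose(sorted(eigenvalues.real), [1.0, 2.0])

# ... but its eigenvectors are not orthogonal, so no orthonormal
# eigenbasis exists and the spectral theorem does not apply.
v1, v2 = V[:, 0], V[:, 1]
assert abs(np.dot(v1, v2)) > 0.5
```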

Normal matrix - a detail I do not understand
Hi,

I am trying to understand the spectral theorem for normal matrices. I understand the thing about the Schur decomposition and that using that on the normality criterion implies that the upper triangular matrix in the Schur decomposed factorization of the normal matrix has to obey the relation $$\mathbf{TT}^* = \mathbf{T}^*\mathbf{T}$$, i.e., the upper triangular matrix has to be normal as well. Then the article says: Therefore T must be diagonal (the whole point of the theorem). My problem is that I do not see very easily why that is so. I can easily understand that if it is diagonal the normality criterion is met, but I do not understand that it has to be diagonal. To try to understand it I then wrote down the expression for the diagonal elements of $$(\mathbf{TT}^*)_{i,i}$$ and found that this implies that
 * $$\sum_{k=1}^n\left|T_{i,k}\right|^2 = \sum_{k=1}^n\left|T_{k,i}\right|^2$$

where the span of the sums can be further reduced by taking advantage of the fact that T is upper triangular.

However, it is still not evident to me why T has to be diagonal. Evidently, I am missing some (probably trivial) point in the line of argumentation here. Could this section be elaborated a bit in the article to make it more understandable? -- Slaunger (talk) 13:57, 12 February 2009 (UTC)


 * if you write down an arbitrary, say, 2 by 2 upper-triangular matrix T and impose the condition T*T = TT*, you will see immediately that the off-diagonal entry must be zero. Mct mht (talk) 04:50, 13 February 2009 (UTC)


 * Actually, I did just that as well prior to posting, and I do see that clearly for the n=2 case. I just find that somewhat primitive, and figured there should be a smarter argument valid for the general n-dimensional case. -- Slaunger (talk) 10:57, 13 February 2009 (UTC)


 * the same 2 by 2 argument, applied to an appropriate 2 by 2 operator matrix, yields the general case. Mct mht (talk) 10:48, 15 February 2009 (UTC)


 * which I think is becoming such a complicated argument that only a small fraction of readers would get it by themselves, which makes me think it could be a good thing to extend the explanation of this crucial point in the line of argumentation;-) I would rather not do it myself though, as I am not a native writer and my linear algebra is rusty... --Slaunger (talk) 20:10, 16 February 2009 (UTC)
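For future readers of this thread, the row-by-row induction alluded to above can be written out (a sketch):

```latex
% T upper triangular and normal. Compare the (1,1) entries of TT^* and T^*T
% (T lower-triangular entries vanish, so the second sum has one term):
(TT^*)_{11} = \sum_{k=1}^{n} |T_{1k}|^2, \qquad (T^*T)_{11} = |T_{11}|^2 .
% Equality forces T_{1k} = 0 for all k > 1, so the first row is diagonal.
% Deleting the first row and column leaves a smaller upper-triangular
% normal matrix; induction on n then gives that T is diagonal.
```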

Why the name?
The article doesn't say why it's called the spectral theorem. ᛭ LokiClock (talk) 02:17, 5 September 2011 (UTC)

"Principal axes of a Matrix"?
Principal axes of a Matrix redirects here from the principle axis disambiguation link. But principal axes aren't mentioned anywhere? Warrickball (talk) 09:04, 21 May 2012 (UTC)

Untidy notation
The notation is highly untidy (at least since not only my try to unify the notation but also previous editors' work was undone):
 * HTML-entities
 * Unicode-characters
 * Wiki-formatting

both standalone and in diverse mixtures.

I would like to unify this, since the technology for appropriate display (via MathJax) is available to everyone: It is not 2004 anymore&hellip;

Anyone willing to join in? --&#42;thing goes (talk) 22:37, 14 May 2015 (UTC)


 * Inline math is now uniform.


 * It is not as simple as saying it displays okay for you. YohanN7 (talk) 22:46, 14 May 2015 (UTC)
 * Who in the year 2015 cannot make use of MathJax?
 * Besides: The notation is _still_ not uniform, e.g. A* vs. A*, HTML-entities, Unicode-chars, &hellip;--&#42;thing goes (talk) 23:02, 14 May 2015 (UTC)
 * I did put a notice at Wikipedia talk:WikiProject Mathematics. This is a recurring story. YohanN7 (talk) 23:12, 14 May 2015 (UTC)
 * I'm afraid that MathJax is unacceptable to some of us – I disabled it out of frustration because it was slowing the rendering of many pages unacceptably, and at times failing to render at all. So until it is actually set as the default display, do not assume that "since the technology for appropriate display (via MathJax) is available for everyone".  —Quondum 00:11, 15 May 2015 (UTC)
 * This article is already a marvel of uniformity, only minor tweaks to be made (e.g. superscripted star). Aside from an integral sign, it is a candidate for the use of HTML everywhere (math). —Quondum 00:18, 15 May 2015 (UTC)

As well as the misplaced asterisks, we have mismatched fonts (the inline capital letters are more slanted than the display-math ones for me) and very bad spacing on the inline matrix products (much more spaced out than the displayed ones). It is only a "marvel of uniformity" if you're myopic. —David Eppstein (talk) 00:46, 15 May 2015 (UTC)
 * Well, conceded, uniformity of two styles: inline and standalone separately. The Template:math formatting does produce unduly strongly sloping italics, but maybe that is a separate issue? Until the inline/standalone norm is changed, this one is about as uniform as can be given that dichotomy, not so?  This article is not the place to fight that battle. —Quondum 03:07, 15 May 2015 (UTC)


 * I'd be happy to see Latex everywhere, including for ordinary text. But now we don't even have one version of TeX for mathematical formulas, we have three versions of PigTeX, none of which is satisfactory. MathJax, for instance, is unbearably slow, I refuse to use it. One would believe we were in 1994, not 2015. How is it even possible to produce such crappy software with today's hardware technology? Besides, it is endowed with bugs that PNG rendering does not have. MathML is better and faster, but still buggy. People will make different choices (if they ever choose&mdash;if they don't, they get PNG). As long as this is the case, inline LaTeX should be avoided because it totally $${\mathbf{RUINS}}$$ the appearance with PNG rendering; it is not merely a matter of something sloping a gazillionth of an inch too much or too little. YohanN7 (talk) 08:13, 15 May 2015 (UTC)


 * I think we ought to be forward-looking with this. People's ability to view MathJax or some future equivalent will improve over time, while the wiki text will stay the same or have to be tediously edited. More TeX is better in the long run. And by the way, I'm using MathML and find it perfectly pleasant. --Sammy1339 (talk) 00:20, 16 May 2015 (UTC)


 * I respectfully disagree with this argument. The vast majority of viewers are not logged-in users, and they will be seeing the default PNG. We have been unable to move from that for many years, with no prospect of this changing in the next few years – essentially indefinitely. The Wikimedia Foundation has firmly indicated that no resources will be allocated to this change. The question of conversion (to which we will already have to find a solution once the default display changes) hardly counts as an argument at this stage; it is also a much smaller problem (mostly, bots could tackle this). —Quondum 01:18, 16 May 2015 (UTC)


 * Just out of curiosity, what resources would be needed to simply change the default option to MathML? --Sammy1339 (talk) 12:43, 16 May 2015 (UTC)


 * Neither of your respective favorite options (MathJax, MathML) works universally, while PNG does to some extent. They are also both buggy (besides MathJax being unacceptably slow) and, like PNG, fail to align properly with surrounding text (unlike HTML) and fail to acquire the correct size. This "status quo" (sort of) has been maintained for the last few years and, as it looks, will be maintained for the coming few years. It is pointless to debate this in the context of a single article. YohanN7 (talk) 13:48, 16 May 2015 (UTC)


 * Where would be the place? It's my understanding that MathML reverts to PNG in cases where it doesn't work, so there's no loss in setting it as the default. Unless I'm wrong about that, it seems like a change worth considering - it would mostly end this recurring debate, and certainly improve the appearance of math articles for most users. --Sammy1339 (talk) 14:28, 16 May 2015 (UTC)


 * The first place would be here. And no, MathML does not revert to PNG when it has bugs. It displays either incorrectly, like e.g. here, or an error message, like e.g. here. Those are just examples from a single article. Your "most users" argument doesn't work, especially since you have to augment it with "most articles".
 * It is also an interesting exercise to display that same page (or any sizable article) using MathJax. Unless you have the computer power of Google, it takes forever to render. Face it, both the MathML and the MathJax implementations used are poopypants software that cannot be relied on as a default. YohanN7 (talk) 14:59, 16 May 2015 (UTC)

Definition of "unitary operator"
This article refers to:
 * a unitary operator $U:H → L^{2}_{μ}(X)$

But the article titled unitary operator says a unitary operator is a certain kind of operator from a space into itself. Should another definition be used here instead? If so, it should be stated here. 160.94.28.231 (talk) 01:20, 30 April 2016 (UTC)
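A note for later readers: many functional-analysis texts extend the definition to maps between two different Hilbert spaces; under that convention, a unitary U : H → K is a surjective isometry (a sketch of the equivalent conditions):

```latex
U : H \to K \ \text{unitary}
  \iff \langle U x, U y \rangle_{K} = \langle x, y \rangle_{H}
       \ \text{for all } x, y \in H, \ \text{and } U \ \text{surjective}
  \iff U^{*} U = I_{H} \ \text{and} \ U U^{*} = I_{K} .
```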

Hermitian condition and hermitian conjugate
The article claims the following: "An equivalent condition is that A∗ = A, where A∗ is the hermitian conjugate of A", referring to the conjugate of the operator A.

Is this in general true? With the usual inner product it indeed is, but for any? Is the definition of a scalar product so tight that really ANY inner product has the hermitian conjugate as the conjugate operator? — Preceding unsigned comment added by 77.0.166.225 (talk) 02:13, 27 January 2019 (UTC)
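A note for later readers: the adjoint does depend on the inner product. With a weighted inner product $$\langle x, y\rangle_M = x^* M y$$ (M positive definite), the adjoint of A is $$M^{-1}A^*M$$, which in general differs from the plain conjugate transpose. A numerical sketch (the matrices are illustrative choices):

```python
import numpy as np

# Weighted inner product <x, y>_M = x^H M y, with M positive definite.
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# Adjoint with respect to <.,.>_M: solve <Ax, y>_M = <x, By>_M for B.
# x^H A^H M y = x^H M B y for all x, y  =>  B = M^{-1} A^H M.
B = np.linalg.inv(M) @ A.conj().T @ M

# The M-adjoint differs from the plain conjugate transpose:
assert not np.allclose(B, A.conj().T)

# Sanity check of the defining identity on random real vectors:
rng = np.random.default_rng(0)
x, y = rng.standard_normal(2), rng.standard_normal(2)
assert np.isclose((A @ x) @ M @ y, x @ M @ (B @ y))
```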

What about the spectral theorem for normal operators
I thought that perhaps the most comprehensive spectral theorem was the one that applies to normal operators, at least on a Hilbert space. But that particular version of the spectral theorem seems to be missing from the article (except in the finite-dimensional case). 50.205.142.50 (talk) 20:30, 13 March 2020 (UTC)

Normal operators
I hope someone knowledgeable about the subject can extend this excellent article to specifically include bounded normal operators on an infinite-dimensional Hilbert space.