Talk:Definite matrix

Old remarks
The statements about the complex/real case were incorrect. Here's a simple counterexample:

$$\begin{bmatrix} 1 & 2i \\ -2i & 1 \end{bmatrix}$$

This Hermitian matrix has $x^T A x > 0$ for all real $x$, but not for $x = [1, i]$.
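The counterexample above can be checked numerically. A small sketch (numpy is my choice here, not part of the discussion; the vector x = (1, i) is the one given above, the real test vector is an arbitrary illustration):

```python
import numpy as np

A = np.array([[1, 2j], [-2j, 1]])   # the Hermitian matrix from the counterexample

# For real x, x^T A x = x1^2 + x2^2 > 0: the imaginary parts cancel.
x_real = np.array([3.0, -4.0])      # arbitrary nonzero real vector
real_form = x_real @ A @ x_real     # = 25 (+0j)

# For the complex vector x = (1, i), the Hermitian form x* A x is negative.
x_cplx = np.array([1.0, 1j])
herm_form = np.conj(x_cplx) @ A @ x_cplx   # = -2
```

(The eigenvalues of A are 3 and -1, which is another way to see that A is not positive definite.)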

Would it make any difference if x were in C^n?

shd: In that case it should be something like x*Mx, where * refers to the complex conjugate transpose


 * I added that to the article. AxelBoldt

Does this encompass all positive-definite matrices? The Mathworld page leads me to believe that there are positive-definite matrices that are not Hermitian. From the page: 'A necessary and sufficient condition for a complex matrix to be positive definite is that the Hermitian part ... be positive definite.' inferno

If you think of x^T M x as a quadratic form in the vector x, and write the square matrix M as the sum of a symmetric part M1 and an antisymmetric part M2, you see that the quadratic form is independent of M2. So if you start with a symmetric positive definite M1, you can add any M2 you want and still be positive definite, if that's useful. The same thing works if you split into a Hermitian and an anti-Hermitian part, for complex x, introducing complex conjugation in the form.

Charles Matthews 20:11, 16 Jun 2004 (UTC)
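Charles's point is easy to verify numerically; a quick sketch (numpy; the random matrix and vector are just illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))   # an arbitrary real square matrix
M1 = (M + M.T) / 2                # symmetric part
M2 = (M - M.T) / 2                # antisymmetric part

x = rng.standard_normal(4)
skew_form = x @ M2 @ x            # always 0: x^T M2 x = -(x^T M2 x)
# hence x^T M x == x^T M1 x for every x
```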

Actually, from the point of view of applications, the restriction of the definition to symmetric matrices (in the real case) is unfortunate, since for instance discretizations of advection problems yield non-symmetric positive definite matrices. I'll change this unless I receive serious complaints. Unfortunately, the article Normal matrix becomes wrong then. -- Guido Kanschat 20:15, 3 November 2005 (UTC)


 * Why does a PD matrix A need to be Hermitian (or symmetric in real case)? This is very confusing and serious. eakbas 04:24, 2 October 2007 (UTC)

This page is linked to from Niemeier lattices. The usage there is a "positive definite unimodular lattice". So we should probably explain here what "positive definite" means in the context of a lattice. I don't know the answer. A5 06:02, 20 Jun 2005 (UTC)


 * I'm afraid I can't find the link. The first sentence in Niemeier lattices links to positive definite, which redirects to definite bilinear form. However, the best explanation of "positive definite" in this context can be found at Lattice (group). -- Jitse Niesen 12:18, 20 Jun 2005 (UTC)

English
This is all very nice, and I'm sure a "rigorous definition" is great for some people. However, for someone who just got redirected from positive-semidefinite and doesn't have a degree in pure math, this is not very useful.

Agreed - can someone with a better understanding of the subject please try to give a more intuitive description somewhere? 70.93.249.46


 * I may be a mathematician, but I have no idea what could be made clearer. Of course, you'll likely encounter only real matrices for which the definition is simply that B is symmetric and x^T B x >= 0 for all vectors x. Geometrically, the set of n x n positive semidefinite matrices is a cone (you can add two together and multiply them with a nonnegative number), but that's rather trivial, and the definition of the barrier function that makes them useful in optimization is not helpful either. What sort of "useful" facts would you like to learn? Wandrer2 16:35, 2 March 2006 (UTC)
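The cone property mentioned here can be illustrated with a short numerical sketch (numpy; the random matrices are illustrative assumptions):

```python
import numpy as np

def random_psd(n, seed):
    # B^T B is always symmetric positive semidefinite
    B = np.random.default_rng(seed).standard_normal((n, n))
    return B.T @ B

A = random_psd(3, 1)
C = random_psd(3, 2)

# Cone property: sums and nonnegative multiples of PSD matrices are PSD,
# i.e. their eigenvalues stay >= 0 (up to roundoff).
eigs_sum = np.linalg.eigvalsh(A + C)
eigs_scaled = np.linalg.eigvalsh(2.5 * A)
```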


 * I've just expanded property 2 in the definition to give a spectral intuition about how positive definite M are special within the set of Hermitian operators. For me, this intuition is fundamental and was not quite there before.  Hope it helps.  Eclecticos 08:02, 20 September 2006 (UTC)

I completely agree with 70.93.249.46. I have a good background in Maths-for-Natural-Sciences and I currently want to explain what an invertible matrix is to a colleague who also has good MfNS but was never taught about matrices. This page doesn't help either of us very much. Could a link be added to a page about matrix operations at A-level/ US High School/ Baccalaureat level? (Is there one...?) OldSpot61 (talk) 13:41, 9 April 2008 (UTC)

Proofs of Positive Semidefiniteness
I would like to prove that the difference between two general matrices (each of a certain class) is a positive semidefinite matrix. I am not up to the task without some examples; would anybody mind posting examples of positive semidefinite (or definite) proofs?

Positive Eigenvalue?
What is meant by "A is positive definite if and only if all eigenvalues are positive"? Is it all eigenvalues > 0, or all eigenvalues $$\geq0$$?

I have managed to prove the following: let $A$ be a positive definite $$ n\times n $$-matrix with eigenvalues $$\lambda_1,\lambda_2,\ldots,\lambda_n$$;

then $$\lambda_i\geq0, \quad i=1,2,\ldots,n$$ and there exists a $$k\in\{1,2,\ldots,n\}$$ such that $$\lambda_k>0$$.

But I haven't managed to prove the following:

Let $A$ be a positive definite $n\times n$-matrix with eigenvalues $$\lambda_1,\lambda_2,\ldots,\lambda_n$$ then $$\lambda_i>0,\quad i=1,2,3,\ldots,n.$$

To be clearer $$ A= \begin{pmatrix} 3&0&0\\ 0&2&0\\ 0&0&0\\ \end{pmatrix} $$ has the eigenvalues $$\lambda_1=3,\lambda_2=2,\lambda_3=0$$.

The matrix is positive definite since $$ x^tAx=\sum_{i=1}^3\lambda_ix_i^2>0 $$ for all vectors $$x=(x_1,x_2,x_3)\in\mathbf{R^3},x\neq0$$.

Therefore, A symmetric and positive definite doesn't imply that all eigenvalues of $A$ are positive (in the sense > 0).

However, maybe this is just the case when the matrix contains a row $j$ and column $j$ that are both zero vectors.

Can anybody help me??? I don't get it.

/Tobias mathstudent


 * "A positive definite if and only if all eigenvalues are positive" means that all eigenvalues have to be > 0.
 * Your matrix A is not positive definite, because xTAx = 0 for x = (0,0,1), which is not the zero vector.
 * I hope this clarifies the matter. If not, feel free to ask. -- Jitse Niesen (talk) 10:53, 16 August 2006 (UTC)
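Jitse's correction is easy to confirm numerically (a sketch in numpy, using the matrix and the vector from this thread):

```python
import numpy as np

A = np.diag([3.0, 2.0, 0.0])      # Tobias's example matrix
x = np.array([0.0, 0.0, 1.0])     # nonzero vector with x^T A x = 0

form = x @ A @ x                  # 0, so A is NOT positive definite
eigs = np.linalg.eigvalsh(A)      # 0, 2, 3: positive SEMIdefinite only
```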

Proof of a property mentioned?
If $$M$$ and $$N$$ are positive definite, then the sum $$M + N$$ and the products $$MNM$$ and $$NMN$$ are also positive definite; and if $$M N = N M$$, then $$MN$$ is also positive definite.

Can anyone provide a reference or a sketch of a proof to the $$MN$$ part? I have not found it in any googleable literature and I cannot prove it either. Thanks.


 * if MN = NM where M = M* and N = N*, then MN can be simultaneously diagonalized. from this the claim follows. Mct mht 13:04, 20 January 2007 (UTC)


 * OMG, I have been blind... However, is there an argument not requiring real analysis, as in the positivity-of-eigenvalues criterion? I had hoped for something that uses criteria 4 or 5. Thanks anyway! —The preceding unsigned comment was added by 84.163.72.75 (talk) 13:32, 20 January 2007 (UTC).


 * well, the following can be shown: if MN = NM, then x is an eigenvector of N iff x is an eigenvector of M. that is essentially what i said above. also, there is no analysis involved at all here. a result more general than my first answer is that any commuting family of square normal matrices can be simultaneously diagonalized. Mct mht 13:44, 20 January 2007 (UTC)
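Mct mht's simultaneous-diagonalization argument can be illustrated numerically; a sketch (numpy; the particular matrices are illustrative, built to commute by sharing eigenvectors):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal matrix

# Two positive definite matrices with the same eigenvectors, hence commuting.
M = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.T
N = Q @ np.diag([4.0, 5.0, 6.0]) @ Q.T

commute = np.allclose(M @ N, N @ M)
# MN = Q diag(4, 10, 18) Q^T: its eigenvalues are products of positive numbers.
eigs_MN = np.linalg.eigvals(M @ N).real
```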


 * if MN = NM, then x is an eigenvector of N iff x is an eigenvector of M. - This is clear. You can define eigenvectors/-values by field extension (eigenvalues are zeroes of the characteristic polynomial and eigenvectors are vectors from the nullspace of a certain matrix), but when speaking of positivity and eigenvalues/-vectors in one sentence (and that's what we do when applying your argument to prove that MN is positive definite), real analysis comes in, because you can't define positivity in an algebraic field extension. I was wondering whether a proof of the above property was known which did not use real analysis. Anyway, this is rather a question for a math forum than for Wikipedia, so sorry for going off-topic.


 * hm, don't think so, this has nothing to do at all with algebraic field extensions, or any analysis for that matter. MN = NM if and only if they have the same invariant subspaces. the claim that they have the same eigenvectors is an immediate corollary of this (under the given assumptions). Mct mht 14:23, 20 January 2007 (UTC)


 * Well, over a field like $$\mathbb{Q}$$, a matrix (particularly, a positive definite one) may have no nontrivial invariant subspaces at all, and such criteria fail, so we do need the analytic structures of $$\mathbb{R}$$ and $$\mathbb{C}$$ for the argument (precisely, we need the fundamental theorem of algebra in $$\mathbb{C}$$ for the MN = NM <==> equal invariant subspaces criterion, and we need some analysis over $$\mathbb{R}$$ to show that a matrix is positive definite iff all its eigenvalues are positive). Anyway, as I said, this is off-topic here; it's just me searching for a proof matching my criteria of elegance... —The preceding unsigned comment was added by 84.163.72.75 (talk) 17:05, 20 January 2007 (UTC).


 * ok, fair enough. i thought the running assumption was that the matrix entries are real or complex. the article should have stated this. more precisely, the claim "MN = NM <==> equal invariant subspaces" is true regardless what field the entries are in. it's the existence of invariant subspaces that is in question. i do agree that some analysis would be needed in the case of, say, rational entries. you might wanna add this to the article. Mct mht 17:30, 20 January 2007 (UTC)


 * It is already stated there well enough. I didn't criticize the article, I just wanted to explain the reason that I am looking for a more elementary proof (but I still like yours!).

notation for positive definite matrices
Regarding the recent addition by Kkliger, could you provide a reference for this usage? That is, I'm much more familiar with $$M\geq 0$$ indicating a nonnegative matrix rather than a positive semi-definite one. Feel free to add it back, but it'd be nice if you also note the potential for confusion (and add a reference to support the usage). Cheers, Lunch 19:58, 14 February 2007 (UTC)


 * In the (pure mathematics) literature, a "positive matrix" mostly means a positive (semi-)definite matrix, and almost never a matrix whose entries are positive real numbers (which are of nearly no use, except perhaps in numerics?). This is because a positive definite matrix defines a positive linear operator. The corresponding Wikipedia article on "positive matrices" is therefore misleading and moreover inconsistent, as the corresponding Wikipedia article on "positive linear functional" is correct (in my sense). So in Wikipedia a positive matrix induces no positive operator, which is not what most people want. I think this difference in the literature (which I was not aware of) should at least be mentioned somewhere.


 * You can find my notation in many books and articles, for example Rajendra Bhatia, "Positive definite matrices" (whose first chapter is also available on the internet), Eberhard Freitag, "Siegel modular functions" (these are the only books I have at home today), but surely in all books about functional analysis and C*-algebras (e.g. the article of S. Sherman, "Order in operator algebras", American Journal of Mathematics, Vol. 73, No. 1 (Jan., 1951)).

kkilger 21:46, 14 February 2007 (UTC)


 * Okey dokey. Like I said, you're welcome to put it back, but please put a note regarding the potential confusion.  A canonical reference would be nice, too.  The book by Bhatia might be nice except that it was only published this year (2007).  It isn't widely available (or read) just yet nor has it been reviewed.


 * In the (pure mathematics) literature... What field(s)?  which are of nearly no use, perhaps in numerics?  Yes, in numerical analysis, but also in probability and operations research.  There are countless others.  Mathematics is quite a broad subject.  Lunch 21:18, 14 February 2007 (UTC)


 * The notation A > 0 for positive operators is indeed common in functional analysis, at least in my experience. It's for instance in Riesz & Sz.-Nagy, Functional Analysis and in Kato, Perturbation Theory for Linear Operators. However, I think that it's not so common in works that restrict themselves to matrices, i.e., finite-dimensional. I had a look in some entry-level Linear Algebra books and none mentioned the notation (please correct me if I'm wrong).
 * For that reason, I think it's better to avoid the notation in the article if possible (of course, the notation should be mentioned). On the other hand, in some places the notation is really useful (for instance, in "M > N implies N^{-1} > M^{-1}"), so I guess in those places it's best to use the notation. -- Jitse Niesen (talk) 04:39, 15 February 2007 (UTC)

I've also seen the notation $$\succ$$ (etc.) used. This is used by Boyd in Convex Optimization, and I think I've seen it elsewhere (systems/control literature), but I can't recall exactly off the top of my head. Whether it's more or less confusing is debatable, though, since for vectors, $$\succ$$ usually means componentwise greater-than. Overall, it's probably a win, assuming you know what's a matrix and what's a vector. --Paul Vernaza (talk) 04:48, 10 February 2008 (UTC)


 * I agree. The only notations that I have ever seen for positive (semi)definite matrices are $$\succ$$ (and $$\succeq$$). Bender2k14 (talk) 04:09, 17 September 2010 (UTC)

minor question
Recall the property mentioned: 9. If M > 0 is real, then there is a δ > 0 such that $$M \geq \delta I$$, where I is the identity matrix. I guess we can just as well write $$M > \delta I$$ (because of the eigenvalue characterization), right? 146.186.132.163 19:43, 12 March 2007 (UTC)


 * yes. "is real" presumably means symmetric with real entries. it's true for positive matrices with complex entries also. Mct mht 22:18, 12 March 2007 (UTC)
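For a concrete check of property 9 and the follow-up question (a numpy sketch; the 2x2 matrix is an illustrative assumption):

```python
import numpy as np

M = np.array([[2.0, -1.0], [-1.0, 2.0]])  # symmetric positive definite
delta = np.linalg.eigvalsh(M).min()       # smallest eigenvalue: 1

# M - delta*I is positive semidefinite (eigenvalues 0 and 2), so M >= delta*I;
# the strict inequality M > delta'*I holds for any 0 < delta' < delta.
eigs_shifted = np.linalg.eigvalsh(M - delta * np.eye(2))
```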

Hermitian
The leading paragraph states that a positive definite matrix is Hermitian. Isn't that simply wrong? (The same article considers non-Hermitian positive definite matrices, anyway.) The last sentence speaks of some "disagreement" about the definition for complex matrices; the article at MathWorld, however, explains that it is only that some authors restrict the discussion to Hermitian matrices, and that the definition has instances in matrices real and complex, Hermitian and non-Hermitian. -- 213.6.23.13 (talk) 16:07, 24 February 2008 (UTC)


 * You may notice that the "some authors" cited by MathWorld are in fact quite a few respectable authors. Some of them restrict the definition of positive-definite matrix to Hermitian matrices. I think the end of this article reflects the literature better than MathWorld (which is not surprising, given that I wrote it).
 * You're of course most welcome to improve the article. I agree that it's unfortunate that the leading paragraph states that a positive definite matrix is Hermitian, while there is no agreement in the literature about this. Perhaps we should just remove the word Hermitian in the first sentence? -- Jitse Niesen (talk) 14:41, 29 February 2008 (UTC)


 * I agree that Hermitian is beside the point. The definition of PD is that the quadratic form is always strictly positive. There are plenty of applications where this is far more useful. Also, even if Hermitian is required, why not start with the real case? Pdbailey (talk) 03:46, 2 July 2008 (UTC)


 * One can show that every diagonalizable matrix with real eigenvalues (positive-definite or not) is indeed self-adjoint ("Hermitian") under some inner product (if you change the inner product, the adjoint is no longer the conjugate transpose), which leads to some flexibility on whether you need to consider "Hermitian" matrices under the usual conjugate-transpose definition (which arises merely from the traditional x*y inner product). — Steven G. Johnson (talk) 03:34, 11 October 2011 (UTC)
 * Considering it is in the first paragraph, I think this can be very unclear to readers. While it is true that positive-definite requires self-adjoint, I think it is far too easy to see Hermitian and assume that over the reals it is necessary for your matrix to be symmetric. The explanation provided by Steven G. Johnson is satisfactory, but far too technical and buried for something in the first paragraph of a relatively basic topic. 142.189.238.225 (talk) 20:38, 18 September 2022 (UTC)
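Steven G. Johnson's remark can be made concrete with a small numpy sketch (the matrix B and the weight W = (V V^H)^{-1} are illustrative assumptions; for a real V the conjugate transpose is just the transpose):

```python
import numpy as np

# A non-symmetric but diagonalizable matrix with real eigenvalues 1 and 3.
V = np.array([[1.0, 1.0], [0.0, 1.0]])    # eigenvector matrix
D = np.diag([1.0, 3.0])
B = V @ D @ np.linalg.inv(V)              # equals [[1, 2], [0, 3]]

# Inner product <x, y>_W = x^T W y with W = (V V^T)^{-1}, symmetric PD.
W = np.linalg.inv(V @ V.T)

# The adjoint of B with respect to <.,.>_W is W^{-1} B^T W; here it equals B,
# so B is self-adjoint under this inner product even though B != B^T.
B_adj = np.linalg.inv(W) @ B.T @ W
```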

recent edits to emphasize reals over complex
I recently edited the page to remove the requirement that a positive definite (PD) matrix be Hermitian, because it is clearly not a requirement (see discussion above). I also put a clear, traditional definition in the lead so that people would know what we are talking about on this page. I also changed a treatment of complex numbers with a trailing note about real numbers into one that features reals, with complex numbers mentioned afterwards; the basic idea being to improve the readability for the audience who knows the least about the subject. Reals are easier to understand, and more people know them. It is my belief that those who understand complex numbers and the concept of a conjugate can easily see how the theorems I changed would work with the complex numbers. Pdbailey (talk) 04:22, 4 July 2008 (UTC)


 * errors were introduced and correct information removed. simply replacing C by R and * by T and you expect all the results still hold true? e.g. all positive eigenvalues---not true, existence of Gram matrices---not true, bijection between inner products on Rn and positive matrices according to your definition---not true. there's a reason it's common to consider the complex case first. the real case is less natural. the article already covers this. Mct mht (talk) 21:34, 4 July 2008 (UTC)


 * Mct mht, thank you for writing a comment on this page. I urge you to try to improve the edits I have made instead of reverting them; I'm trying to improve the readability of the article, and I think starting with the real case makes a lot of sense for improving the accessibility of the article.


 * Now, as for the edits, the claims I made are in both versions. The page you appear to prefer reads, 'For real symmetric matrices, these properties can be simplified by replacing $$\mathbb{C}^n$$ with $$\mathbb{R}^n$$, and "conjugate transpose" with "transpose."' Let's break these down point by point:
 * (1) positive eigenvalues. Imagine there is an eigenvalue $$\lambda$$ that is not strictly positive, with associated eigenvector v; then $$v^T M v = \lambda v^T v = \lambda \|v\|^2 \leq 0$$ and M is not a PD matrix. For a reference, see Mas-Colell, Whinston and Green, "Microeconomic Theory", 1995 (page 936).
 * (2) existence of Gram matrices. A stronger claim than this is that the Cholesky decomposition exists, so it certainly is true if the matrix is symmetric. Correct me if I'm wrong here, but a claim similar to the Gram matrices one would be that a square root exists; since PD matrices have a full set of eigenvalues, a square root exists, so the claim is correct.
 * (3) bijection for inner products on $$R^n$$. I included this one only because it was on the page before (see above quote). But it looks like it is almost true when you write, "according to your definition---not true." Please fix the definition if you can.
 * Thanks for working with me to keep this page great! Pdbailey (talk) 22:19, 4 July 2008 (UTC)


 * ridiculous. i said positive definiteness in your definition does not imply all eigenvalues positive. obviously positive eigenvalues implies positive definiteness in any reasonable definition. positive definiteness, in your definition, does not imply existence of square roots. have fun distorting the article. Mct mht (talk) 22:26, 4 July 2008 (UTC)


 * Mct mht, I think you are claiming that if M is not symmetric but $$v^T M v \geq 0$$, then there is no guarantee that M has an eigenvalue decomposition, is that right? Pdbailey (talk) 22:37, 4 July 2008 (UTC)


 * I think now that you were maybe worried about the only if statement, so I changed it to symmetric real matrices. Pdbailey (talk) 23:25, 4 July 2008 (UTC)


 * the fact that symmetry needs to be imposed, as an artificial requirement, should tell you there's something odd about the real case. if this is not obvious to you, you shouldn't be editing this page. Mct mht (talk) 19:42, 5 July 2008 (UTC)


 * Mct mht, I'm sorry, but I am not really sure I understand your argument. how is the requirement of symmetry in the real case different from the Hermitian requirement in the complex case? Perhaps you can clearly identify an advantage to starting with the complex case. Pdbailey (talk) 20:23, 5 July 2008 (UTC)


 * I was asked to explain on this talk page my reversal at Positive-definite matrix of a change whose stated objective was to focus on the real case first and then move on to the complex case.


 * I reverted this change because, apart from several errors, the new version was poorly organized. A reasonable organization focussing first on the real case would have been something like:
 * For the real case, here are a definition and various equivalent formulations, and here are interesting properties.
 * For the complex case, here are a definition and various equivalent formulations, and here are interesting properties.
 * As it was, in the new version it was most of the time not clear whether the statements pertained to only the real case, only the complex case, or both.


 * The new lede was as follows:
 * In linear algebra, a positive-definite matrix is a square matrix M for which
 * $$\textbf{z}^{T} M \textbf{z} > 0 \ $$
 * for all non-zero vectors z.
 * This is an acceptable definition only if the entries of the entities involved are constrained to range over the field of real numbers, a restriction that is not mentioned. Furthermore, the requirement of symmetry is not mentioned, although it is under "Equivalent formulations".


 * Further on there is also a mention of a "sesquilinear form" defining "an inner product on Rn". Sesquilinear forms live, by definition, in the complex world. The notation x* is used later (also for vectors over the reals!) but never explained.


 * The last sentence of the section entitled "Equivalent formulations" was:
 * For complex M, it the matrix is not symetric but Hermitian and these properties hold when replacing $$\mathbb{R}^n$$ with $$\mathbb{C}^n$$, and "transpose" with "conjugate transpose."
 * That doesn't make much sense unless you also apply this to the definition, something that was not done.


 * Presumably this was meant to make the article more accessible for readers who are not familiar with complex numbers, but in my opinion this was more likely to confuse than to help them. --Lambiam 15:09, 3 August 2008 (UTC)


 * Let's start simple. In the real case, is symmetry required? The above discussion was on that topic, and three wikipedians said no. In addition, I offered a reference above that says it is not (Mas-Colell, Whinston and Green, "Microeconomic Theory", 1995, page 936). But this is a graduate economics textbook, and I'd rather go with a definition out of a college linear algebra book; I'll have to wait until the next time I'm near the library to look at one of those. Do you have an alternative reference to offer? Pdbailey (talk) 20:14, 6 August 2008 (UTC)


 * I don't see where "three wikipedians said no". One said that this restriction was "unfortunate"; however, changing this would invalidate statements in another article. One possible definition requires the matrix to be Hermitian. If the matrix is real, this implies it is symmetric. Other, non-equivalent definitions are of course possible, such as that at MathWorld. I don't know if this is sufficiently common to merit being mentioned, or perhaps even should have priority, but if so, this should be done for the real and complex cases alike, and not one approach for real and another for complex. --Lambiam 22:03, 8 August 2008 (UTC)


 * Lambiam, can you explain, "If the matrix is real, this implies it is symmetric." Obviously you do not mean real matrices are all symmetric, or that real matrices with strictly positive quadratic forms are symmetric, what do you mean? Do you have references that define PD matrix as something other than strictly positive for all vectors in quadratic form? Pdbailey (talk) 22:45, 8 August 2008 (UTC)


 * What I meant is this: (1) The definition of positive-definite matrix implies the matrix is Hermitian; (2) All real Hermitian matrices are symmetric; (3) Therefore a real positive-definite matrix is symmetric. Quoting from T. A. Whitelaw (1991), Introduction to Linear Algebra, 2nd edition, section 79.3, page 248:
 * One detail which should not be overlooked is that describing a real matrix as positive-definite presupposes that it is symmetric.
 * (italics in the original). --Lambiam 07:14, 14 August 2008 (UTC)


 * I don't think everybody agrees that the definition of positive-definite matrix implies the matrix is symmetric (in the real case). There are a lot of papers that talk about "symmetric positive-definite matrices" (for instance, a MathSciNet search for papers with "symmetric positive definite" in the title returns 104 results), a phrase that wouldn't make much sense if that were true.
 * The problem is that most people don't care about matrices that satisfy x^T A x > 0 for all (real) vectors x but are not symmetric, so they don't care whether such matrices are called positive-definite or not. Many things that are true for symmetric matrices satisfying x^T A x > 0 are no longer true for nonsymmetric matrices satisfying x^T A x > 0. For instance, $$A = \left[ \begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix} \right]$$ satisfies x^T A x > 0 but is not diagonalizable.
 * I think the article in its current form takes the right approach in this matter. It restricts the discussion to symmetric / Hermitian matrices in the beginning, and has a small section at the end on what happens if you drop this requirement. -- Jitse Niesen (talk) 12:20, 14 August 2008 (UTC)
 * Jitse Niesen, I guess I can see this in the sense that there is only one case I know of where a PD matrix is non-symmetric, and that is that the weak axiom of revealed preferences does not imply that the Slutsky matrix is symmetric (assuming this implies that the strong axiom of revealed preferences holds). If you still want to assume concavity (and thus negative definiteness of the second derivative matrix), you get a non-symmetric, negative definite derivative matrix. I realize that no state function (that is, any function whose value depends only on x, regardless of previous values of x) can be represented with a non-symmetric second derivative matrix, but that's why the utility function is not defined for people who obey the weak axiom and not the strong axiom. How about this: can we mention the possibility in the lead, but not develop it until later? So the focus is not on non-symmetric cases, but the definition isn't so jarring. Pdbailey (talk) 13:45, 14 August 2008 (UTC)
 * I reorganized the material a bit. The new version gets rid of the TFAE definition (because I don't like TFAE definitions), it mentions the definition for real matrices earlier, and it also mentions non-symmetric matrices earlier. Pdbailey asked me on the talk page whether I think real or complex should be done first. I don't care, but I do think both should be mentioned early. Non-symmetric positive-definite matrices also appear sometimes in numerical analysis; they're also called matrices with positive-definite symmetric part, reflecting the definition that positive-definite matrices are necessarily Hermitian. Comments, rewrites, even (argued) reverts are, as always, welcome. -- Jitse Niesen (talk) 16:09, 15 August 2008 (UTC)
 * I'm following Make_technical_articles_accessible and the specific advice to, "Put the most accessible parts of the article up front." and put the real case first. Pdbailey (talk) 19:30, 16 August 2008 (UTC)
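Jitse Niesen's non-diagonalizable example above can be verified numerically (a numpy sketch):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])

# x^T A x = x1^2 + x1*x2 + x2^2 > 0 for nonzero real x: the symmetric part
# (A + A^T)/2 has eigenvalues 1/2 and 3/2, both positive.
sym_eigs = np.linalg.eigvalsh((A + A.T) / 2)

# Yet A is a Jordan block: eigenvalue 1 with only a one-dimensional
# eigenspace (A - I has rank 1), so A is not diagonalizable.
eigenspace_codim = np.linalg.matrix_rank(A - np.eye(2))
```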

(de-indent) As I said, I don't care on whether real or complex is put first. However, I reverted the rest of your edit, because it was quite frankly a mess. Please put more efforts in polishing your edits; this seems to be part of what angered Mct mht and I'm not too happy about it either. -- Jitse Niesen (talk) 20:01, 16 August 2008 (UTC)

Diagram
This article could do with a diagram showing examples and counter-examples of the range of positive-definite matrices. In particular, I am thinking about 2D transformations that can be represented as ellipses. 155.212.242.34 (talk) 13:00, 13 August 2008 (UTC)


 * 155.212.242.34, sounds like you are familiar with this topic. Be bold and add the information! Pdbailey (talk) 02:48, 14 August 2008 (UTC)

Property 4
Doesn't the product need to look like $$B B^T$$ or $$B B^\dagger$$? (CHF (talk) 04:32, 23 October 2008 (UTC))


 * Well, the property is also true for B B^T, in which case we have the Cholesky decomposition. However, this is already mentioned under number 5 in the Characterization section. I think that Property 4 is supposed to be a different property (for any positive-definite matrix M, there exists a positive-definite B such that $$M = B^2$$). I can see that it's very easy to misunderstand the text, so I tried to clarify the situation. -- Jitse Niesen (talk) 13:47, 29 November 2008 (UTC)
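The square-root property mentioned here (M = B^2 with B positive definite) can be sketched via the spectral decomposition (numpy; the matrix M is an illustrative assumption):

```python
import numpy as np

M = np.array([[5.0, 2.0], [2.0, 2.0]])  # symmetric PD (eigenvalues 1 and 6)

# Spectral decomposition M = Q diag(lam) Q^T gives the positive definite
# square root B = Q diag(sqrt(lam)) Q^T, with B @ B == M.
lam, Q = np.linalg.eigh(M)
B = Q @ np.diag(np.sqrt(lam)) @ Q.T
```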

Contradictions?
On the Cholesky decomposition page, it says that the decomposition exists for any positive DEFINITE matrix. On THIS page, it says that the Cholesky decomposition exists for any positive SEMI-definite matrix. Which is it?

Also, I would like to see a section that explains the physical significance of positive definite matrices. It is explained in some haphazard locations that covariance matrices are positive definite, as are the "normal equations" in least squares problems. Why is that? What would it mean if one of those matrices were not positive definite?

In the linear least squares page, it says that "positive definite == full rank", and rank is the number of linearly independent rows. This description is MUCH easier to understand than the stuff on this page about eigenvalues. Is the rank definition correct? —Preceding unsigned comment added by Yahastu (talk • contribs) 15:08, 20 January 2009 (UTC)


 * From Cholesky decomposition: "The statement then reads: a square matrix A has a Cholesky decomposition if and only if A is Hermitian and positive semi-definite. Cholesky factorizations for positive semidefinite matrices are not unique in general."


 * I can't answer your second question right away. The linear least squares page means to say that "positive definite == full rank" for matrices of the form $$X^TX$$, though perhaps the formulation can be improved there. -- Jitse Niesen (talk) 20:38, 20 January 2009 (UTC)


 * Yahastu, it is unsurprising you find this article difficult to read, it is written at an insane level considering how simple the concept is. Also, the rank condition is correct for real and complex matrices. You can prove this trivially with just a little knowledge of the SVD. PDBailey (talk) 03:56, 21 January 2009 (UTC)


 * To clarify, then: If $$A = X^TX$$ where X is any real matrix and A is full rank, then A is positive definite. Is that 100% correct?  If so I think it needs to be added to the list of characterizations. Yahastu (talk) 15:04, 21 January 2009 (UTC)


 * Yahastu, yes, this is correct. PDBailey (talk) 00:36, 22 January 2009 (UTC)
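The claim can be checked in a few lines (a numpy sketch; the matrix X is an illustrative assumption):

```python
import numpy as np

X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 7.0]])  # full column rank
A = X.T @ X                                          # Gram matrix

# x^T A x = ||X x||^2, which is 0 only if X x = 0; full column rank then
# forces x = 0, so A is positive definite and the Cholesky factorization
# succeeds (np.linalg.cholesky raises LinAlgError otherwise).
L = np.linalg.cholesky(A)
```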


 * Then, instead of "It has a Cholesky decomposition" it should read, "It has a unique Cholesky decomposition", right? --SlothMcCarty (talk) 05:58, 5 February 2013 (UTC)

Let's define positive semidefinite before we use it
As the article stands, it reads, "For positive semidefinite matrices, all principal minors have to be non-negative. The leading principal minors alone do not imply positive semidefiniteness, as can be seen from the example" before semidefiniteness has been defined. I tried a couple of ways to move it down but could not get it to work easily. Can anyone else see a good way to do this? I think this should not be in there anyway, because it is pretty long and implies that the other conditions remain relatively intact for semidefinite matrices. PDBailey (talk) 22:10, 12 March 2009 (UTC)

Examples
I think the article would be greatly improved if there was a small section after the Definition called Examples showing a simple example of a 2x2 matrix which is positive-definite and an example of a 2x2 matrix which is not. For the latter it would be great to show a single vector which proves this fact. —Preceding unsigned comment added by Pupdike (talk • contribs) 17:43, 21 March 2009 (UTC)

I went ahead and added the example section. I think it serves to improve the value of the page, but please let me know if you feel otherwise. Pupdike (talk) 22:08, 23 March 2009 (UTC)

An example of positive definite matrix
 * $$ A = \begin{bmatrix} 2&-1&0\\-1&2&-1\\0&-1&2 \end{bmatrix} $$ is positive definite since for any vector $$ x = \begin{bmatrix} x_1\\x_2\\x_3 \end{bmatrix} $$, we have


 * $$ x^{t}Ax = \begin{bmatrix} x_1&x_2&x_3 \end{bmatrix} \begin{bmatrix} 2&-1&0\\-1&2&-1\\0&-1&2 \end{bmatrix} \begin{bmatrix} x_1\\x_2\\x_3 \end{bmatrix} $$
 * $$ = 2x_1^{2} - 2x_1x_2 + 2x_2^{2} - 2x_2x_3 + 2x_3^{2} $$
 * $$ = x_1^{2} + (x_1 - x_2)^{2} + (x_2 - x_3)^{2} + x_3^{2} \geq 0, $$ with equality only when $$x_1 = x_2 = x_3 = 0$$, so the form is strictly positive for every nonzero $$x$$


 * 206.21.125.35 (talk) 20:00, 25 April 2009 (UTC)Nam Nguyen
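 * A quick numerical check of the example above (a NumPy sketch; the test vector is arbitrary, chosen for illustration): the quadratic form matches the sum-of-squares expression, and all eigenvalues of A are positive.

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

x = np.array([1.0, -2.0, 3.0])              # an arbitrary test vector
quadratic_form = x @ A @ x
sum_of_squares = x[0]**2 + (x[0] - x[1])**2 + (x[1] - x[2])**2 + x[2]**2
eigenvalues = np.linalg.eigvalsh(A)         # 2 - sqrt(2), 2, 2 + sqrt(2)
```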

Geometry of cone of positive definite matrices
I want to add information about the (differential/Riemannian) geometry of the cone of positive definite (real) matrices. Is this article the right place, considering that the mathematical level must be higher? If not, where should it be added? --Kjetil Halvorsen 02:32, 9 January 2010 (UTC) —Preceding unsigned comment added by Kjetil1001 (talk • contribs)

inconsistent adjoint notation
unless I've missed something, the article uses both the dagger and the star symbol for the conjugate transpose - eek!

--Dmack —Preceding unsigned comment added by 163.1.167.234 (talk) 23:52, 14 March 2010 (UTC)

Examples not explanatory
The first two examples do not explain what makes them positive-definite or not. In particular it does not state how a vector is chosen for multiplication. ᛭ LokiClock (talk) 17:59, 14 June 2010 (UTC)
 * Yes, they do. The first matrix is positive-definite because the associated quadratic form (which is given) is positive on nonzero vectors (for reasons which are also explained). The second is not positive-definite because the associated quadratic form takes negative values on some vectors (one of which is given explicitly). Algebraist 18:04, 14 June 2010 (UTC)

relationship between positive definite and positive determinant
Is there any relationship between these two? I feel that positive definite implies all eigenvalues are positive; am I right?

Jackzhp (talk) 03:00, 12 February 2011 (UTC)


 * Yes, and the determinant is the product of all eigenvalues and so that makes it positive. CHF (talk) 15:37, 22 December 2011 (UTC)
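 * A small NumPy sketch of this (example matrix chosen for illustration): a positive definite matrix has all eigenvalues positive, so its determinant, being their product, is positive. The converse fails, e.g. -I in even dimension has determinant +1 but is negative definite.

```python
import numpy as np

M = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])            # symmetric positive definite
eigenvalues = np.linalg.eigvalsh(M)     # 1 and 3, both positive
determinant = np.linalg.det(M)          # their product, 3 > 0

# Note the converse fails: np.diag([-1.0, -1.0]) has determinant +1
# but is negative definite.
```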

Spelling: Semi-definite vs. Semidefinite?
(I apologize in advance if this had been hashed out previously--I don't see it covered on this page.) I noticed today that someone had edited the PSD disambiguation page to change Positive-Semidefiniteness to Positive-Semi-Definiteness. I thought it looked somewhat odd, so I decided to peruse the linked Positive-definite matrix page. I see that "semidefinite" generally prevails, except in the Further properties section where both appear. I also see both in this talk page. I am comfortable with one being standard or both being acceptable, but I don't see any hints to which it may be in the page or in the talk. I'm guessing that opinions differ.

(By the way, feel free to edit this section or paragraph (including heading) mercilessly if it supports the discussion. You won't hurt my feelings.  My contributions are intended to further the discussion and no attribution is required--if you change it enough, you may want to strike this parenthetical (with my name).  Cheers! rs2 (talk) 03:03, 29 June 2011 (UTC))


 * I strongly agree that consistency would be nice, and "semidefinite" seems to be more popular than "semi-definite". (FWIW, I also think the unhyphenated version looks better.)  I'm in favor of changing every instance of the hyphenated version to the unhyphenated version. --Joel B. Lewis (talk) 00:58, 3 July 2011 (UTC)

minor error
The matrix M3 = {{1 2} {0 1}} in 'Examples' is not positive-definite (e.g. for z = {1 -1}, the quadratic form equals 0, which fails the strict inequality). Sconden (talk) 19:53, 15 May 2012 (UTC)
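Checking the reported counterexample numerically (a NumPy sketch): the quadratic form at z = (1, -1) is exactly 0, so M3 is at best positive semidefinite.

```python
import numpy as np

M3 = np.array([[1.0, 2.0],
               [0.0, 1.0]])
z = np.array([1.0, -1.0])
value = z @ M3 @ z     # equals 0, so the strict inequality z^T M z > 0 fails
```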

The lead paragraph must define the topics
The lead paragraph of a Wikipedia article must define the topic, not merely say some vague nice somethings about it. (If there are many competing definitions or subtle details, it should give the best or most common definition that can fit in one paragraph, and try to warn the reader about those subtleties.) Moreover, the lead section must define all topics that are redirected to the article -- in this case, "negative definite", "positive semidefinite", and "negative semidefinite". All the best, --Jorge Stolfi (talk) 00:33, 27 July 2012 (UTC)

Positive definite real is symmetric?
Can somebody tell me where I go wrong here - A real Hermitian matrix is symmetric, so a real positive definite matrix must be symmetric. If so, why is m={{2,2},{0,1}} not positive definite? I can find no r={x,y} such that r.(m.r)<=0. PAR (talk) 19:51, 20 January 2013 (UTC)

That'd be my question too, more or less. In the section "Quadratic forms" it is said: "It turns out that the matrix M is positive definite if and only if it is symmetric and its quadratic form is a strictly convex function." I would be really surprised if symmetry were a necessary condition for positive definiteness. In fact, PAR gave the counterexample: the symmetric part of [2 2; 0 1] is positive definite, as can be seen from its eigenvalues, and so is the matrix itself. But I don't know what the author of that section had in mind... — Preceding unsigned comment added by 137.226.57.179 (talk) 09:13, 14 June 2013 (UTC)

This is just a question of convention. Any real square matrix $$M$$ can be written as a sum of a symmetric matrix, $$S = (M + M^T)/2$$, and an antisymmetric matrix, $$A = (M - M^T)/2$$. Then we have the dot product $$v\cdot (Mv) = v\cdot (Sv) + v\cdot(Av)$$. Since dot products are symmetric, the term involving the antisymmetric matrix always vanishes: $$v\cdot (A v) \equiv v^T A v = (A v) \cdot v \equiv (Av)^T v = v^T A^T v = -v^T A v$$, so $$v \cdot (Av) = -v \cdot (Av)$$, which means it must be zero. Because of this, many people do not bother to define positive-definiteness for non-symmetric matrices. In fact, restricting positive-definite to apply only to symmetric matrices means that we can say that a matrix is positive-definite if and only if all its eigenvalues are positive. This statement would not be true if positive-definite matrices were allowed to be non-symmetric. But again, in the end, this is just a question of definition, and the definition in which positive-definiteness implies symmetry seems to be the more common one. Legendre17 (talk) 19:25, 12 August 2013 (UTC)
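The vanishing of the antisymmetric part in the quadratic form is easy to verify numerically (a NumPy sketch; the random matrix, vector, and seed are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))    # arbitrary real square matrix
S = (M + M.T) / 2                  # symmetric part
A = (M - M.T) / 2                  # antisymmetric part

v = rng.standard_normal(4)
antisymmetric_term = v @ A @ v     # zero up to rounding error
difference = (v @ M @ v) - (v @ S @ v)   # also zero: only S matters
```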


 * Thanks - I have edited the introduction accordingly. PAR (talk) 01:17, 13 August 2013 (UTC)

Under characterizations, the eigendecomposition is wrong. I think it's supposed to be PDP^-1. If you follow the link "unitary matrix", it says the same (U = PDP* with P* = P^-1). Thus the proof is also wrong (maybe) — Preceding unsigned comment added by 188.155.117.79 (talk) 21:24, 25 April 2014 (UTC)

Proof for Further properties #4
I want to see a proof of that property, especially the part that $$Q^{-1}MQ$$ is symmetric.

Property #8 incorrect?
The symmetric matrix

$$M = \begin{bmatrix} 1 & -0.8 & -0.1\\ -0.8 & 1 & -0.8\\-0.1 & -0.8 & 1\end{bmatrix} $$

has the required form (with m(0)=1, m(1)=-0.8, m(2)=-0.1) and satisfies

$$\vert m(1)\vert + \vert m(2)\vert = 0.9 < 1 = m(0)$$

but is not positive definite. 141.34.29.108 (talk) 21:34, 6 February 2015 (UTC)


 * For the second row, $$\vert m(-1)\vert + \vert m(1)\vert = 1.6 > m(0)$$ PAR (talk) 13:34, 26 August 2016 (UTC)
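 * To confirm numerically (a NumPy sketch): the diagonal-dominance hypothesis fails on the second row, and the matrix is indeed not positive definite.

```python
import numpy as np

M = np.array([[ 1.0, -0.8, -0.1],
              [-0.8,  1.0, -0.8],
              [-0.1, -0.8,  1.0]])

# The first row satisfies 0.8 + 0.1 < 1, but the second row gives
# 0.8 + 0.8 = 1.6 > 1, so the diagonal-dominance condition fails, and M
# is in fact not positive definite: it has a negative eigenvalue.
min_eigenvalue = np.linalg.eigvalsh(M).min()
determinant = np.linalg.det(M)    # negative, so M cannot be positive definite
```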

Improvements needed in the example section
The identity matrix is not only positive-semidefinite but also positive definite (all its eigenvalues are >0). While what is written there is not wrong, it would be very confusing for somebody reading this for the first time, because you might ask why only the weaker statement is given. Also, in this example section a matrix N is mentioned which is never given. Ben300694 (talk) 12:47, 2 March 2017 (UTC)

Reference or proof to Further properties #1
It says: "If M ≥ N > 0 then (...) by the min-max theorem, the kth largest eigenvalue of M is greater than the kth largest eigenvalue of N." I can't see how this follows from the min-max theorem, and since I can't find this property in any textbook, I am wondering if it's correct. Can anybody provide a proof of that or a reference? BBC89 (talk) 10:44, 4 July 2018 (UTC)

Typo?
In the article, under Simultaneous diagonalization, I read the statement: "$$X^T M A = \Lambda$$". I am almost certain that the following is meant: "$$X^T M X = \Lambda$$". Can someone well-versed in matrix algebra confirm this? Redav (talk) 18:12, 27 January 2019 (UTC)
 * My fault. Fixed it. Fvultier (talk) 18:40, 27 January 2019 (UTC)

Hessian and local minimum
The page contains the following statement:

"More generally, a twice-differentiable real function $$f$$ on $$n$$ real variables has local minimum at arguments $$x_1, \ldots, x_n$$ if its gradient is zero and its Hessian (the matrix of all second derivatives) is positive semi-definite at that point. Similar statements can be made for negative definite and semi-definite matrices."

However, a positive semi-definite Hessian doesn't guarantee a local minimum; the point may also be a saddle point. So either it must be changed to "positive definite", or it may be stated that if a point is a local minimum, then the Hessian must be positive semi-definite.
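A standard counterexample supporting this point (a NumPy sketch; the function is a textbook example, not taken from the article): f(x, y) = x^2 + y^3 has zero gradient at the origin and a positive semidefinite Hessian there, yet the origin is not a local minimum.

```python
import numpy as np

# f(x, y) = x^2 + y^3: the gradient (2x, 3y^2) vanishes at the origin, and
# the Hessian there is diag(2, 0), which is positive semidefinite; yet
# f(0, -t) = -t^3 < 0 = f(0, 0) for every t > 0, so the origin is a saddle.
def f(x, y):
    return x**2 + y**3

hessian_at_origin = np.array([[2.0, 0.0],
                              [0.0, 0.0]])
hessian_eigenvalues = np.linalg.eigvalsh(hessian_at_origin)  # 0 and 2: PSD, not PD
nearby_value = f(0.0, -0.1)                                  # below f(0, 0) = 0
```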

"Negative semi-definite" listed at Redirects for discussion
A discussion is taking place to address the redirect Negative semi-definite. The discussion will occur at Redirects for discussion/Log/2020 June 9 until a consensus is reached, and readers of this page are welcome to contribute to the discussion. 1234qwer1234qwer4 (talk) 14:36, 9 June 2020 (UTC)

IP-user edits
An IP user is edit warring to add a confusing comment to the lead of the article. These edits are not acceptable, as they consist essentially of an alleged proof that a matrix that has been supposed to be Hermitian is indeed Hermitian (circular reasoning). If the user continues this way, I'll ask for an edit block per WP:3RR. D.Lazard (talk) 15:04, 2 May 2021 (UTC)

Cleaning up / clarifying the intro section
D.Lazard recently reverted my edits on the intro section which, among other things, defined a positive definite matrix in terms of the angle between $$z$$ and $$Mz$$. D.Lazard pointed out that this only applies in Euclidean spaces, so strictly speaking the definition should only be made in terms of inner products. I recognize this point and thank D.Lazard for the correction.

However, I still think that the intro section could use some cleanup and clarification and that the rest of my edits were essentially good. I'd also like to point out that the current intro section still has a sentence which asserts that the angle between $$z$$ and $$Mz$$ is "within an angle of $$\pi/2$$ of $$z$$," without the clarification that this only applies in Euclidean space.

There are a couple of issues here. First, if we're going to stick with the general outline of the current intro, where positive definite, positive semi-definite, negative definite, and negative semi-definite are all defined separately, then I think it would make sense for the very first sentence to go something like this:
 * In linear algebra, a symmetric $$n \times n$$ real matrix $$M$$ is said to be positive-definite if for any nonzero real vector $$z$$, the inner product $$z^\textsf{T}Mz$$ is strictly greater than zero. In a Euclidean space, this implies that the angle between $$z$$ and $$Mz$$ is strictly less than 90 degrees.

I think it's important to include the connection to right angles in Euclidean space because, at least going from my own experience, the whole concept of (positive) definite matrices only started to make sense to me when I read an explanation that made that connection. At bare minimum, we should point out that $$z^\textsf{T}Mz$$ is an inner product between $$z$$ and $$Mz$$; while that may be kind of obvious to anyone who already understands these concepts, to someone who is new to linear algebra it just looks like a rather arbitrary sequence of matrix-vector multiplications. Since the intro already has a sentence explaining that $$z^\textsf{T}$$ means the transpose of $$z$$ (a sentence which honestly seems a little unnecessary to me), it seems like we are already on some level committed to making this article accessible for beginners in linear algebra.

Secondly, I think there's an argument to be made that since the title of the article is Definite matrix and not Positive definite matrix, the first sentence should actually be some kind of more general definition of definite matrices, and not a definition of positive definite ones. It might be tricky to come up with an easy-to-understand definition of the general concept before explaining the specific categories, though, so I'd only support this change if we can come up with a really clear definition that flows well into the definitions of positive definite, positive semi-definite, etc. Montgolfière (talk) 19:45, 4 May 2021 (UTC)
 * The lead has many other issues. IMO, the sentence on angles is an explanation that is not useful for many users. So, as with most such explanations, it does not belong in the lead but in a specific section. On the other hand, the main properties of definite matrices must be summarized in the lead, which is not the case. Among these properties: being the matrix on any basis of a definite quadratic or Hermitian form; being characterized by positiveness of eigenvalues; positive-definiteness of the Hessian matrix characterizes convex functions and is widely used in optimization and convex analysis; definiteness is an affine property (that is, it is invariant under a change of basis of positive determinant), and thus it is defined on vector spaces that are not inner product spaces; etc.
 * Every edit of the lead must take this into account. D.Lazard (talk) 09:09, 5 May 2021 (UTC)
 * We may have to sort of agree to disagree about the merits of including the angle explanation in the lead. I should point out, however, that there currently already is a vague reference to the connection with right angles in the first paragraph: "Put differently, $$Mz$$ is in the general direction of $$z$$ (within an angle of $$\pi/2$$ of $$z$$)." I think we can both agree that this sentence is badly worded in at least two ways; first, "the general direction of" is very imprecise, and secondly, it doesn't make reference to the fact that this property only holds in Euclidean spaces. So as a first step, can we agree to change that sentence to something like: "In a Euclidean space, this implies that the angle between $$z$$ and $$Mz$$ is strictly less than 90 degrees."? It could stay at the end of the first paragraph, rather than in the middle as I had proposed earlier. That would surely be an improvement. Montgolfière (talk) 21:15, 5 May 2021 (UTC)
 * IMO, this mention of angle is misplaced in the lead, and I have removed it. Possibly, it could be added again in some section. By the way, I have completely rewritten the lead. D.Lazard (talk) 16:41, 6 May 2021 (UTC)

Clarification on "resp." in intro
Hello all,

In the intro there are various conditions for a matrix to be Definite, of the form:

> M is congruent with a diagonal matrix with positive (resp. nonnegative) real entries.

What does *resp.* mean here? Can somebody clarify?

(AFAIK positive and nonnegative are not equivalent, this is confusing)

--kibibu (talk) 02:44, 20 September 2021 (UTC)
 * . "resp." means "respectively". However, as used here with many occurences, this is mathematical jargon that should not be used in the lead. D.Lazard (talk) 08:48, 20 September 2021 (UTC)

Displaying problem
The page is displayed like this on my browser. Is the problem on my end? I don't know how to fix this, but it doesn't display like this on others' computers. Plurm (talk) 01:48, 3 June 2023 (UTC)


 * It is fixed now. Thanks. Plurm (talk) 05:00, 4 June 2023 (UTC)

Error in equivalent conditions
Some of the equivalent conditions in the first section are incorrect. The matrix does not have to be symmetric. See for example https://math.stackexchange.com/questions/1954167/do-positive-semidefinite-matrices-have-to-be-symmetric 158.38.1.98 (talk) 10:59, 23 May 2024 (UTC)


 * The equivalent conditions in the lead all imply that the matrix is symmetric, and are all equivalent to the definition given in the first paragraph, which also assumes that the matrix is symmetric. So there is nothing incorrect here.
 * The last sentence of the lead also mentions these generalizations to non-symmetric matrices. It seems that you consider these generalizations as the norm and that it is incorrect not to emphasize them. This is a legitimate opinion, but other editors, including myself, have the opposite opinion. In any case, there is nothing incorrect in the present state of the lead. D.Lazard (talk) 12:54, 23 May 2024 (UTC)