Talk:Trace (linear algebra)

orthogonal
{1 0}

{0 1}

and

{1/sqrt(2) -1/sqrt(2)}

{1/sqrt(2) 1/sqrt(2)}

Both are orthonormal bases. The diagonals aren't equal. Is the definition of trace seriously just summing along the diagonal?


 * Yes, it is. Seriously. You seem to be making some unwarranted assumption, and I can only guess what it is that you are doing wrong. Idea 1: You assume that orthogonal matrices all have the same trace. This is not true. Idea 2: You seem to assume that similar (conjugate) matrices have the same trace (that is true). But the two matrices you have here are not similar: one realizes the identity function, the other realizes a rotation. — Preceding unsigned comment added by 217.95.169.8 (talk) 14:49, 13 January 2019 (UTC)
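For the record, the point above is easy to check numerically. The following is a NumPy sketch (not part of the original discussion): both matrices are orthogonal, yet their traces differ, consistent with trace being invariant only under similarity, not under change between arbitrary orthonormal bases of two different operators.

```python
import numpy as np

# The identity and a 45-degree rotation: both orthogonal, but not similar,
# so nothing forces their traces to agree.
I = np.eye(2)
R = np.array([[1.0, -1.0],
              [1.0,  1.0]]) / np.sqrt(2)

# Both are orthogonal: Q^T Q = I.
assert np.allclose(I.T @ I, np.eye(2))
assert np.allclose(R.T @ R, np.eye(2))

print(np.trace(I))  # 2.0
print(np.trace(R))  # sqrt(2), about 1.4142
```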

— Preceding unsigned comment added by 107.3.37.113 (talk) 07:49, 1 August 2015 (UTC)

I do not think that the article title "Trace (matrix)" is any clearer than "Trace of a matrix". In fact, quite the opposite.

Wikipedia articles are about concepts, not about words. This is not about the word "trace" in the context of matrix theory (in which case it should properly be called "Trace (matrix theory)" or "Trace (linear algebra)"), but it is about the concept "Trace of a matrix", which is a perfectly self-explanatory headline of the article. AxelBoldt 23:10 Jan 31, 2003 (UTC)


 * Trace is empty. let's move it there -- Tarquin 23:26 Jan 31, 2003 (UTC)

I would like to see 2 things: a brief paragraph on the importance of the trace, and an intuitive proof of the main feature (is it the main one?) tr(AB) = tr(BA). Thanks.
 * I agree that this article should be renamed, mainly because the concept of the trace is more general than that of a matrix. In other words, there is a class of linear transformations for which the trace makes sense but which are not matrices. Move the article to Trace. -Lethe | Talk

Coordinate-free definition
Why is the map given by $$v \otimes h \mapsto (w \mapsto h(w)v)$$? If I'm not mistaken this would suggest that the image of any linear operator is one-dimensional. 99.240.144.168 (talk) 02:23, 31 December 2011 (UTC)

Trace of matrix products
I would like to suggest a change regarding the trace of the product of several (>2) matrices, specifically the part beginning "If A, B, and C are square...". The phrase "all permutations" is, strictly speaking, not correct. The last three terms in the chained equality, while looking like general permutations, are transposes of the first three with the transpose marks omitted because of symmetry. My motivation for the change is that if more than three matrices are involved, only cyclic permutations may be used to reorder the matrices. Transposing with symmetry assumed will make more (apparent) permutations possible, but not all possible permutations. My quick and dirty suggestion is to delete the entire paragraph along with the chained equality. It could be fixed, but it is just a special case involving the previously discussed cyclic permutation and transpose along with the symmetry condition. Doggit 17:19, 10 August 2006 (UTC)

Direct Link
I set the link from spur to this article. --Kajaktiger 16:47, 4 November 2006 (UTC)

Derivatives
Don't revert back to d Tr = I. This is not true: there is a vectorisation operator present if you must present the information in this form. Petersen & Pedersen seem to have dropped this when they copy-pasted from Brookes.

Just goes to show what sort of crap you can find on the internet. —The preceding unsigned comment was added by 150.203.45.188 (talk) 09:35, 5 December 2006 (UTC).

Inner product
I think that following $$\langle A, B\rangle = \mathrm{tr}\left(A^*B\right)$$, it should be mentioned that $$A^*$$ is the conjugate transpose of A. 131.130.90.152 10:36, 22 January 2007 (UTC)

Trace of a 1×1 matrix
Trace can be used in a non-obvious way where one considers a scalar to be the trace of a 1×1 matrix. For example, if X is an n×1 column vector and A is an n×n square matrix, then $$X^TAX$$ is a 1×1 matrix which is often regarded as a scalar, but it can be beneficial to use the trace operator instead (to use the cyclic property). This technique is used in the Estimation of covariance matrices article. Can anyone provide a simpler example to add to the main article? Roman V. Odaisky 17:23, 4 October 2007 (UTC)
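The 1×1-trace trick described above can be sketched numerically (a NumPy sketch with arbitrary illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # n x n
x = rng.standard_normal((4, 1))   # n x 1 column vector

# x^T A x is a 1x1 matrix; treating it as the trace of that 1x1 matrix
# lets us apply the cyclic property: tr(x^T A x) = tr(A x x^T).
scalar = (x.T @ A @ x).item()
via_trace = np.trace(A @ x @ x.T)
print(scalar, via_trace)   # the two values agree
```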

$$\mathrm{tr}(AB)^n$$
I’m confused by the formula $$0 \leq \mathrm{tr}(AB)^n \leq \mathrm{tr}(A)^n \mathrm{tr}(B)^n$$. Shouldn’t it have read $$\mathrm{tr}((AB)^n)$$ instead of the middle term? Or perhaps the formula should be split into two inequalities:


 * $$\mathrm{tr} AB \leq \mathrm{tr} A \cdot \mathrm{tr} B$$
 * $$\mathrm{tr} A^n \leq (\mathrm{tr} A)^n$$ (if n is natural then this follows from the previous inequality)

Roman V. Odaisky 17:33, 4 October 2007 (UTC)

Various edits
Hi---just wanted to leave a note about the various edits I've been making to the article. If anyone has objections or comments, please respond here---thanks. Spireguy (talk) 18:28, 4 March 2008 (UTC)

Intuition/example
I know the determinant of a transformation matrix can be thought of as the change in volume it causes. For example, if det(A) = 1, then A doesn't change the (n-dimensional) volume of an object it is applied to. Could someone provide a similar intuition for trace? What is the meaning of the sum of the eigenvalues? —Ben FrantzDale (talk) 11:59, 27 March 2008 (UTC)

This is addressed in the article under "derivatives", although perhaps it is not clear as stated. The trace measures the infinitesimal change in volume, since it is the derivative of the determinant. Do you think this should be emphasized or re-stated in a more down-to-earth manner? -- Spireguy (talk) 16:41, 27 March 2008 (UTC)
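The relationship mentioned above can be illustrated with a small numerical sketch (the matrix is arbitrary): near the identity, det(I + tA) ≈ 1 + t·tr(A), so the trace is the derivative of the determinant at the identity.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])        # tr(A) = 5

# Finite-difference derivative of det(I + tA) at t = 0.
t = 1e-6
deriv = (np.linalg.det(np.eye(2) + t * A) - 1.0) / t
print(deriv)   # close to 5.0 = tr(A)
```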

OK, I went ahead and put in an example. Comments? -- Spireguy (talk) 16:49, 27 March 2008 (UTC)

Symmetric and anti-symmetric matrices
Let $$A$$ be a symmetric matrix, and $$B$$ an anti-symmetric matrix. Then
 * $$\operatorname{tr}(AB) = 0$$

I was going to add this to the article, but can't see the proof off the top of my head. -- Taku (talk) 02:40, 1 October 2008 (UTC)

Proof: $$ A = A^T \quad B = -B^T $$

So

$$ (AB)^T = B^T A^T = -BA$$

$$\operatorname{tr}(AB)=\operatorname{tr}( (AB)^T ) = - \operatorname{tr}( BA) = - \operatorname{tr}(AB)$$

And thus

$$\operatorname{tr}(AB)=0$$

wpoely86 (talk) 16:58, 24 February 2010 (UTC)
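The identity proved above is also easy to confirm numerically (a NumPy sketch with a random matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M + M.T        # symmetric:      A = A^T
B = M - M.T        # anti-symmetric: B = -B^T

print(np.trace(A @ B))   # zero up to rounding error
```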

the trace of a matrix is the sum of its eigenvalues?
Is this statement true? Doesn't that only apply to Hermitian matrices which have been diagonalized? 20:11, 17 January 2009 (UTC) —Preceding unsigned comment added by 128.113.65.174 (talk)

No, it always holds, even for non-diagonalizable matrices. As the article notes, you can see Jordan normal form for a further explanation. -- Spireguy (talk) 02:26, 18 January 2009 (UTC)
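For the record, this can be checked on a non-diagonalizable example (a Jordan block; NumPy sketch):

```python
import numpy as np

# A 2x2 Jordan block with eigenvalue 2 is not diagonalizable,
# yet its trace still equals the sum of its eigenvalues: 2 + 2 = 4.
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])

print(np.trace(J))                         # 4.0
print(np.sum(np.linalg.eigvals(J)).real)   # 4.0 as well
```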

I find this statement confusing too. Wouldn't it be better to formulate it as "The trace of a matrix is equal to the sum of its (complex) eigenvalues,..."? 31 October 2012 — Preceding unsigned comment added by Fuzzyrandom (talk • contribs) 13:20, 31 October 2012 (UTC)

why is the reference section being used for footnotes?
????? —Preceding unsigned comment added by 86.20.235.138 (talk) 12:47, 14 March 2009 (UTC)

Trace of a commutator
The article contains the text "When both A and B are n by n, the trace of the commutator of A and B vanishes: tr([A, B]) = 0. Conversely, any square matrix with zero trace is the commutator of some two matrices.[3] In particular, the identity matrix is never similar to the commutator of any matrices.". Surely this can't be true! I mean, if you take two n×n matrices which commute, then their trace will equal the trace of the identity matrix, which is never zero. Is this meant to say something else? SetaLyas (talk) 12:53, 25 May 2009 (UTC)

I think you are confusing two different notions of "commutator." The relevant one here is [A,B]=AB-BA, whereas you seem to be thinking of A^(-1)B^(-1)AB. When A,B commute, the former is zero, while the latter is the identity matrix. The latter version is not directly relevant to the trace. -- Spireguy (talk) 13:26, 25 May 2009 (UTC)
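The distinction is easy to see numerically (a NumPy sketch with random matrices): the additive commutator AB − BA always has zero trace, by linearity and tr(AB) = tr(BA).

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# tr([A, B]) = tr(AB) - tr(BA) = 0 for the additive commutator.
print(np.trace(A @ B - B @ A))   # zero up to rounding error
```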

Algebraic Multiplicities
"If A is a square n-by-n matrix with real or complex entries and if λ1,...,λn are the (complex and distinct) eigenvalues of A (listed according to their algebraic multiplicities)"

The line says that A is n-by-n and that the eigenvalues \lambda_1,...,\lambda_n should be distinct. Then it requires the eigenvalues to be listed according to their algebraic multiplicities. But if the eigenvalues are all distinct, their algebraic multiplicities are of course all 1. So what does the "(listed according ...)" clause add? —Preceding unsigned comment added by 79.138.132.212 (talk) 08:41, 4 September 2009 (UTC)

$$\mathrm{tr}(AB)^n$$
Can somebody point me to a proof of the statement: $$0 \leq \mathrm{tr}(AB)^n \leq \mathrm{tr}(A)^n \mathrm{tr}(B)^n$$?
 * No, since it's totally false. Algebraist 22:06, 24 February 2010 (UTC)
 * I've found a proof for the case n=1 for a real positive semidefinite matrix, so for general n it's also proven. wpoely86 (talk) 11:24, 25 February 2010 (UTC)

Similarity Invariance
Perhaps it should be mentioned that the matrix P must have the same dimensions as A. --Tomtheebomb (talk) 22:46, 3 April 2010 (UTC)

Also this might be linked in here: Von Neumann's trace inequality http://en.wikipedia.org/wiki/Von_Neumann%27s_trace_inequality --Tomtheebomb (talk) 23:34, 3 April 2010 (UTC)

Trace only for diagonal matrix?
The first line of the article says "the trace of an n-by-n diagonal matrix A is defined to be the sum of the elements ...", whereas, the first example shows calculation of trace for a non-diagonal matrix. As far as I know, trace is defined for square matrix, and does not require diagonal matrix. -- Samikrc (talk) 05:30, 7 February 2011 (UTC)

Trace of a symmetric matrix 0?
How can it be zero always? It is mentioned under the section example. — Preceding unsigned comment added by Tagib (talk • contribs) 07:03, 14 December 2011 (UTC)


 * Because the claim is false; almost any simple example disproves it. However, the trace of the product of a symmetric and a skew-symmetric matrix is always zero. wpoely86 (talk) 13:22, 3 October 2012 (UTC)

Trace of a projection matrix
The trace of a projection matrix is the dimension of the target space. With $$P_{X}=X\left(X^{T}X\right)^{-1}X^{T}$$, we have $$\text{Tr}\left(P_{X}\right)=\text{rank}\left(X\right)$$. Jackzhp (talk) 03:08, 9 February 2011 (UTC)
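A quick numerical sketch of the claim above (NumPy; X is an arbitrary full-column-rank matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 3))   # full column rank (almost surely)

# Orthogonal projection onto the column space of X.
P = X @ np.linalg.inv(X.T @ X) @ X.T

print(np.trace(P))                # 3.0 = rank(X)
print(np.linalg.matrix_rank(X))   # 3
```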

"Trace is cognate to 'spoor'"
This doesn't make any sense. Can we clarify this point? —Preceding unsigned comment added by 152.3.68.8 (talk) 23:12, 23 March 2011 (UTC)

Paratrace?
Is the sum of the elements on a diagonal parallel to the main diagonal given a name?

At least one use of $$\operatorname{tr_p}$$ is familiar. If $$D$$ is a distance matrix, a permutation of indices of $$D$$ which minimizes $$\operatorname{tr_1}(D)$$ is a solution of the traveling salesman problem. Regards, ... PeterEasthope (talk) 02:52, 7 October 2019 (UTC)

Coordinate free definition (2)
The claim "the space of linear operators on a finite-dimensional vector space $V$ (defined over the field $F$) is isomorphic to the space $V ⊗ V^{∗}$" is not correct: the space of linear operators on $V$ has dimension n^2 while the space $V ⊗ V^{∗}$ has dimension 2n. Clearly the map between the two spaces is not surjective.
 * You are incorrect. The space $V ⊗ V^{∗}$ has dimension n^2. Gumshoe2 (talk) 02:36, 27 January 2022 (UTC)

"Coordinate-free definition" is not coordinate-free
It is ok to say that the map $$V\otimes V^\ast\to F$$ is defined in a coordinate-free way, but the isomorphism of $$V\otimes V^\ast$$ with $$Hom(V,V)$$ cannot be understood without coordinates/bases. So the section is saying that coordinates/bases are not used, but it is just using a linguistic trick of avoiding explicitly pointing out the part where they are used. The essential content of the section is fine but it should be phrased as just putting the trace in the language of tensors, not as giving a definition that doesn't use coordinates. (It also may be more appropriate on the wiki page for tensor product.) Gumshoe2 (talk) 02:45, 27 January 2022 (UTC)


 * I thought the isomorphism $$V\otimes V^*\rightarrow \text{Hom}(V,V)$$ was the linear extension of
 * $$u\otimes \varphi \mapsto A$$ where $$A(v) = \varphi(v)u.$$
 * This direction at least looks coordinate independent to me, though I'm not sure how to construct the inverse in a coordinate-free way. But as long as this is an isomorphism, then we can just define the inverse as being the inverse to this map, and that is coordinate-free.
 * To show the map is an isomorphism, since the dimension of both spaces is the same we need only show the kernel is trivial. It's not too hard to show this. Zephyr the west wind (talk) 10:37, 11 July 2022 (UTC)

Cyclic property of trace: why 4 matrices instead of 3?
In the article, cyclicity of trace is shown by the equation

$$\operatorname{tr}(ABCD) = \operatorname{tr}(BCDA) = \operatorname{tr}(CDAB) = \operatorname{tr}(DABC).$$

Why are 4 matrices used here as opposed to 3 (say just A, B, C)? Using three matrices would demonstrate perfectly well that trace is invariant under cyclic permutation. Zephyr the west wind (talk) 10:43, 11 July 2022 (UTC)

Trace of AB and BA matrices
The statement "It can also be proven that tr(AB) = tr(BA) for any two matrices A and B." is incorrect. This equality holds only for matrices with compatible shapes, so that both AB and BA are defined. If A is an m × n matrix, B is an n × p matrix and m ≠ p, then the product BA is not defined, and the trace is not defined for AB either.

The statement can be rectified by either

a) stating that A and B are m × n and n × m matrices respectively (as is done below in the article, but this is confusing; it took me a while to notice this special structure in the text)

b) equivalently, stating that the product AB is a square matrix (then BA is also square)

c) stating that "tr(AB) = tr(BA) for any two matrices A and B for which the trace is defined." (implying (b)).

d) Using the "Frobenius product" article notation: "tr(A^TB) = tr(BA^T) for any two m × n matrices A and B"

Which would be the most clear? Pterodaktilis (talk) 07:01, 22 July 2023 (UTC)
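Whichever phrasing is chosen, the underlying identity is the rectangular case, which can be sketched numerically (NumPy; shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 5))   # m x n
B = rng.standard_normal((5, 2))   # n x m

# AB is 2x2 and BA is 5x5, yet their traces agree.
print(np.trace(A @ B))
print(np.trace(B @ A))
```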