Wikipedia:Reference desk/Archives/Mathematics/2013 January 5

= January 5 =

Finite generation of ideal
Let R be the ring of continuous functions from [0,1] to the real numbers. Fix c in [0,1] and let Mc = ker Ec where Ec denotes evaluation at c, a ring homomorphism from R to the real numbers. That is, Ec(f) = f(c) for f in R. What is a nice clean way to show Mc is not finitely generated? I figure that one way is to suppose that
 * $$A = \left \{ f_1, f_2, ..., f_n \right \}$$

is a minimal generating set and try to come up with $$f_{n+1}$$ which cannot possibly be an R-linear combination of the $$f_k$$, but that feels like a lot of guesswork. (And I didn't succeed.) Thanks for the help. — Anonymous Dissident  Talk 13:19, 5 January 2013 (UTC)


 * Well, intuitively the reason it's not finitely generated is because continuous functions can go to 0 arbitrarily slowly, and a linear combination of functions can't go to 0 slower than its components. So use that to guide your attempt.  Given $$\{f_1, f_2, ..., f_n\}$$, build $$f_{n+1}$$ which goes to 0 slower than any of $$f_i$$, in the sense that $$\lim_{x\to c} |f_{n+1}(x)| / |f_i(x)| = \infty$$.  Then argue that this property is preserved if you replace the denominator with a linear combination of the $$f_i$$.--108.36.196.101 (talk) 14:07, 5 January 2013 (UTC)
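(Editorial aside.) The "goes to 0 slower" heuristic can be made concrete with a toy example: take c = 0 and the family $$f_i(x)=x^{1/i}$$, an illustrative choice not drawn from the thread. Then $$f_{n+1}(x)=x^{1/(n+1)}$$ vanishes at 0 but does so slower than every $$f_i$$ with $$i\le n$$, since the ratio $$x^{1/(n+1)-1/i}$$ blows up as $$x\to 0^+$$:

```python
# Toy illustration: f_i(x) = x**(1/i) all vanish at c = 0,
# but f_{n+1}(x) = x**(1/(n+1)) goes to 0 slower than each f_i, i <= n,
# because |f_{n+1}(x)| / |f_i(x)| = x**(1/(n+1) - 1/i) -> infinity as x -> 0+.

def ratio(n, i, x):
    """|f_{n+1}(x)| / |f_i(x)| for the hypothetical family f_k(x) = x**(1/k)."""
    return x ** (1.0 / (n + 1) - 1.0 / i)

n = 3
for i in range(1, n + 1):
    # The ratios grow without bound as x shrinks toward c = 0.
    print(i, ratio(n, i, 1e-3), ratio(n, i, 1e-9))
```

The same scheme extends past any finite stage: given any finite list of such functions, a smaller exponent yields a function in M0 outside the heuristic reach of the list.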


 * (edit conflict) Here's a rough sketch. Define a partial order $$\le$$ on the generators $$f_i$$ by declaring that $$f_i\le f_j$$ if $$\lim_{x\to c} \frac{f_i(x)}{f_j(x)}$$ exists.  Now choose some element $$f_k$$ that is maximal with respect to this partial order and consider writing:
 * $$|f_k(x)|^{1/2}=\sum_{i=1}^n g_i(x)f_i(x).$$
 * Divide both sides by $$f_k$$ and take the limit as $$x\to c$$. Sławomir Biały  (talk) 14:28, 5 January 2013 (UTC)
 * Doesn't this work only if $$\le$$ is a total order? — Anonymous Dissident  Talk 22:37, 5 January 2013 (UTC)
 * You're right that there's a gap. One needs to show first that for any finite collection $$f_1,\dots,f_n$$ of elements of Mc, there is an f in Mc such that $$f_i\le f$$ for all i.  Some linear combination of the $$f_i$$ should work.  Take this f instead of $$f_k$$ in the above suggestion.   Sławomir Biały  (talk) 23:40, 5 January 2013 (UTC)
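(Editorial aside.) Assembling the thread's pieces into one sketch, modulo care at points where f vanishes near c: if $$M_c=(f_1,\dots,f_n)$$ and $$f\in M_c$$ satisfies $$f_i\le f$$ for all i, then $$|f|^{1/2}\in M_c$$, so $$|f|^{1/2}=\sum_{i=1}^n g_i f_i$$ for some continuous $$g_i$$. Dividing by $$|f|^{1/2}$$ and letting $$x\to c$$ gives
 * $$1=\sum_{i=1}^n g_i(x)\,\frac{f_i(x)}{f(x)}\cdot\frac{f(x)}{|f(x)|^{1/2}}\;\to\;0,$$
since each $$g_i$$ is bounded near c, each $$f_i/f$$ has a finite limit by the choice of f, and $$f(x)/|f(x)|^{1/2}=\pm|f(x)|^{1/2}\to 0$$. The contradiction shows $$M_c$$ is not finitely generated.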

Identities true for only small (square) matrices
Given a bout or [of] algebra involving a non-commuting species, is it sufficient to verify its correctness for 2×2 matrices, to know that it is correct in general? Clearly 1×1 matrices are not sufficiently general since XY≡YX for this size, and this leads to the suspicion that 2×2 matrices might not be general enough either. So to make the question more specific: are there any identities which are true for n×n matrices, but not for (n+1)×(n+1) matrices, for some n > 1? Of course, mathematicians are sneaky and there are at least three obvious ways they might cheat here: 1. use det or trace to recover the size, 2. use quantifiers to recover the number of degrees of freedom and 3. use novel functions to do either of the first two. In an attempt to avoid such tediousness, I'll add the conditions that 1. only identities involving only the operations (possibly infinite in number) of addition, multiplication, multiplication by a scalar, inversion, the nth root of the determinant and one nth of the trace are admissible, and 2. these identities should be invariant under a change of basis. (Thanks for reading all that), --catslash (talk) 19:31, 5 January 2013 (UTC)
 * Not sure exactly what you're looking for, but maybe the Rule of Sarrus is appropriate? It works for n=1,2,3, but not higher. Staecker (talk) 19:33, 5 January 2013 (UTC)
 * Actually now that I mention it, I don't think it works for n=1. Staecker (talk) 19:35, 5 January 2013 (UTC)
 * Doesn't work for n=2 either. Duoduoduo (talk) 23:51, 5 January 2013 (UTC)
 * Heh- yeah. Sorry- Staecker (talk) 02:53, 7 January 2013 (UTC)
 * As I understand the word "identity" (although the question is somewhat unclear on this point), for each n the only "identity" that holds for all n×n matrices is the Cayley-Hamilton theorem. Of course, this involves more symmetric functions of the eigenvalues than the trace and determinant, so it doesn't meet the criteria that you have imposed.  Thus for n > 3 there are no "identities" whatsoever.  So I think I must have misunderstood something.   Sławomir Biały  (talk) 20:05, 5 January 2013 (UTC)


 * Sorry: I need to state my question more clearly. Suppose I have an expression $$\scriptstyle{1 + \mathbf{X} (1 - \mathbf{Y} \mathbf{X})^{- 1} \mathbf{Y}}$$ (say) where $$\scriptstyle{\mathbf{X}}$$ and $$\scriptstyle{\mathbf{Y}}$$ are square matrices (which need not commute), and I rewrite this as $$\scriptstyle{(1 - \mathbf{X} \mathbf{Y})^{- 1}}$$, so now I have the putative identity (i.e. allegedly true for all $$\scriptstyle{\mathbf{X}}$$ and $$\scriptstyle{\mathbf{Y}}$$)


 * $$1 + \mathbf{X} (1 - \mathbf{Y} \mathbf{X})^{- 1} \mathbf{Y} \equiv (1 - \mathbf{X} \mathbf{Y})^{- 1}$$


 * I decide to check this for all possible 2×2 $$\scriptstyle{\mathbf{X}}$$ and $$\scriptstyle{\mathbf{Y}}$$ (by expanding into 4 scalar identities say). Having verified it is an identity for 2×2 matrices, can I be confident that it is true for n×n matrices? (I am, but should I be?) --catslash (talk) 21:20, 5 January 2013 (UTC)
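(Editorial aside.) Short of the full symbolic 2×2 expansion, the putative identity (the "push-through" identity) can be spot-checked numerically for several sizes at once. A sketch using numpy, with entries scaled small so that $$1-XY$$ stays comfortably invertible:

```python
import numpy as np

# Numeric spot-check of  1 + X (1 - Y X)^{-1} Y == (1 - X Y)^{-1}
# for random n x n matrices, n = 1..5.  (Weaker than a symbolic proof,
# but it exercises every size in one pass.)
rng = np.random.default_rng(0)

for n in range(1, 6):
    # Small entries keep the spectral radius of XY below 1.
    X = 0.1 * rng.standard_normal((n, n))
    Y = 0.1 * rng.standard_normal((n, n))
    I = np.eye(n)
    lhs = I + X @ np.linalg.inv(I - Y @ X) @ Y
    rhs = np.linalg.inv(I - X @ Y)
    assert np.allclose(lhs, rhs), n
print("identity holds numerically for n = 1..5")
```

This particular identity is in fact dimension-independent: it follows formally from $$X(1-YX)=(1-XY)X$$, with no appeal to the size of the matrices.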
 * So identities are allowed to relate two matrices? Can we have more than two?  If so, here is an identity that is true for all 2×2 matrices, but not for n×n matrices for n>2:
 * $$(XY-YX)^2Z = Z(XY-YX)^2.$$
 * -- Sławomir Biały (talk) 21:44, 5 January 2013 (UTC)


 * That's exactly the sort of thing I'm after - only it doesn't seem to quite work. Just looking at the 1,1 element of the 2×2 case, I get


 * $$(\mathrm{X}_{1,2} \mathrm{Y}_{2,1} - \mathrm{Y}_{1,2} \mathrm{X}_{2,1}) \mathrm{Z}_{1,1} + (\mathrm{X}_{1,1} \mathrm{Y}_{1,2} + \mathrm{X}_{1,2} \mathrm{Y}_{2,2} - \mathrm{Y}_{1,1} \mathrm{X}_{1,2} - \mathrm{Y}_{1,2} \mathrm{X}_{2,2}) \mathrm{Z}_{2,1}$$


 * on the left and


 * $$\mathrm{Z}_{1,1} (\mathrm{X}_{1,2} \mathrm{Y}_{2,1} - \mathrm{Y}_{1,2} \mathrm{X}_{2,1}) + \mathrm{Z}_{1,2} (\mathrm{X}_{2,1} \mathrm{Y}_{1,1} + \mathrm{X}_{2,2} \mathrm{Y}_{2,1} - \mathrm{Y}_{2,1} \mathrm{X}_{1,1} - \mathrm{Y}_{2,2} \mathrm{X}_{2,1})$$


 * on the right. The first term agrees, but not the second. Perhaps the sum of all eight signed permutations of $$\scriptstyle{\mathbf{X}}$$, $$\scriptstyle{\mathbf{Y}}$$ and $$\scriptstyle{\mathbf{Z}}$$ (instead of just four of them) would sum to zero for the 2×2 case (only)? With hindsight that would be an obvious analogy to $$\scriptstyle{\mathbf{X} \mathbf{Y} - \mathbf{Y} \mathbf{X} = 0}$$ in the 1×1 case. --catslash (talk) 23:27, 5 January 2013 (UTC)
 * Something of a moving target there. Could you explain the reasoning please (as checking mechanically is tedious)? And also how to generalize to larger n? Thanks, --catslash (talk) 23:40, 5 January 2013 (UTC)
 * Sorry, there was a typo in my original reply (a missing power of two). The reasoning is that $$XY-YX$$ has zero trace, so the Cayley-Hamilton theorem guarantees that for 2×2 matrices, $$(XY-YX)^2 + \det(XY-YX)\,I=0$$, so $$(XY-YX)^2$$ is a scalar multiple of the identity, and therefore commutes with any other matrix $$Z$$.  Clearly this is not true for higher-dimensional matrices.   Sławomir Biały  (talk) 23:46, 5 January 2013 (UTC)
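(Editorial aside.) The dimension dependence of $$(XY-YX)^2Z = Z(XY-YX)^2$$ is easy to exhibit computationally; with integer matrices the arithmetic is exact, so the 2×2 check is a genuine (if finite) verification rather than a floating-point one. A sketch, assuming random integer triples suffice to expose a 3×3 counterexample:

```python
import numpy as np

# Exact integer spot-check: (XY - YX)^2 commutes with every Z for 2x2
# matrices (by Cayley-Hamilton it is a scalar multiple of I), but not for 3x3.
rng = np.random.default_rng(1)

def commutator_square_is_central(n):
    """True iff (XY - YX)^2 Z == Z (XY - YX)^2 for one random integer triple."""
    X, Y, Z = (rng.integers(-5, 6, (n, n)) for _ in range(3))
    C2 = np.linalg.matrix_power(X @ Y - Y @ X, 2)
    return np.array_equal(C2 @ Z, Z @ C2)

print(all(commutator_square_is_central(2) for _ in range(20)))  # True, exactly
print(all(commutator_square_is_central(3) for _ in range(20)))  # almost surely False
```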

I'm still not clear on the context of the original question. Does "non-commuting species" mean that the entries of the matrices themselves come from a non-commutative ring, rather than from say the reals or complex numbers? What is a bout? --Trovatore (talk) 23:12, 5 January 2013 (UTC)
 * No, I may have meant that I was considering a non-commutative ring with an inverse (such as square matrices with complex elements) - but I'm unsure of the terminology. --catslash (talk) 23:27, 5 January 2013 (UTC)


 * What is a "bout"? --Trovatore (talk) 21:45, 5 January 2013 (UTC)


 * https://en.wiktionary.org/wiki/bout (Noun 1).--catslash (talk) 23:27, 5 January 2013 (UTC)


 * That still doesn't make much sense, as written. I suspect you meant to say "a bout of algebra" rather than "a bout or algebra". StuRat (talk) 23:47, 5 January 2013 (UTC)


 * I did - sorry for the confusion. --catslash (talk) 00:30, 6 January 2013 (UTC)

So the answer is $$\scriptstyle{(XY-YX)^2Z = Z(XY-YX)^2.}$$ - that works. Many thanks --catslash (talk) 00:59, 6 January 2013 (UTC)
 * See also Amitsur–Levitzki theorem --87.68.19.155 (talk) 18:32, 6 January 2013 (UTC)
 * So another answer for 2×2 is $$\scriptstyle{\mathbf{X} \mathbf{Y} \mathbf{Z} \mathbf{A} - \mathbf{X} \mathbf{Y} \mathbf{A} \mathbf{Z} - \mathbf{X} \mathbf{Z} \mathbf{Y} \mathbf{A} + \mathbf{X} \mathbf{Z} \mathbf{A} \mathbf{Y} + \mathbf{X} \mathbf{A} \mathbf{Y} \mathbf{Z} - \mathbf{X} \mathbf{A} \mathbf{Z} \mathbf{Y} - \mathbf{Y} \mathbf{X} \mathbf{Z} \mathbf{A} + \mathbf{Y} \mathbf{X} \mathbf{A} \mathbf{Z} + \mathbf{Y} \mathbf{Z} \mathbf{X} \mathbf{A} - \mathbf{Y} \mathbf{Z} \mathbf{A} \mathbf{X} - \mathbf{Y} \mathbf{A} \mathbf{X} \mathbf{Z} + \mathbf{Y} \mathbf{A} \mathbf{Z} \mathbf{X} + \mathbf{Z} \mathbf{X} \mathbf{Y} \mathbf{A} - \mathbf{Z} \mathbf{X} \mathbf{A} \mathbf{Y} - \mathbf{Z} \mathbf{Y} \mathbf{X} \mathbf{A} + \mathbf{Z} \mathbf{Y} \mathbf{A} \mathbf{X} + \mathbf{Z} \mathbf{A} \mathbf{X} \mathbf{Y} - \mathbf{Z} \mathbf{A} \mathbf{Y} \mathbf{X} - \mathbf{A} \mathbf{X} \mathbf{Y} \mathbf{Z} + \mathbf{A} \mathbf{X} \mathbf{Z} \mathbf{Y} + \mathbf{A} \mathbf{Y} \mathbf{X} \mathbf{Z} - \mathbf{A} \mathbf{Y} \mathbf{Z} \mathbf{X} - \mathbf{A} \mathbf{Z} \mathbf{X} \mathbf{Y} + \mathbf{A} \mathbf{Z} \mathbf{Y} \mathbf{X} = 0}$$ - that also works, and is nicely analogous to $$\scriptstyle{\mathbf{X} \mathbf{Y} - \mathbf{Y} \mathbf{X} = 0}$$ for 1×1 matrices. It's good to have a general answer for the n×n case. I would never have found the Amitsur–Levitzki theorem article for myself, so thank you for that, --catslash (talk) 01:11, 9 January 2013 (UTC)
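(Editorial aside.) The 24-term identity quoted above is the standard polynomial identity $$S_4(X,Y,Z,A)=\sum_\sigma \operatorname{sgn}(\sigma)\,M_{\sigma(1)}M_{\sigma(2)}M_{\sigma(3)}M_{\sigma(4)}$$, which by the Amitsur–Levitzki theorem vanishes on all 2×2 matrices (degree 2n = 4 for n = 2) but not on 3×3 matrices. It too can be machine-checked exactly with integer matrices; a random 3×3 quadruple generically violates it:

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation (given as a tuple) via its inversion count."""
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def s4(mats):
    """Standard identity S_4: signed sum of products over all 24 orderings."""
    n = mats[0].shape[0]
    total = np.zeros((n, n), dtype=mats[0].dtype)
    for p in itertools.permutations(range(4)):
        prod = np.eye(n, dtype=mats[0].dtype)
        for k in p:
            prod = prod @ mats[k]
        total = total + perm_sign(p) * prod
    return total

rng = np.random.default_rng(2)
mats2 = [rng.integers(-5, 6, (2, 2)) for _ in range(4)]
mats3 = [rng.integers(-5, 6, (3, 3)) for _ in range(4)]
print(s4(mats2))          # the zero matrix, exactly, for 2x2 inputs
print(s4(mats3).any())    # generically nonzero for 3x3 inputs
```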