Wikipedia:Reference desk/Archives/Mathematics/2008 April 16

= April 16 =

Short exact sequences of abelian groups
I'm trying to determine which abelian groups A will fit into a short exact sequence of the form $$ \textstyle 0 \rightarrow \mathbb{Z} \rightarrow A \rightarrow \mathbb{Z}_n \rightarrow 0 $$.

I know that I must have $$ \textstyle \mathbb{Z}_n \cong A / \mathbb{Z} $$. And by Lagrange's theorem, $$ |A| = \left[ A : \mathbb{Z} \right] \cdot | \mathbb{Z} | = | \mathbb{Z}_n | \cdot | \mathbb{Z} | $$, but I haven't been able to make any further headway than this. I managed to use this technique to prove that any group B that fits into $$ \textstyle 0 \rightarrow \mathbb{Z}_{p^m} \rightarrow B \rightarrow \mathbb{Z}_{p^n} \rightarrow 0 $$ is isomorphic to $$\textstyle \mathbb{Z}_{p^{m+n}} $$, but that worked because the orders were all finite, which fails in this case. Can anybody give me a suggestion on how to approach this? Thanks. Maelin (Talk | Contribs) 02:18, 16 April 2008 (UTC)


 * Okay, so you need to embed $$\mathbb{Z}$$ in A in such a way that its image is the kernel of a projection from A to $$\mathbb{Z}_n$$. I think one solution is to take A as $$\mathbb{Z} \times \mathbb{Z}_n$$, with embedding


 * $$\mathbb{Z} \rightarrow \mathbb{Z} \times \mathbb{Z}_n : j \mapsto (j, \overline{0})$$


 * and projection


 * $$\mathbb{Z} \times \mathbb{Z}_n \rightarrow \mathbb{Z}_n : (j, \overline{k}) \mapsto \overline{k}$$


 * (where $$\overline{k}$$ is the congruence class k mod n). Not sure if this is the only solution. Gandalf61 (talk) 10:28, 16 April 2008 (UTC)
 * It isn't. There's also the solution A = Z, with the maps being multiplication by n and the natural projection. There may be other solutions, depending on the value of n (e.g. if n = ab with a, b coprime, we can take A = Z × Z_a, with Z mapping into A by multiplication by b into the first factor). I don't have time to solve this completely; the classification theorem may be useful, and possibly the matrixy techniques used to prove it. Algebraist 13:10, 16 April 2008 (UTC)


 * From the theory of group extensions, any such group A must have a presentation $$ \langle x,y \mid y^n = x^s, y^{-1}xy = x^t \rangle $$ for some integers s and t. Since we want A to be abelian we must have t = 1 and then it is not too hard to see that A is isomorphic to $$ \mathbb{Z} \times \mathbb{Z}_{(n,s)} $$. Hence A is isomorphic to $$ \mathbb{Z} \times \mathbb{Z}_d $$ for some divisor d of n.


 * Also, for any integer s the homomorphisms $$\iota:\mathbb{Z} \rightarrow A$$ and $$\phi:A \rightarrow \mathbb{Z}_n$$ such that $$\iota(1) = x$$, $$ \phi(x) = 0$$, $$ \phi(y) = 1$$ do make the sequence exact, so d can be any divisor of n.


 * I think that you might have made a mistake in the finite case. There is an exact sequence $$ \textstyle 0 \rightarrow \mathbb{Z}_{p^m} \rightarrow \mathbb{Z}_{p^m} \times \mathbb{Z}_{p^n} \rightarrow \mathbb{Z}_{p^n} \rightarrow 0 $$ but $$\mathbb{Z}_{p^m} \times \mathbb{Z}_{p^n}$$ is not isomorphic to $$\mathbb{Z}_{p^{m+n}}$$ when both m and n are positive.
 * Matthew Auger (talk) 03:30, 17 April 2008 (UTC)
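Matthew Auger's finite counterexample can be checked by brute force. The sketch below (an illustrative demo, not from the thread; all names are mine) takes p = 2, m = n = 1, verifies that with ι(a) = (a, 0) and φ(a, b) = b the sequence 0 → Z_2 → Z_2 × Z_2 → Z_2 → 0 is exact, and confirms that Z_2 × Z_2 has no element of order 4, so it cannot be isomorphic to Z_4:

```python
from itertools import product

# Hypothetical demo: exactness of 0 -> Z_2 -> Z_2 x Z_2 -> Z_2 -> 0
# with i(a) = (a, 0) and phi(a, b) = b.
p, m, n = 2, 1, 1
Zpm = range(p**m)            # Z_{p^m} as residues 0..p^m-1
Zpn = range(p**n)            # Z_{p^n} as residues 0..p^n-1
A = list(product(Zpm, Zpn))  # the middle group Z_{p^m} x Z_{p^n}

i = lambda a: (a, 0)      # inclusion into the first factor
phi = lambda ab: ab[1]    # projection onto the second factor

# i is injective, phi is surjective, and im(i) = ker(phi):
assert len({i(a) for a in Zpm}) == len(Zpm)
assert {phi(ab) for ab in A} == set(Zpn)
assert {i(a) for a in Zpm} == {ab for ab in A if phi(ab) == 0}

def order(ab):
    """Additive order of ab in Z_{p^m} x Z_{p^n}."""
    k, cur = 1, ab
    while cur != (0, 0):
        cur = ((cur[0] + ab[0]) % p**m, (cur[1] + ab[1]) % p**n)
        k += 1
    return k

# Every element has order at most 2, so the group is not cyclic of order 4.
assert max(order(ab) for ab in A) == 2
```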
 * Matthew Auger's answer is quite clear, systematic, and elementary, so there is nothing really to improve in that solution. Variety is nice, however, so I wanted to mention a couple of other ideas.  A different approach that interested me was given earlier this year at sci.math; it introduces a number of more advanced concepts through example in a nice setting.  Another approach that might be helpful to some would be to note that Ext(Z/nZ, Z) = Z/nZ is cyclic, so the generator Z -n-> Z -> Z/nZ can be Baer summed repeatedly to get the other extensions.  These (including MA's solution) also work in the "finite case" above. JackSchmidt (talk) 00:47, 18 April 2008 (UTC)
 * Related to JackSchmidt's comment about Ext: since we want the resulting group to be abelian, we know that the action of Z/nZ on Z by conjugation will be trivial. Such short exact sequences are classified by the second group cohomology $$H^2(\mathbb{Z}/n\mathbb{Z};\mathbb{Z})$$. Thus the answer to your question is equivalent to computing the integral cohomology $$H^2(L_n;\mathbb{Z})$$ of an infinite-dimensional lens space $$L_n$$. Tesseran (talk) 16:56, 21 April 2008 (UTC)

Calculus and Tensors
Will someone please give me a simple, easy-to-comprehend definition of vector space, tensor, vector field, and tensor field? I have read the articles, but I still don't understand.

 Also, what is the integral of a partial derivative?  Thanks, Zrs 12 (talk) 02:55, 16 April 2008 (UTC)


 * For vectors, start with R², i.e. the Cartesian plane. Each point on the plane can be written as an ordered pair (x, y), and can be depicted either as just the point, or as an arrow from the origin to the point. You can then define addition on these arrows - to add two of them, join them head-to-tail and take the arrow that connects the tail of the first with the head of the second (equivalently, add the x and y coordinates separately). You can also define scalar multiplication - given the vector V and the scalar (i.e. real number) a, the multiplication aV is the vector heading in the direction of V, with length a times the length of V (if a is negative, then aV ends up pointing in the opposite direction to V). The plane, along with these two operations, forms a vector space over the real numbers - if you read the vector space article now, you can try and prove that these operations satisfy the vector space axioms given.
 * A tensor is a generalisation of vectors. Where a vector V has components indexed by a single subscript, say V_1 and V_2, a tensor can have components indexed by any number of subscripts - so, for example, T_11, T_12, T_13, T_21, T_22, T_23 would be the components of a rank 2 tensor of dimensions (2, 3). However, it still has most of the linearity of a vector (i.e. you can multiply it by a scalar and add it to another tensor of the same rank), but with some trickier operations involved as well.
 * A vector or tensor field is just a region where each point in the region has an associated vector/tensor. You can think of it as a function V(x) where entering a position x gives you a vector/tensor V. For example, say you're in a wind tunnel. At every point in the tunnel, the wind pushes on you with a different force. If you mapped the force on you at various points in the tunnel, that map would show you the force vector field. Confusing Manifestation (Say hi!) 04:21, 16 April 2008 (UTC)
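A minimal computational sketch of the ideas above (the function names are mine, purely illustrative): vectors in R² as Python tuples with the two vector space operations described, plus a toy vector field that assigns a vector to each point.

```python
# Vectors in R^2 as tuples, with the operations described above.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])   # componentwise addition

def scale(a, v):
    return (a * v[0], a * v[1])         # scalar multiplication

# A vector field assigns a vector to every point; here, a rigid rotation flow:
def V(x, y):
    return (-y, x)

assert add((1, 2), (3, 4)) == (4, 6)
assert scale(-2, (1, 2)) == (-2, -4)
assert V(1, 0) == (0, 1)   # at (1, 0) the field points in the +y direction
```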

question
Is it always true that dy/dx = 1/(dx/dy)? Thank you. Husseinshimaljasimdini (talk) 14:04, 16 April 2008 (UTC)


 * As long as both dy/dx and dx/dy are defined and non-zero, I think so, yes. It's important to note that $$\frac{\partial y}{\partial x}\ne\frac{1}{\frac{\partial x}{\partial y}}$$ in general, it only works for total derivatives. --Tango (talk) 14:15, 16 April 2008 (UTC)


 * Perhaps I am being dense here, but I can't think of a case where $$\frac{\partial y}{\partial x}\ne\frac{1}{\frac{\partial x}{\partial y}}$$, as long as both partial derivatives are defined and non-zero. Do you have an example ? Gandalf61 (talk) 14:30, 16 April 2008 (UTC)
 * Now that you mention it, neither can I, but I swear I remember being taught that... I see three possibilities: 1) I was taught wrong, 2) I'm remembering wrong or 3) We're both being dense. Anyone want to help out? --Tango (talk) 15:25, 16 April 2008 (UTC)
 * I believe either (1) or (2) holds. Algebraist 16:16, 16 April 2008 (UTC)
 * Yeah... perhaps it was $$\frac{\mathrm{d}y}{\mathrm{d}x}\ne\frac{1}{\frac{\partial x}{\partial y}}$$, which is true. --Tango (talk) 16:38, 16 April 2008 (UTC)
 * Maybe you're thinking of the fact that $$\frac{\partial x}{\partial y}\frac{\partial y}{\partial z}\frac{\partial z}{\partial x} = -1$$ (provided all three derivatives are defined). The minus sign shows up when the number of variables in the chain is odd. Partial derivatives are weird. -- BenRG (talk) 10:34, 17 April 2008 (UTC)
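BenRG's three-variable identity can be checked numerically. The sketch below (my own example, using the linear constraint F(x, y, z) = x + 2y + 3z − 6 = 0) approximates each partial derivative by a central difference, with the other variable in the chain held fixed, and confirms the product is −1:

```python
# Solve F(x, y, z) = x + 2y + 3z - 6 = 0 for each variable in turn:
def x_of(y, z): return 6 - 2*y - 3*z
def y_of(x, z): return (6 - x - 3*z) / 2
def z_of(x, y): return (6 - x - 2*y) / 3

h = 1e-6
y0, z0 = 1.0, 1.0
x0 = x_of(y0, z0)  # a point on the surface F = 0

# Central-difference approximations to the three partial derivatives:
dxdy = (x_of(y0 + h, z0) - x_of(y0 - h, z0)) / (2 * h)  # at constant z
dydz = (y_of(x0, z0 + h) - y_of(x0, z0 - h)) / (2 * h)  # at constant x
dzdx = (z_of(x0 + h, y0) - z_of(x0 - h, y0)) / (2 * h)  # at constant y

# The cyclic product is -1, not +1:
assert abs(dxdy * dydz * dzdx - (-1)) < 1e-6
```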

The proposed identity follows quickly from the chain rule. Suppose y = f(x) and x = g(y). Then
 * $${dy \over dx} = f'(x),$$

and
 * $${dx \over dy} = g'(y).$$

Now, since
 * $$ g(f(x)) = x, $$

we have
 * $$ {d \over dx} g(f(x)) = 1.$$

But
 * $$ {d \over dx} g(f(x)) = g'(f(x))\cdot f'(x) $$

(that's the chain rule). So

$$\begin{align} 1 & {} = g'(f(x))\cdot f'(x). \\ 1 & {} = g'(y) f'(x). \\ {1 \over f'(x)} & {} = g'(y). \\ {1 \over dy/dx} & {} = {dx \over dy}. \end{align}$$ Michael Hardy (talk) 20:19, 21 April 2008 (UTC)
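A numerical sanity check of the final identity, using the invertible pair y = x³ and x = y^(1/3) on x > 0 (an arbitrary example of my own, with a simple central-difference helper):

```python
# Central-difference derivative (illustrative helper, names are mine):
def deriv(func, t, h=1e-6):
    return (func(t + h) - func(t - h)) / (2 * h)

f = lambda x: x ** 3           # y = f(x)
g = lambda y: y ** (1 / 3)     # x = g(y), the inverse for x > 0

x0 = 2.0
y0 = f(x0)                     # 8.0

dydx = deriv(f, x0)            # approximately f'(2) = 12
dxdy = deriv(g, y0)            # approximately g'(8) = 1/12

# dy/dx = 1/(dx/dy) up to numerical error:
assert abs(dydx - 1 / dxdy) < 1e-4
```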

Mathematics
Does anybody here know how to do rational exponents? 25 3/2, —Preceding unsigned comment added by 192.30.202.29 (talk) 14:32, 16 April 2008 (UTC)


 * Did you mean $$25^{3/2}$$? There are two rules for exponentiation, which, when combined, allow you to calculate rational exponents. First, $$a^{1/b}=\sqrt[b]{a}$$, and second, $$a^{b\cdot c}=(a^b)^c$$. See Exponentiation with real numbers for more information. nneonneo talk 14:55, 16 April 2008 (UTC)
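The two rules can be combined in either order; a quick check (my own snippet) shows both orders give 125 for 25^(3/2), up to floating point:

```python
# Compute 25**(3/2) two ways: root first then power, and power first then root.
base, num, den = 25, 3, 2

root_first = (base ** (1 / den)) ** num    # (25**(1/2))**3 = 5**3
power_first = (base ** num) ** (1 / den)   # (25**3)**(1/2) = 15625**(1/2)

assert abs(root_first - 125) < 1e-9
assert abs(power_first - 125) < 1e-9
```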


 * Also see this simple example at Wikiversity. Note that since there is an even root implied (a square root, in this case), there are two roots, a positive and a negative. StuRat (talk) 17:30, 17 April 2008 (UTC)

Paradox
This brings up an interesting paradox. What's wrong with the following logic:

$$1 = 1^1 = 1^{2/2} = \sqrt{1^2} = \sqrt{1} = -1$$

StuRat (talk) 17:53, 17 April 2008 (UTC)


 * Well, first a notational point - $$\sqrt{1}=1$$, that symbol means "positive square root". However, that doesn't really matter - the key point is that there are two square roots to any number (except 0), but you don't always have a choice of which one to take. One of them will always work, but not necessarily both. See Extraneous and missing solutions for more information. --Tango (talk) 19:30, 17 April 2008 (UTC)


 * Hmm, it's news to me that $$\sqrt{1}$$ means only the positive root while $$1^{1/2}$$ means both roots. Are you sure this is a universal convention ? StuRat (talk) 04:12, 18 April 2008 (UTC)


 * It's not an isolated one, at least. In my experience, especially when working in the complex numbers, notation such as $$z^{1/2}$$ tends to refer to the two-valued square root, whereas the square root sign tends only to be used as an operator on real numbers and returns a single value. Confusing Manifestation (Say hi!) 05:20, 18 April 2008 (UTC)
 * That's my experience. It's also what it says in the square root article, if memory serves. --Tango (talk) 18:08, 18 April 2008 (UTC)
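A small illustration of the convention under discussion (my own snippet, stdlib only): math.sqrt returns only the principal root, while both square roots of 1 have to be recovered by solving the equation x² = 1, which is where the paradox's sign ambiguity lives.

```python
import math

# The radical sign, like math.sqrt, denotes only the principal (non-negative) root:
assert math.sqrt(1) == 1.0

# Solving x**2 == 1 recovers both square roots of 1:
roots = sorted(r for r in range(-2, 3) if r * r == 1)
assert roots == [-1, 1]
```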

Reflection matrix about any line rotated theta degrees from the x-axis
I am trying to find a transformation matrix that will reflect any vector (compatible of course) across a line rotated theta degrees from the x-axis (counterclockwise). I remember seeing a matrix for this somewhere, but when I attempted to look this up I only got reflections across the x-axis, the y-axis and the line x = y. John Riemann Soong (talk) 19:33, 16 April 2008 (UTC)
 * If I understood your question correctly, you are looking for $$\left(\begin{array}{cc}\cos2\theta&\sin2\theta\\\sin2\theta&-\cos2\theta\end{array}\right)$$. -- Meni Rosenfeld (talk) 20:08, 16 April 2008 (UTC)
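Meni Rosenfeld's matrix can be sanity-checked numerically: it should fix the line's direction vector (cos θ, sin θ), negate the normal (−sin θ, cos θ), and square to the identity. A short stdlib sketch (the helper name is mine):

```python
import math

theta = 0.7  # an arbitrary angle, in radians

c2, s2 = math.cos(2 * theta), math.sin(2 * theta)
R = [[c2, s2],
     [s2, -c2]]   # the proposed reflection matrix

def apply(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

d = (math.cos(theta), math.sin(theta))     # direction of the line
n = (-math.sin(theta), math.cos(theta))    # normal to the line

Rd, Rn = apply(R, d), apply(R, n)
assert all(abs(a - b) < 1e-12 for a, b in zip(Rd, d))   # line is fixed
assert all(abs(a + b) < 1e-12 for a, b in zip(Rn, n))   # normal is flipped
```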

Quotient map which is neither open nor closed
This is from Munkres's Topology, section 22, problem 3.

Let $$\pi_1:\mathbb R\times\mathbb R\to\mathbb R$$ be the projection on the first coordinate. Let A be the subspace of $$\mathbb R\times\mathbb R$$ consisting of all points $$x\times y$$ for which either $$x\ge 0$$ or $$y=0$$ (or both). Let $$q:A\to \mathbb R$$ be formed by restricting $$\pi_1$$. Show that q is a quotient map which is neither open nor closed.

I think that I can show that q is not open by considering the open set $$[0,1)\times(1,2)$$ in A, whose image $$q([0,1)\times(1,2)) = [0,1)$$ is not open. Am I correct? I'm stumped on showing that it's not closed, however. Donald Hosek (talk) 23:30, 16 April 2008 (UTC)
 * Hint: the fancy subspace isn't needed for the closed part, just the projection from R2 to R. Try hitting {1/n|n in N} with a closed set. Algebraist 00:17, 17 April 2008 (UTC)
 * So the line on y=0 shooting into the left half-plane is a mathematical McGuffin then? Donald Hosek (talk) 00:32, 17 April 2008 (UTC)
 * Without it, the map would not be a quotient map (it wouldn't even be surjective). Algebraist 10:08, 17 April 2008 (UTC)
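Algebraist's hint can be made concrete numerically (an illustrative sketch, not a proof): the set {(1/k, k)} is closed in R², since its points escape to infinity and it has no limit points, yet its projection {1/k} accumulates at 0, which is not in the image. So the projection of a closed set need not be closed.

```python
# C = {(1/k, k) : k = 1..N} is closed in the plane (the second coordinate
# runs off to infinity, so C has no limit points), but its projection to
# the x-axis accumulates at 0, which is not in the image.
N = 10_000
C = [(1.0 / k, float(k)) for k in range(1, N + 1)]
image = sorted(p[0] for p in C)

assert 0.0 not in image        # the limit point 0 is missing from the image
assert image[0] == 1.0 / N     # yet the image dips arbitrarily close to 0
```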