Wikipedia:Reference desk/Archives/Mathematics/2011 March 31

= March 31 =

Jacobian matrix and determinant
Is the Jacobian a special case of the Wronskian or are they totally unrelated?--72.152.249.213 (talk) 00:46, 31 March 2011 (UTC)
 * I'd say they're unrelated. -- Meni Rosenfeld (talk) 08:26, 31 March 2011 (UTC)
 * Unrelated, and the Wronskian and the Jacobian use different types of functions. In the Wronskian, the functions involved are all functions of one variable, and the successive rows of the matrix hold their successive derivatives. For the Jacobian determinant, the matrix must be square, so there are $$n$$ functions of $$n$$ variables (if $$n=1$$, the Jacobian determinant reduces to the derivative of the one function), and every element of the matrix is a first-order partial derivative. Sjakkalle (Check!)  13:50, 31 March 2011 (UTC)
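To make the structural difference concrete, here is a small Python sketch (the function names are mine, chosen for illustration): the Wronskian of sin and cos is built from successive derivatives of one-variable functions, while the Jacobian determinant of the polar-coordinate map is built from first partial derivatives of two functions of two variables.

```python
import math

# Wronskian of sin and cos: rows are the functions and their first derivatives
#   | sin x   cos x |
#   | cos x  -sin x |   ->   -sin^2 x - cos^2 x = -1  (for every x)
def wronskian_sin_cos(x):
    return math.sin(x) * (-math.sin(x)) - math.cos(x) * math.cos(x)

# Jacobian determinant of the polar map (r, t) -> (r cos t, r sin t):
# entries are first partial derivatives of two functions of two variables
#   | cos t   -r sin t |
#   | sin t    r cos t |   ->   r
def jacobian_det_polar(r, t):
    return math.cos(t) * (r * math.cos(t)) - (-r * math.sin(t)) * math.sin(t)
```

The derivatives here are written out by hand rather than computed symbolically, which keeps the example dependency-free.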

stock growth
How does

$$ \frac{d}{dt} \log S_t = h_i $$

imply that

$$S_i = S_{i-1} e^{h_i \delta t}$$


 * First, let's make sure we understand the notation correctly. I am guessing that time t is divided into intervals of length δt, so that the end-points of these intervals are the times


 * $$t_i = i\delta t \quad i \in \mathbb{Z}$$


 * and that some function S(t) takes values $$S_i$$ at times $$t_i$$, and in the interval between $$t_{i-1}$$ and $$t_i$$ there is a value $$h_i$$, constant within that interval, such that


 * $$\frac{d}{dt} \log S = h_i \quad t_{i-1} \le t \le t_i$$


 * Is that correct ? If so, then you simply solve the differential equation within a given interval, to give


 * $$S(t) = Ce^{h_it} \quad t_{i-1} \le t \le t_i$$


 * for some constant C. We know that $$S = S_{i-1}$$ when $$t = t_{i-1}$$ and $$S = S_i$$ when $$t = t_i$$, so


 * $$S_{i-1} = Ce^{h_it_{i-1}}$$
 * $$S_{i} = Ce^{h_it_{i}}$$
 * $$\Rightarrow \frac{S_{i}}{S_{i-1}}=\frac{Ce^{h_it_{i}}}{Ce^{h_it_{i-1}}}=e^{h_i(t_{i}-t_{i-1})}=e^{h_i\delta t}$$
 * $$\Rightarrow S_i = S_{i-1}e^{h_i\delta t}$$


 * I'll let you fill in the gaps. Gandalf61 (talk) 07:34, 31 March 2011 (UTC)


 * Wonderful, many thanks. —Preceding unsigned comment added by 130.102.158.15 (talk) 10:58, 31 March 2011 (UTC)
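The closed-form step derived above can also be checked numerically. This sketch (function names are illustrative) integrates $$dS/dt = h_i S$$, which is the same as $$\tfrac{d}{dt}\log S = h_i$$, with many small Euler steps over one interval and compares the result with $$S_{i-1}e^{h_i\delta t}$$.

```python
import math

def step_closed_form(S_prev, h, dt):
    # the solution derived above: S_i = S_{i-1} * exp(h_i * dt)
    return S_prev * math.exp(h * dt)

def step_euler(S_prev, h, dt, n=100000):
    # integrate dS/dt = h*S (equivalently d(log S)/dt = h) with n Euler steps
    S = S_prev
    for _ in range(n):
        S += h * S * (dt / n)
    return S

exact = step_closed_form(100.0, 0.05, 1.0)
approx = step_euler(100.0, 0.05, 1.0)
```

For small enough steps the Euler result converges to the exponential, as the differential equation predicts.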

Multiplicative Maps Matrices to Reals
Are there any maps J taking N × N (any N) matrices over the reals to the reals that preserve multiplication, besides powers of the determinant and 0? What about for more general fields? Thank you for any help:-) By the way, this isn't homework; it's just been a while since I've thought about this area of math and I'm not sure how to proceed. Phoenix1177 (talk) 10:07, 31 March 2011 (UTC)
 * Any real power of the absolute value of the determinant will work as well, as will products of these with the determinant. These and the zero map are the only continuous functions of this kind. Sławomir Biały  (talk) 10:56, 31 March 2011 (UTC)
 * [ec] Short answer: Yes, the absolute value of the determinant.
 * But I'm guessing you want something else. It's fairly easy to show that the behavior of J is completely determined by its action on Jordan blocks, or more precisely on matrices which start with a Jordan block and end with a diagonal of 1's. I think with some work it can be shown to be completely determined by its action on scalar matrices. If I'm not mistaken about the last part, it boils down to the existence of a multiplicative $$f:\mathbb{R}\to \mathbb{R}$$, $$f(ab)=f(a)f(b)$$, other than $$a\mapsto 0$$, $$a\mapsto |a|^k$$ and $$a\mapsto |a|^k\operatorname{sgn} a$$. If f (equivalently, J) is required to be continuous then this is impossible. If not, I think there should be a counterexample but I can't construct one. -- Meni Rosenfeld (talk) 11:01, 31 March 2011 (UTC)
 * Yes, that's pretty much my thinking in requiring continuity above. Also if we assume continuity it's almost trivial to reduce it to scalar matrices, since diagonalizable matrices are dense.  Sławomir Biały  (talk) 11:09, 31 March 2011 (UTC)
 * Right. And I just remembered that not all real matrices have a Jordan form over reals, so my argument might not hold.
 * Do you know how to construct a discontinuous example? -- Meni Rosenfeld (talk) 11:13, 31 March 2011 (UTC)
 * I think we both fell into that trap. We can still reduce it to matrices that have a 2×2 block diagonalization, fwiw.  Using a Hamel basis of R over Q, one can write down a discontinuous additive function.  This gives a discontinuous multiplicative function on the positive reals.  Define f(0)=0 and f(-x)=f(x).  Sławomir Biały  (talk) 11:26, 31 March 2011 (UTC)
 * By the reduction to 2×2, it's enough to show that there are no continuous multiplicative characters of SO(2). This follows by the density of the roots of unity in SO(2).   Sławomir Biały  (talk) 14:04, 31 March 2011 (UTC)

A higher level approach is that, by continuity it is enough to consider invertible matrices. Then J will restrict to a 1-dimensional representation of the special linear group, which has no nontrivial one dimensional representations. So we're left with finding a continuous representation of the center of GL(n), which is the scalar case. Sławomir Biały (talk) 11:16, 31 March 2011 (UTC)
 * Like Biały implies above, any symmetric polynomial of the $$n$$ eigenvalues is such a function. One famous example is the trace of the matrix, which is just the sum of the eigenvalues.  The determinant is of course the product of the eigenvalues.  – b_jonas 21:17, 31 March 2011 (UTC)
 * These won't be multiplicative in general. Only powers of the determinant are.   Sławomir Biały  (talk) 22:00, 31 March 2011 (UTC)
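A quick numerical sanity check of those last two points, in pure Python on explicit 2×2 matrices (the helper names are mine): the determinant is multiplicative, so is any power of its absolute value, while the trace generally is not.

```python
def matmul(A, B):
    # product of two 2x2 matrices given as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def trace(A):
    return A[0][0] + A[1][1]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.5, -1.0], [2.0, 1.5]]
AB = matmul(A, B)
# det(AB) == det(A)*det(B), and |det|^k inherits this for any real k;
# the trace, although a symmetric function of the eigenvalues, fails it
```

One pair of matrices is of course only a spot check, not a proof, but it makes the failure of the trace concrete.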

inequalities for simplices?
I am looking for inequalities for simplices, preferably relating circumradius (R) and (max) edge length, etc. I am trying to find an upper bound on R, given the lengths of edges in an n-dimensional simplex.

Alternatively, it would suffice to know under what conditions the circumcenter must be in the interior of the simplex.

If anyone could point me to useful sources, I'd be grateful. 65.163.88.26 (talk) 14:45, 31 March 2011 (UTC)

Published proof that there are only 5 regular polyhedra
Can anyone point me to a source proving that there are exactly 5 regular polyhedra (the Platonic solids)? I need this because in an article I am preparing I give the numbers of various polytopes in different dimensions and gave the number of regular polytopes in dimension 3 as 5 and would like to provide a source for this. Toshio Yamaguchi (talk) 16:25, 31 March 2011 (UTC)


 * None of the references at the bottom of regular polyhedron work out? In most mathematical contexts this fact could comfortably go unsourced, or with a notation such as "(known since antiquity)". –Henning Makholm (talk) 17:14, 31 March 2011 (UTC)


 * The MathWorld page Regular Polyhedron works for me. Thanks. Toshio Yamaguchi (talk) 17:28, 31 March 2011 (UTC)


 * The Euler characteristic for convex polyhedra implies $$V-E+F=2$$ for the numbers of Vertices, Edges and Faces, respectively. For Platonic solids each face is a regular n-gon (3 ≤ n ≤ 5), so every face has the same number n of edges and vertices. Each vertex may be a common point for 3, 4 or 5 faces; let's denote that value by k. Each edge belongs to two faces, so $$E=nF/2$$, and each vertex belongs to k faces, so $$V=nF/k$$. Now we get $$(n/k - n/2 + 1)F=2$$. This equation has only a few solutions in integers (additionally F ≥ 4), so you can easily find them all manually. --CiaPan (talk) 06:30, 1 April 2011 (UTC)
 * Platonic solid has more details on these five. A proof can be found in Euclid's Elements. --Salix (talk): 08:36, 1 April 2011 (UTC)


 * The book Regular Polytopes has a proof. It's a classic and supposed to be excellent.  I've been wanting for a long time to read it.  One of these days. 75.57.242.120 (talk) 09:49, 7 April 2011 (UTC)
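The manual search CiaPan describes can also be done by brute force. This sketch enumerates the integer solutions of $$(n/k - n/2 + 1)F = 2$$, i.e. $$F = 4k/(2n + 2k - nk)$$, and finds exactly the five Platonic solids.

```python
# enumerate integer solutions of (n/k - n/2 + 1) * F = 2,
# i.e. F = 4k / (2n + 2k - nk); requiring the denominator to be
# positive is the condition 1/n + 1/k > 1/2, which already forces
# n, k <= 5, but a generous search range costs nothing
solids = []
for n in range(3, 21):        # each face is a regular n-gon
    for k in range(3, 21):    # k faces meet at each vertex
        denom = 2 * n + 2 * k - n * k
        if denom > 0 and (4 * k) % denom == 0:
            F = 4 * k // denom
            if F >= 4:
                solids.append((n, k, F))
print(solids)   # (n, k, F) for the five Platonic solids
```

The five tuples correspond to the tetrahedron, octahedron, icosahedron, cube and dodecahedron.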

Cycling theorem in linear programming
Does anyone know where I can find out more about the cycling theorem, as described on slide 10 of this presentation? It says that if the simplex algorithm fails to terminate then it must cycle.

It seems like a fairly obvious statement but some kind of reference would be useful!

Yaris678 (talk) 17:16, 31 March 2011 (UTC)


 * That's just a consequence of there being a finite number of different possible states of the algorithm, given the initial input. If the algorithm continues forever, then eventually it must reach a state it has already been in, and there you have a cycle. –Henning Makholm (talk) 17:34, 31 March 2011 (UTC)


 * When you put it like that, it seems kind of obvious. I guess that's why it is not widely known as a theorem.  Yaris678 (talk) 19:29, 3 April 2011 (UTC)
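The pigeonhole argument above is the same one behind cycle detection for any deterministic map on a finite state set; a minimal sketch (names are mine):

```python
def find_cycle(step, start):
    # iterate a deterministic map; on a finite state set some state must
    # repeat, and from the first repeat onwards the trajectory cycles forever
    seen = {}
    state, t = start, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return seen[state], t - seen[state]   # (time of cycle entry, cycle length)

# toy example on the 10-element state set {0, ..., 9}
entry, length = find_cycle(lambda x: (3 * x + 2) % 10, 0)
```

The simplex algorithm's state (the current basis, given the input) plays the role of `state` here: finitely many bases, so non-termination forces a cycle.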

Fundamental Theorem of Algebra again
I'm considering a proof of the FTA that works by considering, for a general $$f(z)$$, the mapping into the w plane given by $$w=f(C_r)$$ where $$C_r$$ is the circle of radius r, centred at the origin. The proof works by finding values of r for which $$f(C_r)$$ passes through the origin in the w plane, indicating that $$C_r$$ has passed through a root of $$f(z)$$ in the z plane.

Anyway, to my question. I have to consider two functions, $$f_1(z)$$, degree two with a repeated root, and $$f_2(z)$$, a polynomial with a non-zero root of $$3^{rd}$$ or higher order. After considering the images $$w=f_1(C_r)$$ and $$w=f_2(C_r)$$ for a range of r values, I have to comment on how $$w=g(C_r)$$ will behave, where $$g(z)$$ is a given polynomial (whose root multiplicities I am unsure of), for a range of r values.

When I consider $$w=f_1(C_r)$$ for varying r, I see that for r<|a|, where a is the repeated root of $$f_1(z)$$, it is a 'dented circle' (technical term anyone?) tending towards an epicycloid, which it becomes when r=|a|; in both cases it has winding number one. Then, for r>|a| and as r tends to infinity, its winding number becomes two and the two loops tend towards each other, eventually becoming (I think) a circle.

However I cannot determine the behaviour for $$f_2(z)$$; I selected a cubic with a $$3^{rd}$$ order root but I'm unable to describe what I'm seeing in terms of r and its relation to |b|, where b is the root. Can someone help me out? Thanks. asyndeton  talk  19:30, 31 March 2011 (UTC)


 * It sounds wrong that the winding number of f_1 should be one for small r. The image of C_r should start out as a small circle centered on the constant term of the polynomial, and therefore not wind around the origin. It sounds like you have the wrong concept of winding number -- I suspect that you're confusing the winding number of the curve $$t\mapsto f_1(re^{it})$$ with the winding number of its tangent direction $$\tfrac{d}{dt} f_1(re^{it})$$. –Henning Makholm (talk) 19:48, 31 March 2011 (UTC)


 * OK, that is entirely possible and would be because I only have a colloquial understanding of the term 'winding number'. Perhaps it's best to leave new terminology out of it until I understand it better. After ignoring all references to the winding number, allow me to modify my question: for $$f_1(C_r)$$, once r>|a|, there is a second 'loop' (which you could imagine as taking a piece of circular string, selecting a small portion of it and 'turning it over' so that the string now crosses itself once). How do the values of r at which the image in the w plane loops over itself depend on the value of the repeated root? asyndeton   talk  20:04, 31 March 2011 (UTC)


 * What you're supposed to observe for f_2 is probably something local like "so-and-so-many small loops around the origin suddenly appear when r passes through the magnitude of the root, and these small loops then grow". The behavior leading up to this will differ according to whether the multiplicity of the root is odd or even -- you may want to try more than these two examples in order to get a feel for the behavior. Bonus points for identifying the full curve right at root-crossing as an epicycloid, but this global behavior is almost surely not the kind of thing you're supposed to notice here. (FWIW, your "dented circle" is technically a limaçon for a second-degree polynomial, but again that is not really the point). –Henning Makholm (talk) 20:07, 31 March 2011 (UTC)

Thanks Henning, that's been very helpful. asyndeton  talk  21:16, 31 March 2011 (UTC)
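For experimenting with images $$w = f(C_r)$$ like the ones discussed above, a numerical winding-number routine is handy (this is my own sketch, not part of any coursework): it sums the argument increments of $$f(re^{it})$$ around the curve. For $$f_1(z)=(z-1)^2$$, a double root at $$z=1$$, the winding number about the origin jumps from 0 to 2 as r passes 1; for a triple root it jumps from 0 to 3.

```python
import cmath
import math

def winding_number(f, r, n=4000):
    # winding number of the closed curve t -> f(r * e^{it}) about the origin,
    # computed by accumulating argument increments (f must not vanish on C_r,
    # and n must be large enough that each increment is well under pi)
    total = 0.0
    prev = cmath.phase(f(complex(r, 0.0)))
    for k in range(1, n + 1):
        cur = cmath.phase(f(r * cmath.exp(2j * math.pi * k / n)))
        d = cur - prev
        if d > math.pi:       # unwrap the branch jump of phase()
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = cur
    return round(total / (2 * math.pi))

f1 = lambda z: (z - 1) ** 2    # double root at z = 1
f2 = lambda z: (z - 1) ** 3    # triple root at z = 1
```

This agrees with the correction above: for r smaller than the root's magnitude the image does not wind around the origin at all, and once r passes it the winding number equals the root's multiplicity.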