Wikipedia:Reference desk/Archives/Mathematics/2009 March 20

= March 20 =

First and Second derivative
$$h(x)= x \sqrt{9 - x^2}$$: I need help finding the first and second derivatives. I'm confused about the product and chain rules. —Preceding unsigned comment added by Dew555 (talk • contribs) 02:42, 20 March 2009 (UTC)
 * If you show what you've done so far, we can see where you may be going wrong. It's policy not to just give answers. However, you could start with the simpler case of $$h(x)= x \sqrt{x}$$, both by the product rule and, to check, by simplifying first to get the overall power of x. If that's OK, what's the derivative of $$h(x)= \sqrt{9 - x^2}$$, using the chain rule? Then put the parts together. →86.132.238.145 (talk) 10:12, 20 March 2009 (UTC)
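The suggested warm-up can be sanity-checked numerically; a minimal sketch (function names are illustrative), comparing the product-rule answer for $$x\sqrt{x}$$ against a finite-difference approximation:

```python
import math

def h(x):
    # warm-up function: h(x) = x * sqrt(x) = x^(3/2)
    return x * math.sqrt(x)

def h_prime(x):
    # product rule: sqrt(x) + x * (1/2) * x^(-1/2) = (3/2) * sqrt(x),
    # the same answer as differentiating x^(3/2) directly
    return 1.5 * math.sqrt(x)

def central_diff(f, x, eps=1e-6):
    # numerical derivative, to check the symbolic answer
    return (f(x + eps) - f(x - eps)) / (2 * eps)

assert abs(h_prime(4.0) - central_diff(h, 4.0)) < 1e-5
```

Both routes give $$\tfrac{3}{2}\sqrt{x}$$, which is the point of the exercise.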


 * (Explicit answer by User:Superwj5 removed --131.114.72.215 (talk) 10:18, 20 March 2009 (UTC))


 * These people will help you if you show you have made some effort to solve your problem. That way they can better understand where you ran into difficulty. So please try answering the following:
 * 1. Are you able to apply the product (Leibniz) formula for the first and second derivatives of a function $$f(x)=x u(x)$$ in terms of $$u'(x)$$ and $$u''(x)$$? What do you get?
 * Now put $$ u(x):=\sqrt{9 - x^2}$$.
 * 2. Are you able to write $$u'(x)$$ and $$u''(x)$$ for this function? What do you get?
 * 3. Are you able to substitute the result of point 2 into the formula of point 1? What do you get?
 * --131.114.72.215 (talk) 10:14, 20 March 2009 (UTC)


 * (edit conflict ×3) If you are having difficulty with this problem, then try starting with some easier problems. For example, try differentiating the following functions. If you get stuck, let us know where you get stuck:
 * $$ \sqrt x$$ (Requires power rule.)
 * $$ (x - 1) (2x - 3) $$ (Requires either product rule or power rule.)
 * $$ \cos (x) + x^3 $$ (Requires power rule and derivatives of sums.)
 * $$ \sin (x) (\cos (x) + x^3) $$ (Requires product rule.)
 * $$ (\sqrt x) (x^3 + x^2) (\cos (x)) $$ (Requires product rule twice.)
 * $$ \sin (x^4 + 2) $$ (Requires chain rule.)
 * $$ \sqrt {\sin (x^4 + 2)} $$ (Requires using the chain rule twice.)
 * $$ \cos \sqrt {\sin (x^4 + 2)} $$ (...and three times.)
 * After you understand how to apply the product rule and chain rule separately, then try your original problem.
 * Maybe some other ref-desk-ers can offer help explaining...? Eric. 131.215.158.184 (talk) 10:17, 20 March 2009 (UTC)

Polynomial Roots
Is there a formula or method to find the roots of any polynomial? And if yes, what? The Successor of Physics  09:50, 20 March 2009 (UTC)
 * There are formulas that give you the roots of quadratics, cubics and quartics, and I've no doubt we have articles on them. For the quintic and higher it's a theorem (see Galois theory) that there's no formula that only involves ordinary arithmetic operations and nth roots. But you can solve the quintic with the ultraradical, and there are probably higher-degree analogues. Of course, if an approximation is good enough, you can solve any polynomial with numerical methods...129.67.186.200 (talk) 10:15, 20 March 2009 (UTC)
 * The articles in question are Quadratic equation, Cubic function and Quartic function. Note that the last two of these are extremely rarely used in practice, because the roots are almost always simpler when written "the roots of polynomial so-and-so" than when written as whatever comes out of these formulas. And if you need the roots for an application, you'll need to compute an approximation sooner or later anyway, and I wouldn't be surprised if it often took more computing power to calculate all these radicals to the needed precision than to just go for the roots of the polynomial directly with something like Newton's method. The search for these formulas makes an interesting piece of mathematical history, though. — JAO • T • C 11:31, 20 March 2009 (UTC)
 * Indeed. Approximating a radical is no easier than approximating a root of a general polynomial, so if you are going to calculate the root then the radical expressions are useless, it is faster and more precise to approximate the root directly. — Emil J. 13:25, 20 March 2009 (UTC)
 * To third this opinion: yes, it takes more computation to evaluate an expression in radicals (correctly) numerically than it does to find the root. One major problem is that many of these expressions suffer from catastrophic cancellation, so that to get k bits of accuracy in the final answer, you may need nk bits of accuracy when approximating the radicals, where I think it can be very difficult to bound n from above. In other words, even with an oracle that spat out the expression in radicals for free, it might still take much, much longer than directly applying a root-finding technique, especially if the roots are real and not clustered. This is similar to matrix inversion: even if someone offers you the inverse matrix for free, multiplication by it (correctly) might be more expensive than using a standard linear solver. JackSchmidt (talk) 19:00, 20 March 2009 (UTC)
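To illustrate the direct numerical approach discussed above, here is a minimal Newton iteration (a sketch; the quintic $$x^5 - x - 1$$ is a standard example with no solution in radicals):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    # plain Newton iteration; assumes a reasonable starting point
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# x^5 - x - 1 = 0 cannot be solved in radicals (its Galois group is S5),
# yet its real root is easy to approximate
f = lambda x: x**5 - x - 1
fp = lambda x: 5 * x**4 - 1
root = newton(f, fp, 1.5)
assert abs(f(root)) < 1e-9
```

The iteration converges quadratically near a simple root, which is why direct approximation usually beats evaluating a formula in radicals.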
 * See solvable group and Abel's impossibility theorem. -- PS T  02:19, 21 March 2009 (UTC)
 * Hey, let me tell you something: I HATE APPROXIMATIONS!!!!!!!!!!!!!!!!!!!! The Successor of Physics  17:17, 21 March 2009 (UTC)
 * Unless the polynomial can be solved in radicals (i.e. its Galois group is solvable), approximations are pretty much all you can have (either that, or really weird notation that doesn't help you much). Even when there is a solution in radicals, it may be extremely complicated, so it doesn't help much anyway. In many situations it's best not to solve the polynomial at all and just work with it as it is; when you do need to solve it, approximations are often best. --Tango (talk) 17:44, 21 March 2009 (UTC)
 * And the radicals don't help you get any more than approximations if you want the answers to be expressed numerically. So you're no worse off for not being able to use radicals. Michael Hardy (talk) 19:26, 21 March 2009 (UTC)
 * Well, yes, "numerical" essentially implies "approximate" (unless it happens to be a rational number with a small denominator). --Tango (talk) 23:54, 21 March 2009 (UTC)

Good question! While there is not a "formula" in the usual sense, one can nevertheless operate on algebraic roots in a fairly arbitrary manner. For instance one can form polynomials in any finite number of roots of polynomials, by a result due to Leopold Kronecker. There are more modern algorithms, although I do not remember the source. Write to me if you are really interested, and I will try to dig them out. Bounding the roots is more straightforward. To the best of my knowledge, an initial bisection followed by Newton's method generally suffices to isolate the roots. 71.182.216.55 (talk) 02:51, 22 March 2009 (UTC)


 * You can also use the Sturm sequence, I think. 75.62.6.87 (talk) 10:38, 25 March 2009 (UTC)
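Sturm sequences do give a reliable way to count and isolate real roots before refining them. A rough sketch (names and tolerances are illustrative, not from any particular library):

```python
def poly_eval(p, x):
    # p is a list of coefficients, highest degree first (Horner's rule)
    v = 0.0
    for c in p:
        v = v * x + c
    return v

def poly_deriv(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def poly_rem(a, b):
    # remainder of polynomial division a / b, float coefficients
    a = a[:]
    while len(a) >= len(b) and any(a):
        if abs(a[0]) < 1e-12:
            a.pop(0)
            continue
        q = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= q * b[i]
        a.pop(0)
    return a

def sturm_count(p, lo, hi):
    # number of distinct real roots of p in (lo, hi], by Sturm's theorem
    chain = [p, poly_deriv(p)]
    while len(chain[-1]) > 1:
        r = poly_rem(chain[-2], chain[-1])
        while r and abs(r[0]) < 1e-12:
            r.pop(0)
        if not r:
            break  # nontrivial gcd: p has repeated roots
        chain.append([-c for c in r])
    def sign_changes(x):
        vals = [poly_eval(q, x) for q in chain]
        signs = [v for v in vals if abs(v) > 1e-12]
        return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)
    return sign_changes(lo) - sign_changes(hi)

# x^3 - 3x - 1 has three real roots, all in (-2, 2)
assert sturm_count([1.0, 0.0, -3.0, -1.0], -2.0, 2.0) == 3
```

Once each root is isolated in its own interval, bisection or Newton's method finishes the job, as suggested above.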


 * Thanks! The Successor of Physics  11:30, 26 March 2009 (UTC)

Question of Cubes
I am trying to prove that $$1^3 \pm 2^3 \pm \cdots \pm n^3 = 0$$ for all n of the form 4k-1 or 4k.

To do this, I need what I call start points for the proof. These are n = 11, 12, 15, 16, 19, 20, 23, 24. Once I have these numbers I can easily show that any 16 consecutive cubes can be made to equal 0 by a particular arrangement of signs. However, proving the start-point cases is something that eludes me. Any suggestions as to the choice of sign in each case for the following expressions:

$$1^3 \pm 2^3 \pm \cdots \pm 11^3$$

$$1^3 \pm 2^3 \pm \cdots \pm 12^3$$

$$1^3 \pm 2^3 \pm \cdots \pm 15^3$$

$$1^3 \pm 2^3 \pm \cdots \pm 16^3$$

$$1^3 \pm 2^3 \pm \cdots \pm 19^3$$

$$1^3 \pm 2^3 \pm \cdots \pm 20^3$$

$$1^3 \pm 2^3 \pm \cdots \pm 23^3$$

$$1^3 \pm 2^3 \pm \cdots \pm 24^3$$,

in order to make each of them 0? —Preceding unsigned comment added by 143.117.143.13 (talk) 12:34, 20 March 2009 (UTC)


 * This looks like a perfect task for a computer: finite, numerical, and boring. Algebraist 13:27, 20 March 2009 (UTC)


 * Yes, just looking at the last case, there are 23 signs, each of which can be either + or −. That's $$2^{23}$$ possibilities, or some 8.4 million. That's well beyond what a person can do by hand, but easy for a computer to handle. StuRat (talk) 14:56, 20 March 2009 (UTC)


 * Come on, people. How did mathematicians ever get anything done before there were computers? I found one solution for n=12 and think there are no solutions for n=11.


 * $$1^3+2^3-3^3+4^3-5^3-6^3-7^3+8^3+9^3-10^3-11^3+12^3=0$$.


 * I promise to find another one if the original poster explains how his general solution works. Dauto (talk) 18:24, 20 March 2009 (UTC)


 * Before computers mathematicians had to do a lot more tedious work. I for one am not masochistic enough to go back to those days without good reason. Algebraist 18:33, 20 March 2009 (UTC)


 * And those cases for n=11 and n=12 are far simpler. The n=11 case only has 1024 possibilities to try, which, while tedious, is at least possible to do by hand. StuRat (talk) 18:40, 20 March 2009 (UTC)


 * As for the general solution, it works similarly to the case $$1^2 \pm 2^2 \pm \cdots \pm n^2 = 0$$ which was answered on this desk recently (but I didn't succeed in locating it in the archives): for every n, the sequence of cubes $$n^3, \ldots, (n+15)^3$$ with signs +−−+−++−−++−+−−+ sums to zero. — Emil J. 18:37, 20 March 2009 (UTC)
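EmilJ's claim is easy to check by brute force; a short sketch (names are illustrative):

```python
# the 16 signs quoted above: + - - + - + + - - + + - + - - +
signs = [+1, -1, -1, +1, -1, +1, +1, -1,
         -1, +1, +1, -1, +1, -1, -1, +1]

def signed_cube_sum(n):
    # apply the sign pattern to the 16 consecutive cubes n^3 .. (n+15)^3
    return sum(s * (n + j)**3 for j, s in enumerate(signs))

# the pattern annihilates any 16 consecutive cubes, whatever the start
assert all(signed_cube_sum(n) == 0 for n in range(50))
```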


 * For those of you saying it's impossible to do by hand: I got a solution for n=24, and I used only a handheld calculator.


 * $$1^3+2^3+3^3+4^3-5^3-6^3+7^3+8^3-9^3+10^3+11^3+12^3-13^3+14^3+15^3+16^3+17^3+18^3+19^3-20^3-21^3-22^3+23^3-24^3=0$$


 * Dauto (talk) 20:31, 20 March 2009 (UTC)


 * So how did you do it ? Trying every combo obviously isn't the answer. StuRat (talk) 21:48, 20 March 2009 (UTC)


 * No, trying every combination would take too long. First, the number of solutions probably grows quite fast as well, and all we need is one solution for each value of n. Second (and more important), there are some heuristics that can speed things up considerably. During the search, any time a partial sum (or difference) exceeds the sum of the remaining terms (those not yet included in the partial sum), we simply skip that branch of the search, since no combination of signs of the remaining terms will be able to cancel the partial sum. That saves an incredible amount of time, since the distribution of terms is quite skewed, as long as we start the search by fixing the signs of the larger terms first and work our way down to the smaller terms. That alone allows the solution for n=12 to be found in just a few minutes. Dauto (talk) 04:49, 21 March 2009 (UTC)
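Dauto's pruning strategy can be sketched as follows (an illustrative sketch, not his actual search; the fixed + sign on $$1^3$$ follows the problem statement):

```python
def find_signs(n):
    # search over sign choices for 2^3 .. n^3 (1^3 is fixed +),
    # assigning the largest terms first and pruning whenever the
    # partial sum can no longer be cancelled by the remaining terms
    cubes = [k**3 for k in range(1, n + 1)]
    prefix = [0]
    for c in cubes:
        prefix.append(prefix[-1] + c)  # prefix[k] = 1^3 + ... + k^3

    def search(k, partial, signs):
        # terms k+1..n already have signs; terms 1..k remain
        if abs(partial) > prefix[k]:
            return None  # prune: remaining terms cannot cancel partial
        if k == 1:
            return [1] + signs if partial + cubes[0] == 0 else None
        for s in (1, -1):
            result = search(k - 1, partial + s * cubes[k - 1], [s] + signs)
            if result:
                return result
        return None

    return search(n, 0, [])
```

Calling `find_signs(12)` recovers Dauto's solution almost instantly, while `find_signs(11)` returns `None`, matching the claim that n=11 has no solution.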


 * Yes, how did you do it? Well, some nights ago I couldn't sleep and ran some of these computations on my computer, just out of curiosity... so: no solutions for n<12, and only two for n=12: one is Dauto's, the other is the same with inverted signs. For the cases n=15 and n=16, we get solutions for free from the 16 signs written by EmilJ, because we can start the sequence either with 0 or with 1. To make it short, it turned out that there are solutions for all $$\scriptstyle n=4k-1,\, n=4k,\, 12\leq n\leq 27$$. I did not make the program write them out, though, just the number of solutions. The day after, I remembered Sloane's OEIS and searched for the sequences of the numbers of solutions for sums of squares and of cubes: neither was there, so I submitted them (after all, it contains sequences like a(n)=100n+1, so it may well have these too). However, the number of solutions to $$\scriptstyle\pm 1\pm 2\pm\cdots\pm n=0$$ was there, with some interesting links. One is to S. R. Finch's page, where he finds, in a quick and nice way, the asymptotics for the number of solutions by means of the central limit theorem; it seems the computation extends to the case of sums of r-th powers, luckily giving the same result one gets by expanding the integrals.
 * By the way, note that the 16 signs in the sequence above are those of the expansion of the polynomial $$\scriptstyle P(x):=(1-x)(1-x^2)(1-x^4)(1-x^8)=\sum_{j=0}^{15}\varepsilon_j x^j$$. Since $$(1-x)^4$$ divides $$\scriptstyle P(x)$$, if S is the shift operator, acting on sequences by $$\scriptstyle (Sx)_k=x_{k+1}$$, then the linear operator $$\scriptstyle P(S)=\sum_{j=0}^{15}\varepsilon_j S^j$$ contains the fourth discrete difference operator $$\scriptstyle (S-I)^4$$ as a factor: and this is the reason why it kills the sequence of cubes, and in fact all polynomial sequences of degree <4. Of course, this argument generalizes to any degree r, so we have solutions to $$\scriptstyle\pm 1^r\pm 2^r\pm\cdots\pm n^r=0$$ for all $$n=2^{r+1}k$$ and $$n=2^{r+1}k-1$$. It seems difficult to say what the least n=n(r) with a solution is; it could be much less than the bound $$n(r)<2^{r+1}$$... --pma (talk) 21:54, 20 March 2009 (UTC)
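pma's construction can be verified for several degrees r at once; a sketch (the function name is illustrative):

```python
def sign_pattern(r):
    # coefficients of P(x) = (1-x)(1-x^2)(1-x^4)...(1-x^(2^r)),
    # a polynomial of degree 2^(r+1) - 1 divisible by (1-x)^(r+1)
    coeffs = [1]
    for i in range(r + 1):
        step = 2**i
        new = coeffs + [0] * step
        for j, c in enumerate(coeffs):
            new[j + step] -= c  # multiply by (1 - x^step)
        coeffs = new
    return coeffs

# the pattern annihilates r-th powers over any 2^(r+1) consecutive integers
for r in (2, 3, 4):
    eps = sign_pattern(r)
    for n in (0, 1, 7):
        assert sum(e * (n + j)**r for j, e in enumerate(eps)) == 0
```

For r = 3 this reproduces the 16-sign sequence +−−+−++−−++−+−−+ quoted earlier in the thread.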

Here's some Python code to solve this using dynamic programming: it finds solutions for all n congruent to 0 or -1 mod 4, starting at 15, and outputs the number of solutions for each n. —David Eppstein (talk) 22:11, 20 March 2009 (UTC)
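The code itself was not preserved here; a dynamic program in the same spirit might look like this (an illustrative sketch, not David Eppstein's original):

```python
from collections import defaultdict

def count_solutions(n):
    # reachable[v] = number of sign choices so far giving partial sum v;
    # the 1^3 term keeps its + sign, as in the problem statement
    reachable = defaultdict(int)
    reachable[1] = 1
    for k in range(2, n + 1):
        cube = k**3
        nxt = defaultdict(int)
        for v, ways in reachable.items():
            nxt[v + cube] += ways
            nxt[v - cube] += ways
        reachable = nxt
    return reachable[0]

# no solutions below n = 12, and exactly one for n = 12 (up to the
# fixed + sign on 1^3), matching the thread above
assert count_solutions(11) == 0
assert count_solutions(12) == 1
```

Because the partial sums are bounded by $$1^3 + \cdots + n^3 = (n(n+1)/2)^2$$, the table stays small enough to count solutions exactly rather than enumerating all $$2^{n-1}$$ sign choices.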

Intrinsic Equations
Given the intrinsic equation of a curve $$s=f( \psi )$$, how do you find the Cartesian form of the equation? 92.3.124.220 (talk) 22:22, 20 March 2009 (UTC)


 * Here $$\psi$$ is the angle between the tangent to the (planar) curve and the x-axis, I suppose. Write it as $$\psi=g(s)$$. Then you should find a Cartesian parametrization by arc length, (x(s), y(s)), where x(s) and y(s) come from the two differential equations $$x'(s)=\cos(g(s))$$ and $$y'(s)=\sin(g(s))$$. Check here also. --84.221.81.199 (talk) 09:58, 21 March 2009 (UTC)
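When the two integrals have no closed form, they can still be integrated numerically; a sketch using the circle $$s = a\psi$$ as a test case (names and the choice of integrator are illustrative):

```python
import math

def curve_from_intrinsic(g, s_max, steps=50000):
    # integrate x'(s) = cos(g(s)), y'(s) = sin(g(s)) from (0, 0)
    # with the composite midpoint rule
    ds = s_max / steps
    x = y = 0.0
    for i in range(steps):
        s_mid = (i + 0.5) * ds
        x += math.cos(g(s_mid)) * ds
        y += math.sin(g(s_mid)) * ds
    return x, y

# test case: s = a*psi describes a circle of radius a, so g(s) = s/a
# and the exact curve is x = a*sin(s/a), y = a*(1 - cos(s/a))
a = 2.0
x, y = curve_from_intrinsic(lambda s: s / a, s_max=math.pi * a)
assert abs(x - a * math.sin(math.pi)) < 1e-5
assert abs(y - a * (1 - math.cos(math.pi))) < 1e-5
```

Integrating half the circumference ($$s_{\max} = \pi a$$) lands at the top of the circle, (0, 2a), as the exact parametrization predicts.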