Wikipedia:Reference desk/Archives/Mathematics/2007 May 28

= May 28 =

Differentiating a vector
According to this website http://www.math.montana.edu/frankw/ccp/multiworld/building/vtrderiv/refer.htm you can differentiate a vector.

However it did not say whether you would get the same answer/vector, if you differentiate in Cartesian Coordinates as you would get if you differentiate in Spherical Coordinates. Does anyone know? 202.168.50.40 05:24, 28 May 2007 (UTC)


 * Normally you wouldn't, since the transformation from Cartesian coordinates to spherical coordinates is not a linear transformation. To take the special case in which the vector remains in one plane so that we can equivalently use polar coordinates, and taking derivatives with respect to variable t, this is illustrated by the following example:
 * The Cartesian vector (cos t, sin t) corresponds to polar (1, t).
 * The respective derivatives are (−sin t, cos t), which depends on t, and (0, 1), which does not.
 * Unless otherwise specified, I would assume that the derivative in Cartesian coordinates is meant. --Lambiam Talk  07:19, 28 May 2007 (UTC)
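Lambiam's polar-versus-Cartesian example is easy to check symbolically. A minimal sketch in Python using sympy (the variable names here are mine, purely for illustration):

```python
import sympy as sp

t = sp.symbols('t')

# Cartesian components (cos t, sin t) of the curve, differentiated componentwise
dx = sp.diff(sp.cos(t), t)   # -sin(t): depends on t
dy = sp.diff(sp.sin(t), t)   # cos(t)

# Polar components (r, theta) = (1, t) of the same curve, differentiated componentwise
dr = sp.diff(sp.Integer(1), t)   # 0: does not depend on t
dtheta = sp.diff(t, t)           # 1

# Componentwise differentiation gives different answers in the two systems
assert (dx, dy) == (-sp.sin(t), sp.cos(t))
assert (dr, dtheta) == (0, 1)
```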


 * Hmm. If I may, I would like to disagree. A vector doesn't really require a coordinate system at all to exist (and be differentiated), right? So, if it's really the vector we are talking about (as opposed to a tuple of numbers), a coordinate system need not enter the discussion at all, and thus the derivative is independent of coordinate system. —Bromskloss 07:58, 28 May 2007 (UTC)


 * [after edit conflict] I think the confusion is over where these coordinate changes take place. If we want to differentiate a function M → N, we need to know what we mean by "differentiation". Often we mean the directional derivative in the direction of a vector field; this specializes to the standard case of the derivative in terms of one coordinate. For example, $$\frac{\partial f}{\partial \theta}$$ for f: R2 → R can be considered as the directional derivative in the direction of a particular vector field (this vector field is sometimes confusingly denoted $$\frac{\partial}{\partial \theta}$$ — at $$(r, \theta)$$ it has x- and y-components $$-r\sin\theta$$ and $$r\cos\theta$$ respectively.) But this all just depends on the domain (the left side of the function), and the context of the original question sounds like it's asking about vector-valued functions.


 * In answer to the original question, then: it's unclear whether your reference to Cartesian versus spherical coordinates is meant to apply to the domain or to the codomain. If you meant the former, then, as Lambiam said, it makes a difference whether you differentiate with respect to the Cartesian coordinates (for example, $$\frac{\partial f}{\partial z}$$) or with respect to the spherical coordinates (for example, $$\frac{\partial f}{\partial \phi}$$). But it is important to note that this has nothing to do with vectors or differentiating vector-valued functions; it is true for any function, including the familiar real-valued functions. If you are talking about coordinates on the codomain (range), where the vectors of a vector-valued function live, then the coordinates do not matter. As Bromskloss said, the derivative will (at a point) be a vector. You can write that vector in different coordinate systems, but it will represent the same vector. Tesseran 09:01, 28 May 2007 (UTC)


 * If I understand the question correctly, it is not about Vector fields. While the value of the function is a vector, the parameter is a scalar. Vector fields take vectors as a parameter. —Preceding unsigned comment added by 84.187.28.243 (talk • contribs)


 * [e/c] Vector fields do not take vectors as a parameter. A vector field is formally a section of the tangent bundle of a manifold; informally, it is a choice of a tangent vector at every point of the manifold. It can be thought of as a single object (a vector field) or as a function (for each point, a choice of a tangent vector), but when it is thought of as a function, its domain simply consists of points of the manifold. (In a way, such points may be the appropriate idea of "scalar" when you are in a manifold that is not the usual R or Rn.) Either way, I did not claim that the original question asked about differentiating vector fields; I believe it applied to vector-valued functions, just as you said. I just pointed out that to define differentiation in general, even differentiation of a real-valued function, requires vector fields. Tesseran 13:08, 28 May 2007 (UTC)


 * Well, you cannot differentiate a Vector (spatial). You can differentiate a Vector-valued function, and your link says exactly that. Now look at the definition of the derivative. It mentions no coordinates, just the difference of two vectors. The difference does not depend on the coordinate system, so the derivative does not either, just as Bromskloss said.


 * In the second part of the article the author shows you that you can calculate the derivative of a vector-valued function by calculating the derivative of each of the ordinary functions in the coordinates. That does not work for spherical coordinates. Do you see that the subtraction is expanded to subtraction of the individual coordinates from line 2 to line 3 of the long proof? That step is invalid in spherical coordinates, because subtracting vectors in spherical coordinates simply does not work that way.


 * It is of course possible to find the derivative if the function is given in spherical coordinates, but you need to perform a different calculation. If you do this, the resulting function will be exactly the same function as in the Cartesian case, but due to the different coordinate system it will look totally different. —Preceding unsigned comment added by 84.187.28.243 (talk • contribs) 08:56, 2007 May 28


 * Take a look at our articles on vector operator and del for a variety of ways in which differential operators can be defined on both scalar and vector fields. The differential operators that are useful and meaningful in mathematical physics are independent of co-ordinate system or orientation because these are not properties of physical space. Each differential operator will then have an "implementation" in each co-ordinate system, together with rules for transforming this implementation between one co-ordinate system and another. But the results of applying a given differential operator to a given vector or scalar field will be the same, regardless of which co-ordinate system you use. For an analogy, think of how the magnitude of a vector is the same regardless of whether you calculate it using Cartesian or polar co-ordinates - but the formulae that you use are different in the two cases. Gandalf61 11:20, 28 May 2007 (UTC)
 * (e/c) Maybe philosophically, you get the same answer no matter what coordinate system you use to find the derivative, but in reality, to actually carry out the differentiation, you would convert to Cartesian coordinates as Lambiam said. There is a formula based on the chain rule that gives the derivative when supplied with a vector-valued function in polar/spherical/whatever form, but it is derived from the conversion to Cartesian coordinates. nadav (talk) 11:31, 28 May 2007 (UTC)
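Gandalf61's magnitude analogy can be made concrete in a few lines of Python (the sample point is arbitrary):

```python
import math

# The same point described in two coordinate systems
x, y = 3.0, 4.0                  # Cartesian
r = math.sqrt(x**2 + y**2)       # polar radius
theta = math.atan2(y, x)         # polar angle

# Different formulae, same magnitude
mag_cartesian = math.hypot(x, y)
mag_polar = r                    # in polar coordinates the magnitude is just r
assert abs(mag_cartesian - mag_polar) < 1e-12
```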


 * Cartesian co-ordinates are not always the best/most convenient choice. To solve a problem with underlying spherical or cylindrical symmetry, it may be simpler to work in spherical or cylindrical co-ordinates. The key point is that a physically meaningful differential operator must have a definition or "existence" that is independent of co-ordinate system - the physical world has no bias in favour of Cartesian co-ordinates. Gandalf61 12:00, 28 May 2007 (UTC)


 * (Please sign your comments as directed at the top of this page, even if you do not have an account. Thanks.)
 * I believe the main points have been made. The most important point is that we cannot answer a question which is not clearly defined. A vector is a geometric object. The coordinates of a vector refer to a coordinate system. Differentiation requires a function, even if it must be a constant function. If we change coordinate system, the expression of the function must adapt just as the vector coordinates adapt.
 * 
 * Example: In Cartesian coordinates, the function
 * $$\begin{align} f \colon [0,\pi) &{}\to \R^2 \\ t &{}\mapsto \begin{bmatrix} 1+\cos t \\ \sin t \end{bmatrix} \end{align}$$
 * traces out a semicircle of unit radius centered at (1,0). If t represents time, the speed of travel never varies. At t = 0 the derivative df/dt will be
 * $$\begin{align} \left. \frac{df}{dt} \right|_{t=0} &{}= \begin{bmatrix} -\sin t \\ \cos t \end{bmatrix}_{t=0} \\ &{}= \begin{bmatrix} 0 \\ 1 \end{bmatrix} \end{align}$$
 * If we merely rotate the coordinate system, the formula for f changes; but the derivative must still be tangent to the curve (and in this case perpendicular to a radial vector from the center). For example, if we rotate the coordinate system a quarter turn counterclockwise, so that what was x becomes −y and what was y becomes x, the same function has the expression
 * $$ t \mapsto \begin{bmatrix} \sin t \\ -1 - \cos t \end{bmatrix}, \qquad t \in [0,\pi) . $$
 * The coordinates of the derivative at t = 0 will also change, to (1,0); but geometrically all is identical. For example, the center of the circle is now at (0,−1), so the derivative vector is still perpendicular to a radial vector.
 * Changing to a polar coordinate system changes the derivative of f in a more complicated way. Here we have the formula for f.
 * $$ t \mapsto \left( 2 \cos \tfrac{t}{2}, \tfrac{t}{2} \right) = (r,\theta) , \qquad t \in [0,\pi) $$
 * The pair of coordinates are no longer vector coordinates (as the different notation suggests), and it is not the same to simply differentiate them with respect to t. Because we have curvilinear coordinates, we must compensate for their constant changing. Each point, (r, θ), now has its own unique "local" vector coordinate system. Thus we enter the realm of differential geometry.
 * The vector coordinate system at (r, θ) can take one unit basis vector to have components cos θ and sin θ, with the other using −sin θ and cos θ. With this choice, the correct expression for the derivative is
 * $$ \frac{dr}{dt} \begin{bmatrix} \cos \tfrac{t}{2} \\ \sin \tfrac{t}{2} \end{bmatrix} + r \frac{d\theta}{dt} \begin{bmatrix} -\sin \tfrac{t}{2} \\ \cos \tfrac{t}{2} \end{bmatrix}, $$
 * which reduces to
 * $$ \begin{bmatrix} -\sin t \\ \cos t \end{bmatrix} . $$
 * Thus at t = 0 we obtain the unit vector we expect,
 * $$ \begin{bmatrix} 0 \\ 1 \end{bmatrix} . $$
 * Explaining why this is the correct derivative is more than a mere reference desk thread can contain. And notice we may have a nasty problem at the origin (which I thoughtfully excluded by restricting the domain of t), because our coordinate system gets confused there.
 * 
 * So, we can differentiate vector functions, and we can differentiate functions expressed in curvilinear coordinates, but these are two different things and some fancy accommodation is required to marry them. My advice is to not fret the details for now; if you pursue enough mathematics all this will come naturally in time. --KSmrqT 12:58, 28 May 2007 (UTC)
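KSmrq's polar-frame derivative formula can be verified symbolically. A sketch in Python with sympy (illustrative only; it just rechecks the reduction to (−sin t, cos t)):

```python
import sympy as sp

t = sp.symbols('t')

# The semicircle in polar form: r = 2*cos(t/2), theta = t/2
r = 2 * sp.cos(t / 2)
theta = t / 2

# Local orthonormal frame at (r, theta)
e_r = sp.Matrix([sp.cos(theta), sp.sin(theta)])
e_theta = sp.Matrix([-sp.sin(theta), sp.cos(theta)])

# dr/dt * e_r + r * dtheta/dt * e_theta
deriv = sp.diff(r, t) * e_r + r * sp.diff(theta, t) * e_theta

# Reduces to (-sin t, cos t), matching the Cartesian computation
assert sp.simplify(deriv[0] + sp.sin(t)) == 0
assert sp.simplify(deriv[1] - sp.cos(t)) == 0
```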

Thanks to everyone for answering my question. The reason I asked is that I was told that I can do physics using vectors. If I use the vector s(t) for the position of a particle, I can differentiate the vector s(t) to get the vector v(t), which is the vector for velocity, and differentiate v(t) to get the vector a(t), which is the vector for acceleration.

However, as someone in this reference desk pointed out, s(t) is not a vector but a function returning a vector value. What I'm actually differentiating is a function. So s(t) is a function returning a vector, v(t) is a function returning a vector and a(t) is a function returning a vector.

This is what happens when I literally believe the following phrases "Velocity is a vector. Differentiate velocity and you will get acceleration. Acceleration is a vector." I actually believed that I can differentiate a vector and get another vector. Can you believe that!!! 202.168.50.40 05:24, 28 May 2007 (UTC)

Roots of cubic functions
Hi, I know there is a formula to get the roots of a quadratic equation. Could someone tell me the formula to get the roots of a cubic equation? Cubic equation doesn't seem to give a direct answer. Thanks JJB 09:27, 28 May 2007 (UTC)


 * Take another look at the cubic equation article, in the sections Cardano's method and Solution in terms of a, b, c, and d. For a less well-known method, look at the section Solution in terms of Chebyshev radicals. Cardano's method is usually split into several steps because if you compressed it into a single expression equivalent to the quadratic formula, this expression would be very complex. Gandalf61 10:19, 28 May 2007 (UTC)

Here are your desired answers in the form which you desired. Be careful what you wish for, for you may get it.


 * If we have $$ax^3 + bx^2 + cx + d = 0 $$
 * You want a direct answer? You get a direct answer.
 * $$\begin{align} x\to -\frac{b}{3 a} &{}+\frac{\sqrt[3]{-2 b^3+9 a c b-27 a^2 d+\sqrt{4 \left(3 a c-b^2\right)^3+\left(-2 b^3+9 a c b-27 a^2 d\right)^2}}}{3 \times \sqrt[3]{2} a} \\ &{}-\frac{\sqrt[3]{2} \left(3 a c-b^2\right)}{3 a \sqrt[3]{-2 b^3+9 a c b-27 a^2 d+\sqrt{4 \left(3 a c-b^2\right)^3+\left(-2 b^3+9 a c b-27 a^2 d\right)^2}}} \end{align}$$


 * $$\begin{align} x\to -\frac{b}{3 a} &{}-\frac{\left(1-i \sqrt{3}\right) \sqrt[3]{-2 b^3+9 a c b-27 a^2 d+\sqrt{4 \left(3 a c-b^2\right)^3+\left(-2 b^3+9 a c b-27 a^2 d\right)^2}}}{6 \times \sqrt[3]{2} a} \\ &{}+\frac{\left(1+i \sqrt{3}\right) \left(3 a c-b^2\right)}{3 \times 2^{2/3} a \sqrt[3]{-2 b^3+9 a c b-27 a^2 d+\sqrt{4 \left(3 a c-b^2\right)^3+\left(-2 b^3+9 a c b-27 a^2 d\right)^2}}} \end{align}$$


 * $$\begin{align} x\to -\frac{b}{3 a} &{}-\frac{\left(1+i \sqrt{3}\right) \sqrt[3]{-2 b^3+9 a c b-27 a^2 d+\sqrt{4 \left(3 a c-b^2\right)^3+\left(-2 b^3+9 a c b-27 a^2 d\right)^2}}}{6 \times \sqrt[3]{2} a} \\ &{}+\frac{\left(1-i \sqrt{3}\right) \left(3 a c-b^2\right)}{3 \times 2^{2/3} a \sqrt[3]{-2 b^3+9 a c b-27 a^2 d+\sqrt{4 \left(3 a c-b^2\right)^3+\left(-2 b^3+9 a c b-27 a^2 d\right)^2}}} \end{align}$$


 * Ohanian 13:46, 28 May 2007 (UTC)
 * In that second equation, on the 2nd part of the fraction, at the bottom, is that $$32^{2/3}$$ or $$3 * 2^{2/3}$$? it appears as the former, but the code would seem to indicate the latter. --YbborTalk 15:08, 28 May 2007 (UTC)
 * My humble apologies, you are right. I'm fixing the equation now. The correct values are
 * 3 * 2 ^ (1/3) instead of 32^(1/3)
 * 3 * 2 ^ (2/3) instead of 32^(2/3)
 * 6 * 2 ^ (1/3) instead of 62^(1/3)
 * Ohanian 13:46, 28 May 2007 (UTC)
 * You may avoid multiplication and division signs: $$3^1 2^{2^1 3^{-1}}$$ . Bo Jacoby (talk) 10:59, 21 December 2015 (UTC).
 * For a truly fun time type the quartic equation, Fx^4+Gx^3+Hx^2+Ix+J=0, (skipped E because that would be Euler's constant) into . When I was 8 I could not understand why I couldn't go one higher and solve the quintic, damned Abel's impossibility theorem. Ozone 21:46, 2 June 2007 (UTC)
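Since the closed-form expressions above are unwieldy, a numerical sketch of Cardano's method in Python may be friendlier (the function name is mine; complex arithmetic throughout, and no care is taken over edge cases such as a = 0):

```python
import cmath

def cubic_roots(a, b, c, d):
    """Roots of a*x^3 + b*x^2 + c*x + d = 0 by Cardano's method (a != 0)."""
    # Substitute x = t - b/(3a) to get the depressed cubic t^3 + p*t + q
    p = (3 * a * c - b**2) / (3 * a**2)
    q = (2 * b**3 - 9 * a * b * c + 27 * a**2 * d) / (27 * a**3)
    # One cube root u of -q/2 + sqrt(q^2/4 + p^3/27); pick the sign of the
    # square root that avoids cancellation
    s = cmath.sqrt(q**2 / 4 + p**3 / 27)
    u = (-q / 2 + s) ** (1 / 3)
    if abs(u) < 1e-12:
        u = (-q / 2 - s) ** (1 / 3)
    if u == 0:                        # p == q == 0: triple root
        return [-b / (3 * a)] * 3
    # The other two roots come from the primitive cube roots of unity
    omega = cmath.exp(2j * cmath.pi / 3)
    roots = []
    for k in range(3):
        uk = u * omega**k
        roots.append(uk - p / (3 * uk) - b / (3 * a))
    return roots

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
for x in cubic_roots(1, -6, 11, -6):
    assert abs(x**3 - 6 * x**2 + 11 * x - 6) < 1e-9
```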

The formulas reduce the equation to extraction of roots. You are left with solving equations like $$x^3=a$$, which are not easier than the original equation. Use the Durand-Kerner method for solving polynomial equations numerically. Bo Jacoby (talk) 10:59, 21 December 2015 (UTC).
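Bo Jacoby's Durand-Kerner suggestion is straightforward to sketch in Python. This is a bare-bones version (fixed iteration count, no convergence test or safeguards against coincident iterates):

```python
def durand_kerner(coeffs, iters=200):
    """All complex roots of a_n*x^n + ... + a_0, with coeffs = [a_n, ..., a_0]."""
    n = len(coeffs) - 1
    a = [c / coeffs[0] for c in coeffs]   # normalize to a monic polynomial

    def p(x):                             # Horner evaluation
        acc = 0.0 + 0.0j
        for c in a:
            acc = acc * x + c
        return acc

    # Customary starting points: powers of a number that is neither
    # real nor a root of unity
    z = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        new_z = []
        for i, zi in enumerate(z):
            denom = 1.0 + 0.0j
            for j, zj in enumerate(z):
                if j != i:
                    denom *= zi - zj
            new_z.append(zi - p(zi) / denom)
        z = new_z
    return z

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = sorted(durand_kerner([1, -6, 11, -6]), key=lambda r: r.real)
assert all(abs(r - k) < 1e-8 for r, k in zip(roots, [1, 2, 3]))
```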

Regression analysis for tidal water levels
I'm doing some studying on the behavior of tides. I've made a big table of water levels at different times over two days, and I made a big graph of it. Of course, the tides go according to a sinusoidal function, but both the gravitational effects of the Moon and the Sun affect tidal water levels, so therefore, the function of water levels would be in this form:

(where L is water Level and t is Time)

L = A + B sin(Ct) + D sin(E(t + F))

Where A would be the mean water level, B would be the amplitude of the lunar component of water levels, C would represent the frequency of the lunar tide, D would be the amplitude of the solar component of water levels, E would represent the frequency of the solar tide (for a 24-hour period, with t in hours, this would be 2π/24), and F would be an offset between the start of the lunar cycle and the start of the solar cycle.

I'm trying to figure out the numerical values of all those lettered coefficients. Now, I could figure all of those out with astronomical/oceanographic data (the period of the sun, the moon, the mean water level) besides B and D. For those two, I'd need to do a regression analysis from the data I tabulated earlier.

I don't know how to do regression analysis by hand (though I guess I could figure it out or try to learn it), so I looked all over the internet for a regression analysis program. I found a bunch that do regression analysis, but none of them do sinusoidal regression analysis. A polynomial regression analysis would work fairly well except that to find out the corresponding sinusoidal function, I'd have to do some super-complicated reverse Taylor series...

So my question is: Does anyone know a freeware/shareware/demo program that can do a sinusoidal regression analysis for something like this? If not, is there an easy way (without spending hours messing with Taylor series) to figure out how to find the corresponding sinusoidal-sum equation from a polynomial equation? Jolb 16:53, 28 May 2007 (UTC)


 * For fitting non-linear regressions, what you may be looking for is nonlinear least squares (nls) in R (GNU, free software). It should be able to fit almost any (reasonable) function using a squared residual loss. For your purpose, however, I think I would probably also look at the spectrum, which can also be done in R. (Although how much work to invest perhaps depends on the application.) --TeaDrinker 18:01, 28 May 2007 (UTC)

It goes without saying that many of those factors vary by location. The water level can also vary depending on if the Moon is at apogee or perigee and whether the Earth is at aphelion or perihelion. Storm surges and even air pressure changes can also change the water level, as well as waves and tsunamis. StuRat 04:16, 29 May 2007 (UTC)


 * Try doing Fourier analysis, just represent it as a sum of sin(n t) and perform FFT on the raw data. The noise will appear as higher frequency terms leaving the dominant frequencies. I did something very similar with data from full moon. If you want to try regression you can probably get away with standard least squares to start with. You can get rid of the awkward $$\sin( E(t + F))$$ by applying a trig identity leaving a problem which is linear in each sin term. --Salix alba (talk) 20:40, 29 May 2007 (UTC)
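Following Salix alba's last point: once the frequencies are treated as known, the identity sin(E(t + F)) = cos(EF)·sin(Et) + sin(EF)·cos(Et) makes the model linear in the unknown coefficients, so ordinary least squares suffices. A sketch in Python with NumPy on synthetic data (the amplitudes and noise level are invented for illustration; the 12.42 h and 12.00 h periods are the usual principal lunar and solar semidiurnal constituents):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic water levels over two days (t in hours)
t = np.linspace(0.0, 48.0, 200)
w_lunar = 2 * np.pi / 12.42     # principal lunar semidiurnal frequency
w_solar = 2 * np.pi / 12.00     # principal solar semidiurnal frequency
level = (3.0 + 1.2 * np.sin(w_lunar * t)
             + 0.5 * np.sin(w_solar * (t + 1.0))
             + rng.normal(scale=0.05, size=t.size))

# With the frequencies fixed, the model is linear in the coefficients
X = np.column_stack([
    np.ones_like(t),
    np.sin(w_lunar * t), np.cos(w_lunar * t),
    np.sin(w_solar * t), np.cos(w_solar * t),
])
coef, *_ = np.linalg.lstsq(X, level, rcond=None)

A = coef[0]                       # mean level
B = np.hypot(coef[1], coef[2])    # lunar amplitude
D = np.hypot(coef[3], coef[4])    # solar amplitude
assert abs(A - 3.0) < 0.1 and abs(B - 1.2) < 0.1 and abs(D - 0.5) < 0.1
```

Phases are recovered the same way, from the arctangent of each sine/cosine coefficient pair.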

Summation of the product of arithmetic and geometric terms
I know that

$$\sum_{k=0}^n r^n=\frac{1-r^{n+1}}{1-r}$$

and I also know that

$$\sum_{k=0}^n k=\frac{n(n+1)}{2}$$

but I am having a devil of a time with

$$\sum_{k=0}^n kr^n=?$$

Can anyone help? I've looked all over the place but can't seem to find the answer, and I can't figure out a trick (such as the multiply both sides by $$(1-r)$$ trick for geometric series) to come to an answer. I'm thinking it's something staring me in the face and obvious, but it eludes me.

Thanks in advance. Skalchemist 20:13, 28 May 2007 (UTC)

Drat, just noticed my own answer, in the bowels of the page on Geometric Progression. http://en.wikipedia.org/wiki/Geometric_progression. The trick is differentiation. My apologies, and please ignore the above. Skalchemist 20:22, 28 May 2007 (UTC)


 * I assume you meant $$\sum_{k=0}^n kr^k$$ (and $$\sum_{k=0}^n r^k$$ above). Differentiation is indeed one way. Another way is to write this as:
 * $$\sum_{k=1}^n kr^k = \sum_{k=1}^n\sum_{i=1}^kr^k=\sum_{i=1}^n\sum_{k=i}^nr^k$$
 * See if you can take it from there... -- Meni Rosenfeld (talk) 20:47, 28 May 2007 (UTC)
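Meni Rosenfeld's change in the order of summation can be sanity-checked numerically; a throwaway Python check (the values of n and r are arbitrary):

```python
# Check sum k*r^k against the double sum over 1 <= i <= k <= n,
# taken in both orders of summation
n, r = 6, 1.5

lhs = sum(k * r**k for k in range(1, n + 1))
by_k = sum(r**k for k in range(1, n + 1) for i in range(1, k + 1))
by_i = sum(r**k for i in range(1, n + 1) for k in range(i, n + 1))

assert abs(lhs - by_k) < 1e-9
assert abs(lhs - by_i) < 1e-9
```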


 * Your first two statements are incorrect.

$$\sum_{k=0}^n k=\frac{n}{2}(0+n)$$
 * Whereas

$$\sum_{k=1}^n k=\frac{n}{2}(1+n)$$

And ditto: $$\sum_{k=0}^n r^k=\frac{0}{1-r}=0$$
 * but

$$\sum_{k=1}^n r^k=\frac{1-r^{k+1}}{1-r}$$

Are you revising for C2? I was doing sigma notation today! MHDIV ɪŋglɪʃnɜː(r)d  ( Suggestion? | wanna chat? ) 21:59, 28 May 2007 (UTC)


 * Er, not quite - $$\sum_{k=0}^n k = \sum_{k=1}^n k = \frac{n}{2}(1+n)$$ since the extra term is 0, and $$\sum_{k=0}^n r^k = 1 + r + r^2 + \ldots + r^n = \frac{1 - r^{n+1}}{1-r}$$. (Don't believe me? Try expanding them out, or putting in some test values.) Confusing Manifestation 22:49, 28 May 2007 (UTC)


 * Yes, ConMan and the OP are of course correct. One way to look at the arithmetic series is that it is the average term times the number of terms. So the average term is indeed $$\tfrac n2$$, but there are $$n+1$$ terms, giving $$\sum_{k=0}^nk=\frac{n}{2}(n+1)=\frac{n(n+1)}{2}$$. As for the geometric series, I have no idea where you got your formulae from, or how you thought it possible that the sum is 0... -- Meni Rosenfeld (talk) 09:25, 29 May 2007 (UTC)

I did mean $$\sum_{k=0}^n kr^k$$ and $$\sum_{k=0}^n r^k$$ above and just made an error in my formulas. Thanks for all your assistance. However, I still have a problem. Using two different methods, I arrive at two different answers, meaning I must be wrong somewhere. I'm sure it's a mistake in my algebra, but I just can't find it.

Given:

$$\sum_{k=m}^{n} r^k = \frac{(r^m-r^{n+1})}{1-r}$$

DIFFERENTIATION

$$\begin{align} \sum_{k=0}^n kr^k & = \sum_{k=0}^n k(r)r^{k-1}\\ & = r\sum_{k=0}^n kr^{k-1}\\ & = r \frac{d}{dr}\sum_{k=0}^n r^k\\ & = r \frac{d}{dr}\frac{(1-r^{n+1})}{1-r}\\ & = r \frac{(n+1)r^n(1-r)-(1-r^{n+1})}{(1-r)^2}\\ & = r\frac{(n+1)\color{red}r^n}{1-r}-r\frac{1-r^{n+1}}{(1-r)^2}\\ \end{align}$$

SUMMATION

$$\begin{align} \sum_{k=1}^n kr^k & = \sum_{k=0}^n\sum_{i=1}^kr^k\\ & = \sum_{k=0}^n\frac{r-r^{k+1}}{1-r}\\ & = \frac{r}{1-r}\sum_{k=0}^n 1-r^k\\ & = \frac{r}{1-r} \left (\sum_{k=0}^n 1 - \sum_{k=0}^n r^k \right )\\ & = \frac{r}{1-r} \left ((n+1)-\frac{1-r^{n+1}}{1-r} \right )\\ &=r\frac{n+1}{1-r} - r\frac{1-r^{n+1}}{(1-r)^2}\\ \end{align}$$

It's that pesky $$r^n$$ bit in the first method that doesn't appear in the second. I've checked over my work and can't see the error. Skalchemist 15:13, 29 May 2007 (UTC)


 * You got it almost right with the differentiation: in the calculation of the derivative of the quotient, you forgot a minus sign on both terms, so the final result is the negation of the correct one.
 * With the double summation, however, you have a mistake in moving from the first line to the second. When you sum over i, you are summing a constant series, not a geometric one. You must change the order of summation, as I have described above, in order to proceed. -- Meni Rosenfeld (talk) 15:24, 29 May 2007 (UTC)
 * Of course! Thanks Meni (and ConMan, earlier).  I now have an answer, via CORRECT differentiation, as:


 * $$\sum_{k=0}^n kr^k = r\frac{1-r^{n+1}}{(1-r)^2}-r\frac{(n+1)r^n}{1-r}$$


 * I'm still working through the summation method, just to understand it, but light has dawned. Skalchemist 15:55, 29 May 2007 (UTC)
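As a final sanity check, the closed form above agrees with direct summation; a quick Python check (test values arbitrary):

```python
def kr_sum(n, r):
    """Closed form for sum_{k=0}^{n} k * r^k (valid for r != 1)."""
    return r * (1 - r**(n + 1)) / (1 - r)**2 - r * (n + 1) * r**n / (1 - r)

for n in (1, 5, 10):
    for r in (0.5, 2.0, -3.0):
        direct = sum(k * r**k for k in range(n + 1))
        assert abs(kr_sum(n, r) - direct) < 1e-9 * max(1.0, abs(direct))
```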