Wikipedia:Reference desk/Archives/Mathematics/2008 May 1

= May 1 =

Calculus/Dynamics
Hi, I've been set a question to answer and I'm having great difficulty answering it.

I have an equation: $$ \ddot{ \theta} + \dot{ \theta} + sin \theta = 0$$

The question says to rewrite the equation as a pair of coupled ODEs using $$ x = \theta $$ and $$ y=\dot{ \theta} $$.

I don't know where to start. Could somebody suggest what to do? Help is much appreciated. Thanks 212.140.139.225 (talk) 01:33, 1 May 2008 (UTC)


 * If
 * $$ y = \dot\theta\,$$
 * then
 * $$ \dot y = \ddot\theta,\, $$
 * so put $$\scriptstyle \dot y\,$$ in place of $$\scriptstyle \ddot\theta\, $$ and $$\scriptstyle x\,$$ in place of $$\scriptstyle\theta\,$$, getting
 * $$ \dot y + y + \sin x = 0.\, $$
 * That's one differential equation in two unknown functions, so you need one more equation in the same two functions. Here it is:
 * $$ y = \dot x.\, $$
 * Notice that in this system of two equations, you have only first-order derivatives.
 * And instead of writing
 * $$ sin\theta\,$$
 * you should write
 * $$\sin\theta\,$$
 * Michael Hardy (talk) 02:27, 1 May 2008 (UTC)
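For a concrete picture, the rewritten first-order system can be integrated numerically. Here is a minimal Python sketch (the function name, step size, and initial conditions are my own choices, not part of the question):

```python
import math

# Integrate the first-order system x' = y, y' = -y - sin(x), which is
# theta'' + theta' + sin(theta) = 0 rewritten with x = theta, y = theta'.
def simulate(x0, y0, dt=0.001, t_end=20.0):
    x, y = x0, y0
    for _ in range(int(t_end / dt)):
        # explicit Euler step for both coupled equations
        x, y = x + dt * y, y + dt * (-y - math.sin(x))
    return x, y

x, y = simulate(1.0, 0.0)
# the damping term theta' drives the pendulum toward the rest state x = y = 0
```

Explicit Euler is crude but enough to see the damped pendulum spiral into its equilibrium.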

Thanks for your help. I tried this originally as it seemed the obvious thing to do, but how do I solve the first equation? Should I solve it using a Jacobian matrix of partial derivatives? Thanks again 212.140.139.225 (talk) 07:51, 1 May 2008 (UTC)
 * The differential equation has no general closed-form solution, although, of course, y = 0 is a particular solution. --Lambiam 14:28, 2 May 2008 (UTC)

y(t) = 0 for all t is not a solution unless you also say something about x(t). Michael Hardy (talk) 22:40, 5 May 2008 (UTC)
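On the Jacobian question above: one standard use of the Jacobian here is not to solve the equation in closed form but to classify its equilibria. A small sketch (all names are mine; the Jacobian is computed from the system x' = y, y' = -y - sin x):

```python
import math

# Linearize at the equilibrium (x, y) = (0, 0). The Jacobian of the
# system is [[0, 1], [-cos(x), -1]], which at the origin is [[0, 1], [-1, -1]].
trace = 0.0 + (-1.0)                       # = -1
det = (0.0 * -1.0) - (1.0 * -1.0)          # = 1
disc = trace**2 - 4 * det                  # = -3 < 0: complex eigenvalues
real_part = trace / 2                      # = -1/2 < 0: a stable spiral
omega = math.sqrt(-disc) / 2               # damped oscillation frequency sqrt(3)/2
# eigenvalues are (-1 +/- i*sqrt(3))/2, so solutions spiral into the origin
```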

Sequences
If I have a sequence of the outputs of some function f(x):

5  12   21   32

it can be extended by computing the differences of the numbers. For example:

      2    2
   7    9    11
5   12   21   32

Here we have a pattern; just extend the pyramid to compute more numbers in the sequence:

      2    2   [2
   7    9    11   [13
5   12   21   32   [45

f(x) is actually x^2+4x.

What is this called, and how can I use this pyramid alone to work out the original function? Thanks.

--wj32 t/c 03:57, 1 May 2008 (UTC)


 * I think you're looking for Finite differences. It's quite useful in situations like this. A math-wiki (talk) 07:33, 1 May 2008 (UTC)
 * To find the original sequence (assuming you end up with a constant sequence, in which case the original sequence is polynomial), I would do the following: First, keep in mind that just as $$\frac{d^m(x^m)}{(dx)^m}=m!$$, the mth difference of $$a_n=n^m$$ is the constant $$m!$$. So if the second difference is the constant 2, the sequence has a $$\frac{2}{2!}n^2=n^2$$ term. So you subtract $$n^2$$ from the original sequence to get:

   4    4    4
4    8   12   16
 * So you see that for the remainder, the first difference is the constant 4, thus it has a $$4n$$ term. You subtract this from the remainder to get:

0  0   0   0
 * Thus there is nothing left, and you have $$a_n=n^2+4n$$. There are of course other ways, but I find this easy to both remember and apply.
 * By the way: If I encountered this sequence in some real scenario, ending up with a "constant" sequence of length only 2 would not convince me that this low-degree polynomial is the true sequence. -- Meni Rosenfeld (talk) 08:25, 1 May 2008 (UTC)
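The difference-pyramid procedure above can be sketched in a few lines of Python (the helper name is mine):

```python
# Build successive rows of the difference table for f(n) = n^2 + 4n.
def diffs(seq):
    """Successive differences: diffs([5, 12, 21]) == [7, 9]."""
    return [b - a for a, b in zip(seq, seq[1:])]

seq = [n**2 + 4*n for n in range(1, 7)]   # 5, 12, 21, 32, 45, 60
first = diffs(seq)                        # 7, 9, 11, 13, 15
second = diffs(first)                     # 2, 2, 2, 2 -- constant, so f is quadratic
# extend the pyramid one step: next term = 60 + (15 + 2)
next_term = seq[-1] + first[-1] + second[-1]
```

The constant second row is what tells you the degree is 2, matching Meni's method above.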

At what values of n is the fraction an integer?
At what values of n is the fraction (n³-10n+51)/(n+3) an integer? I began like this: n³-10n+51 = n(n²-10)+51 = n((n-3)(n+3)-1)+51 = n(n-3)(n+3)-(n-51)

(n³-10n+51)/(n+3)=n(n-3)- (n-51)/(n+3)

So, as n(n-3) is an integer, I have to find values of n when the fraction (n-51)/(n+3) is an integer. Please help me to do it. --Juliet16 (talk) 04:22, 1 May 2008 (UTC)
 * Hint: $$n-51=n+3-54$$.
 * By the way, I am not familiar with the terminology "meanings" of n. I'd say "values of n". -- Meni Rosenfeld (talk) 07:08, 1 May 2008 (UTC)


 * Furthermore (though you've assumed it in your own work, and it is often true when using the symbol n), I hope you mean integer values of n, because there are an infinite number of values for $$n \in \mathbb{R}$$ and the solution is almost trivial. --Prestidigitator (talk) 20:25, 1 May 2008 (UTC)


 * Thanks. Next: (n-51)/(n+3) = (n+3)/(n+3) - 54/(n+3) = 1 - 54/(n+3). It is an integer when |n+3| is a divisor of 54. Juliet16 (talk) 04:03, 2 May 2008 (UTC)
 * Exactly. I assume you have no problem finding the divisors of 54, and hence, the possible values of n. -- Meni Rosenfeld (talk) 08:45, 2 May 2008 (UTC)
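The divisor argument can be checked mechanically. A short Python sketch (helper names are mine) enumerates the solutions from the divisors of 54 and cross-checks them by direct division:

```python
# The fraction (n^3 - 10n + 51)/(n + 3) is an integer exactly when
# n + 3 divides 54 (positive or negative divisors both count).
def divisors(m):
    return [d for d in range(1, abs(m) + 1) if m % d == 0]

solutions = sorted(d - 3 for p in divisors(54) for d in (p, -p))
# cross-check by direct division over a wide range; n = -3 is excluded
# because the fraction is undefined there
brute = [n for n in range(-200, 200)
         if n != -3 and (n**3 - 10*n + 51) % (n + 3) == 0]
```

Each of the 8 positive divisors of 54 yields two values of n, giving 16 solutions in total.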

Factorial Designs
Hi, OK, basically I'm trying to learn about two forms of factorial design, namely the "two-way completely randomised factorial design" and the "two-way randomised block factorial design".

For "Two-way completely randomised factorial design" I have been told to use the formula:

SST = SSA + SSB + SSAB + SSE

and for "two-way randomised block factorial design" I have been told to use the formula:

SST = SSA + SSB + SSAB + SSR + SSE

Now in both instances I take it that SST is the sum, over every piece of data, of (data value - grand mean) squared? And I also take it that SSE = SST - SSA - SSB - SSAB for the "two-way completely randomised factorial design", and SSE = SST - SSA - SSB - SSAB - SSR for the other?

The big problem I have is that I don't know how to go about calculating SSA, SSB, SSAB, and SSR in each case. Please go easy on me with formulas, because I don't always know how to interpret them. If it helps you to explain it better, the case I am relating this all to concerns: 4 different designs of advert (a) and 4 different sizes of advert (b), with 2 replications, i.e. 32 observations in total. I look forward to your assistance and am greatly appreciative of it. 79.77.185.42 (talk) 14:53, 1 May 2008 (UTC)


 * Let's start with SSA. Take the data that have been "corrected" by subtraction of the grand mean from every entry in the whole vector.  For each level of the factor A, take the mean of those entries of the response vector for which the predictor specifies that level of A.  At every data point, take that mean minus the grand mean, square it, then add all of these.  That's SSA.  The thing that gets squared has the same value at every data point that is at the same level of A, so you can just multiply each square by how many are at the same level; that makes things simpler.  And so on.... Michael Hardy (talk) 22:52, 5 May 2008 (UTC)
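The recipe above can be turned into a short Python sketch. The data and all names here are made up for illustration (a 2×2 design with 2 replicates rather than the asker's 4×4 with 2 replications, to keep the numbers small):

```python
# Hypothetical balanced two-way layout: keys are (level of A, level of B),
# values are the replicate observations in that cell.
data = {('a1', 'b1'): [3, 5],  ('a1', 'b2'): [4, 6],
        ('a2', 'b1'): [7, 9],  ('a2', 'b2'): [10, 12]}

def mean(vals):
    return sum(vals) / len(vals)

all_vals = [v for vs in data.values() for v in vs]
grand = mean(all_vals)

def level_mean(pos, level):
    # mean of all observations at one level of factor A (pos=0) or B (pos=1)
    return mean([v for k, vs in data.items() if k[pos] == level for v in vs])

SST  = sum((v - grand)**2 for v in all_vals)
SSA  = sum((level_mean(0, k[0]) - grand)**2 for k, vs in data.items() for v in vs)
SSB  = sum((level_mean(1, k[1]) - grand)**2 for k, vs in data.items() for v in vs)
SSAB = sum((mean(vs) - level_mean(0, k[0]) - level_mean(1, k[1]) + grand)**2
           for k, vs in data.items() for v in vs)
SSE  = sum((v - mean(vs))**2 for k, vs in data.items() for v in vs)
# for a balanced completely randomised design, SST = SSA + SSB + SSAB + SSE
```

Note how SSA follows Michael Hardy's description exactly: the squared (level mean minus grand mean) term is counted once per observation at that level.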

Inverse Laplace transform
There was a question earlier from Omegatron phrased in electronic terms, which I answered in electronic terms. It did, however, contain a slightly disguised actual maths question which seems to have been overlooked. I have taken the liberty of rephrasing it here (I think this is what Omegatron was in effect asking). Does there exist a real time-domain function f(t) such that its Laplace transform is,

$$\mathcal{L} \left\{f(t)\right\}=\frac{1}{\sqrt{s}}$$

 SpinningSpark 16:10, 1 May 2008 (UTC)


 * Is 2a.1 in Laplace transform not what you’re looking for? GromXXVII (talk) 16:51, 1 May 2008 (UTC)


 * Yep, that's it. I did look at that table but contrived to miss that line. SpinningSpark 17:22, 1 May 2008 (UTC)


 * and since according to my maths book,


 * $$\Gamma \left(\frac{1}{2}\right)=\sqrt{\pi}$$


 * would the answer be,


 * $$f(t)=\frac{u(t)}{\sqrt{\pi t}}$$
 * SpinningSpark 17:34, 1 May 2008 (UTC)
 * You want Gamma of -1/2, surely? Algebraist 18:11, 1 May 2008 (UTC)


 * I'm well outside my comfort zone here so please be kind if I am wrong but I still think that $$\Gamma(1/2)$$ is right. According to the table of Laplace transforms,


 * $$\mathcal{L}\left( {t^q \over \Gamma(q+1)}\cdot u(t) \right) = { 1 \over s^{q+1} } $$


 * setting q + 1 = 1/2, i.e. q = -1/2, results in:


 * $$\mathcal{L}\left( {t^q \over \Gamma(\frac{1}{2})}\cdot u(t) \right) =\frac{1}{\sqrt{s}}$$


 * as required. SpinningSpark 20:24, 1 May 2008 (UTC)
 * Looks like I can't read. Time for sleep, perhaps. Algebraist 21:22, 1 May 2008 (UTC)
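The table entry can also be checked numerically. A short Python sketch (the function name and quadrature choices are mine) approximates the defining integral:

```python
import math

# L{1/sqrt(pi*t)}(s) = integral_0^inf exp(-s*t)/sqrt(pi*t) dt.
# Substituting t = u^2 removes the singularity at t = 0 and gives
# (2/sqrt(pi)) * integral_0^inf exp(-s*u^2) du, evaluated by the midpoint rule.
def laplace_of_inv_sqrt_pi_t(s, n=100_000, upper=10.0):
    h = upper / n
    total = sum(math.exp(-s * ((i + 0.5) * h)**2) for i in range(n)) * h
    return 2.0 / math.sqrt(math.pi) * total

approx = laplace_of_inv_sqrt_pi_t(4.0)   # should be close to 1/sqrt(4) = 0.5
```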

Defining the cross product
How does one define the cross product rigorously (i.e., without making reference to human hands)? The closest I can get is to let $$\mathbf{a}\times\mathbf{b}$$ be a function, not only of $$\mathbf{a}$$ and $$\mathbf{b}$$, but also of an ordered basis. One could then define $$\mathbf{a}\times\mathbf{b}$$ to point in the direction that makes $$(\mathbf{a},\mathbf{b},\mathbf{a}\times\mathbf{b})$$ form a right-handed system with respect to the chosen basis. How one would decide that a basis is inherently right-handed, not just with respect to another basis, I don't know. Still, that seems to be necessary in physics. —Bromskloss (talk) 21:41, 1 May 2008 (UTC)
 * Obviously orientation cannot be an inherent property of a basis for a general inner product space, as a reflection is an isomorphism which alters it. However, if some "universal" base is chosen (such as the standard basis for $$\mathbb{R}^3$$), the orientation of anything else is determined by the determinant - for example, {(0,1,0),(1,0,0),(0,0,1)} is left-handed because its determinant is -1. -- Meni Rosenfeld (talk) 22:00, 1 May 2008 (UTC)
 * (edit conflict - I think we're saying much the same thing) You just define "right-handed" to mean "has the same orientation as the standard basis", as far as I can tell. For physics, it becomes a matter of modelling. When you model physical space as a vector space you have to say which directions the standard coordinate vectors point in and at that stage you can choose either something which looks like your right hand or something which looks like your left hand. --Tango (talk) 22:03, 1 May 2008 (UTC)
 * If you like indices and co-ordinates and suchlike, you can just define the cross product directly: $$(\mathbf{a \times b})_i=\varepsilon_{ijk} a_j b_k$$. Here, the orientation of the standard basis is built into the definition of the Levi-Civita symbol. Algebraist 22:19, 1 May 2008 (UTC)


 * For those who may not be familiar with it, note that Algebraist's answer uses the Einstein summation convention of dropping the summation symbol when there are repeated indexes, so:
 * $$(\mathbf{a \times b})_i = \varepsilon_{i j k} a_j b_k = \sum_{j k} \varepsilon_{i j k} a_j b_k$$
 * --Prestidigitator (talk) 23:12, 1 May 2008 (UTC)
 * Sorry; forgot there are people who haven't learnt that yet. Algebraist 10:34, 2 May 2008 (UTC)
 * My teacher told me that in $$\mathbb{R}^3$$, the cross product is defined by a determinant, which has nothing to do with any left-hand or right-hand rule; the handedness is an interpretation made after the cross product has been defined. twma 03:55, 2 May 2008 (UTC)
 * I don't know if that's accurate. You have a choice as to how to order the items in the determinant, and this choice affects the handedness rule for the product (or rather, is determined by the handedness we want to have). -- Meni Rosenfeld (talk) 05:54, 2 May 2008 (UTC)


 * I think what you are looking for is called a Lie Bracket. The cross product makes $$\mathbb{R}^3$$ into a Lie algebra. As you can see in the link, the cross product definitely satisfies all of the axioms of a Lie bracket, the most noticeable perhaps being antisymmetry. A Real Kaiser (talk) 08:32, 2 May 2008 (UTC)
 * If you have a determinant that means you've written it in a certain basis - that basis will be either left- or right-handed. --Tango (talk) 10:49, 2 May 2008 (UTC)
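Algebraist's index definition can be written out directly in code; here is a short Python sketch (helper names are mine):

```python
# epsilon_{ijk}: +1 for even permutations of (0, 1, 2), -1 for odd, 0 otherwise.
def levi_civita(i, j, k):
    if len({i, j, k}) < 3:
        return 0   # a repeated index gives 0
    # count inversions among the ordered pairs to get the parity
    inversions = sum(1 for a, b in [(i, j), (i, k), (j, k)] if a > b)
    return 1 if inversions % 2 == 0 else -1

def cross(a, b):
    # (a x b)_i = sum over j, k of epsilon_{ijk} a_j b_k
    return tuple(sum(levi_civita(i, j, k) * a[j] * b[k]
                     for j in range(3) for k in range(3))
                 for i in range(3))
```

The orientation convention is baked into the choice that epsilon_{012} = +1, which is exactly the point made above about the standard basis.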