Wikipedia:Reference desk/Archives/Mathematics/2012 April 9

= April 9 =

297/140 seems like an unusual answer.
I don't think I've made a mistake - I've checked the math several times now - I guess I just want to make sure here because $$\frac{297}{140}$$ doesn't seem like a nice answer.

Calculate the line integral $$\int_C \vec{A}(\vec{r})\cdot d\vec{r}$$ where $$\vec{A}(\vec{r}) = (y^2+z^2,z^2+x^2,x^2+y^2)$$ and C is the path from (0,0,0) to (1,1,1) along the parametric curve $$\vec{r}(u) = (u,u^2,u^3)$$.

This is what I did:

$$=\int_C (y^2+z^2,z^2+x^2,x^2+y^2).(du,2udu,3u^2du)$$

Since $$\vec{r}(u) = (u,u^2,u^3)$$ we make the substitutions $$x=u$$, $$y=u^2$$, $$z=u^3$$, and the limits of the integral are 0 and 1.

$$=\int_0^1 (u^4+u^6,u^6+u^2,u^2+u^4).(du,2udu,3u^2du)$$

$$=\int_0^1 (u^4+u^6 + 2u^7+2u^3+3u^4+3u^6)du$$

$$=\int_0^1 (4u^4+4u^6 + 2u^7+2u^3)du$$

$$=\frac{4}{5}+\frac{4}{7}+\frac{2}{8}+\frac{2}{4}$$

$$=\frac{297}{140}$$
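As a sanity check, the antiderivative can be evaluated exactly with rational arithmetic. A quick Python sketch (an illustration, not part of the original post):

```python
from fractions import Fraction

# Integrand 2u^3 + 4u^4 + 4u^6 + 2u^7, stored as {power: coefficient}
terms = {3: 2, 4: 4, 6: 4, 7: 2}

# Each term c*u^n integrates over [0, 1] to c/(n + 1); sum exactly as rationals
result = sum(Fraction(c, n + 1) for n, c in terms.items())
print(result)  # 297/140
```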

Widener (talk) 05:27, 9 April 2012 (UTC)


 * I can't fault your logic at any step; it seems straightforward enough. The final definite integral is also correct.  Due to the quirkiness of the form of the "field" A and of the path C, the apparent complexity is not surprising, and I would not have expected a "nicer" answer; that the result should be rational is about all you could expect.  — Quondum☏ 06:30, 9 April 2012 (UTC)

Second Order ODEs
Consider a second order homogeneous ODE of the form $$ay'' + by' + cy = 0$$ where a, b and c are constants with a ≠ 0. The general solution will be of the form $$\alpha f(x) + \beta g(x)$$ where α and β are constants and ƒ and g are linearly independent functions, i.e. one is not a constant multiple of the other. Everything's straightforward when $$b^2 - 4ac \ne 0$$. When $$b^2 - 4ac = 0$$ the particular solutions are of the form $$f(x) = e^{rx}$$ and $$g(x) = xe^{rx}$$ where r is the repeated root of $$ax^2 + bx + c = 0$$. The thing I don't see is where does $$g(x) = xe^{rx}$$ come from? It obviously works. Every book I've seen on the subject simply checks that it works. Obviously $$\alpha e^{rx}$$ and $$\beta e^{rx}$$ aren't linearly independent functions, and so another function would be required. But where does $$xe^{rx}$$ come from, besides trial and error? Can anyone supply a decent online reference? — Fly by Night  ( talk )  17:31, 9 April 2012 (UTC)


 * Sorry, this is OR, so make of it what you will (also I have no references; I simply give this in the hope that you might find it useful): take the limit as two distinct roots converge
 * $$\lim_{\varepsilon \to 0} (\alpha e^{rx} + \beta e^{(r+\varepsilon)x}) = \lim_{\varepsilon \to 0} (\alpha e^{rx} + \beta e^{rx}e^{\varepsilon x})$$
 * where $$\alpha$$ and $$\beta$$ are adjusted as a function of $$\varepsilon$$ to avoid the two degrees of freedom becoming either zero or infinity. I'm sure you'll see how the required solution appears when you write out the Taylor series for $$e^{\varepsilon x}$$.  — Quondum☏ 19:55, 9 April 2012 (UTC)


 * Don't worry about OR, this is the reference desk. I can't seem to follow your argument to a satisfactory conclusion. The exponential function is an entire function. I took the limit and got:
 * $$ \lim_{\varepsilon \to 0} (\alpha e^{rx} + \beta e^{rx}e^{\varepsilon x}) = \alpha e^{rx} + \beta e^{rx} \, . $$
 * I computed the Taylor series, and didn't see anything; I also tried dividing through by ε but this gives an indeterminate expression as ε → 0. Could you be more specific, please? — Fly by Night  ( talk )  20:18, 9 April 2012 (UTC)


 * The way I was looking at it, I put
 * $$\lim_{\varepsilon \to 0} (\alpha e^{rx} + \beta e^{rx}e^{\varepsilon x}) = \lim_{\varepsilon \to 0} (\alpha e^{rx} + \beta e^{rx}(1+\varepsilon x+\varepsilon^2 x^2/2!+...))$$
 * $$ = \lim_{\varepsilon \to 0} ((\alpha + \beta)e^{rx} + (\beta\varepsilon) x e^{rx} + \beta \varepsilon^2 e^{rx}(x^2/2!+...))$$
 * If we now put $$\alpha'=\alpha+\beta$$ and $$\beta'=\beta\varepsilon$$, and then keep $$\alpha'$$ and $$\beta'$$ constant (by varying $$\beta$$ inversely with $$\varepsilon$$ and similarly finding a suitable way to vary $$\alpha$$), the terms with $$\beta \varepsilon^2 = \beta' \varepsilon$$ tend to zero, while the prior terms remain unchanged. The exponential function being entire is useful in that it ensures that the result is independent of the path in the complex or real domain along which $$\varepsilon$$ tends to zero.  Not very rigorous perhaps, but better than trial-and-error.  — Quondum☏ 20:47, 9 April 2012 (UTC)
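This rescaling limit is easy to see numerically. A Python sketch (the values of r, x, and the fixed constants $$\alpha'$$, $$\beta'$$ are arbitrary choices): holding $$\alpha'$$ and $$\beta'$$ fixed as $$\varepsilon \to 0$$, the two-exponential solution converges to $$(\alpha' + \beta' x)e^{rx}$$.

```python
import math

r, x = 0.8, 1.5
alpha_p, beta_p = 1.0, 2.0      # alpha' and beta', held fixed in the limit

def y_eps(eps):
    # Solution for distinct roots r and r + eps, with alpha and beta
    # rescaled so that alpha' = alpha + beta and beta' = beta * eps
    beta = beta_p / eps
    alpha = alpha_p - beta
    return alpha * math.exp(r * x) + beta * math.exp((r + eps) * x)

limit = (alpha_p + beta_p * x) * math.exp(r * x)
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, abs(y_eps(eps) - limit))  # gap shrinks toward 0 with eps
```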

If $$b^2-4ac=0$$, the differential operator factors as
 * $$\left(\frac{d}{dx}-r\right)^2y = 0.$$

Because of the commutator identity $$\left[\frac{d}{dx},e^{-rx}\right]=-re^{-rx}$$, we can rewrite this as
 * $$e^{rx}\frac{d^2}{dx^2}(e^{-rx}y)=0$$

which is now easy to solve. Sławomir Biały (talk) 23:59, 9 April 2012 (UTC)
 * Sławomir, could you please explain where the commutator identity came from and what its relevance is? Also
 * $$e^{rx}\frac{d^2}{dx^2}(e^{-rx}y)=0 \iff y'' - 2ry' + r^2y = 0 \, ,$$
 * which is just another homogeneous second order ODE whose characteristic equation has a repeated root. The solution of which is
 * $$ y(x) = \alpha e^{rx} + \beta x e^{rx} \, . $$
 * My original question was where does the factor of x come from? — Fly by Night  ( talk )  01:46, 10 April 2012 (UTC)
 * Why would you expand $$e^{rx}\frac{d^2}{dx^2}(e^{-rx}y)=0 $$ out? You can solve this equation!  The solution is $$e^{-rx}y = Ax+B$$, and this shows where your x comes from!   Sławomir Biały  (talk) 11:06, 10 April 2012 (UTC)
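A quick numerical check (a Python sketch; the constants A, B, r are arbitrary) that $$y = (Ax+B)e^{rx}$$ really does satisfy $$y'' - 2ry' + r^2y = 0$$:

```python
import math

A, B, r = 2.0, -1.0, 0.7  # arbitrary constants for the check

def y(x):   return (A * x + B) * math.exp(r * x)
def yp(x):  return (A + r * (A * x + B)) * math.exp(r * x)          # y'
def ypp(x): return (2 * A * r + r ** 2 * (A * x + B)) * math.exp(r * x)  # y''

for x in (0.0, 1.0, 3.5):
    residual = ypp(x) - 2 * r * yp(x) + r ** 2 * y(x)
    print(abs(residual) < 1e-9)  # True at each sample point
```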


 * D'oh! Yeah, of course. Where did the commutator identity come from, and what is its relevance? I can see that your equation $$e^{rx}\frac{d^2}{dx^2}(e^{-rx}y) = 0$$ is the same as $$y'' - 2ry' + r^2y = 0$$ which is an equation of the type in question. But how did you arrive at $$e^{rx}\frac{d^2}{dx^2}(e^{-rx}y) = 0$$? Is there a systematic way of doing it, or do you just have "an eye" for it? I'm puzzled about why you use $$e^{-rx}$$ in the commutator and in $$e^{rx}\frac{d^2}{dx^2}(e^{-rx}y) = 0$$ when $$x = +r$$ is the repeated root of the characteristic equation. — Fly by Night  ( talk )  13:15, 10 April 2012 (UTC)


 * It's the commutator trick that allows you to remove the eigenvalue, the idea being that the case of $$r=0$$ is well-understood from first principles already. Each time an $$e^{-rx}$$ is moved past a $$d/dx$$, it gives a $$d/dx - r$$.   Sławomir Biały  (talk) 13:27, 10 April 2012 (UTC)


 * Thanks Sławomir. It feels like we're getting somewhere. Could you please give me some references and online links? I'd like to read up on this. — Fly by Night  ( talk )  14:03, 10 April 2012 (UTC)

Sorry, I don't know any good references offhand. Another way to approach the problem is to use the matrix exponential to find the fundamental matrix solution. Write $$y''-2ry'+r^2y=0$$ as the system (with $$v=y'$$):
 * $$\begin{bmatrix}y\\ v\end{bmatrix}' = \begin{bmatrix}0&1\\ -r^2&2r\end{bmatrix}\begin{bmatrix}y\\ v\end{bmatrix}$$

This is the companion matrix for the minimal polynomial $$\lambda^2-2r\lambda+r^2$$ (with a double root $$\lambda=r$$). In fact, the Jordan decomposition of the matrix is:
 * $$\begin{bmatrix}0&1\\ -r^2&2r\end{bmatrix} = \begin{bmatrix}r&-1\\ r^2&0\end{bmatrix}\begin{bmatrix}r&1\\ 0&r\end{bmatrix}\begin{bmatrix}r&-1\\ r^2&0\end{bmatrix}^{-1}$$

So the fundamental matrix solution is
 * $$\begin{align} \exp\left(\begin{bmatrix}0&1\\ -r^2&2r\end{bmatrix}x\right) &= \begin{bmatrix}r&-1\\ r^2&0\end{bmatrix}\exp\begin{bmatrix}rx&x\\ 0&rx\end{bmatrix}\begin{bmatrix}r&-1\\ r^2&0\end{bmatrix}^{-1}\\ &=\begin{bmatrix}r&-1\\ r^2&0\end{bmatrix}\begin{bmatrix}e^{rx}&xe^{rx}\\ 0&e^{rx}\end{bmatrix}\begin{bmatrix}r&-1\\ r^2&0\end{bmatrix}^{-1}\\ &=\begin{bmatrix} e^{rx}(1-rx) & xe^{rx}\\ -r^2xe^{rx} & e^{rx}(1+rx) \end{bmatrix}. \end{align}$$

Anyway, less elegant than what I suggested before, but probably more systematic. Sławomir Biały (talk) 22:01, 10 April 2012 (UTC)
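The closed form above can be spot-checked numerically without any libraries, approximating the matrix exponential by its truncated power series (a sketch; the choices r = 0.5, x = 1.2 and 30 series terms are arbitrary):

```python
import math

def mat_mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(M, terms=30):
    # exp(M) by the truncated power series sum M^n / n!
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        power = mat_mul(power, M)
        result = [[result[i][j] + power[i][j] / math.factorial(n)
                   for j in range(2)] for i in range(2)]
    return result

r, x = 0.5, 1.2
E = expm([[0.0, x], [-r * r * x, 2 * r * x]])   # exp of the system matrix times x
closed = [[math.exp(r * x) * (1 - r * x), x * math.exp(r * x)],
          [-r * r * x * math.exp(r * x), math.exp(r * x) * (1 + r * x)]]
print(all(abs(E[i][j] - closed[i][j]) < 1e-9
          for i in range(2) for j in range(2)))  # True
```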

The ODE whose characteristic equation has roots $$r$$ and $$r+\epsilon$$ has the solutions $$(\alpha-\frac \beta \epsilon)e^{rx}+\frac \beta \epsilon e^{(r+\epsilon)x}=\alpha e^{rx}+ \beta \frac{e^{\epsilon x}-1}\epsilon e^{rx}$$ for $$\epsilon\ne 0$$. The limiting case $$\epsilon\to 0$$ gives $$\frac{e^{\epsilon x}-1}{\epsilon}\to x$$. This is where the factor of x comes from. Bo Jacoby (talk) 08:54, 10 April 2012 (UTC).
 * It's worth remarking that the solutions $$y_\epsilon$$ all have the same initial conditions $$y_\epsilon(0)=\alpha$$ and $$y'_\epsilon(0)=r\alpha+\beta$$, independent of $$\epsilon$$, so the limit is a natural one to take.  Sławomir Biały  (talk) 11:28, 10 April 2012 (UTC)

Generalised permutation block matrices?
Fix an integer k > 0. We can form 'block' permutation matrices by taking a genuine permutation matrix, replacing the zeroes with k × k zero matrices, and the ones with k × k identity matrices. Further, we can form generalised block permutation matrices by, instead of having identity matrices, using a collection of k × k integral matrices. Do these things have a name? Are they well studied? The group formed by such matrices decomposes as the semi-direct product of r direct copies, say, of GL(k,Z) (viewed as block diagonal rk × rk matrices), with the symmetric group on r letters acting by permuting the blocks. I'd like to learn as much about this group as possible, so any pointers would be great! Thanks, Icthyos (talk) 19:13, 9 April 2012 (UTC)
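For concreteness, such a matrix can be assembled programmatically (a Python sketch; the function name and sample blocks are ad hoc): a permutation of r block-columns, with an arbitrary k × k integral matrix in each occupied block.

```python
def block_perm_matrix(perm, blocks):
    # perm[i] is the block-column occupied by block-row i;
    # blocks[i] is the k-by-k integer matrix placed there
    r = len(perm)
    k = len(blocks[0])
    n = r * k
    M = [[0] * n for _ in range(n)]
    for i, j in enumerate(perm):
        for a in range(k):
            for b in range(k):
                M[i * k + a][j * k + b] = blocks[i][a][b]
    return M

I2 = [[1, 0], [0, 1]]
A  = [[2, 1], [1, 1]]                 # an element of GL(2, Z)
M = block_perm_matrix([1, 0], [A, I2])
for row in M:
    print(row)  # A in block (0,1), the identity in block (1,0)
```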


 * It occurs to me that this group is a wreath product, with base space the integral matrix group, and the symmetric group acting. In particular, it contains as a subgroup the wreath product of the symmetric group on k letters being acted on by the symmetric group on r letters. Icthyos (talk) 10:55, 12 April 2012 (UTC)

Help locating a book
If you feel this question belongs elsewhere please feel free to move it. I put it here because someone else may have had their interest in Maths piqued by it.

When I was 7 in the mid-60s, I read a book which outlined about a dozen well-known mathematical problems, such as the Tower of Hanoi and the 7 Bridges of Konigsberg. I've been racking my brains to remember what book this was but to no avail. I also have a feeling it may have been by Edward Kasner, as I'm sure I read in the same book about the naming of googol. Can anyone help with this book please? --TammyMoet (talk) 19:32, 9 April 2012 (UTC)


 * I typed the author's name into Amazon and got a result. It was originally published in 1971. It seems like a very long book for a seven-year-old; it has 400 pages. But according to the contents page it mentions the Towers of Hanoi and the Googol. On page 270 it mentions seven bridges too. Hope this helps. — Fly by Night  ( talk )  20:32, 9 April 2012 (UTC)


 * Thanks for this. I can quite accurately date the book to before 1971 because of the library I got it from, which was in a town I moved from in 1968. I wonder if there was an earlier version? It was quite a long book as I remember, but I don't know about 400 pages. Anyway you've given me something to go on there! --TammyMoet (talk) 09:38, 10 April 2012 (UTC)
 * Bingo! This reference says the book was originally written in 1940! Thanks again! --TammyMoet (talk) 09:40, 10 April 2012 (UTC)


 * I have the Tempus version of that book, and its copyright page says it first came out in 1940 from Simon and Schuster.

Powers of i
I can say that $$i^5=i^{\left (4+1 \right )}=i^4 \cdot i=1 \cdot i=i$$, but why can't I say that $$i^5=i^{4 \cdot \frac{5}{4}}={\left ( i^4 \right )}^{\frac{5}{4}}=1^{\frac{5}{4}}=1$$?  It Is Me Here   t / c 22:51, 9 April 2012 (UTC)


 * There are four possible fourth roots of 1, 1 is just one of them. You don't even have to go to imaginary numbers to get the problem, e.g. $$(-1)^1 = (-1)^{2/2} = ((-1)^2)^{1/2} = 1^{1/2}=1$$ using the same logic. Dmcq (talk) 23:20, 9 April 2012 (UTC)


 * In general, exponentiation is not single-valued. A derivation of this sort is guaranteed to give one correct value of the power, but not necessarily the value you want. --COVIZAPIBETEFOKY (talk) 23:26, 9 April 2012 (UTC)


 * In concrete terms: $$1^4 = 1$$, $$(-1)^4 = 1$$, $$i^4 = 1$$ and $$(-i)^4 = 1$$, so $$ 1^{1/4} = \pm 1, \pm i.$$ Take a look at our article on roots of unity. As an example of the confusion: working over the real numbers, it's tempting to think that $$ \sqrt{x^2} = (x^2)^{1/2} = x^{2/2} = x^1 = x$$ when, in reality, $$\sqrt{x^2} = |x|.$$ — Fly by Night  ( talk )  00:17, 10 April 2012 (UTC)
 * If you're confused by the above answers, good! Complex exponentiation is confusing, even more so because different definitions give different ways to approach the same thing. But I'll try to sum up:
 * There's no problem when the exponent is an integer. $$i^5$$ is definitely $$i$$. Also, $$i^5=i^{4\cdot\frac54}$$ since it's just rewriting the exponent.
 * When the exponent is not an integer, the result is not a single value but a set of values, $$a^b = \{\exp(b(\log a + 2\pi i k))\mid k\in\mathbb{Z}\}$$ where $$\log a$$ is any one of the natural logarithms of a. (A different approach is to consider Riemann surfaces.)
 * In general, $$a^{b\cdot c}$$ and $$(a^b)^c$$ are not necessarily the same set. However, they do need to have at least one value in common.
 * Your derivation shows that $$i^5$$ is necessarily equal to one of the values of $$1^{5/4}$$, namely i. The fact that 1 is another value of $$1^{5/4}$$ has no bearing on the value of $$i^5$$.
 * -- Meni Rosenfeld (talk) 09:01, 10 April 2012 (UTC)
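A quick numeric illustration of points 2–4 (a Python sketch): the four values of $$1^{5/4}$$ are $$\exp(\tfrac54\cdot 2\pi i k)$$ for k = 0, 1, 2, 3, and $$i^5$$ matches exactly one of them.

```python
import cmath

# The four values of 1**(5/4): exp((5/4) * (log 1 + 2*pi*i*k)), with log 1 = 0
vals = [cmath.exp(1.25 * 2j * cmath.pi * k) for k in range(4)]
for v in vals:
    print(complex(round(v.real, 9), round(v.imag, 9)))  # 1, i, -1, -i

# i**5 has an integer exponent, so it is single-valued and equals i
print(abs(1j ** 5 - 1j) < 1e-9)  # True
```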


 * This was a good answer. Thank you! LightThinker (talk) 14:34, 10 April 2012 (UTC)


 * This last remark puzzles me. Who was the OP? Was it the admin It Is Me Here or was it LightThinker? — Fly by Night  ( talk )  20:59, 10 April 2012 (UTC)
 * This is the first time in the history of forums everywhere that anyone has ever benefited from another person's question! Ohnoes! :O --COVIZAPIBETEFOKY (talk) 12:58, 11 April 2012 (UTC)
 * Like — Fly by Night  ( talk )  21:55, 11 April 2012 (UTC)

OK, thanks, I think I understand from Dmcq and Meni Rosenfeld's bullet point 4 that the mistake comes in saying that $$1^{\frac{5}{4}}{\color{Red}=1}$$, just like the mistake in the $$-1$$ case was saying that $$1^{ \frac {1}{2}}{\color{Red}=1}$$, since although it can be in general, that is not the root you are looking for in this case. But I still don't quite understand bullet point 3: why is $$a^{bc} \neq \left (a^b \right )^{c}$$? I can swear I was taught the exact opposite—that they were equal—at some point.  It Is Me Here   t / c 16:59, 11 April 2012 (UTC)


 * They are equal whenever b,c are integers, regardless of whether a is non-real. I would think they would be equal whenever b,c are Gaussian integers, again even if a is non-real. As far as I know, the problem arises only if b or c is fractional and their product has a lower denominator than the product of their denominators -- that's what causes you to lose some solutions. Duoduoduo (talk) 22:14, 11 April 2012 (UTC)
 * According to the formula above, $$a^{bc} = \{X\cdot \exp(2\pi ibck)\}$$ where X is one value of $$a^{bc}$$ and k is an integer, while $$(a^b)^c = \{X\cdot \exp(2\pi ic(bk+l))\}$$. The latter has 2 degrees of freedom so there are many values in the latter which are not in the former. Assuming nonzero a, according to my calculations, the only case when the sets are equal is when $$(c-m)/(bc)$$ is an integer for some integer m. This happens in particular whenever c is an integer.
 * Even when b and c are Gaussian integers, the two sets can be different. For example, $$2^{i\cdot i}=1/2$$ but $$(2^i)^i = \{(1/2)\cdot e^{2k\pi}\}$$.
 * So $$a^{bc}=(a^b)^c$$ only if you either don't consider complex numbers at all and take each power to represent the positive value, or if there is a very specific relationship between b and c. -- Meni Rosenfeld (talk) 18:05, 14 April 2012 (UTC)
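The $$2^{i\cdot i}$$ example can be checked numerically (a Python sketch). With principal branches the two sides happen to agree; the extra values of $$(2^i)^i$$ only appear when a non-principal logarithm of $$2^i$$ is used.

```python
import cmath
import math

print(2 ** (1j * 1j))    # principal value of 2**(i*i): ~0.5
print((2 ** 1j) ** 1j)   # principal value of (2**i)**i: also ~0.5

# A non-principal logarithm of 2**i gives another value of (2**i)**i,
# one of the set {(1/2) * e**(2*k*pi)} mentioned in the thread:
k = -1
other = cmath.exp(1j * (1j * math.log(2) + 2j * cmath.pi * k))
print(abs(other))        # ~0.5 * e**(2*pi), roughly 267.7
```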

Can likelihood ratios used for estimation of post-test probability be continuous variables, or do they have to be dichotomous?
The article Pre- and post-test probability states, in a table in the section Estimation of post-test probability, that using likelihood ratios for estimation of post-test probability has the disadvantage of requiring a dichotomous variable. Likewise, the article Likelihood ratios in diagnostic testing distinguishes between an LR+ and an LR-, and makes no mention of varying the cutoff. Is it really correct that the use of likelihood ratios in diagnostic testing requires that the variable be dichotomous? Take a hypothetical example where you know the distributions of a continuous test variable within the populations with and without disease. Can we not then calculate


 * $$ LR = \frac{\Pr({Testresult}|Disease+)}{\Pr({Testresult}|Disease-)} $$

directly from the distributions?

I did some experimentation in R (programming language), using a hypothetical test with a normal distribution N(μ=70, sd=10) in the healthy population, and N(μ=95, sd=15) in the diseased population. The function I used behaves more or less as I would have expected. Its value is 1.0 (i.e. indifference) at arg=82.35 (which is identical to the cutoff point I found by maximising the number of correct decisions). In the range 83 to 133, it rises steeply from 1.13 to 3359.45. When moving towards the origin from 82 to 50, it falls more gently from 0.94080371 to 0.05472334. 0.05472334, corresponding to an argument of 50, is a minimum. Then, when moving from 50 to 10, the function rises again, from 0.05472334 to 4.65981528, crossing LR=1 at arg=17.65869. I assume that the reason it rises at very low arguments is that the higher SD of the distribution in the diseased population becomes more important than the difference in means between the populations. I'm unable to figure out why it happens exactly at that point, though.
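The R function itself did not survive in this archive, but from the description it was presumably the ratio of the two normal densities. A Python reconstruction of that ratio (an assumption, not the original code):

```python
import math

def norm_pdf(t, mu, sd):
    # Normal density, equivalent to R's dnorm(t, mean=mu, sd=sd)
    return math.exp(-((t - mu) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

def lr(t):
    # Likelihood ratio: density among diseased over density among healthy
    return norm_pdf(t, 95, 15) / norm_pdf(t, 70, 10)

for t in (17.66, 50, 82.35):
    print(t, lr(t))  # ~1 at both crossing points, minimum ~0.0547 at t=50
```

If the distributions are as described, the crossing points are not mysterious: setting the log of this ratio to zero gives the quadratic $$5t^2-500t+8000+1800\ln(2/3)=0$$, whose roots are 17.66 and 82.34, and the minimum sits where $$\frac{t-70}{100}=\frac{t-95}{225}$$, i.e. at t = 50.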

Anyway, my question is: are continuous likelihood ratios obtained using a function such as the one I've described here valid for deriving a post-test probability from a pre-test probability, for example by using Fagan's nomogram? --95.34.141.48 (talk) 22:54, 9 April 2012 (UTC)
 * Yes, this is perfectly valid; I don't know why the article would say it's not. Bayes' theorem and rule also work for continuous distributions. That is, for probabilities you have
 * $$\mathrm{Pr}(D|T)=\frac{\mathrm{Pr}(D)\mathrm{Pr}(T|D)}{\mathrm{Pr}(T)}$$
 * And more generally
 * $$\mathrm{Pr}(D|T)=\frac{\mathrm{Pr}(D)P(T|D)}{P(T)}$$
 * Where P is the value of the pdf (if T is continuous) or pmf (if T is discrete) at the particular instantiation. That this follows from the probability version can be seen by considering the event $$T \in (t-\epsilon,t+\epsilon)$$.
 * The calculation using the graphs in the article or the Fagan nomogram is then just a rearrangement of the terms. -- Meni Rosenfeld (talk) 08:38, 10 April 2012 (UTC)
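In code, that rearrangement is just the odds form of Bayes' rule (a Python sketch; the example numbers are made up):

```python
def post_test_prob(pre, lr):
    # pre-test probability -> odds, multiply by the likelihood ratio,
    # then convert post-test odds back to a probability
    pre_odds = pre / (1 - pre)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

print(post_test_prob(0.2, 5.0))  # odds 0.25 * 5 = 1.25, so ~0.556
```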
 * Thank you! I spent quite some time figuring out that this had to be so, and I'm grateful for your confirmation. I suspected it would work for densities too, but messed it up somehow when I tried it yesterday. Anyway, the density version in R, which I have now tested, works perfectly! After some more searching, this paper turned up, which explains the different usages of likelihood ratios quite nicely, and suggests calculating them by using a function which pretty much matches my first version! --95.34.141.48 (talk) 21:48, 10 April 2012 (UTC)