Wikipedia:Reference desk/Archives/Mathematics/2009 November 7

= November 7 =

Probability question
This has nothing to do with homework, just an interesting problem that I came across. Assume there are three possible states for a man: he can be healthy, sick, or dead. The probability that he is sick the next day given that he is healthy today is 0.2 (the values don't really matter much here). The probability that he is dead the next day given he is sick today is 0.25. The probability that he is dead the next day given he is healthy today is zero. The probability that he is healthy the next day given he is sick is 0.4. Given he is healthy today, what is the probability that he will never die? My instinct tells me that the answer is zero: he has to get sick somewhere down the line, and eventually die.  Rkr 1991  (Wanna chat?) 06:32, 7 November 2009 (UTC)


 * Yes, it's 0. Assuming independence, you may observe that the probability of being dead within 2 days is in any case at least p = 0.05 (from healthy, the chance of dying within two days is at least 0.2 × 0.25 = 0.05; from sick it is at least 0.25), so the probability of still being alive after 2n days is no larger than (1 − p)^n, which goes to 0 exponentially. Yours is an example of a Markov chain. --pma (talk) 07:59, 7 November 2009 (UTC)


 * OK. So how do you find the answer to the slightly harder question, what is the probability that he is alive after n days given he is alive today ?  Rkr 1991  (Wanna chat?) 10:55, 7 November 2009 (UTC)
 * (I assume you mean "given he is healthy today" ). Just write the transition matrix relative to the states 1=H, 2=S, 3=D, which is
 * P:=$$

\begin{bmatrix} 0.8 & 0.2 & 0 \\ 0.4 & 0.35 & 0.25 \\ 0 & 0 & 1 \end{bmatrix}, $$
 * where $$p_{ij}$$ is the probability of passing from state i to state j. Then, in $$P^n$$, the coefficient $$p^{(n)}_{1,3}$$ is the probability of being in state 3 at time n starting from state 1 at time 0. To write $$P^n$$ efficiently you should diagonalize P first (you can: it has 3 simple eigenvalues: 1, ≈0.214, and ≈0.936), so that $$P = LDL^{-1}$$ and $$P^n = LD^nL^{-1}$$. Check Markov chain for details.--pma (talk) 12:13, 7 November 2009 (UTC)
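As a sanity check, the matrix-power computation can be run numerically. This plain-Python sketch forms P^n by repeated multiplication instead of diagonalizing (the function names are mine, purely for illustration):

```python
# Transition matrix for the states 1=Healthy, 2=Sick, 3=Dead,
# with the probabilities given in the question.
P = [[0.8, 0.2,  0.0],   # healthy -> healthy / sick / dead
     [0.4, 0.35, 0.25],  # sick    -> healthy / sick / dead
     [0.0, 0.0,  1.0]]   # dead is an absorbing state

def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matpow(A, n):
    # A^n by repeated multiplication, starting from the identity
    R = [[float(i == j) for j in range(3)] for i in range(3)]
    for _ in range(n):
        R = matmul(R, A)
    return R

# p(n)_{1,3}: probability of being dead after n days, starting healthy.
# It is 0 after 1 day, 0.2*0.25 = 0.05 after 2 days, and climbs toward 1.
for n in (1, 2, 10, 100):
    print(n, matpow(P, n)[0][2])
```

The probability of still being alive, 1 − p(n)_{1,3}, visibly decays toward 0, matching the eigenvalue argument (the decay rate is governed by the largest eigenvalue below 1, about 0.936).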


 * Great, thanks.  Rkr 1991  (Wanna chat?) 13:47, 7 November 2009 (UTC)


 * Just a quibble here: You say he has to eventually die. Not really.  That he will die eventually has probability 1, but is not certain; it is possible for him to remain healthy for all time, but the probability is 0. It's a subtle distinction, but an important one in some contexts. --Trovatore (talk) 01:20, 9 November 2009 (UTC)
 * Indeed. He will almost surely die. --Tango (talk) 01:24, 9 November 2009 (UTC)
 * And we should then add, that he will almost surely remain dead forever, but not certainly, as there is a probability 0 that he comes back :-/ --pma (talk) 04:12, 12 November 2009 (UTC)

Moment of inertia of a trapezoidal prism
I'm trying to put together a little tool to calculate a first approximation to the centre of gravity and moments of inertia of a conventional airliner. Most of it is pretty simple: I'll model the fuselage as a solid cylinder (though I'm tempted to try to make it more a combination of the structural mass as a cylindrical shell and the payload mass as a cylinder inside it), the engines also as cylinders, and the moments of inertia around the aircraft CG of the tail parts are dominated by the parallel axis theorem so I can neglect their contributions about their own CGs.

The challenge I'm finding is the wing. It is not small and it is close to the aircraft CG, so I need its own moment of inertia. I would like to take into account the effects of wing sweep, dihedral, and taper (that is, the difference in chord between the wing tip and the wing root). I don't need anything more complex like elliptical wings or multiple taper ratios. Sweep and dihedral I'll deal with by just rotating the moment of inertia matrix once I have it. So what I need is the moment of inertia of a trapezoidal prism. But I can't find any equations for that anywhere.

From the Moment of inertia article:


 * $$\mathbf{I}=\iiint_V \rho(x,y,z)\left( \|\mathbf{r}\|^2 \mathbf{E}_{3} - \mathbf{r}\otimes \mathbf{r}\right)\, dx\,dy\,dz,$$

The convention I am using is that x is rearward, y is out the right wing, and z is up. So the wing is a trapezoid when projected on the x-y plane, with a constant (relatively small) thickness in the z direction. I am happy keeping the density (rho) constant. I expect that the answer will be in the form of formulae for Ixx, Iyy, and Izz about the principal axes (the diagonal of the matrix), with Ixy, Ixz, and Iyz zero, and I will have to rotate the matrix to work in my coordinate system.

I would also like some guidance on the appropriate thickness to choose as the z-dimension. Clearly the section is airfoil-shaped, but I was going to approximate it as rectangular. Given the maximum airfoil thickness, what would be a rough guide to the right prism thickness? Actually, now that I think about it, the height of the airfoil is in truth proportional to the chord as well, that is, it decreases from root to tip. But I am happy to neglect that. An alternative to modelling it as a rectangular section would be a diamond section, i.e. a quadrilateral with two sets of two equal sides, in which case the user of my tool would have to specify the maximum airfoil thickness and the location of the maximum airfoil thickness.

I know I could have asked this on the Science RD, but I hope I've taken the physics out of the question and it is now actually a math question.

Thanks, moink (talk) 07:13, 7 November 2009 (UTC)


 * Have you considered numerical integration? If the parameters don't change too drastically, you can do that for various values and fit a function to the results. -- Meni Rosenfeld (talk) 06:30, 9 November 2009 (UTC)
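For what it's worth, the numerical route can be sketched in a few dozen lines of Python. This is a midpoint-rule sum over the planform; the conventions and parameter names (root chord c_r at y = 0, tip chord c_t at the span y = b, constant thickness t, density rho) are my own illustrative choices, not anything from the thread:

```python
# Midpoint-rule sketch of the inertia tensor of a trapezoidal prism.
# Planform in the x-y plane: the local chord runs from x = 0 to x = c(y),
# where c(y) interpolates linearly from c_r at y = 0 to c_t at y = b.
# Constant thickness t in z, constant density rho.

def trapezoid_inertia(c_r, c_t, b, t, rho=1.0, n=200):
    dy = b / n
    m = sx = sy = sxx = syy = sxy = sz2 = 0.0
    for j in range(n):
        y = (j + 0.5) * dy
        c = c_r + (c_t - c_r) * y / b      # local chord at this spanwise station
        dx = c / n
        for i in range(n):
            x = (i + 0.5) * dx
            dm = rho * dx * dy * t         # mass of one x-y cell (full z column)
            m += dm
            sx += x * dm
            sy += y * dm
            sxx += x * x * dm
            syy += y * y * dm
            sxy += x * y * dm
            # exact integral of z^2 through the column: rho*dx*dy * t^3/12
            sz2 += rho * dx * dy * t ** 3 / 12.0
    cx, cy = sx / m, sy / m                # centre of gravity (cz = 0 by symmetry)
    # shift the second moments to the CG (parallel-axis theorem in reverse)
    Ixx = (syy - m * cy * cy) + sz2        # about x through CG: y^2 + z^2
    Iyy = (sxx - m * cx * cx) + sz2        # about y through CG: x^2 + z^2
    Izz = (sxx - m * cx * cx) + (syy - m * cy * cy)
    Ixy = sxy - m * cx * cy                # nonzero in general for a tapered planform
    return m, (cx, cy), (Ixx, Iyy, Izz, Ixy)
```

In the untapered limit c_r = c_t the results agree with the closed-form cuboid values (e.g. Ixx = m(b² + t²)/12), which makes a convenient sanity check before trusting the tapered case.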


 * I could do that, but I am pretty sure there is a closed-form solution. moink (talk) 08:54, 9 November 2009 (UTC)


 * Deciding I had to start deriving it myself, I googled "centroid of a trapezoid" to at least not have to derive the centroid, and found this page, which has two of the three answers I need.  The third can't be too terribly different from a rectangular prism.  moink (talk) 08:54, 9 November 2009 (UTC)


 * Looking more closely at those, they are area moments of inertia and not mass moments of inertia, but I think they are still helpful as partial solutions to the integral. moink (talk) 09:10, 9 November 2009 (UTC)

Concatenations with preimages
Given a function f : X → Y, how does one prove that f(f⁻¹(B)) ⊆ B and f⁻¹(f(A)) ⊇ A for all subsets A of X and all subsets B of Y? --88.78.2.122 (talk) 08:52, 7 November 2009 (UTC)
 * Just use the definitions of the preimage f⁻¹(B) ("all that goes into B") and of the image f(A) ("where A goes"). So f(f⁻¹(B)) ⊆ B reads "all that goes into B, goes into B", and f⁻¹(f(A)) ⊇ A reads "A is contained in all that goes where A goes".
 * Also notice the useful equivalence, for any subset A of X and any subset B of Y:
 * f(A) ⊆ B ⇔ A ⊆ f⁻¹(B),
 * from which you can deduce both your inclusions, starting respectively from f⁻¹(B) ⊆ f⁻¹(B) and f(A) ⊆ f(A). --pma (talk) 09:03, 7 November 2009 (UTC)
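The two inclusions, and the fact that they can be strict, are easy to see on a small finite example (the particular sets and function below are made up for illustration):

```python
# A concrete finite example of the two inclusions.
X = {1, 2, 3}
f = {1: 'a', 2: 'a', 3: 'b'}            # f : X -> Y with Y = {'a', 'b', 'c'}

def image(A):
    return {f[x] for x in A}            # f(A): "where A goes"

def preimage(B):
    return {x for x in X if f[x] in B}  # f^-1(B): "all that goes into B"

B = {'a', 'c'}
print(image(preimage(B)))               # {'a'} -- a proper subset of B,
                                        # since nothing maps to 'c'
A = {1}
print(preimage(image(A)))               # {1, 2} -- a proper superset of A,
                                        # since f(2) = f(1) = 'a'
```

So f(f⁻¹(B)) ⊆ B fails to be an equality when B contains points outside the range of f, and f⁻¹(f(A)) ⊇ A fails to be an equality when f is not injective on a set meeting A.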


 * Just as an aside, you may consider the power sets of X and Y, namely P(X) and P(Y), as small categories, where objects are subsets and arrows are inclusions. Then the image map f_* : P(X) → P(Y) and the preimage map f^* : P(Y) → P(X) are adjoint functors, and of course the inclusions you wrote are just the co-unit and unit of the adjunction f_* $$\scriptstyle \dashv $$ f^*. (Check closure operator also.)
 * PS: Talking about HTML, does anybody know how to get the ℘ a bit higher, and the star in f_* a bit lower? (I got it in quite a weird way.) And why can't I see &LeftTee; for $$\dashv$$? --pma (talk) 11:25, 7 November 2009 (UTC)
 * ℘ ℘ (but not abusing Weierstrass's ℘-function symbol for power set is a better option), f* f*. Note that fine-tuning like this is very font-dependent, hence you cannot do it in a way which would reliably work for all readers. As for &LeftTee;, there is no such thing among the XHTML 1.0 named character entities (Wikipedia's HTML tidy code will thus remove it even if a particular browser could support it as an extension). The character appears in Unicode at position U+22A3, hence you can get it with &#x22a3; (or &#8867;) in HTML: ⊣⊣ (or simply input the character directly: ⊣). — Emil J. 15:03, 7 November 2009 (UTC)
 * Thank you! Nice emoticon :⊣) also --pma (talk) 15:37, 7 November 2009 (UTC)

plausibility of mathematical completeness
It's the mid 1920's and you're a top mathematician being recruited to the Hilbert school. Your mission: prove Hilbert's famous contention (from "On the Infinite"):


 * As an example of the way in which fundamental questions can be treated I would like to choose the thesis that every mathematical problem can be solved. We are all convinced of that. After all, one of the things that attracts us most when we apply ourselves to a mathematical problem is precisely that within us we always hear the call: here is the problem, search for the solution; you can find it by pure thought, for in mathematics there is no ignorabimus.

The subtlety of Gödel's 1931 incompleteness proof may have escaped everyone else, but still, those guys weren't slouches. My question is why could anyone ever have been convinced Hilbert's thesis was true?

Consider a simple enumeration of all the sentences φ₁, φ₂, ... over, say, the language of PA. For each sentence φₖ make a red dot on the number line at the point k, and write the formula denoted by φₖ next to the dot. At the end of this process you have an infinite row of red dots, each labelled by a formula. Now go through again and enumerate the theorems t₁, t₂, ..., and for each tᵢ, find the dot you made earlier for that formula and flip its color from red to blue. (If it's already blue, don't do anything; this will happen a lot because you will generate all the theorems infinitely often.) Also similarly flip the color of the dot for ~tᵢ to blue. In other words you have colored the decidable formulas' dots blue and left the undecidable ones (if there are any) red. Hilbert says that at the end of this, you will have no red dots left, just blue ones. This seems like a pretty elementary description, certainly accessible and hopefully natural to any logician of that time.

Of course we know today that there will necessarily be red dots at the end, and of course we can reasonably say that until 1931, Hilbert could reasonably harbor some hope that there wouldn't be red dots. But what could make him (or anyone) so sure that there'd be no red dots? The process is just a big combinatorial mess, some weird contraption jumping around all over the place flipping dots from red to blue, and the exact pattern of flipping depends intimately on the exact make-up of the contraption (i.e. the content of the theory being studied). You could construct axioms that resulted in flipping only the prime-numbered dots, or whatever. Was there any mathematically reasonable plausibility argument that the dots would end up all blue for a theory like PA or ZF? Did they expect too much, based on the completeness of simpler theories like Presburger arithmetic? Or was it just a bogus emotional conviction people had back then, that's alien to us now? 69.228.171.150 (talk) 12:22, 7 November 2009 (UTC)


 * While waiting for a more technical explanation, my answer to your question is: evidently, at that time it was not at all trivial. Speaking generally, it seems to me that this question reflects a common ahistorical attitude of us contemporary people towards people of the past (your post is still correct, I'm just taking the opportunity). We usually say "how could they be so naive as to believe something, and not see what is for us such a triviality". The conclusion is "we are smarter than them" (usually only thought), and a certain condescending air. I think I have enough evidence that our brains are no better (experiment: just ask for a formula for the solution of the third-degree equation from a mathematician who doesn't know it). If today so many ideas are so easily understandable by so many people of medium intelligence, whereas once they were so difficult and just for a small elite, this seems to me proof that the people who created these ideas, and those who prepared the ground for them, were giants, and we should have the greatest gratitude and admiration for them. --pma (talk) 14:02, 7 November 2009 (UTC)
 * I'm not at all saying it should have been obvious in the 1920's that PA was incomplete. I'm just wondering why anyone would have expected it to be complete.  Certainly it was an important question to investigate, but why did almost everyone expect a particular outcome instead of saying "gee, I don't know"?  Also I'm talking about the mid 1920's (maybe I should have said late 1920's, since "On the Infinite" was written in 1925), not the era of Frege.  The groundwork was pretty well established by then.  (Actually, one exception: Emil Post anticipated the incompleteness theorem in 1921, but wasn't able to put a proof together.)  There are plenty of statements today that we think are true but for which we don't have a proof (e.g. Riemann hypothesis, P!=NP, FLT until recently, etc).  But we can make reasonable plausibility arguments for each of those, even if we turn out to be wrong afterwards.  I'm wondering whether there was a plausibility argument for completeness.  69.228.171.150 (talk) 20:58, 7 November 2009 (UTC)
 * My guess is that it was based on the notion that mathematical truth derived entirely from axiomatic proof, combined with the optimistic faith that truth is knowable ("we must have no ignorabimus"). Had Hilbert really been thinking of the question in the combinatorial light you present, I doubt he would have been so sure; it was because he invested the combinatorics with meaning that he thought it had to be this way.  I have not extensively read Hilbert and am speculating a bit here, but this makes sense to me. --Trovatore (talk) 21:29, 7 November 2009 (UTC)

Calculus with imaginary constants
Is calculus with imaginary constants, say $$\int e^{ix}$$, just the same as when the constant is real? 131.111.216.150 (talk) 15:43, 7 November 2009 (UTC)
 * Broadly. If you want to evaluate an integral like $$ \int _a ^b f(x)\, dx $$ where f is a complex-valued function of a real variable and a and b are real, then you can just write f as the sum of its real and imaginary parts, f = u + iv, and the integral of f is defined to be the integral of u plus i times the integral of v (as long as the integrals exist).  However, you do have to be a little careful with things like change of variables.  Here's an example: consider the real integral $$ \int_{-\infty}^{\infty} \frac{1}{1+x^4}\, dx $$.  Making the substitution y = ix, so that dy = i dx, apparently doesn't change the limits, and x⁴ = (y/i)⁴ = y⁴.  So $$ \int_{-\infty}^{\infty} \frac{1}{1+x^4}\, dx  = \frac{1}{i} \int_{-\infty}^{\infty} \frac{1}{1+y^4}\, dy$$.  It would follow that the integral is zero... but it obviously isn't.  Tinfoilcat (talk) 16:38, 7 November 2009 (UTC)
 * Making that substitution certainly does change the limits, and the domain integrated over. It changes the integral from one on the real line from -∞ to ∞ to one on the imaginary line from -i∞ to i∞. Algebraist 19:04, 7 November 2009 (UTC)
 * It doesn't change the "limits" if you work on the Riemann sphere, where there's only one infinity. But of course it changes the path - that's why there's no paradox here. Tinfoilcat (talk) 19:26, 7 November 2009 (UTC)

To answer the OP: Yes, you treat complex constants just the same as real constants. For example:
 * $$\frac{d}{dz} i = 0, \ \int i \ dz = iz + c_1, \ \frac{d}{dz} e^{iz} = ie^{iz} \mbox{ and } \int e^{iz} \ dz = \frac{1}{i}e^{iz} + c_2, $$

where c₁, c₂ ∈ C are constants of integration. It's usual to use z as a complex variable instead of the familiar x for a real variable. Calculating complex integrals is a bit different from calculating real ones. Given two real numbers, say x₁ and x₂, there's only one way to integrate a function between x₁ and x₂, since the real numbers form a line. When we integrate over the complex numbers we have a lot more freedom: given two complex numbers, say z₁ and z₂, we can integrate a function along any path which starts at z₁ and ends at z₂, since the complex numbers form a plane. In practice we just make some substitutions and calculate a simple integral. Say you want to integrate ƒ(z) = 1/z around the unit circle; the unit circle is given by the contour γ = {e^{it} : 0 ≤ t < 2π}. So we make the substitution z = e^{it} (and hence dz = ie^{it} dt) and integrate for 0 ≤ t < 2π, i.e. we calculate
 * $$ \oint_{\gamma}\frac{1}{z} \ dz = \int_0^{2\pi} \frac{1}{e^{it}}\cdot ie^{it} \ dt = \int_0^{2\pi} i \ dt = 2\pi i. $$

Notice that γ starts and ends at the same point, namely 1, but the integral is not zero (as it would be in the real case). This is where complex analysis starts to get interesting…

Dr Dec (Talk)   14:21, 8 November 2009 (UTC)
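The value 2πi is also easy to check numerically, using the same parametrization z = e^{it}; a small Riemann-sum sketch:

```python
import cmath

# Riemann-sum check of the contour integral of 1/z around the unit circle:
# parametrize z = e^{it}, dz = i e^{it} dt, and sum over small steps in t.
N = 10000
h = 2 * cmath.pi / N
total = 0
for k in range(N):
    t = (k + 0.5) * h                 # midpoint of each parameter step
    z = cmath.exp(1j * t)
    total += (1j * z * h) / z         # dz / z for this step

print(total)                          # very close to 2*pi*i
```

Each term dz/z collapses to i·dt, so the sum reproduces ∫₀^{2π} i dt = 2πi up to floating-point rounding.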

Mathematical Sequences and inductive reasoning
Use inductive reasoning to write the 10th term in each sequence below. THEN write a formula for the nth term.

a) 4, 13, 22, 31, 40...

b) 3, 5, 9, 17, 33...

c) 0, 6, 24, 60, 120...

d) 5, 10, 17, 28, 47, 82... —Preceding unsigned comment added by Whitesox1994 (talk • contribs) 18:36, 7 November 2009 (UTC)


 * We can't do homework for you. And even if it is not homework, at least show us your current attempt at solving these problems. You will find some inspiration at arithmetic sequence for the first answer. Zunaid 19:23, 7 November 2009 (UTC)


 * Such sequences are often best explored by calculating successive differences between the terms, sometimes continuing the process by calculating differences of the differences. A clue for d: it's growing too quickly for that to be useful.→81.153.218.208 (talk) 19:51, 7 November 2009 (UTC)
 * It's useful for (d), it just won't give you the answer by itself. Do the process twice and you'll get a sequence you should immediately recognise. You can get the next term in the sequence from that, but to get a formula from it will require some cleverness. --Tango (talk) 20:18, 7 November 2009 (UTC)

Your question is imprecise and unfortunately illustrates the "sort of mathematics" schools are teaching students these days. In many branches of applied mathematics, sequences will not behave in such a simple manner, and thus basic arithmetic is of no use (as pma notes below, one usually needs a large set of data to extrapolate, and even then, other techniques must be used for extrapolation - one can never be sure that any particular extrapolation of a sequence is correct). In any case:


 * (a) The successive differences are: 13 - 4 = 22 - 13 = 31 - 22 = 40 - 31 = 9. Since the difference between consecutive terms of the sequence remains constant, one may conclude that "adding nine to the previous term gives the next term". Now, you should determine a formula for the nth term (which will of course contain n as a variable). Using this formula, compute the tenth term of the sequence.
 * (b) The successive differences are: 5 - 3 = 2, 9 - 5 = 4, 17 - 9 = 8, 33 - 17 = 16. Writing the differences as a sequence - 2, 4, 8, 16... - we observe that the differences are powers of two. Thus the desired formula for the nth term is $$f(n) = 2 + \sum_{k=0}^{n-1} 2^{k}$$. Now, compute the tenth term of the sequence using this formula.
 * (c) Do this by yourself using the other answers I have given you. To learn mathematics is to do it (and think about it). Furthermore, mathematics cannot be done by watching someone else do it - often one must invent new methods to tackle new questions; methods not previously known. I hope that by doing (c), and by comparing it to the solutions to (a), (b) and (d), you will appreciate this.
 * (d) The successive differences are: 10 - 5 = 5, 17 - 10 = 7, 28 - 17 = 11, 47 - 28 = 19, 82 - 47 = 35. Writing the differences as a sequence - 5, 7, 11, 19, 35..., we shall now compute the differences of the differences - 2, 4, 8, 16...; a sequence which you should recognize. With (a), the successive differences remained constant - thus the formula for (a) involved only n (and not n2 or any powers of n). On the other hand, in (b), the differences were powers of 2, and thus we took the summation of all powers of 2 in our formula. Likewise, in this case, the differences of the differences are all powers of 2 - working in a similar manner to (b), the formula for the nth term is $$f(n) = 5n + \sum_{i=3}^{n} (n-i+1)2^{i-2}$$ where we adopt the convention that an empty sum is necessarily zero.

Although I have given you some of the answers here, we are specifically advised not to do so. Therefore, instead of merely accepting the answers as true, you can do a couple of simple exercises - for instance, check that for the first few values of n, the nth term agrees with the formula I have given. Also, try to invent new sorts of sequences and apply the above methods to those - this will be good practice. Finally, I have not given you the actual answers for (c) and (a), so computing the nth term for these sequences should be your first task - spend at least 30 minutes reflecting on these exercises (and their solutions), if you want to benefit from them (and be able to do future homework without assistance). In the future, we probably will not give you solutions unless you show us your work. Hope this helps. -- PS T  02:39, 8 November 2009 (UTC)
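The formulas given above for (b) and (d) can be checked mechanically against the listed terms; a short sketch (the function names f_b and f_d are mine):

```python
# Check of the nth-term formulas given above for sequences (b) and (d).
def f_b(n):
    # 2 plus the sum of 2^k for k = 0 .. n-1 (equivalently 2^n + 1)
    return 2 + sum(2 ** k for k in range(n))

def f_d(n):
    # 5n plus the sum over i = 3 .. n of (n - i + 1) * 2^(i-2);
    # for n < 3 the sum is empty, hence zero
    return 5 * n + sum((n - i + 1) * 2 ** (i - 2) for i in range(3, n + 1))

print([f_b(n) for n in range(1, 6)])   # [3, 5, 9, 17, 33]
print([f_d(n) for n in range(1, 7)])   # [5, 10, 17, 28, 47, 82]
```

Both reproduce the given terms exactly, so the tenth terms follow directly from the same functions.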

a) 4, 13, 22, 31, 40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,...

b) 3, 5, 9, 17, 33, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,......

c) 0, 6, 24, 60, 120, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,......

d) 5, 10, 17, 28, 47, 82, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,...... --pma (talk) 22:44, 7 November 2009 (UTC)
 * Unfortunately, we are fighting a losing battle against maths teachers on this one - I think it is best to just give in as far as answering questions on the Ref Desk goes. --Tango (talk) 03:59, 8 November 2009 (UTC)
 * I think it's really up to the student to be in control of his learning. Honestly, when I was in high school I just couldn't stand my English classes; I had absolutely no idea what the poems were talking about and it just hurt to be writing essays explaining Keats. Although education is deteriorating, it does help to separate the more able students from the rest (who can learn by themselves), so I wouldn't say it's totally undesirable. Najor Melson (talk) 06:15, 8 November 2009 (UTC)

It is a good idea to repeatedly compute the negative differences bₙ = aₙ − aₙ₊₁. Starting with 5, 10, 17, 28, 47, 82 you get row by row

  5  10  17  28  47  82
 -5  -7 -11 -19 -35
  2   4   8  16
 -2  -4  -8
  2   4
 -2

Now extend the first column downwards and then compute the columns one by one in the same way. Extending by zeroes you get the answer 794. (This sequence is a polynomial of degree 5.)

  5  10  17  28  47  82  147  264  465  794
 -5  -7 -11 -19 -35 -65 -117 -201 -329
  2   4   8  16  30  52   84  128
 -2  -4  -8 -14 -22 -32  -44
  2   4   6   8  10  12
 -2  -2  -2  -2  -2
  0   0   0   0
  0   0   0
  0   0
  0

Extending the first column in a periodic way you get 1054:

  5  10  17  28  47  82  149  280  539 1054
 -5  -7 -11 -19 -35 -67 -131 -259 -515
  2   4   8  16  32  64  128  256
 -2  -4  -8 -16 -32 -64 -128
  2   4   8  16  32  64
 -2  -4  -8 -16 -32
  2   4   8  16
 -2  -4  -8
  2   4
 -2

Bo Jacoby (talk) 07:57, 8 November 2009 (UTC).


 * I'd simplify this method a bit. For the last case, I'd only go down 2 levels, skip worrying about the signs, and align them like so:

  5  10  17  28  47  82  149  280  539 1054
  5   7  11  19  35  67  131  259  515
  2   4   8  16  32  64  128  256


 * I also believe you made a mistake in the last term. StuRat (talk) 18:33, 8 November 2009 (UTC)


 * Thank you! I have now corrected my error. The idea is that the row of function values is transformed into the column of coefficients: T(5, 10, 17, 28, 47) = (5, −5, 2, −2, 2). And vice versa: T(5, −5, 2, −2, 2) = (5, 10, 17, 28, 47). Extending the column of coefficients with zeroes leads to a polynomial extension of the row of function values, while extending it periodically gives T(5, −5, 2, −2, 2, −2, 2, −2, 2, −2) = (5, 10, 17, 28, 47, 82, 149, 280, 539, 1054). The transformation T involves subtraction only, no multiplication (which is why I was tempted to do it by hand, and made the error). The signs are essential for making T an involution, T = T⁻¹. Bo Jacoby (talk) 00:20, 9 November 2009 (UTC).
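Bo Jacoby's transformation T is straightforward to implement; a sketch (taking the first entries of the successive rows of differences bₙ = aₙ − aₙ₊₁):

```python
def T(seq):
    # First entries of the successive difference rows b_n = a_n - a_{n+1}.
    # Applied to function values it yields the column of coefficients,
    # and applied again it recovers the values: T is an involution.
    out, row = [], list(seq)
    while row:
        out.append(row[0])
        row = [row[i] - row[i + 1] for i in range(len(row) - 1)]
    return out

print(T([5, 10, 17, 28, 47]))      # [5, -5, 2, -2, 2]
print(T(T([5, 10, 17, 28, 47])))   # [5, 10, 17, 28, 47], since T = T^-1
```

Padding the coefficient column before applying T back gives the extensions discussed above; for instance, the periodic padding (5, −5, 2, −2, 2, −2, 2, −2, 2, −2) reproduces the row ending in 1054.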


 * Yes, but my assessment is that an involution is a bit beyond a student who needs help finding the 10th term in the sequence 4, 13, 22, 31, 40..., and I always try to tailor my answers to the target audience. StuRat (talk) 00:51, 9 November 2009 (UTC)