Wikipedia:Reference desk/Archives/Mathematics/2009 June 25

= June 25 =

two questions
I have two questions
 * 1) The internal dimensions of an open concrete tank are 1.89 m long, 0.77 m wide and 1.23 m high. If the concrete side walls are 0.1 m thick, find, in cubic metres, the volume of concrete used. Give the answer to 3 significant figures. The answer in the book is 0.704 cubic m, but I have solved it and found the answer 0.79989 by multiplying the surface area by the thickness.
 * 2) Two trains, A and B, are scheduled to arrive at a station at a certain time. The arrival time, in seconds after the scheduled time, was recorded for each of 40 days; the means and standard deviations are:
 * A: mean 0.528, S.D. 0.155
 * B: mean 0.498, S.D. 0.167. My question is: which train is more consistent in arriving late, and why?
 * And which train is more punctual on the whole, and why? —Preceding unsigned comment added by True path finder (talk • contribs)


 * 1. Some of the volume in your method of calculation is counted twice, specifically the edges where the side walls meet, so you have to calculate the total volume minus the volume of the empty space in the "tank", which is more like a tube since it doesn't have a top or bottom. Doing so, I ended up with 0.70356 m^3. --Wirbelwind ヴィルヴェルヴィント  (talk) 16:24, 25 June 2009 (UTC)
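Wirbelwind's calculation can be reproduced in a few lines (a sketch of that calculation, using the dimensions given in the question; the tank is treated as four side walls with no top or bottom):

```python
# Internal dimensions (metres) and wall thickness, from the question.
L, W, H, t = 1.89, 0.77, 1.23, 0.1

# Outer footprint is the internal one plus a wall on each side; same height,
# since there is no top or bottom slab.
outer = (L + 2 * t) * (W + 2 * t) * H
inner = L * W * H          # the empty space inside the "tube"
concrete = outer - inner

print(round(concrete, 5))  # 0.70356
print(round(concrete, 3))  # 0.704, to 3 significant figures
```

Multiplying the internal surface area by the thickness overcounts the four vertical edges, which is why the book answer is smaller.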

Thank you, I have found it.

Chain Rule for Matrices

 * $$ \frac{d \mathbf{F}(\mathbf{G}(x))} {dx} = \mathbf{F}'(\mathbf{G}(x)) \mathbf{G}'(x) $$

or
 * $$ \frac{d \mathbf{F}(\mathbf{G}(x))} {dx} = \mathbf{G}'(x) \mathbf{F}'(\mathbf{G}(x)) $$

76.67.79.85 (talk) 18:11, 25 June 2009 (UTC)


 * Neither of the rules is correct (assuming that you are using standard vector-matrix notation). Note that the derivative of a "matrix function of a matrix" is not a matrix, so you will have to carefully define what you mean by:
 * $$ \mathbf{F}'(\mathbf{G}(x)) $$, the derivative of the matrix F with respect to the matrix G (let's call this H), and by
 * multiplying H (on the left or right) by the matrix $$\mathbf{G}'(x)$$,
 * before we can write down the chain rule succinctly. Abecedare (talk) 19:45, 25 June 2009 (UTC)


 * OP here, F is a function from a matrix to another matrix (in my case, it is the matrix logarithm, but I want a general answer). By multiplying two matrix-valued functions, I mean that the two matrices should be multiplied according to the normal rules of the matrix product. 70.24.38.23 (talk) 21:20, 25 June 2009 (UTC)


 * Here is how you can proceed (with some abuse of notation, I am not distinguishing between a matrix and a matrix-valued function):
 * Let $$ x \in \mathbb{R}; \; \mathbf{G} \in \mathbb{R}^{M\times N}; \; \mathbf{F} \in \mathbb{R}^{P\times Q}$$
 * Then define:
 * $$\mathbf{D}:=\frac{d \mathbf{F}(\mathbf{G}(x))} {dx} \in \mathbb{R}^{P \times Q}$$ with $$ \mathbf{D}_{pq} = \frac{d \mathbf{F}_{pq}(\mathbf{G}(x))} {dx}$$, and
 * $$\mathbf{H} := \frac{d \mathbf{F}(\mathbf{G})} {d\mathbf{G}}\in \mathbb{R}^{M \times N \times P \times Q}$$ with $$ \mathbf{H}_{mnpq} := \frac{\partial \mathbf{F}_{pq}(\mathbf{G})}{\partial \mathbf{G}_{mn}}$$
 * Now using the above definitions and standard chain rule for functions of several variables:
 * $$ \mathbf{D}_{pq} = \sum_{m,n} \mathbf{H}_{mnpq} \frac{d\mathbf{G}_{mn}}{dx} $$
 * You can write the above sum as a matrix-vector product by defining those appropriately; but note that H is not a matrix, and that the matrix D itself is not obtained through a simple matrix product. Hope that helps. Abecedare (talk) 22:08, 25 June 2009 (UTC)
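Abecedare's index formula can be sanity-checked numerically. The sketch below (an editorial illustration, not from the thread) takes F(G) = G·G as the matrix function and a simple 2×2 G(x), approximates D, G′ and the fourth-order array H by finite differences, and confirms that contracting H against G′ reproduces D:

```python
import numpy as np

def G(x):
    # An arbitrary 2x2 matrix-valued function of a scalar (chosen for illustration).
    return np.array([[x, 1.0], [2.0, x * x]])

def F(M):
    # The matrix function: here simply M @ M (the OP's log case works the same way).
    return M @ M

x0, h = 0.7, 1e-6

# D_pq = d F_pq(G(x)) / dx, by central finite differences.
D = (F(G(x0 + h)) - F(G(x0 - h))) / (2 * h)

# G'(x), also by central finite differences.
dG = (G(x0 + h) - G(x0 - h)) / (2 * h)

# H_mnpq = dF_pq/dG_mn: perturb one entry of G at a time.
G0 = G(x0)
H = np.zeros((2, 2, 2, 2))
for m in range(2):
    for n in range(2):
        E = np.zeros((2, 2))
        E[m, n] = h
        H[m, n] = (F(G0 + E) - F(G0 - E)) / (2 * h)

# The chain rule in index form: D_pq = sum_{m,n} H_mnpq G'_mn.
D_from_H = np.einsum('mnpq,mn->pq', H, dG)

print(np.allclose(D, D_from_H, atol=1e-4))  # True
```

For F(G) = G², the contraction reduces to G′G + GG′, which is visibly not a single matrix product of F′ and G′, matching Abecedare's point.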


 * Thank you, I get it now. 70.24.38.23 (talk) 22:43, 25 June 2009 (UTC)

Integral is 0 on any interval
Say we have an integrable function, f. If the integral of f is 0 over every interval, must f be 0 almost everywhere? It seems pretty obvious that it is true, but I'm not quite getting it. And what if f is some weird function that is 1 half the time and -1 half the time, with the points mixed up in such a way that there is the same amount of each type in every interval? Obviously that exact thing is not possible, because the integral of the positive part would be infinite and then f would not be integrable. But perhaps some weird thing like that could happen; perhaps it's decreasing toward 0 for the positive part and increasing toward 0 for the negative part, both at the same rate. Any help would be much appreciated. Thanks. StatisticsMan (talk) 19:27, 25 June 2009 (UTC)
 * Your objection that f would not be integrable is not a problem, since we can just define f to be your weird thing in [0,1] and constantly zero everywhere else. So all we need to make your idea work is a measurable subset of [0,1] that contains exactly half of every interval. Algebraist 19:50, 25 June 2009 (UTC)
 * Does the Regularity theorem for Lebesgue measure prevent that weird thing (assuming we are talking about Lebesgue integrability)? 208.70.31.206 (talk) 20:54, 25 June 2009 (UTC)
 * It does. Almost all points x of a measurable set S are points of density 1, meaning that | [x-ε, x+ε]\S | = o(ε) as ε→0. --pma (talk) 14:37, 26 June 2009 (UTC)
 * You mean μ([x − ε, x + ε] \ S) = o(ε), right? Except that this is Lebesgue's density theorem. I don't see how the regularity theorem has anything to do with it. — Emil J. 14:52, 26 June 2009 (UTC)
 * Yes, it's the density theorem, that's what I linked. In fact I only read the word "Lebesgue" in the preceding post and concluded. I tend to assume correctness. --pma (talk) 15:27, 26 June 2009 (UTC)
 * Ahh, I didn't bother to click on your link, and didn't realize it redirects to the density theorem. — Emil J. 16:03, 26 June 2009 (UTC)

The typical argument along these lines starts with the fact that step functions over intervals are dense in the space of integrable functions, measured in the $$L^1$$ metric. Ray Talk 20:41, 25 June 2009 (UTC)

On further reflection, I realize that was unhelpful and wrong (I have since corrected the wrong part). Sorry. I shall try to make amends. If the function is not zero a.e., there must exist some positive value, call it M, such that $$m(\{x:f(x) \geq M\}) \geq \eta > 0$$. Since the function is integrable, for every $$\epsilon$$ there must exist a $$\delta$$ such that $$\int_A |f(x)| dx \leq \epsilon$$ whenever $$m(A) \leq \delta$$. Pick $$\epsilon \leq \frac{M\eta}{2}$$, say. Let us approximate the set $$S(M) = \{x:f(x) \geq M\}$$ by a containing open set $$O_\delta$$ with $$m(O_\delta \setminus S(M)) \leq \delta$$. Open sets on the real line are just countable unions of open intervals, so $$O_\delta = \cup_i I_i$$ and $$S(M) = (\cup_i I_i) \setminus A$$, where $$m(A) \leq \delta$$. Thus
 * $$M\eta \leq \int_{S(M)} f(x) dx = \sum_i \int_{I_i} f(x) dx - \int_A f(x) dx = -\int_A f(x) dx \leq \epsilon \leq \frac{M\eta}{2}$$

which is a contradiction. I'm not really satisfied with this argument, and I think there's a better measure-theoretic way of going about it, but ... Ray Talk 21:39, 25 June 2009 (UTC)


 * You can prove it in plenty of ways; here is another one. Recall that for an integrable function f the integral mean of f on each interval [x-ε, x+ε] defines a certain function fε(x) that converges to f in L1 as ε tends to 0. (This is very easy; btw, a bit less elementary fact is that you also have a.e. convergence, which is the content of the Lebesgue differentiation theorem: anyway that's not necessary here). On the other hand, the condition on f states that fε is identically zero, so you conclude that f itself is 0 a.e., as limit of 0= fε in the L1 norm.
 * Another proof: take a uniformly bounded sequence (φk) of step functions (that is, linear combinations of characteristic functions of intervals) converging a.e. to sgn(f). The integral of φkf is zero by the assumption; moreover, by the Lebesgue dominated convergence theorem, the integral converges to || f ||1, so f=0 a.e. We can rephrase the preceding argument by saying that, by assumption, f is in the annihilator of the subspace of L∞ of all step functions; therefore f=0 as an element of L1, because step functions are weakly* dense in L∞ (this easily follows from the fact that they are also dense in L1, &c. Notice that Ray's first hint is correct and useful.)
 * It is interesting that the same conclusion holds if you impose the condition only on intervals of length 1. Indeed, this implies, by the sigma-additivity of the integral, that the integral is 0 on all half-lines. By subtraction, the integral over intervals of any length also vanishes, and you are led back to the preceding case.
 * Still true, but less elementary, is the following generalization to integrable functions on Rn: if the integral of f over every translate U+x of a given bounded set of positive measure U vanishes, then again f=0 a.e. Indeed, the assumption is equivalent to saying that f * g = 0, where g is the characteristic function of -U. Applying the Fourier transform, the pointwise product of f^ and g^ is 0 a.e. But g^ is a non-zero analytic function by the Paley-Wiener theorem, so it is a.e. different from 0; hence f^ is 0 a.e., and since the Fourier transform is injective, f is 0 a.e. --pma (talk) 23:15, 25 June 2009 (UTC)
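As a small illustration of the averaging operator f_ε used in the first proof above (an editorial example with f(x) = x², where the average over [x-ε, x+ε] can be computed in closed form), f_ε(x) converges to f(x) as ε shrinks:

```python
def f(x):
    # Example integrand (my choice, purely for illustration).
    return x * x

def f_eps(x, eps):
    # (1/(2*eps)) * integral of t^2 over [x-eps, x+eps], done exactly:
    # the antiderivative of t^2 is t^3/3.
    return ((x + eps) ** 3 - (x - eps) ** 3) / (6 * eps)

# The gap f_eps - f equals eps^2/3 here, so it shrinks quadratically.
for eps in (0.1, 0.01, 0.001):
    print(abs(f_eps(2.0, eps) - f(2.0)))
```

In the thread's situation every f_ε is identically zero, so the limit f must vanish a.e. as well.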


 * Thanks everyone for the responses. I found a theorem in Royden's Real Analysis, which is the text I use, of which this is an immediate corollary. It is Lemma 5.8:
 * If f is integrable on [a, b] and $$\int_a^x f(t) \,dt = 0$$ for all x in [a, b], then f(t) = 0 a.e. in [a, b].
 * Also, PMA, thanks for the generalization to intervals of unit length. I find it interesting and perhaps it will be useful. StatisticsMan (talk) 00:21, 29 June 2009 (UTC)


 * Note that the generalization to intervals of unit length only works for L1 functions. For example, $$\int_a^{a+1}\sin(2\pi x)\,dx=0$$ for all real a, but sin(2πx) certainly does not vanish almost everywhere. In contrast, if the integral (it does not even have to be a Lebesgue integral, it works with gauge integral too) of f is 0 over each bounded interval, then f = 0 a.e. without any further hypotheses on f. — Emil J. 12:23, 29 June 2009 (UTC)
 * By the way, this generalization to unit length intervals also holds for Lp functions, for finite p (see below in the next StatMan post) --pma --131.114.72.186 (talk) 12:39, 30 June 2009 (UTC)
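EmilJ's counterexample above is easy to verify numerically: a midpoint-rule check (an editorial sketch) that the integral of sin(2πx) over any interval of length 1 vanishes, even though the function is certainly not zero a.e.:

```python
import math

def unit_integral(a, n=100000):
    # Midpoint rule for the integral of sin(2*pi*x) over [a, a+1].
    # Over a full period the midpoints cancel exactly, so the result is ~0.
    h = 1.0 / n
    return sum(math.sin(2 * math.pi * (a + (k + 0.5) * h)) for k in range(n)) * h

for a in (0.0, 0.3, -1.7):
    print(abs(unit_integral(a)) < 1e-9)  # True for each starting point a
```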

Is this a valid proof using the first method given by PMA (actually I am showing convergence a.e. instead of L^1 convergence)? Define $$f_\epsilon(x) = \frac{1}{2\epsilon} \int_{x - \epsilon}^{x + \epsilon} f(t) \,dt$$. Then $$f_\epsilon(x) = 0$$ for any x and any $$\epsilon > 0$$ by the given conditions. Now pick any bounded interval [a, b] and define $$F(x) = \int_a^x f(t) \,dt$$ on [a, b]. Since f is integrable, F is absolutely continuous. And, for any $$x \in (a, b)$$, $$f_\epsilon(x) = \frac{1}{2\epsilon}(F(x + \epsilon) - F(x - \epsilon)) = \frac{1}{2}(\frac{F(x + \epsilon) - F(x)}{\epsilon} + \frac{F(x) - F(x - \epsilon)}{\epsilon})$$. Since F is absolutely continuous, we know that F'(x) exists and is equal to f(x) a.e. Thus, for almost all x in (a, b), this shows
 * $$\lim_{\epsilon \to 0} f_\epsilon(x) = F'(x) = f(x)$$.

Thus, f(x) = 0 a.e. in (a, b). You can do this for any interval, so you can do it for [-n, n] for any natural number n. Since the reals are a countable union of intervals of the type (-n, n), countable subadditivity says the subset of the reals where f is not 0 has measure 0. StatisticsMan (talk) 20:13, 15 August 2009 (UTC)


 * Also, as far as your second suggestion goes, I guess I don't see how we can guarantee there is a uniformly bounded sequence of step functions that converges to sgn f. Though it makes sense to me that, if we have that, everything will work out.  StatisticsMan (talk) 20:32, 15 August 2009 (UTC)


 * Very good. I answered on my talk page; feel free to put any further questions there. --pma (talk) 18:49, 16 August 2009 (UTC)

36 standard deviations away from the mean on a normal distribution
What would be an example of 36 standard deviations away from the mean in normally distributed data? I'm thinking something like an IQ of 650, or a normal human who happens to be 15 feet tall, but neither of these is something I can imagine... Any better (real) examples, not imaginary? Also, how about in terms of dice, like throwing sixes in a row with a fair die? How many sixes in a row would happen about as frequently as something 36 standard deviations away from the mean? Thanks! 193.253.141.81 (talk) 21:16, 25 June 2009 (UTC)
 * It doesn't happen, if the data is actually normally distributed. 36 standard deviations is well outside the "happens once in N times the lifetime of the universe," where N is a large number. Ray  Talk 22:05, 25 June 2009 (UTC)

So it's like throwing how many sixes in a row with a fair die?193.253.141.64 (talk) 22:48, 25 June 2009 (UTC)


 * Ok, let's work it out. The number of sixes in N throws follows a binomial distribution. The expected number of sixes is, of course, N/6. The variance of a binomial is np(1-p), in this case N*1/6*5/6 = 5N/36. The standard deviation is the square root of that, which is $$\frac{\sqrt{5N}}{6}$$. So we want N such that N - N/6 = $$6\sqrt{5N}$$. That gives N = 259.2, so we'll round up to 260 throws. The chance of that happening is about 1 in $$10^{202}$$ (for comparison, the age of the universe is about $$4\times 10^{17}$$ seconds). I think I've done that right, can someone check me? --Tango (talk) 00:35, 26 June 2009 (UTC)
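Tango's figures can be double-checked with a short computation (an editorial sketch: squaring N - N/6 = 6√(5N) gives 25N²/36 = 180N, then the chance of that many straight sixes is expressed as a power of ten):

```python
import math

# Solve N - N/6 = 6*sqrt(5N): squaring gives 25N^2/36 = 180N, so N = 180*36/25.
N = 180 * 36 / 25
print(N)                        # 259.2

throws = math.ceil(N)           # round up to 260 throws
log10_p = throws * math.log10(1 / 6)
print(round(log10_p, 1))        # -202.3, i.e. roughly 1 in 10**202
```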


 * Let me give a different approach, letting the variable be $$\bar{x}_n$$, the sample mean of rolling a die n times in a row, rather than N, the number of sixes, as Tango did (I don't feel like I have the background to check Tango's work). It shouldn't be terribly surprising that we get significantly different answers, since we used different variables.
 * $$\mu_{\bar{x}_{n}} = 3.5$$ is the mean, and $$\sigma_{\bar{x}_{n}} = \frac{\sigma}{\sqrt{n}} = \frac{\sqrt{\frac{35}{12}}}{\sqrt{n}} = \sqrt{\frac{35}{12n}}$$ is the standard deviation, since the standard deviation for a single die is $$\sqrt{\frac{35}{12}}$$ by direct calculation. Since for sufficiently large n (and we're talking about at least, say, 5 here; not a very stringent requirement), the distribution is approximately normally distributed, we need only look for an n which satisfies:
 * $$\frac{\bar{x} - \mu_{\bar{x}_{n}}}{\sigma_{\bar{x}_{n}}} \approx 36$$
 * Where we want $$\bar{x}$$ to be 6. We can solve this:
 * $$\frac{6-3.5}{\sqrt{\frac{35}{12n}}} \approx 36$$
 * $$\frac{(2.5)\sqrt{12n}}{\sqrt{35}} \approx 36$$
 * $$\frac{5\sqrt{3n}}{\sqrt{35}} \approx 36$$
 * $$\sqrt{n} \approx \frac{36\sqrt{35}}{5\sqrt{3}}$$
 * $$n \approx \frac{36^2 35}{5^2 3}$$
 * $$n \approx \frac{3024}{5} = 604.8$$
 * So the smallest n such that n rolls of 6 would be more than 36 standard deviations from the mean is 605.
 * Since this distribution is approximately normal, we can give a rough approximation of the probability of a normal distribution exceeding 36 standard deviations by calculating the corresponding probability in rolling 605 dice, and we get the unsurprisingly tiny number $$6^{-605} \approx 10^{-471}$$. A better approximation can be achieved by calculating the corresponding probability for a larger number of rolls. --COVIZAPIBETEFOKY (talk) 01:21, 26 June 2009 (UTC)
 * I'm not convinced by your method. It seems that you would get a different answer if we talked about rolling lots of 1's in a row, rather than 6's, which doesn't fit with the spirit of the question being asked. Also, I'm not sure the normal approximation is valid for any number of rolls, since you are examining what happens at the maximum and a normal distribution doesn't have a maximum so clearly the exact distribution and the approximate distribution differ significantly at that point. --Tango (talk) 10:53, 26 June 2009 (UTC)
 * I agree on both points, although ironically, in your first point, you picked an example that would actually get exactly the same answer (if you do it right), by symmetry around the mean. But if you're talking about lots of 2's, 3's, 4's or 5's in a row, then you're right; my analysis won't work. In part, that's because $$\bar{x}=5$$ does not imply a long streak of 5's, as it does for 6. --COVIZAPIBETEFOKY (talk) 12:09, 26 June 2009 (UTC)
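COVIZAPIBETEFOKY's arithmetic above can also be checked numerically (an editorial sketch, reproducing the closed form n = 36²·35/(5²·3) and the stated probability):

```python
import math

# n = 36^2 * 35 / (5^2 * 3), from the derivation above.
n = 36**2 * 35 / (5**2 * 3)
print(n)                        # 604.8

rolls = math.ceil(n)            # 605 rolls of six
log10_p = rolls * math.log10(1 / 6)
print(round(log10_p))           # -471, i.e. 6**-605 is about 10**-471
```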

'''So who's right? Is it 260 or 605 throws''' of sixes in a row with a fair die?193.253.141.64 (talk) 06:54, 26 June 2009 (UTC)
 * We've answered different questions, so we could both be right. --Tango (talk) 10:53, 26 June 2009 (UTC)
 * It seems to me that neither of you has answered the question the OP was asking. The probability of a normal variable being greater than $$\mu+36\sigma$$ is $$4 \cdot 10^{-284}$$. This is also the probability that, if you throw a die 364 times, they will all turn out 6.
 * I'll also point out that as far as I know, the normal approximation of the binomial does not work for values so far from the mean. -- Meni Rosenfeld (talk) 11:38, 26 June 2009 (UTC)
 * But the OP didn't ask about a normal variable, they asked about rolling dice. As you say, the normal approximation doesn't apply, so you have to do it exactly, which is what I attempted (potentially incorrectly). There is also a matter of what variable you are measuring. I did the number of sixes in N rolls (for fixed N), you could also do the number of sixes before you get a different number. I would imagine you would get different answers (since I have no reason to assume they would give the same answer). My statistics training is rather minimal, so I can't think what distribution the latter would have... --Tango (talk) 11:54, 26 June 2009 (UTC)
 * But the OP did ask about a normal variable (see title), specifically about the probability that it will be 36 SD from the mean. He wanted to express this probability in dice.
 * The number of throws until you get something other than 6 is distributed geometrically, which is a special case of the negative binomial distribution. -- Meni Rosenfeld (talk) 12:32, 26 June 2009 (UTC)

Implementing the Q-function:
 * $$\frac{x}{1+x^2} \cdot \frac{1}{\sqrt{2\pi}} e^{-x^2/2} < Q(x) < \frac{1}{x} \cdot \frac{1}{\sqrt{2 \pi}}e^{-x^2/2}$$

in the J (programming language) gives:

 x =. 36             NB. x is 36
 a =. % %: o. 2      NB. a is the reciprocal of the square root of 2 pi
 b =. ^ - -: *: x    NB. b is the exponential of minus half the square of 36
 6 ^. % b * a % x    NB. base 1/6 logarithm of the upper bound for Q(36)
 364.169

So it's like throwing 364 sixes in a row with a fair die. Bo Jacoby (talk) 11:59, 26 June 2009 (UTC).
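Bo Jacoby's J computation can be rendered in Python as well (an editorial sketch working in log space, since the bound itself underflows a float): using the upper bound Q(x) < (1/x)·(1/√(2π))·e^(-x²/2) for x = 36 and converting to a power of 1/6 gives the same ~364 sixes.

```python
import math

x = 36
# log10 of the upper bound (1/x) * (1/sqrt(2*pi)) * exp(-x^2/2),
# computed in log space to avoid underflow.
log10_Q = -(x * x / 2 + math.log(x * math.sqrt(2 * math.pi))) / math.log(10)
print(round(log10_Q, 1))          # -283.4, i.e. Q(36) is below about 4e-284

# Number of sixes in a row with the same log-probability.
n = log10_Q / math.log10(1 / 6)
print(round(n, 2))                # 364.17, matching the J output 364.169
```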