Wikipedia:Reference desk/Archives/Mathematics/2011 November 10

= November 10 =

PMQ
Yesterday in Parliamentary Question time, David Cameron said that over a period there had been a 100% increase in firearms seizures at the border controls. If in period 1, 0 guns had been seized, was he saying 100 guns were seized in period 2? If 1 gun was seized in period 1, were 100 guns seized in period 2? Kittybrewster ☎ 00:58, 10 November 2011 (UTC)
 * A 100% increase means twice as much. So if 1 gun was seized in period 1, then 2 were seized in period 2; if 1,000 guns were seized in period 1, then 2,000 were seized in period 2. See Relative change and difference. --RDBury (talk) 01:59, 10 November 2011 (UTC)
 * And if we got none in period 1, we got none in period 2? Kittybrewster ☎ 02:23, 10 November 2011 (UTC)
 * Right. If there were none in period 1, and none in period 2, you could claim any percentage increase or decrease you like, and you would be correct. --Trovatore (talk) 02:25, 10 November 2011 (UTC)
 * That is more misleading/better than Tony Blair. Kittybrewster ☎ 02:44, 10 November 2011 (UTC)
 * The Civil Service statisticians would not have allowed either PM to make such a claim if the number had been zero.   D b f i r s   07:46, 10 November 2011 (UTC)
 * Civil Service statisticians can give orders to the Prime Minister of the United Kingdom? Would you please forward me a job application? --Trovatore (talk) 08:13, 10 November 2011 (UTC)
 * No, I didn't quite mean it like that, but they certainly wouldn't have provided a 100% increase figure with a base of zero, and I don't think any PM is quite stupid enough to interpret zero figures as a 100% increase. I'm not privy to conversations between the Civil Service and the PM, but I suspect that someone checks that the figures he presents to Parliament can be justified on a sound statistical basis.  If you make claims such as "any percentage increase or decrease you like is correct" then I don't think you would get the job.   D b f i r s   08:29, 10 November 2011 (UTC)
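The zero-base problem discussed above falls straight out of the definition of relative change; a minimal sketch in Python (the function name is mine, for illustration):

```python
def percent_change(old, new):
    """Relative change from old to new, expressed as a percentage.

    With a base of zero the quantity is undefined -- as noted above,
    any claimed percentage would be equally (in)valid.
    """
    if old == 0:
        raise ValueError("percent change from a zero base is undefined")
    return 100.0 * (new - old) / old

print(percent_change(1, 2))        # 100.0: a 100% increase means twice as much
print(percent_change(1000, 2000))  # 100.0: same relative change, larger base
```

Both calls print 100.0, matching RDBury's point that the absolute numbers can differ wildly under the same percentage increase.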

Kilogram
What is the approx weight of a kilogram in terms of Planck's constant? What variable will cause that to vary over time? Kittybrewster ☎ 02:48, 10 November 2011 (UTC)
 * Planck's constant doesn't have units of mass, so the question doesn't exactly make sense. Planck's constant has units of angular momentum.  But if you want to know what a kilogram is in natural units, the linked article should answer that (maybe you'll have to take a reciprocal). --Trovatore (talk) 03:03, 10 November 2011 (UTC)


 * In Planck units the unit of mass is the Planck mass, which is about 2.2 × 10⁻⁸ kg. So 1 kg is about 45 million Planck masses. According to orders of magnitude (mass), one Planck mass is about 1/10 of the mass of a fruit fly, or 1/3 of the mass of an eyebrow hair. Gandalf61 (talk) 08:42, 10 November 2011 (UTC)


 * Could you perhaps be referring to the recent proposal to redefine the kilogram based on the Planck constant (h)? (See Kilogram, specifically Kilogram and also New SI definitions) In that case, the proposal is to define Planck's constant to be exactly 6.62606X×10⁻³⁴ kg·m²·s⁻¹ (where X denotes a digit to be specified later). -- 71.35.99.151 (talk) 17:17, 10 November 2011 (UTC)
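Gandalf61's figures can be checked directly from the definition of the Planck mass, √(ħc/G); a quick sketch in Python using approximate CODATA-style constants (the values here are assumptions of this sketch, rounded):

```python
import math

# Approximate physical constants (rounded CODATA-style values)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # Newtonian gravitational constant, m^3 kg^-1 s^-2

# Planck mass: the natural unit of mass built from hbar, c and G
planck_mass = math.sqrt(hbar * c / G)

print(planck_mass)        # about 2.18e-8 kg
print(1.0 / planck_mass)  # about 4.6e7: ~45 million Planck masses per kilogram
```

This confirms "about 2.2 × 10⁻⁸ kg" and "about 45 million Planck masses" from the reply above.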

Calculus of variations problem
I'm trying to find a nonnegative function which gives a value as low as possible for the following:
 * $$\frac{\int_0^{\infty}f(t)t\ dt\cdot\int_0^{\infty}f(t)^2\ dt}{\left(\int_0^{\infty}f(t)\ dt\right)^3}$$

So far the best I've found is $$f(t)=\max(1-t,0)\,\!$$ with a value of 4/9. Can this be optimized with standard calculus of variation techniques? Is there a better function? -- Meni Rosenfeld (talk) 13:37, 10 November 2011 (UTC)
 * The lower bound is zero. Try $$f(t) = e^{-a t}$$, for which it is easy to evaluate the integrals. Looie496 (talk) 16:18, 10 November 2011 (UTC)
 * No. This gives a value of 1/2 for any a. A step function also gives 1/2.
 * The expression is invariant to scaling on both axes. -- Meni Rosenfeld (talk) 16:57, 10 November 2011 (UTC)
 * No, it gives a value of a/2. Note that $$(e^{-a t})^2 = e^{-2 a t}$$. Looie496 (talk) 04:57, 11 November 2011 (UTC)
 * Please stop this.
 * $$\int_0^{\infty}e^{-at}t\ dt=\frac{1}{a^2}$$
 * $$\int_0^{\infty}(e^{-at})^2\ dt=\frac{1}{2a}$$
 * $$\int_0^{\infty}e^{-at}\ dt=\frac{1}{a}$$
 * $$\frac{\frac{1}{a^2}\frac{1}{2a}}{\left(\frac1a\right)^3}=\frac{1}{2}$$
 * It might surprise you but I've actually spent some time working on this, both manually and with my silicon half (I have a program that calculates this for any function I input, and I've tried a variety of functions). An exponential function is among the first I considered. Like I said, the value is invariant to scaling. It comes from an application (which I could disclose, but it's fairly involved and doesn't add much) where it would be very surprising if indeed the lower bound is 0. -- Meni Rosenfeld (talk) 05:29, 11 November 2011 (UTC)
 * Ah, I missed the extra t in the first integral -- sorry. Looie496 (talk) 06:05, 11 November 2011 (UTC)

That is a nice problem, and I don't think I can contribute beyond what you have already done yourself. But what is the interpretation of the expression? If y=f(t) is a probability distribution function, then the expression is E(t)·(E(y))². That makes no sense to me. Bo Jacoby (talk) 15:44, 11 November 2011 (UTC).


 * Hi everybody. I think $$4/9$$ is indeed the minimum, and any minimizer is the one Meni wrote, up to rescaling on both axes (i.e., it is $$\scriptstyle \max(a-bt, 0)$$ for some positive $$a$$ and $$b$$). We want to minimize the functional you wrote, $$\scriptstyle J(f)\le+\infty$$, say among all nonnegative $$\scriptstyle f\in L^1([0,\infty[)$$. A first remark is that this infimum is the same if we take it over the smaller class of all $$L^1$$ functions with bounded support (just because for any $$f$$ we have $$\scriptstyle J (f \chi_{[0,T]} ) \to J(f) $$ as $$\scriptstyle T \to+\infty$$, so the infimum over the smaller class is not larger. Here $$\scriptstyle \chi_{[0,T]}$$ denotes the characteristic function of course). Let $$f_n$$ be a minimizing sequence of $$L^1$$ functions with bounded support. By the invariance of $$J$$ we may rescale them if needed, and assume wlog that $$\scriptstyle f_n\in L^2[0,1]$$ and $$\scriptstyle \int_0^1 f_n^2dt=1$$. Also, we may extract a subsequence if needed, and assume wlog that $$f_n$$ is weakly convergent in $$L^2$$. The squared $$L^2$$ norm $$\scriptstyle \int_0^1 f^2 dx$$ is weakly lower semicontinuous, and the other integrals that enter the definition of $$J$$ are linear in $$f$$, thus weakly continuous. So $$J$$ is weakly lower semicontinuous and $$f_n$$ converges to a minimizer.
 * So far we know that there exists a solution of your problem (OK, this kind of statement is an evergreen source of jokes about mathematicians, but here existence really is useful information). Let's write the first variation of $$J$$ at a minimizer $$f$$ along a direction $$v=\chi_E$$, a characteristic function of a measurable set $$\scriptstyle E\subset [0,1].$$ That is, $$\scriptstyle\frac{\partial J(f)} {\partial v} :=\frac{\partial} {\partial \epsilon} J(f+\epsilon v) \Big| _{\epsilon=0}$$. It's easy to compute, because for any fixed $$v$$ it is just the derivative of a harmless function of the real variable $$\epsilon,$$ a rational function indeed. If you write it out, you'll find for the directional derivative an expression of the form $$\scriptstyle \frac{\partial J(f)} {\partial v} = c\int_E \big(f(t)-a+bt\big) dt$$, with $$a, b, c$$ positive coefficients depending on $$f.$$ We know that if $$E$$ is a set where $$f$$ is bounded away from zero, the above directional derivative must vanish, because $$f+\epsilon v$$ is a nonnegative function for small $$\epsilon$$, so $$J(f+\epsilon v)$$ has a local minimum at $$\epsilon=0$$. This means that $$f(t)=a-bt$$ wherever it is different from zero. Again, we may assume $$a=b=1$$ after rescaling, and we then have that $$f$$ has the form $$\max(1-t,0)\chi_A$$ for some measurable set $$A$$. To conclude we must determine $$A$$ (up to a null set). To this end, we can compare $$J(f)$$ with $$J(f^*)$$, where $$f^* $$ is the decreasing rearrangement of $$f$$, characterized by $$|\{f^*> s \}|= |\{f >s \}|$$ for all $$s$$ (the area below the graph is pushed towards the y-axis, preserving the length of all horizontal sections). Under this transformation the $$L^1$$ and $$L^2$$ norms remain unchanged, while $$\scriptstyle \int_0^1 t f(t) dt$$ gets smaller, because we are moving the mass of $$f$$ to where the weight $$t$$ is smaller. Therefore $$A$$ is an interval $$[0,\alpha]$$ (actually, for this reason we could have assumed from the beginning that all functions are decreasing), and now we know that the minimizer is $$f_\alpha:=\max(1-t,0)\chi_{[0,\alpha]}$$ for some $$\scriptstyle \alpha\in[0,1]\; .$$ Lastly, we are left with the problem of deciding which $$\scriptstyle \alpha\in[0,1]$$ makes the value of $$J(f_\alpha)$$ smallest. But the latter is easily seen to be a decreasing function of $$\alpha$$, and we arrive at $$\scriptstyle\max(1-t,0)$$. 
I hope it's clear enough. --pm a 22:36, 11 November 2011 (UTC)
 * Thanks for the detailed proof! I'm a bit rusty on some of the machinery you've used but it looks right. I'll also mention that earlier I did some calculations that indicate that if the function isn't linear then there's some way to improve it, but I wasn't sure my derivation correctly respects the nonnegativity requirement. Looks like you managed to prove this formally. -- Meni Rosenfeld (talk) 16:39, 12 November 2011 (UTC)
 * f here isn't exactly a probability distribution, it's more of a weight function. Suppose $$\int f=1$$ and that there's a Poisson process with rate 1 starting at time 0. For every event at time t we receive a prize $$f(t)$$. Denoting the total prize by $$W=\sum_if(t_i)\,\!$$, we have $$\mathbb{V}[W]=\int_0^{\infty}f(t)^2\ dt$$. Also, $$\mathbb{E}\left[\sum_if(t_i)t_i\right]=\int_0^{\infty}f(t)t\ dt$$, which represents the average time it takes to get prizes. We can choose f and want both the variance and the time to be as low as possible, but we're bounded by the fact that their product must be at least 4/9. To allow working with $$\int f\neq1$$ we put $$\left(\int f\right)^3$$ in the denominator. -- Meni Rosenfeld (talk) 16:39, 12 November 2011 (UTC)
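The values quoted in this thread (1/2 for any exponential, 4/9 for max(1−t, 0)) are easy to confirm numerically; a minimal sketch using a midpoint Riemann sum (the truncation point 40 for the exponential tail is an assumption of this sketch):

```python
import math

def J(f, upper, n=100000):
    """Midpoint-rule estimate of the thread's functional on [0, upper]:
    (int f(t) t dt * int f(t)^2 dt) / (int f(t) dt)^3.
    `upper` should be large enough that f's tail is negligible."""
    h = upper / n
    ts = [(i + 0.5) * h for i in range(n)]
    i1 = sum(f(t) * t for t in ts) * h    # first moment
    i2 = sum(f(t) ** 2 for t in ts) * h   # L2 norm squared
    i3 = sum(f(t) for t in ts) * h        # L1 norm
    return i1 * i2 / i3 ** 3

print(J(lambda t: math.exp(-t), 40.0))       # exponential: -> 1/2
print(J(lambda t: max(1.0 - t, 0.0), 1.0))   # max(1-t, 0): -> 4/9
```

The first value matches Meni's hand computation for e^(−at) (scale invariance makes the rate irrelevant), and the second matches the conjectured, and above proven, minimum of 4/9.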

Union of Hamiltonian Subgraphs question?
Given a graph G and a vertex v in G, HC*(G,v) is defined as the union of all Hamiltonian cycles on G that include vertex v. Is it true that for all vertices w in HC*(G,v) that HC*(G,v) = HC*(G,w)? If yes, how can it be proved? If not, can a counter example be constructed? Naraht (talk) 14:24, 10 November 2011 (UTC)


 * Please see Hamiltonian path. HC*(G,v) contains cycles on G. A cycle on G includes all vertices in G. Is it possible for a vertex w in G to not be in a cycle on G? -- k a i n a w ™ 17:25, 10 November 2011 (UTC)
 * I concur — Preceding unsigned comment added by 203.112.82.129 (talk) 22:44, 10 November 2011 (UTC)
 * Sorry, let me rephrase. Given a graph G and a vertex v in G, HC*(G,v) is defined as the union of all Hamiltonian cycles on subgraphs of G that include v. Is it true that for all vertices w in HC*(G,v) that HC*(G,v) = HC*(G,w)? If yes, how can it be proved? If not, can a counter example be constructed? What I'm trying to do is figure out a way to cut off the parts of G that are only connected to the part of G that contains v through a single edge. Naraht (talk) 01:49, 11 November 2011 (UTC)
 * Every graph is a counter example. HC*(G,v) contains singleton v.  HC*(G,w) does not.--121.74.125.249 (talk) 02:41, 11 November 2011 (UTC)
 * Woops, union, not set of. So to put it another way, HC*(G,v) is the union of all cycles containing v.  Then sure, there's a simple counterexample.  Consider the 5 vertex graph made by gluing two triangles together at a single vertex.  If v is the wedge point, HC*(G, v) is the full graph.  If w is a different point, HC*(G,w) is just the one triangle containing it.--121.74.125.249 (talk) 03:00, 11 November 2011 (UTC)
 * Thanks, that's a perfect counter-example! Naraht (talk) 13:04, 12 November 2011 (UTC)
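The glued-triangles counterexample can be verified by brute force, reading HC*(G,v) as the union of all cycles through v, i.e. Hamiltonian cycles on induced subgraphs containing v; a sketch (the helper names are mine):

```python
from itertools import combinations, permutations

def hamiltonian_cycle_edges(vertices, edges):
    """All edges used by some Hamiltonian cycle on the subgraph
    induced by `vertices` (a set), given the full edge list."""
    vs = sorted(vertices)
    if len(vs) < 3:
        return set()  # no cycle on fewer than 3 vertices
    adj = {frozenset(e) for e in edges if set(e) <= vertices}
    used = set()
    first, rest = vs[0], vs[1:]
    for perm in permutations(rest):  # fix first vertex to cut symmetry
        cycle = (first,) + perm
        es = [frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))
              for i in range(len(cycle))]
        if all(e in adj for e in es):
            used.update(es)
    return used

def hc_star(vertices, edges, v):
    """Union of all Hamiltonian cycles on subgraphs of G containing v."""
    out = set()
    others = [u for u in vertices if u != v]
    for k in range(len(others) + 1):
        for extra in combinations(others, k):
            out |= hamiltonian_cycle_edges({v, *extra}, edges)
    return out

# Two triangles {0,1,2} and {0,3,4} glued at vertex 0 (the wedge point)
V = {0, 1, 2, 3, 4}
E = [(0, 1), (1, 2), (0, 2), (0, 3), (3, 4), (0, 4)]
print(len(hc_star(V, E, 0)))  # 6: the whole graph
print(len(hc_star(V, E, 1)))  # 3: just the triangle containing 1
```

From the wedge point, both triangles are reachable as cycles, so HC*(G,0) is the full graph; from any other vertex only its own triangle appears, so HC*(G,0) ≠ HC*(G,1), exactly as 121.74.125.249 described.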

Long series of implications
If we are met with a long series of implications (⊃), is the best solution to use modus ponens?

Example : The Little Red Riding Hood


 * If the Little Red Riding Hood goes in the woods (P), she takes her little pot of butter (Q).
 * In order to see her grandmother (R), she must behave well (S).
 * If she meets the wolf (T), she will be scared (U).
 * If she takes her little pot of butter (Q), she goes to see her grandmother (R).
 * She puts her overshoes on (V) only when she goes in the woods (P).
 * If she is scared (U), it means that she hasn't behaved well (~S).

So :
 * (P⊃Q), (Q⊃R), (R⊃S), (S⊃~U), (~U⊃~T) and (V⊃P)
 * (P⊃Q⊃R⊃S⊃~U⊃~T) and (V⊃P)
 * (P⊃~T) and (V⊃P)

And so, I think at this point, we do a modus ponens. (((P⊃~T) ⊃ (V⊃P)) & (P⊃~T)) ⊃ (V⊃P). And the final implication is (V⊃P).

But I'm not really sure about this reasoning. Can anybody point out if I made a mistake at some point, and where ? 76.67.166.12 (talk) 18:25, 10 November 2011 (UTC)
 * I really have no clue what the hell you're talking about. A chain of implications screams "transitivity" to me, but clearly, that's not what you have in mind. --COVIZAPIBETEFOKY (talk) 23:11, 10 November 2011 (UTC)
 * It is not modus ponens. Modus ponens would apply if we were told the premise is true. (P & P⊃Q) ⊃ Q. But what you did is correct, I just don't know the name. KyuubiSeal (talk) 00:21, 11 November 2011 (UTC)
 * That is modus ponens. However, what you did is incorrect; (V⊃P) is one of your premises, and so is probably not what you are looking for as a conclusion.  Certainly, there's no need to invoke MP to conclude it.  You have (P⊃~T), (V⊃P) in the last line.  So collapse one more implication, just as you did up until this point.--121.74.125.249 (talk) 03:12, 11 November 2011 (UTC)


 * Wait, I think I've got it.


 * (P ⊃ ~T) contraposes as (T ⊃ ~P) and (V ⊃ P) contraposes as (~P ⊃ ~V).


 * With these two contrapositions, we can do the transitivity.


 * ((T ⊃ ~P) & (~P ⊃ ~V)) ⊃ (T ⊃ ~V)


 * And this itself contraposes as (V ⊃ ~T)


 * So the final proposition is that if the Little Red Riding Hood puts her overshoes on, she doesn't meet the Big Bad Wolf. I think that makes sense. 76.67.166.12 (talk) 04:33, 11 November 2011 (UTC)

While the answer is still right, I still feel like nitpicking. It isn't modus ponens. Modus ponens would apply if you knew she put on her shoes. Then you can conclude that she doesn't meet the big bad wolf. KyuubiSeal (talk) 18:17, 11 November 2011 (UTC)


 * Well, I changed my approach and did a second transitivity instead of a modus ponens. So it isn't a modus ponens that's needed, it's another transitivity and some more contrapositions. 76.67.166.12 (talk) 19:23, 11 November 2011 (UTC)


 * OH! Okay, I thought you were referring to the steps of ((P ⊃ Q) & (Q ⊃ R)) ⊃ (P ⊃ R) as modus ponens. I see what you were doing now. Sorry about that. KyuubiSeal (talk) 20:13, 11 November 2011 (UTC)
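The conclusion reached in this thread (V ⊃ ~T: if she puts her overshoes on, she doesn't meet the wolf) can also be checked mechanically with a truth table over the seven atoms; a minimal sketch in Python:

```python
from itertools import product

def implies(p, q):
    """Material implication: p ⊃ q is false only when p is true and q false."""
    return (not p) or q

conclusion_holds = True
for P, Q, R, S, T, U, V in product([False, True], repeat=7):
    # The six premises from the Little Red Riding Hood example
    premises = (implies(P, Q)        # woods ⊃ pot of butter
                and implies(R, S)    # grandmother ⊃ behaved well
                and implies(T, U)    # wolf ⊃ scared
                and implies(Q, R)    # pot of butter ⊃ grandmother
                and implies(V, P)    # overshoes ⊃ woods
                and implies(U, not S))  # scared ⊃ not behaved well
    if premises and not implies(V, not T):
        conclusion_holds = False  # found a countermodel

print(conclusion_holds)  # True: every model of the premises satisfies V ⊃ ~T
```

Since no assignment satisfies all six premises while falsifying V ⊃ ~T, the transitivity-and-contraposition derivation above is confirmed.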