Wikipedia:Reference desk/Archives/Mathematics/2008 June 10

= June 10 =

constants and variables
Umm, how do I identify the constants and variables? I know constants never change and the variable is the letter, but when I get something like

2t + 3

I know t is the variable, but what is the constant? 2 remains the same, so does 3, and t as well, right? Are they all constants? Please help. —Preceding unsigned comment added by 24.76.248.193 (talk) 02:28, 10 June 2008 (UTC)


 * I think you might be confused by a general versus a specific usage of the word "constant".


 * In the general case, a constant is any number that doesn't vary, so in "2t + 3", both 2 and 3 are constants.


 * Now, many equations consist of a number of terms, each involving a different power of some variable. For example, in a quadratic equation like
 * 3x² + 2x + 4
 * we have an "x squared" term 3x², and an "x" term which is 2x. The third term, which doesn't have any x in it and doesn't vary at all no matter how x varies, is the "constant term".  (Actually, you can also think of it as being the x⁰ term, since no matter what x is, x⁰ is 1.) —Steve Summit (talk) 03:29, 10 June 2008 (UTC)


 * If your expression is a polynomial, then the terms you use are:
 * Variable - the quantity that is allowed to vary, generally represented by a letter or pronumeral. In Steve's quadratic example, the variable is x.
 * Term - a little ambiguous, it can refer to either a power of the variable, or the power multiplied by a factor. So either x² or 3x² is a term. Taking the second definition, we have ...
 * Coefficient - the number multiplied by a power of the variable to give a term. So the coefficient of x² is 3 in this example. It is sometimes also said that 3 is the coefficient of the x² term, which moves you back to the first definition of the word.
 * Constant term (or just "constant") - the term with no power of the variable. So in Steve's example, 4.
 * You can extend these to more complicated expressions as well, with minimal modification. Confusing Manifestation (Say hi!) 06:01, 10 June 2008 (UTC)
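As an aside, the vocabulary above can be made concrete in a few lines of Python; the dictionary encoding and helper function here are mine, purely illustrative:

```python
# Represent 3x^2 + 2x + 4 as a mapping from powers of the variable
# to coefficients (a made-up encoding, just to pin down the words).
poly = {2: 3, 1: 2, 0: 4}  # power -> coefficient

coefficient_of_x_squared = poly[2]  # 3
constant_term = poly.get(0, 0)      # 4, the x^0 term

def evaluate(poly, x):
    """Evaluate the polynomial at x; at x = 0 only the
    constant term survives, which is why it never varies."""
    return sum(c * x**k for k, c in poly.items())

assert evaluate(poly, 0) == constant_term
```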


 * Steve, don't mess up. Your (3x² + 2x + 4) is an algebraic expression, but it's not an equation (see definition). --CiaPan (talk) 07:23, 10 June 2008 (UTC)

2 envelopes and game theory
The two envelopes problem is the following:
 * The player is given two indistinguishable envelopes, each of which contains a positive sum of money. One envelope contains twice as much as the other. The player may select one envelope and keep whatever amount it contains, but upon selection is offered the possibility to take the other envelope instead.

Now suppose that the envelopes are filled according to the following distribution
 * Suppose that the envelopes contain the integer sums $$\{2^n, 2^{n+1}\}$$ with probability $$2^n/3^{n+1}$$, where n = 0, 1, 2, ....

(cited here).
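For the record, this distribution is geometric: $$2^n/3^{n+1} = \tfrac{1}{3}\left(\tfrac{2}{3}\right)^n$$, so n can be sampled as the number of failures before the first success of a coin with success probability 1/3. A minimal Python sketch (the function name is mine):

```python
import random
from fractions import Fraction

def sample_pair(rng=random):
    """Sample an envelope pair {2^n, 2^(n+1)}.

    P(n) = 2^n / 3^(n+1) = (1/3) * (2/3)^n, i.e. n is the number of
    failures before the first success of a coin with P(success) = 1/3.
    """
    n = 0
    while rng.random() >= 1 / 3:  # failure, probability 2/3
        n += 1
    return 2**n, 2**(n + 1)

# sanity check: the probabilities really sum to 1
partial = sum(Fraction(2**n, 3**(n + 1)) for n in range(60))
assert 1 - partial < Fraction(1, 10**9)
```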

We call the "elementary envelope game" the one-player game in which the two envelopes are filled according to the distribution above; the player can check the content of one envelope, then choose which one he wants to take, and finally he scores a number of "points" equal to the content of the chosen envelope.

Now consider a two-player game consisting of a sequence of n elementary envelope games played by each player; each player sums up the points he scores in each game, and the winner is the player with the greater total. In this game each player plays his elementary games without knowing anything about the other player's outcomes. This is a zero-sum game, since the only possibilities are win, lose and tie.

What is the best strategy for this game (or the Nash equilibrium strategy)?

A simpler question: among the family of strategies "change your envelope if you find less than L inside it", which value of L produces the best strategy in an n-round game?--Pokipsy76 (talk) 09:20, 10 June 2008 (UTC)
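One way to get a feel for the threshold question empirically is a crude Monte Carlo sketch of the n-round match between two threshold strategies; all names and parameters below are mine, and this is exploration, not a solution:

```python
import random

def sample_pair():
    # P(pair = {2^n, 2^(n+1)}) = 2^n / 3^(n+1): geometric with p = 1/3
    n = 0
    while random.random() >= 1 / 3:
        n += 1
    return 2**n, 2**(n + 1)

def play_round(L):
    """One elementary envelope game with the strategy
    'switch if the observed value is less than L'."""
    small, big = sample_pair()
    seen, other = (small, big) if random.random() < 0.5 else (big, small)
    return other if seen < L else seen

def match(L1, L2, rounds=20, games=2000):
    """Estimate how often threshold L1 beats threshold L2 on
    total score over `rounds` elementary games (ties not counted)."""
    wins = 0
    for _ in range(games):
        s1 = sum(play_round(L1) for _ in range(rounds))
        s2 = sum(play_round(L2) for _ in range(rounds))
        wins += s1 > s2
    return wins / games
```

Because the score distribution is heavy-tailed, the estimates from `match` are noisy; many games are needed before comparisons between thresholds stabilise.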


 * Unless I don’t understand the problem, or don’t remember how to calculate expected values, it seems like you should always change your envelope. The expected value is $$\sum_{n=0}^\infty \frac{2^n \cdot 2^n}{ 3^{n+1}} =\infty$$, which, no matter what you get, you’re less than ...GromXXVII (talk) 11:03, 10 June 2008 (UTC)
 * We're not talking about expected values, though. We're talking about winning a game. If you're playing last and the envelope you've got will win the game, and half its value will lose the game, then you should stay. Algebraist 11:06, 10 June 2008 (UTC)
 * Ahh, I think I see. The player is only given a finite number of envelopes from the infinite distribution, and winning is only determined by having the most, independent of what that value actually is. I was thinking in terms of getting the most money, in which case, from my above statement, it’s always advantageous to switch. So I guess until you sum values from multiple envelopes it’s really a nonparametric problem, because the values of each envelope make no difference other than being strictly monotonic. GromXXVII (talk) 11:58, 10 June 2008 (UTC)
 * On an aside, I tried to calculate the expected value of switching given that the value of the envelope you picked was $$2^{n+1}$$. I'm getting that the probability that the other envelope is $$2^{n+2}$$ is $$\frac{\frac{2^{n+1}}{3^{n+2}}}{\frac{2^{n+1}}{3^{n+2}} + \frac{2^{n}}{3^{n+1}} }$$, and that the probability of the other envelope being  $$2^{n}$$ is $$\frac{\frac{2^{n}}{3^{n+1}}}{\frac{2^{n+1}}{3^{n+2}} + \frac{2^{n}}{3^{n+1}} }$$. Therefore the expected value of switching is $$2^{n+2}$$ times the first probability plus $$2^{n}$$ times the second probability, which unless I made a mistake on my arithmetic is $$2^{n} \frac{11}{5}$$. That's higher than the value $$2^{n+1}$$ of keeping your envelope.  I just wanted to get a second opinion on my calculations since I did it differently than Grom above. 63.111.163.13 (talk) 15:24, 10 June 2008 (UTC)
 * That same calculation is done in the article (two envelopes problem), and you should indeed get a higher expected value by switching, that's where the paradox comes in. --Tango (talk) 15:44, 10 June 2008 (UTC)
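The conditional expectation above can be checked exactly with rational arithmetic; a short sketch (function names are mine) confirming the $$2^n \cdot \tfrac{11}{5}$$ result:

```python
from fractions import Fraction

def p_pair(n):
    # P(pair = {2^n, 2^(n+1)}) = 2^n / 3^(n+1)
    return Fraction(2**n, 3**(n + 1))

def expected_after_switch(n):
    """Expected value of switching when the observed envelope holds
    2^(n+1), n >= 0 (so it could be the smaller or the larger)."""
    p_small = p_pair(n + 1)  # observed is the smaller of {2^(n+1), 2^(n+2)}
    p_large = p_pair(n)      # observed is the larger of {2^n, 2^(n+1)}
    total = p_small + p_large
    return (p_small / total) * 2**(n + 2) + (p_large / total) * 2**n

for n in range(6):
    e = expected_after_switch(n)
    assert e == Fraction(11, 5) * 2**n  # matches the calculation above
    assert e > 2**(n + 1)               # switching beats keeping 2^(n+1)
```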


 * Huh. I have another proposed solution.  See how this looks to you.  I know it can't be added to the article because it probably counts as original work/research, but this discussion seems a good enough place to have a look and discuss it.  Start with a finite number of envelopes; call them envelopes 0 through N (N+1 total).  We will select two such envelopes {x, x+1} with probability:


 * $$P(L=x) = \begin{cases} p(x), & \mbox{if } 0 \le x < N \\ 0, & \mbox{otherwise} \end{cases}$$


 * where L represents the choice of the lower-numbered envelope. Let X represent the envelope first selected:


 * $$P(X=x|L=x-1) = \begin{cases} \tfrac{1}{2}, & \mbox{if } 0 < x \le N \\ 0, & \mbox{otherwise} \end{cases}$$


 * $$P(X=x|L=x) = \begin{cases} \tfrac{1}{2}, & \mbox{if } 0 \le x < N \\ 0, & \mbox{otherwise} \end{cases}$$


 * $$P(X=x) = P(X=x|L=x-1) P(L=x-1) + P(X=x|L=x) P(L=x) = \begin{cases} \tfrac{1}{2} p(0), & \mbox{if } x = 0 \\ \tfrac{1}{2} (p(x-1) + p(x)), & \mbox{if } 0 < x < N \\ \tfrac{1}{2} p(N-1), & \mbox{if } x = N \\ 0, & \mbox{otherwise} \end{cases}$$


 * Now let Y represent the final choice of envelope given the strategy "always switch envelopes":


 * $$\begin{align} P(Y=x) & = P(X=x-1 \land L=x-1) + P(X=x+1 \land L=x) \\ & = \begin{cases} P(X=1 \land L=0), & \mbox{if } x = 0 \\ P(X=x-1 \land L=x-1) + P(X=x+1 \land L=x), & \mbox{if } 0 < x < N \\ P(X=N-1 \land L=N-1), & \mbox{if } x = N \\ 0, & \mbox{otherwise} \end{cases} \\ & = \begin{cases} \tfrac{1}{2} p(0), & \mbox{if } x = 0 \\ \tfrac{1}{2} (p(x-1) + p(x)), & \mbox{if } 0 < x < N \\ \tfrac{1}{2} p(N-1), & \mbox{if } x = N \\ 0, & \mbox{otherwise} \end{cases} \end{align}$$


 * which shows that Y is identical to X. However, now let Z represent the final choice given the strategy "switch envelopes unless X=N":



 * $$\begin{align} P(Z=x) & = \begin{cases} P(X=1 \land L=0), & \mbox{if } x = 0 \\ P(X=x-1 \land L=x-1) + P(X=x+1 \land L=x), & \mbox{if } 0 < x < N-1 \\ P(X=N-2 \land L=N-2), & \mbox{if } x = N-1 \\ P(X=N-1 \land L=N-1) + P(X=N \land L=N-1), & \mbox{if } x = N \\ 0, & \mbox{otherwise} \end{cases} \\ & = \begin{cases} \tfrac{1}{2} p(0), & \mbox{if } x = 0 \\ \tfrac{1}{2} (p(x-1) + p(x)), & \mbox{if } 0 < x < N-1 \\ \tfrac{1}{2} p(N-2), & \mbox{if } x = N-1 \\ \tfrac{1}{2} (p(N-1) + p(N-1)), & \mbox{if } x = N \\ 0, & \mbox{otherwise} \end{cases} \\ & = \begin{cases} \tfrac{1}{2} p(0), & \mbox{if } x = 0 \\ \tfrac{1}{2} (p(x-1) + p(x)), & \mbox{if } 0 < x < N-1 \\ \tfrac{1}{2} p(N-2), & \mbox{if } x = N-1 \\ p(N-1), & \mbox{if } x = N \\ 0, & \mbox{otherwise} \end{cases} \end{align}$$


 * Now let w(x) be the value of envelope x, so that:


 * $$\overline{w_X} = \sum_{x}{w(x)P(X=x)}$$


 * $$\overline{w_Y} - \overline{w_X} = \sum_{x}{w(x)(P(Y=x)-P(X=x))} = 0$$



 * $$\begin{align} \overline{w_Z} - \overline{w_X} & = \sum_{x}{w(x)(P(Z=x)-P(X=x))} \\ & = w(N-1) (P(Z=N-1)-P(X=N-1)) + w(N) (P(Z=N)-P(X=N)) \\ & = w(N-1) \tfrac{1}{2} (p(N-2)-p(N-2)-p(N-1)) + w(N) (p(N-1)-\tfrac{1}{2} p(N-1)) \\ & = \tfrac{1}{2} p(N-1) (w(N)-w(N-1)) \end{align}$$


 * Now notice that strategy Y--which does not depend on the value of X for the choice of staying or switching--does not change the expected value of w, but strategy Z--which depends only on whether or not X=N--has a higher expectation value for w. As we let N grow to infinity, blindly switching envelopes (Y) is no better than never switching (X), but Z grows as:


 * $$\lim_{N \to +\infty} \overline{w_Z} - \overline{w_X} = \lim_{N \to +\infty} \tfrac{1}{2} p(N-1) (w(N)-w(N-1))$$


 * And for the case where $$p(x) = \tfrac{1}{3} \left( \tfrac{2}{3} \right)^x$$ and $$w(x) = 2^x$$:


 * $$\lim_{N \to +\infty} \overline{w_Z} - \overline{w_X} = \lim_{N \to +\infty} \tfrac{1}{6} \left( \tfrac{2}{3} \right)^{N-1} \left( 2^N - 2^{N-1} \right) = \lim_{N \to +\infty} \tfrac{1}{6} \left( \tfrac{4}{3} \right)^{N-1} = +\infty$$


 * So I believe the "paradox" actually arises from the confusion of strategy Y for strategy Z, and the difference this makes as we let the sequence get infinitely big. Thoughts?  Thank you.  --Prestidigitator (talk) 23:12, 10 June 2008 (UTC)


 * By the way, Z was just an example strategy. There are plenty of others that can improve on the expected result of X and Y, such as:
 * Switch only if X=0
 * Switch iff X < m ≤ N (of which Z is a special case)
 * etc.
 * The key is that they depend on the value of X and the special behavior at the endpoints. --Prestidigitator (talk) 00:13, 11 June 2008 (UTC)
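Prestidigitator's finite-N claims can be verified exactly with rational arithmetic; the sketch below (all names mine) builds the distributions of X, Y and Z from the case formulas in his derivation and checks that Y gains nothing while Z gains exactly $$\tfrac{1}{2}p(N-1)(w(N)-w(N-1))$$:

```python
from fractions import Fraction

def distributions(N):
    """Exact P(X=x), P(Y=x), P(Z=x) over envelopes 0..N, with
    p(x) proportional to (2/3)^x for 0 <= x < N, normalized so it
    is a genuine distribution for finite N."""
    assert N >= 2
    raw = [Fraction(2, 3)**x for x in range(N)]
    total = sum(raw)
    p = [r / total for r in raw]  # P(L = x)
    half = Fraction(1, 2)
    X = ([half * p[0]]
         + [half * (p[x - 1] + p[x]) for x in range(1, N)]
         + [half * p[N - 1]])
    Y = list(X)          # "always switch" yields the same distribution
    Z = list(X)          # "switch unless X = N" differs only at the top
    Z[N - 1] = half * p[N - 2]
    Z[N] = p[N - 1]
    return X, Y, Z

def expectation(dist):
    # w(x) = 2^x
    return sum(2**x * q for x, q in enumerate(dist))

N = 10
X, Y, Z = distributions(N)
assert sum(X) == sum(Z) == 1
assert expectation(Y) == expectation(X)  # blind switching gains nothing
gain = expectation(Z) - expectation(X)
assert gain > 0                          # Z strictly improves on X
# the gain matches (1/2) p(N-1) (w(N) - w(N-1)) exactly
p_last = Fraction(2, 3)**(N - 1) / sum(Fraction(2, 3)**x for x in range(N))
assert gain == Fraction(1, 2) * p_last * (2**N - 2**(N - 1))
```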


 * I'm not sure that your limiting process is valid; you're taking the limit as $$N\rightarrow\infty$$ of expectation values defined over probability distributions that are normalized only for $$N=\infty$$. Until you "get there", the expectation values are wrong.  --Tardis (talk) 15:45, 11 June 2008 (UTC)


 * Hmm. Interesting point.  However, it would only be a small constant factor difference to normalize the given distribution for a finite value of N.  For example, changing the factor in p(x) from $$\tfrac{1}{3}$$ to $$\frac{1}{3-3 (\tfrac{2}{3})^{11}}$$ would normalize the given distribution for N=10.  The important bit is that the difference in the expectation values between X and Y is zero for any distribution or value of N, and the difference between Z and X is positive for any given value of N (and grows with N too, for that matter).  What making the bounds of the distribution infinite does is "push out" the value of X at which you would decide to switch envelopes until we lose sight of it, making it appear as if an effective strategy for switching envelopes doesn't depend on the value in the initial envelope.  --Prestidigitator (talk) 16:32, 11 June 2008 (UTC)
 * Well it seems like a strategy of always switching would be effective (although precisely as effective as not switching), but that switching based upon the initial envelope is more effective. GromXXVII (talk) 22:06, 11 June 2008 (UTC)

decimal expansion
Not homework or anything similar, just a problem that I saw on the Net:

Let $$\alpha = 0.999... \,$$ where there are at least 2000 nines. Prove that the decimal expansion of $$\sqrt{\alpha} \,$$ also starts with at least 2000 nines.

How to?

87.4.86.63 (talk) —Preceding comment was added at 14:34, 10 June 2008 (UTC)
 * Since $$0<\alpha<1$$, you have $$\sqrt{\alpha}>\alpha$$. -- Meni Rosenfeld (talk) 14:41, 10 June 2008 (UTC)
 * In case there is no other digit except nines after the 2000th, then α = 0.999... = 1, and $$\sqrt{\alpha}=1$$, which has the alternate decimal expansion 0.999... Pallida  Mors  15:56, 10 June 2008 (UTC)
 * @Meni: thank you! It was quite obvious. $$\alpha<\sqrt{\alpha}<1 \,$$ /me fool
 * @Mors: I know, but $$\alpha \,$$ has "at least" 2000 nines. If the nines are infinitely many, it's easy to prove ;-) --87.4.86.63 (talk) 16:05, 10 June 2008 (UTC)
 * Well, if there are infinitely many 9's, the question is invalid because $$\sqrt{\alpha}$$ has two decimal expansions and "the decimal expansion of $$\sqrt{\alpha}$$" is undefined. So we must assume that the premise $$\alpha<1$$ is added to the question. My professors don't like to bother with writing questions that actually make sense, so I have learnt to alter the premises and conclusions of questions without even noticing it :). -- Meni Rosenfeld (talk) 16:51, 10 June 2008 (UTC)
 * Yeah Meni, good point! It is a fine opportunity to remind these "teaser builders" that phrases like "and at least one decimal digit is not a nine" can be handy. Pallida  Mors  17:05, 10 June 2008 (UTC)
 * Mind you, it would be easier to fix the question simply by replacing "the decimal expansion" with "a decimal expansion". —Ilmari Karonen (talk) 16:35, 12 June 2008 (UTC)
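Meni's inequality $$\alpha < \sqrt{\alpha} < 1$$ can also be checked numerically; a quick sketch using Python's decimal module, where the trailing digit 5 is an arbitrary choice of mine to make α strictly less than 1:

```python
from decimal import Decimal, getcontext

getcontext().prec = 2100  # enough digits to inspect the first 2000

# alpha = 0.999...9 (exactly 2000 nines) followed by an arbitrary
# non-nine digit, so that alpha < 1 strictly
alpha = Decimal("0." + "9" * 2000 + "5")
root = alpha.sqrt()

# since 0 < alpha < 1 we have alpha < sqrt(alpha) < 1, so sqrt(alpha)
# lies in (1 - 10^-2000, 1) and must open with at least 2000 nines
assert alpha < root < 1
assert str(root)[2:2002] == "9" * 2000
```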

newspaper maths
The Times (UK) mentioned today that $$e^{\pi \sqrt{163}}$$ is somehow interesting - it seemed to say that the number was rational?? Is this true - can it be proved one way or another? 87.102.86.73 (talk) 19:11, 10 June 2008 (UTC)


 * It's very close to being an integer, but isn't. It was discovered by Ramanujan, and googling turns up --Fangz (talk) 19:32, 10 June 2008 (UTC)


 * $$e^{\pi \sqrt{163}}$$ = 262,537,412,640,768,744 − 7.49927...×10⁻¹³ Dragons flight (talk) 19:54, 10 June 2008 (UTC)
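Anyone wanting to reproduce that figure without special software can do so with Python's decimal module; the pi() routine below is the recipe from the decimal documentation, and the precision of 50 is my choice:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

def pi():
    """Compute pi to the current precision (the recipe given in the
    Python decimal module documentation)."""
    getcontext().prec += 2
    three = Decimal(3)
    lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n + na, na + 8
        d, da = d + da, da + 32
        t = (t * n) / d
        s += t
    getcontext().prec -= 2
    return +s

x = (pi() * Decimal(163).sqrt()).exp()
# e^(pi*sqrt(163)) falls short of the integer 262537412640768744
# by roughly 7.5e-13: twelve nines after the decimal point
assert str(x).startswith("262537412640768743.999999999999")
```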


 * This was one of the subjects of Martin Gardner's "Mathematical Games" column in the April 1975 issue of Scientific American. By way of an April Fools' joke, Gardner claimed that it was an exact integer, as conjectured by Ramanujan. The title of that month's column was: "Six Sensational Discoveries that Somehow or Another have Escaped Public Attention"; all six were hoaxes, including a planar map that was said to require five colours, and a sketch of a flush toilet that was supposed to have been discovered in one of Leonardo da Vinci's notebooks. Although some of the items were really over the top (like a motor driven by psychic energy invented by a Mr. Robert Ripoff), the letters that came in showed that many readers had not realized these were hoaxes. --Lambiam 21:37, 10 June 2008 (UTC)


 * Check out Almost Integer at MathWorld for lots of other examples. -- BenRG (talk) 02:37, 11 June 2008 (UTC)
 * The number is discussed at Ramanujan's constant. Enjoy, Robinh (talk) 07:49, 11 June 2008 (UTC)
 * You can also have a look at Heegner number, where it discusses some reasons as to why that number is almost an integer. --XediTalk 16:17, 11 June 2008 (UTC)


 * As usual, this too can be found in our article on 163 (number). – b_jonas 13:48, 12 June 2008 (UTC)
 * Occasionally Wikipedia pleases me - this is one such occasion - an article on a number! Who would have thought of it? 87.102.86.73 (talk) 16:57, 12 June 2008 (UTC)