Talk:Two envelopes problem/Arguments/Archive 3

iNic's arguments:
iNic, please let us start slowly, step by step, for I do not want to be caught in the trap, as the trap is already lurking. You say above: ''But once you open your selected envelope A the amount of money in that envelope becomes fixed, right? Let's say you find 512 monetary units in your envelope. This is a fixed number. Then for sure the other envelope must contain either 1024 or 256 monetary units, right? The other envelope only has these two possible contents and as we have no reason to believe in one of the cases more than the other, the probability for each possible amount must be 1/2, right? We use here the same mundane principle we use when we determine that the probability of the sides of a fair die is 1/6 each, for example. If that principle can't be used in this case we have to know why. We also need to know what new principle for determining P(B contains 1024) and P(B contains 256) we should use instead, and of course what probabilities that principle leads to. And last but not least we need to have a clear, explicit and exact rule for when this new principle should be used instead of the old one. Almost all authors miss this last step completely''

Everything you wrote seems clear and valid, but it contradicts reality, and we have to find out why. So we have to be careful. Your example involves one pair of two envelopes containing 512 and 1024 (resp. 1024 and 512), and one pair of two envelopes containing 512 and 256 (resp. 256 and 512). So we now have two pairs of envelopes, four envelopes altogether. It makes a great difference whether you only choose envelopes containing 512 (the determining amount), or whether you choose envelopes containing 1024 or 256 as well (the dependent amounts). If you always and exclusively choose envelopes containing 512, with extreme asymmetry in choosing only envelopes with the determining amount of 512 in this example, then in every case the other envelope will on average contain 640, which is 5/4 of your amount; that's right. The requirement, however, is that you know which envelopes contain the determining amount, otherwise: symmetry.


 * TEP and I only talk about one pair of envelopes. Two pairs is your invention. You still talk as if you know how the envelopes were filled. You don't. So please stop doing that. Besides, it is irrelevant anyway. iNic (talk) 11:52, 12 December 2011 (UTC)

A similar phenomenon appears if, with extreme asymmetry, you exclusively choose envelopes with the dependent amounts of 256 and 1024. Then in every case the other envelope will on average contain only 512, which is on average only 4/5 of your envelope. Think of step 9!


 * You don't understand the situation in TEP. You can't choose any predetermined amounts at all. If you had x-ray eyes you would simply choose the largest amount and just walk away. That must be the Superman solution to the paradox. :-) iNic (talk)

But if there is symmetry (e.g. if you cannot distinguish "determining" and "dependent" amounts), then the other envelope will on average contain exactly the same amount as your envelope (a scale of 1:1 on average). (Why? If, in a symmetric world, you choose the envelope with the maximal amount available, then no envelope with twice that amount exists in that symmetric world.)


 * OK then, but what is the maximal number? I agree that if there were a biggest natural number, then on finding it in one envelope we would know that the other envelope must contain half that amount. But so far no one could tell me what the biggest number is. Let's say that X is the biggest number there is. How about X+1? Why would that not be a number? Even small kids know that there is no biggest number, because postulating one immediately leads to contradictions. iNic (talk) 11:52, 12 December 2011 (UTC)

Suppose there are, say, 50 pairs of such envelopes with contents of 512 and 1024 each (resp. 1024 and 512), and another 50 pairs of envelopes with contents of 512 and 256 each (resp. 256 and 512). Then the total amount in those 100 pairs, 200 envelopes altogether, will be 115,200. And if you choose, from every one of those 100 pairs of envelopes, one envelope at random, then, whether you look into your envelopes or not, the total amount in your 100 envelopes will be about 57,600, while the total in the other 100 unchosen envelopes is about 57,600 also. So the ratio can never be expected to be 80:100 resp. 100:125 if your choice is random, whether you have a look into your envelope or not; the ratio is exactly 1:1. The same principle is in effect if you look at millions of pairs of envelopes. The reasoning that (2 + 1/2)/2 = 1.25 is only valid if you intentionally choose envelopes containing the "determining amount" (512 in this example). But you never can know whether you pick an envelope containing 512 or 1,024 or 256. Symmetry is the "trap" of steps 2 to 7. Every step, from 2 to 7, is a pure lie if there is symmetry, i.e. if your choice is random, even if it does not "seem so". We have to solve that "trap of thinking" to solve the paradox. Is that correct? Gerhardvalentin (talk) 13:42, 11 December 2011 (UTC)
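The counting argument above can be checked by simulation. A sketch (Python; the number of repetitions is arbitrary, chosen only to make the averages stable):

```python
import random

random.seed(1)

# 50 pairs of (512, 1024) and 50 pairs of (512, 256): 100 pairs total.
pairs = [(512, 1024)] * 50 + [(512, 256)] * 50
total_all = sum(a + b for a, b in pairs)  # 50*1536 + 50*768 = 115200

# Average over many rounds of choosing one envelope per pair at random.
rounds = 10_000
chosen_total = 0
for _ in range(rounds):
    for a, b in pairs:
        chosen_total += random.choice((a, b))
avg_chosen = chosen_total / rounds

print(total_all)           # 115200
print(round(avg_chosen))   # close to 57600: random choice splits the total 1:1
```

With random choice the chosen and unchosen halves each get about half of 115,200, exactly as the paragraph argues; no 5/4 advantage appears.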


 * TEP is a problem in decision theory, and decision theory is all about finding principles for making decisions. Almost all real world decisions the theory should be applied to are unique situations. This means that there are no repetitions involved at all. More often than not it's not even possible to imagine how to repeat the situation. This is how you should properly think about TEP. It's a once in a lifetime opportunity to get money for free. You have no information at all about the donor of the money. He or she might have given away money like this before but it's highly unlikely that the donations follow any general rule or pattern. If you gave away money for free, would you do it like a random robot? Probably not. So most likely the donor didn't follow any scheme at all when he or she decided what money to put in the envelopes. So this is the proper setting of TEP. No probabilistic rule for how to choose the amounts in the envelopes. Repetitions not even conceivable. The only random event is you choosing one of the two envelopes. That's it. iNic (talk) 11:52, 12 December 2011 (UTC)


 * True, there is one "truly" random event and that is choosing one of the two envelopes. But according to conventional decision theory we make decisions by weighting possible outcomes with our prior expectations of those outcomes. Then we should choose the action which maximizes our expected gain. Fortunately, subjective probability theory uses the same calculus as objective (frequentist) probability theory: Kolmogorov's rules. So we attack the problem by specifying a probability distribution which represents our prior beliefs as to the smaller of the two amounts of money. Now we can do probability calculations where we pretend that there are many repetitions, where in each repetition, first of all the smaller of the two amounts is chosen by selecting a value from our probability distribution for our prior beliefs. Then, independently of that, a fair coin is tossed to decide which envelope gets the smaller amount and which gets twice that. Now we can use ordinary probability calculus, for instance, to write down our expectation value (according to our prior beliefs and the coin toss) of how much money would be in Envelope B on average, if we were to peep in Envelope A and see a certain amount a there. We can do this calculation for all values of a which conceivably could be in the first envelope. That is what the TEP argument is doing. The mistake is to assume that whatever is in Envelope A, Envelope B would be equally likely to be half or twice that amount. That can only be the case if our prior beliefs are improper, making every integer power of 2 equally likely for the smaller of the two amounts of money (after they have been rounded down to integer powers of 2, which can be done without any loss of generality). Richard Gill (talk) 16:34, 13 December 2011 (UTC)
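This can be made concrete with a small sketch (Python; the particular proper prior, uniform over eleven powers of 2, is my own illustrative assumption, not from the discussion):

```python
from fractions import Fraction

# Assumed proper prior: the smaller amount is 2**k, k = 0..10, each with chance 1/11.
prior = {2 ** k: Fraction(1, 11) for k in range(11)}

# Joint distribution of (A, B): a fair coin decides which envelope is named A.
joint = {}
for s, p in prior.items():
    for a, b in ((s, 2 * s), (2 * s, s)):
        joint[(a, b)] = joint.get((a, b), Fraction(0)) + p / 2

def cond_exp_B(a):
    """E(B | A = a) under the prior above."""
    mass = {b: p for (x, b), p in joint.items() if x == a}
    return sum(b * p for b, p in mass.items()) / sum(mass.values())

print(cond_exp_B(4))        # 5, i.e. 5/4 of 4: holds for interior values
print(cond_exp_B(1))        # 2: at the bottom of the support B is surely double
print(cond_exp_B(2 ** 11))  # 1024: at the top B is surely half
```

So E(B|A=a) = 5a/4 can hold for many values of a, but never for every a under a proper prior: the "equally likely half or double" step necessarily fails at the edges of the support.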


 * Exactly. As a side note, my forthcoming paper will show why this is the wrong solution, and why conventional decision theory can't solve TEP at all. iNic (talk) 10:09, 14 December 2011 (UTC)


 * Exciting! It can be that conventional decision theory is useless in practice. It is not necessarily the case that people actually do make decisions in accordance with decision theory, and it is also debatable whether they ought to. The usual axioms of "rational behaviour" are not entirely compelling. So conventional decision theory is neither descriptive nor prescriptive.


 * I will show something stronger. I will show that conventional decision theory is inconsistent. iNic (talk) 17:05, 15 December 2011 (UTC)


 * On the other hand, TEP is usually seen as a paradox *within* conventional decision theory. And *within* conventional decision theory all known variants of the paradox are resolved. After all, even if conventional decision theory is neither descriptive of real behaviour, nor prescriptive in the sense of telling us optimal behaviour, within itself it is a self-consistent abstract mathematical framework. Just like Euclidean geometry is a self-consistent piece of abstract mathematics, independently of whether the world is or ought to be describable by Euclidean geometry. Conventional decision theory has proper prior probability distributions (the probability part is within the Kolmogorov framework) and bounded utility. Assuming proper priors and bounded utility we can resolve every known probabilistic variant of TEP, i.e. say where the argument is wrong, why, and how to avoid the wrong argument in future.


 * I will show that none but the simplest TEP variants are solvable within conventional decision theory. iNic (talk) 17:05, 15 December 2011 (UTC)


 * For each variant, *where* the mistake is made, and what it is, precisely, can be different. But common to all variants is the fact that the mistakes come down to failing to specify a self-consistent probabilistic description within which the argument is situated, and failing to distinguish things which need to be distinguished within such a framework: random variables, outcomes thereof, and expectation values; conditional and unconditional expectations and probabilities.


 * You keep forgetting the logical or non-probabilistic variants of the paradox... iNic (talk) 17:05, 15 December 2011 (UTC)


 * These variants are word games. Use the same word for different things. Not interesting. Have been satisfactorily solved. Richard Gill (talk) 09:25, 17 December 2011 (UTC)


 * I admire your self-centric world view. If I would only care to mention the solutions of TEP that I find interesting I wouldn't mention a single one. The Wikipedia article would be blank. But fortunately this is not what Wikipedia is all about. iNic (talk) 11:40, 19 December 2011 (UTC)


 * If you claim that TEP needs to be situated outside of a Kolmogorovian probability framework then you are asking for trouble. Kolmogorovian probability rules were invented in order that the rules of probabilistic reasoning become explicit and self-consistent. If you want to work without any rules you are highly likely to generate mutually contradictory solutions to the same problem. Richard Gill (talk) 13:28, 15 December 2011 (UTC)


 * Not at all. I'm safely within the Kolmogorov theory. I will show, instead, that it is conventional decision theory that is outside of the Kolmogorovian framework. This is precisely why it fails. I will also provide an alternative decision theory that complies with the Kolmogorov theory. This decision theory will not be subjective, not psychological, and thus not artificially restricted to human beings as current decision theory is. It will be universal, equally applicable to economics as to physics. iNic (talk) 17:05, 15 December 2011 (UTC)


 * Exciting! Richard Gill (talk) 09:25, 17 December 2011 (UTC)

Back to your example of two pairs of envelopes: one pair containing 512 and 1024, and the second pair containing 512 and 256. If you look at the full permutation for these two pairs of envelopes, fully symmetric, meaning that you choose not only envelopes with content 512 but any possible envelope in a symmetric world, then the average amount in your chosen envelope A is not 512 but exactly 576, and by swapping to envelope B you will on average also get exactly 576, the same amount you already have on average in envelope A. Although step 2 of the switching argument explicitly says that doubling 1024 is possible with probability 1/2 in case you choose the envelope with 1024 ("The probability that A is the smaller amount is 1/2, and that it is the larger amount is also 1/2"), in this example there certainly is no envelope containing 2048. That's why step 2 of the switching argument is obviously flawed in the symmetric world. It is only valid in the asymmetric world where you exclusively choose envelopes with content 512 and on average will get 640. You see, even step 2 of the "switching argument" is a total lie in a symmetric world, if you trust in reality. (By the way, 576 in envelope A and in envelope B is exactly 1 1/8 of the determining amount of 512.) Gerhardvalentin (talk) 23:20, 11 December 2011 (UTC)
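The fully symmetric average of 576 is a four-line computation. A sketch (Python, listing all four equally likely (A, B) assignments from the two pairs):

```python
# Full symmetry over both pairs: envelope A is any of the four envelopes.
# Each tuple is one equally likely (A, B) assignment.
envelopes = [(512, 1024), (1024, 512), (512, 256), (256, 512)]
avg_A = sum(a for a, b in envelopes) / 4
avg_B = sum(b for a, b in envelopes) / 4
print(avg_A, avg_B)  # 576.0 576.0: swapping changes nothing on average
```

Both averages are 576 (which is indeed 9/8 of 512), so under symmetry there is no gain from swapping.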


 * We are not talking about any actual averages here as this is a once-in-a-lifetime event. When I open envelope A and find 512 monetary units I know for sure it is exactly 512 monetary units. No more, no less. When I compute the expected value for the other envelope and find that it is larger than 512 monetary units, it is not because I will gain "on average" that I should take that envelope instead. Rather, it is because decision theory says that I should take the envelope with the biggest expected value. This decision theory principle is in turn a generalization from the case when we do have a situation that can be repeated. Because in those situations we indeed gain on average by picking the option with the largest expectation. The situations are related but it's nevertheless very important to clearly distinguish them, both conceptually and in practice. iNic (talk) 11:52, 12 December 2011 (UTC)

proposal
You could reason: "the amount in the other envelope may be twice or half of my amount, or the reverse, I don't know. And as I don't even know the amount in my envelope, I have to suppose that I got just the average value of (1 + 2 + 1 + 1/2)/4 = 1 1/8. And the other envelope will contain either (1 + 2 + 1 + 1/2)/4 = 1 1/8 or (1 + 2 + 1 + 1/2)/4 = 1 1/8 also. That's all I can say ..." – Is that correct? Gerhardvalentin (talk) 14:44, 11 December 2011 (UTC)


 * No, you can't suppose that you got the average value. You can't suppose that you will raise 2.45 children in your family, even if that is the average. iNic (talk) 11:52, 12 December 2011 (UTC)

Why the first resolution is false
The problem with the first resolution in the article is that it applies equally to Falk's coin toss example (I have also called this Nalebuff's Ali-Baba example but I can find no evidence of that usage in his paper).

Suppose the envelopes are set up this way. A sum, chosen at random from any proper and reasonable distribution, is placed in one envelope and given to the player; then a fair coin is tossed and, depending on the fall of the coin, either half or double this sum is placed in another envelope. The player is given the option of swapping. It is obvious and generally accepted that the player should swap (once). The problem is that steps 1 to 8 are now perfectly correct. A can be a random variable from the original distribution and all the steps are valid. The first resolution, however, suggests that there is a problem with one of the steps.
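This coin-toss variant really does favour swapping, which is easy to confirm by simulation (an illustrative Python sketch; the particular distribution for the player's sum is my own arbitrary choice):

```python
import random

random.seed(0)

# Hypothetical setup: the player's sum x is drawn from a modest proper
# distribution; a fair coin then puts x/2 or 2x in the other envelope.
rounds = 100_000
kept = swapped = 0.0
for _ in range(rounds):
    x = random.choice([10, 20, 40, 80])        # any proper distribution works
    other = 2 * x if random.random() < 0.5 else x / 2
    kept += x
    swapped += other

print(swapped / kept)  # close to 1.25: here swapping really does pay
```

Here steps 1 to 8 are sound because the coin is tossed after the player's sum is fixed, so half and double genuinely each have probability 1/2 conditional on the sum held.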

Any proposed resolution must apply in cases where it is wrong to swap but fail in cases where you should swap. Martin Hogbin (talk) 16:17, 27 January 2012 (UTC)


 * The point of Ali Baba is that Baba also makes the same reasoning as Ali (exchanging A for B). They can't both be right. Ali's reasoning is correct if A is interpreted as a fixed possible value of the content of his envelope, and he's computing the conditional expectation of what's in Envelope B, given that there's A in Envelope A. It would have helped to use lower-case and upper-case letters to make the distinction between outcomes of random variables and the random variables themselves. Then we could have seen from the notation that we are computing a conditional expectation. But we can also see this from the last step of the computation. The expectation value of what's in B cannot be a random variable. It's a fixed number, depending only on the probability distribution of what's in Envelope B. But the conditional expectation of what's in B, given what's in A, is (in general) a random variable. TEP is about the distinction between conditional and unconditional expectation (or at least, that is what it was meant to be about: the originators of the problem certainly knew about the difference, and write about it, too). Unfortunately laymen and philosophers are not even aware of the distinction, hence write a lot of nonsense about the problem, and don't understand the mathematicians' solutions. A tragic state of affairs, a comedy of errors. Richard Gill (talk) 14:41, 28 January 2012 (UTC)


 * Richard, the first page or two of Nalebuff's paper seems to be missing in your dropbox so I cannot make sense what you say above. Could you add the missing page please.  Martin Hogbin (talk) 15:16, 28 January 2012 (UTC)


 * Odd, I think both the two Nalebuff papers as presently in my local dropbox are complete. Shall I email them both to you? Richard Gill (talk) 11:06, 29 January 2012 (UTC)
 * The paper entitled Nalebuff starts mid-sentence, 'her an expected gain of $2.50 (or 25 percent). Acting in a risk-neutral manner, she would want to switch'.

Ancestral TEP
If TEP is supposed to be arguing that E(B)=A, there is another mistake: how can a random variable, A, equal a constant, E(B)? I don't think the originator of TEP was so naive. In fact, the originators were great scientists and professional if not great mathematicians: Schrödinger, Littlewood, Kraitchik.

I think that TEP is arguing that E(B|A)=5A/4, or, to be more explicit, E(B|A=a)=5a/4 for every possible amount a that might be in Envelope A. That does make sense, and it can even be almost true, if our calculations of probability refer not only to the randomness of which envelope (the one with the smaller or the larger amount) is given which name (A or B), but also to our uncertainty as to the possible amounts in the two envelopes.

Suppose a priori the amounts 2 and 4 are equally likely to be the smaller amount in the two envelopes; say each has chance p. Then the chance that Envelope A contains 4 and B contains 8 is 0.5p. The chance that Envelope A contains 4 and Envelope B contains 2 is 0.5p. The chance that Envelope B contains 8 given Envelope A contains 4 is 0.5p/(0.5p+0.5p)=1/2. Similarly, the chance that Envelope B contains 2 given Envelope A contains 4 is 1/2. So E(B|A=a)=5a/4, for the case a=4.

Extending this calculation, if all values 1/2, 1, 2, 4, ..., 2 to the 1 billion are equally likely possibilities for the smaller of the two amounts, then E(B|A=a)=5a/4 for a=1, 2, ..., 2 to the 1 billion. The random variable E(B|A) equals 5A/4 on the event that A is between 1 and 2 to the power of 1 billion. If the values of the smaller of the two amounts which I just listed are moreover the only possible values thereof, then E(B|A=a)=2a if a = 1/2, and E(B|A=a)=a/2 if a = 2 to the power 1 billion plus 1. Otherwise E(B|A=a)=5a/4. For all practical purposes, E(B|A)=5A/4.

I think that behind TEP is the paradox that comes from believing that *all* integer powers of 2 are equally likely, which in the Bayesian literature is the usual way of representing total ignorance about a positive number. In fact Littlewood's and Schrödinger's ancestral paradoxes are explicitly built on such a premise.

The resolution of the paradox, under this interpretation, is to note that the expectation value of A is at best infinite, at worst completely undefined, in this case; so basing decisions by optimizing conditional expectations does not actually necessarily lead to an overall optimal decision. Richard Gill (talk) 14:22, 18 December 2011 (UTC)
 * Richard, have a look at User:Martin_Hogbin/Two_envelopes now. I think it covers all the possibilities and has mathematical notation added.  You may consider some of the interpretations stupid but some people think like that. Can you put any comments on the talk page please.  Martin Hogbin (talk) 13:12, 19 December 2011 (UTC)
 * Martin, I think your survey of all possibilities misses the one I discuss in this section ("Ancestral TEP"). You chose to simplify things by supposing the amounts in the two envelopes are fixed at, say, 2 and 4. For sure, some of the "reliable sources" also take this point of view, but many others do not. Let me try to present a version of the TEP argument which explicitly uses subjective probability. Note that we are given no idea at all how much money is put in the two envelopes. My presentation of the problem will use this "total ignorance" explicitly. I'll denote the actual amounts of money in the two envelopes as a and b with respect to some standard currency unit, say, the Euro. We are told that both amounts are positive and one is twice the other. I'll suppose, moreover, that only whole numbers of Euros are possible. So a and b are positive whole numbers. We are given no further information about them. Let's now imagine what might be in Envelope A. If that amount, a, were an odd number of Euros then Envelope A would certainly contain the smaller amount of money, and we would want to switch to Envelope B. If on the other hand a were an even number of Euros then Envelope B could contain either twice or half that amount. Imagine for instance the possibility that a=4. Envelope B would have to contain 2 or 8 Euros. There are now exactly two possibilities: the two amounts of money, in order, were 2 and 4, and Envelope A happened to contain the larger of the two; or they were 4 and 8, and Envelope A happened to contain the smaller of the two. Now since we know nothing at all about how much was put in the two envelopes, it is equally likely, as far as we are concerned, that the envelopes contained 2 and 4 Euro, as that they contained 4 and 8 Euro. Also, it is equally likely that Envelope A contains the smaller as that it contains the larger of the two amounts, whatever they might be.
So the two possibilities are equally likely: the amounts are 2 and 4, and Envelope A contains the larger; and the amounts are 4 and 8, and Envelope A contains the smaller of the two. Therefore, if we were informed that the amount in Envelope A happened to be 4, we would find it equally plausible that Envelope B contains 2 as that it contains 8. The conditional expected amount in Envelope B, given that Envelope A contains 4, is therefore (1/2)·2 + (1/2)·8 = 5, implying that we would want to switch envelopes. Now there is nothing special about the number 4 here. Whatever amount a we imagine being in Envelope A, if it is an even number of Euros then the conditional expected amount in Envelope B is 5/4 times as large. This is because, as far as we are concerned, for any even a it is equally likely that the two envelopes contain a/2 and a as that they contain a and 2a. Since we would want to switch envelopes whatever amount we happened to see in Envelope A if we were to take a look, there is no point in looking in the envelope and we should want to switch anyway. Richard Gill (talk) 17:27, 1 January 2012 (UTC)
 * Thanks for reading my page.
 * As I understand it, the standard resolution of the above version is based on our assumption 'it is equally likely, as far as we are concerned, that the envelopes contained 2 and 4 Euro, as that they contained 4 and 8 Euro'. No assumed distribution of possible envelope values can justify this assumption unless it has an infinite expectation.  Is this correct?
 * Could you alternatively argue that whatever finite sums are in the two envelopes, the arguments given on my page must apply. Thus the only way to make the paradox stick is for the sums themselves (rather than the expectation) to be infinite? Martin Hogbin (talk) 18:17, 1 January 2012 (UTC)


 * I did not (yet) give a resolution of the paradox when it is interpreted in this way. I just ask us to imagine some particular amount, say a=4 Euros, in Envelope A. I then state that because we have no information about how the envelopes were filled, a priori it is equally likely that the pair of envelopes contain 2 and 4 Euro, as that they contained 4 and 8 Euro. I also state that, in either case, each envelope is equally likely to be chosen to be Envelope A. Therefore, if we were informed that Envelope A contains 4, we would judge it equally likely that Envelope B contains 2 as that it contains 8. Hence we would reasonably prefer to have Envelope B, since we have a 50% chance of gaining 4, against a 50% chance of losing 2 Euro, by the switch. Repeat this argument, now starting with imagining Envelope A to contain 8 Euros instead of 4. We'll get the same conclusion (switch), where now we will have used the assumption that the pair of contents 4 and 8 are equally likely as the pair 8 and 16. Repeating indefinitely, we see that we are getting the fixed conclusion "switch" by assuming that 2 and 4, 4 and 8, 8 and 16, 16 and 32 .... are all equally likely to be the contents of the envelopes. Resolution of the paradox: though one might imagine the pairs 2 and 4 Euros, and 4 and 8 Euros, very close to equally likely, and we would also imagine the pairs 1,048,576 and 2,097,152 Euros, and 2,097,152 and 4,194,304 Euros, very close to equally likely, I don't think it is realistic to suppose that 2,097,152 and 4,194,304 Euros is approximately equally likely as 1 and 2 Euros. Knowing nothing at all about the amounts of money in the two envelopes, we still would not imagine astronomically large amounts to be equally likely to modest amounts of money. 
So to begin with, we must realise that though it could be the case that for quite a few values of a, the equality E(B|A=a)=5a/4 could be approximately true, we would never in the real world imagine it exactly true for all possible values of a. This might be sufficient resolution of the paradox for many readers. However some people might object that I have not ruled out the possibility that with a realistic prior distribution of amounts of money in the two envelopes (to represent our prior beliefs about what is in the envelopes) one might still have E(B|A=a) > a for all a, implying the paradoxical advice to exchange Envelope A for Envelope B without looking in Envelope A. To this objection I would say, that by averaging left and right hand side of the inequality E(B|A=a) > a over all possible values of a, one obtains the conclusion: either E(B) > E(A), or  E(B) = E(A) = infinity. But we can rule out the first possibility by symmetry. Hence prior beliefs leading to the paradoxical recommendation to switch envelopes without inspecting the contents of A are prior beliefs implying an infinite expected value of the amount of money in either envelope. One can now resolve the paradox either by saying that such prior beliefs are unreasonable, or by saying that in such a situation, following the advice of conditional expectation values does not necessarily guarantee you any improvement in your position. Here is an example: suppose we know that the envelopes are filled by someone tossing a biased coin, with probability 1/3 of heads, as long as is necessary till the first head is obtained. Suppose this took t+1 coin tosses, where t=0,1, ... is the number of tails seen before the first head. Then the envelopes are filled with 2 to the power t, and 2 to the power t+1 Euros. One of them is chosen at random and denoted Envelope A. This is the Broome example. You can easily calculate that E(B|A=a)=11a/10 if a=2,4,8,...; E(B|A=a)=2a if a=1. 
You can also calculate that the chance is bigger than 3 in 10 thousand that t will be at least 20, and hence that the smaller amount of money is more than 1 million Euros. I would argue that examples like this are totally unrealistic. For instance, I cannot imagine anyone playing the real Broome game with me, with real money. Any realistic example has a finite expectation value of the amounts of money in the two envelopes, and that implies that there are values a for which E(B | A=a) < a. So for some values of a one should switch, for some values one should not switch. If you want to improve your situation, you have to inspect the contents of Envelope A and decide whether or not to switch on the basis of what you see.
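The Broome numbers quoted above can be checked mechanically. A sketch (illustrative Python, truncating the geometric prior at a large cutoff) reproduces E(B|A=a) = 11a/10 for a = 2, 4, 8, ... and the roughly 3-in-10,000 tail chance:

```python
from fractions import Fraction

# Broome prior: the smaller amount is 2**t with probability (1/3)*(2/3)**t.
# Truncated at t = 200; the omitted tail mass is negligible for these checks.
prior = {t: Fraction(1, 3) * Fraction(2, 3) ** t for t in range(200)}

def cond_exp_ratio(a_exp):
    """E(B | A = 2**a_exp) divided by 2**a_exp, a fair coin naming the envelopes."""
    a = 2 ** a_exp
    mass = {}
    if a_exp - 1 in prior:   # t = a_exp - 1: Envelope A holds the larger amount
        mass[a // 2] = prior[a_exp - 1] / 2
    if a_exp in prior:       # t = a_exp: Envelope A holds the smaller amount
        mass[2 * a] = prior[a_exp] / 2
    total = sum(mass.values())
    return sum(b * p for b, p in mass.items()) / total / a

print(cond_exp_ratio(0))   # 2: if A = 1, it must be the smaller amount
print(cond_exp_ratio(5))   # 11/10, as for every a = 2, 4, 8, ...

# Chance that t >= 20, i.e. the smaller amount exceeds a million Euros.
tail = sum(p for t, p in prior.items() if t >= 20)
print(float(tail))         # about 0.0003, i.e. about 3 in 10 thousand
```

The 11/10 ratio comes from the 3:2 odds that t = a_exp - 1 rather than t = a_exp under the geometric prior.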


 * It's funny that Gill can't imagine anyone playing the Broome game for real with him, as he has had an offer from me to do exactly that since August last year. But we never played because he never accepted the offer. This whole situation reminds me of a famous chess player who had a strong personal conviction that men were always better chess players than women. For him to lose a game of chess to a girl was absolutely and totally unimaginable. The curious thing was that he also had a personal rule of conduct: he never ever played against a girl! Well, this principle of course did the trick for him. With this rule in place it was indeed totally impossible for a girl to beat him, because no girl ever got the chance to beat him. So logically, to see him be beaten by a girl was indeed truly unimaginable, exactly as he claimed. I suspect that we might have a similar situation here. As long as Gill refuses to play the Broome game with me or anyone else, he can continue to claim that it's an unimaginable event. iNic (talk) 03:37, 3 January 2012 (UTC)


 * Please call me Richard. I did accept to play the Broome game with you, iNic. I'm waiting for you to make a proposal for the rules. Who pays whom what, when. Do we truncate? What are the consequences of defaulting? Who adjudicates, who tosses the coin? Richard Gill (talk) 12:13, 3 January 2012 (UTC)


 * The playing of the Broome game is moved here.


 * There is some discussion in mathematical economics and decision theory about infinite expectation values. The situation is very similar to that of the Saint Petersburg Paradox. Many authorities state that economic decisions are made on the basis of utility, not on nominal cash value, and that utility is bounded. So infinite expectations are ruled out as a matter of principle. (By the way, a number of authors have already pointed out that TEP with infinite expectation values is basically "just" the Saint Petersburg Paradox). Richard Gill (talk) 15:40, 2 January 2012 (UTC)


 * I do not see why the resolutions on my page do not apply to every possible finite pair of sums that might be in the envelopes. Martin Hogbin (talk) 17:20, 2 January 2012 (UTC)


 * Your discussion works for any pair of amounts of money, but (I think) restricts the use of probability concepts only to the random choice of which amount goes in Envelope A: the smaller or the larger. In Ancestral TEP, probability is also used to describe our uncertainty as to what the two amounts are, e.g., what the smaller amount is. Suppose we believe that the smaller amount is equally likely to be any integer power of 2 Euros. Then it is the case that if the amount in Envelope A happened to be 2 to the power r Euros, where r is any whole number, then the amount in Envelope B is equally likely to be twice as to be half that amount. The TEP argument is completely correct, as far as its probability calculations go. The paradox has to be resolved either by disputing the reasonableness of my "equally likely" assumption, or by disputing the validity of the inference "therefore, even without looking in the envelope, you should switch". These two extreme resolutions are exactly parallel to two opposed resolutions of the Saint Petersburg Paradox. There is consensus that the paradox is resolved, but disagreement as to what constitutes a resolution. I especially recommend the discussion [] by philosopher Robert Martin at the Stanford online encyclopedia of philosophy. Richard Gill (talk) 09:19, 4 January 2012 (UTC)

Martin, I looked again at your page User:Martin_Hogbin/Two_envelopes. Your final example is related to what I am talking about here. You say disparagingly that it can be seen as an attempt to obfuscate but I disagree: I think it is close to what the originators had in mind. You make it seem rather unnatural but I think it can be made quite attractive. You start off by saying imagine there are five envelopes containing 2, 4, 8, 16 and 32 pounds sterling. But you should not be thinking about the problem in this way. There are only two envelopes. One contains twice the sum of money contained in the other. Neither amount is known. However, suppose we were told that each envelope contains exactly one banknote, and that there only exist banknotes in the denominations 1, 2, 4, 8 and 16 pounds sterling. Then we would know for sure, a priori, that the only possible pairs of amounts in the two envelopes are 1 and 2, or 2 and 4, or 4 and 8, or 8 and 16 pounds. We might be inclined to give each of these four possibilities an equal a priori probability 1/4 under the motto that when there is a finite number of possibilities and there is no reason to prefer any one of them to another, we should give them all equal probability: Laplace's principle of indifference. So the right thing to imagine is not "five envelopes" but "four possible pairs of envelopes". Next, one of the two envelopes gets named Envelope A and one gets named Envelope B. This happens completely by chance and independently of what is actually in the two envelopes. So we now have eight possibilities, all with equal probability 1/8: A and B contain 1 and 2, or 2 and 1, or 2 and 4, or 4 and 2, or 4 and 8, or 8 and 4, or 8 and 16, or 16 and 8. In particular, Envelope A might contain 1, 2, 4, 8, or 16 and it does so with probabilities 1/8, 1/4, 1/4, 1/4, 1/8. Five probabilities, adding to 1, but not all equal. For each of these five possibilities we can investigate what B can be. 
The answer is rather easy: if A contains 1, then B certainly contains 2. If A contains 2, or 4, or 8, then B is equally likely to contain half or double that amount. If A contains 16 then B certainly contains 8. Thus in three out of the five cases (having altogether chance 3/4) it is the case that E(B|A=a)=5a/4. In one case (having probability 1/8), E(B|A=a)=2a, and in one case (also having probability 1/8), E(B|A=a)=a/2. We see that the computation E(B|A=a)=5a/4 can be correct "with large probability". If we restrict ourselves to discrete probability distributions with only a finite number of different possible outcomes, then it cannot be correct for all a, since it cannot be true for the smallest nor for the largest possible value of a. However it can be exactly true for all other possible values. Now imagine that banknotes might exist in denominations of 2 to any integer power between plus or minus one googol, inclusive. Give all adjacent pairs equal probability 1 divided by two googols. For all practical purposes it is now certain that E(B|A)=5A/4. So I think your analysis does not yet succeed in resolving all common readings of TEP. Richard Gill (talk) 09:48, 4 January 2012 (UTC)
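The finite example above can be checked mechanically. The following is a minimal Python sketch (the variable names are mine, not from the discussion) which enumerates the eight equally likely (A, B) orderings and computes E(B|A=a) exactly for each possible a:

```python
from fractions import Fraction

# The four equally likely pairs (smaller, larger) described above.
pairs = [(1, 2), (2, 4), (4, 8), (8, 16)]

# Each pair, in either order (A, B), is one of 8 equally likely outcomes.
outcomes = {}
for s, l in pairs:
    for a, b in [(s, l), (l, s)]:
        outcomes.setdefault(a, []).append(b)

for a in sorted(outcomes):
    bs = outcomes[a]
    p_a = Fraction(len(bs), 8)        # P(A = a)
    e_b = Fraction(sum(bs), len(bs))  # E(B | A = a), each b equally likely
    print(a, p_a, e_b, e_b == Fraction(5 * a, 4))
```

It confirms the claim: E(B|A=a)=5a/4 holds for the three middle values a=2, 4, 8 (together carrying probability 3/4), but fails at the endpoints a=1 and a=16.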


 * I think the version above is outside the scope of the WP problem statement, which says, 'You are given two indistinguishable envelopes, each of which contains a positive sum of money. One envelope contains twice as much as the other. You may pick one envelope and keep whatever amount it contains'. I agree that it might be relevant to the ancestral forms of the paradox but this then brings us to another annoying feature of the problem, which is that there are different versions of the problem, creating even more resolutions.


 * Although we must try to make the paradox stick, we must not try too hard. A resolution to the above version is therefore simply to say that E(B|A=a)=5a/4 is always incorrect, even though it might be possible to make it arbitrarily close.  It is then up to the (imagined) proposer of the paradox to try to persuade us that being very close is good enough to make us swap.  Of course, we will not swallow that either. Martin Hogbin (talk) 22:13, 4 January 2012 (UTC)


 * It's a mathematical fact that as long as we restrict ourselves to so-called proper prior distributions, it is just not possible that for all a, given A=a, B is equally likely to be 2a or a/2. Hence it is impossible (with a proper prior distribution) that E(B|A=a)=5a/4 for all a. As long as we restrict ourselves to proper distributions with finite expectations, it is also not possible that E(B|A=a) > a for all a. These facts are all well documented in the more mathematical literature on TEP. For the lay reader, I think it is enough to notice that as soon as we put an upper bound on the possible amounts of money, both of these things become impossible. And one can find authorities who find both infinite expectations and improper distributions ludicrous. So for the purpose of identifying a mistake in the logic, and for the average Wikipedia reader, the variants Contexts 2 and 3, combined with Aim 2, are easy enough to deal with. As I said, I can create mathematical examples where any of these bad things are as close as you like to being true. They all have in common that according to the prior distribution of the smaller sum of money, the expectation value is in the very far right hand tail of the distribution. In such a situation, "typical" values from the distribution are almost always far, far smaller than their expectation value. The expectation value is just not a useful thing to look at. You don't expect anything like the expectation at all! For instance, my example with 2 googol possible values for the smaller amount of money has an expectation value, relatively speaking, pretty near to the maximum amount of 2 to the googolth power Euros. But the probability is overwhelmingly large that an actual amount chosen according to this distribution is (relatively) much, much smaller than the expectation value. 
My work-in-progress shows that this phenomenon is universal: the closer one gets to "for all a, given A=a, B is equally likely to be 2a or a/2", the more the expectation values of the amounts of money concerned lie in the extreme right tails of their distributions. So indeed, being very close should also not make us swap. But this is original, new research. And anyway of little interest to the general reader. Richard Gill (talk) 10:19, 5 January 2012 (UTC)
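The "expectation far in the right tail" phenomenon is easy to illustrate with a scaled-down version of the googol example: take the smaller amount to be 2^k with the exponent k uniform on 0..100 (the choice N=100 is mine, purely for illustration). A short Python sketch:

```python
import math
from fractions import Fraction

# Scaled-down googol example: the smaller amount is 2**k,
# with the exponent k uniform on 0..N (here N = 100, not a googol).
N = 100
values = [2**k for k in range(N + 1)]
mean = Fraction(sum(values), N + 1)  # expectation of the smaller amount

# On a log scale the expectation sits near the maximum exponent...
print(math.log2(mean))               # about 94.3, versus maximum 100

# ...yet a typical draw is far below it: only 6 of the 101 equally
# likely values meet or exceed the expectation.
print(sum(1 for v in values if v >= mean))
```

So an amount drawn from this distribution is, with probability about 0.94, much smaller than its own expectation value.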


 * By the way, denominations of bank notes are typically roughly logarithmically distributed: e.g. 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, ... . So imagine we are playing this game with a very rich man during the period of mega-inflation in Germany, early 1920s. There are two real envelopes and each contains one or a small number of banknotes. Would we want to switch envelopes, without looking? No. Would we want to look, and then decide whether or not to switch? Yes: because most likely the contents of both envelopes are worthless, but if Envelope A happened to contain yesterday's typical amount of Deutschmarks it would be very favourable indeed to double it (today we could still buy some peanuts with that), while halving it or throwing it away hardly makes a difference. This example illustrates that when we consider a situation in which a huge range of different amounts of money is possible, we would be wise to think about the personal utility to ourselves of all possible outcomes, and our complete (personal or subjective) probability distribution over all possible outcomes. Just computing expectation values of numbers of Euros is foolish. See Saint Petersburg paradox and Robert Martin's Stanford Encyclopedia of Philosophy essay http://plato.stanford.edu/archives/fall2004/entries/paradox-stpetersburg/ Richard Gill (talk) 10:53, 5 January 2012 (UTC)
 * My opinion is that all the original setters of the paradox intended us to assume money and utility to be linear and that we were expected to resolve the paradox by some other means. It is easy enough to dream up bizarre utility functions, for example, suppose the game takes place in a country where robbery is fairly common but, worse than that, if you are found with over a certain sum of money on you, you are likely to be kidnapped and held to ransom. I do not believe for a moment that this is what any of the original setters had in mind.


 * The St Petersburg lottery, on the other hand, is regularly played in a modified form on TV as ''Deal or No Deal''. Everyone who deals does so for less than the average sum remaining. Martin Hogbin (talk) 18:13, 5 January 2012 (UTC)


 * I suggest then you study what the original setters of the problem actually wrote! They were Nalebuff and Gardner, and they were both very concerned with improper priors, infinite expectations, and the Saint Petersburg paradox! But yes, they did take utility to equal cash value. Our modern ("standard") TEP starts, I believe, with Martin Gardner in his Scientific American column, reproduced together with further discussion in his 1989 book Penrose Tiles to Trapdoor Ciphers; and with Barry Nalebuff, in his 1988 and 1989 articles; and with a manuscript by Sandy Zabell at about the same time, which I have not yet seen. Nalebuff and Gardner both refer to a problem which was "going the rounds" at the time; Nalebuff refers to Zabell in his second paper. Note: Nalebuff (1988) added his own asymmetric Ali Baba variant, but he also mentioned the "standard version". In his 1989 article he gives a series of resolutions of the standard version, some of which he approves of, some of which he finds unconvincing; he concentrates on the "standard" (symmetric) version. Gardner on the other hand admits to not knowing a good resolution. Both Gardner (1989) and Nalebuff (1989) are well aware of the prehistory of the problem (Schroedinger, Littlewood, and Kraitchik in the 40s and 50s). Both Schroedinger (reported by Littlewood) and Littlewood (1953), in problems involving a pack of infinitely many cards, use improper priors, and their resolutions are that improper priors are ludicrous. Gardner and Nalebuff certainly agree on this point. Kraitchik (1942) does not tell the reader what is wrong with his variant, the Two Neckties argument; he prefers to leave that as a puzzle for his readers. After that, Gardner (1982) converted the two neckties to two wallets, but did not offer a solution either (he admits in both his books that he is just puzzled!). So Nalebuff is not only one of the first to propose the problem, he is the first to offer a full analysis of the problem. 
He does refer to the paper by Zabell, not yet published at that time, and reproduces Zabell's solution; this is the solution that with a proper prior, it cannot be the case that given A=a, B is equally likely to be a/2 or 2a, whatever a might be. However Nalebuff does not take this as the final word, since he points out that there do exist priors such that E(B|A=a) is larger than a, for all possible a. Now, Nalebuff argues strongly that priors with infinite expectation cannot be allowed, referring to the Saint Petersburg paradox. His final, favourite, solution, in the regular two envelopes problem (the symmetric one), is that if the prior is proper and has finite expectation (otherwise the problem is stupid, in his opinion) then it cannot be the case that E(B|A=a) > a for all a. He gives a long and complicated mathematical proof of this fact; I have a much shorter one in my work-in-progress. Please also note that Nalebuff and all his discussants are explicitly concerned with Context 2 or 3, and Aim 2. It looks to me that the philosophy writers, who focus on Context 1 and Aim 1, completely misunderstood the problem! No wonder they still haven't agreed on how to resolve it. Second Note: the Sandy Zabell paper is called "Loss and gain: the exchange paradox". Unfortunately we don't have a copy of it yet. We badly need to find it, in order to fill in our picture of the history of TEP. Richard Gill (talk) 10:12, 6 January 2012 (UTC)


 * At least the other early Zabell paper is now in the dropbox. iNic (talk) 18:31, 12 February 2012 (UTC)

Utility
Richard, I have no objection to resolving the TEP by reference to improper priors or infinite expectations; my objection is to doing so by the use of utility functions. However mathematically you may dress this up, it is essentially saying that the player does not swap because he does not want to.

I do agree that for very large values and expectations the infinite and utility arguments may start to merge. £2 googol is exactly twice £1 googol, but from a utility viewpoint there is nothing to choose between them. Nevertheless, I think you agree that this is not a resolution envisaged by any of the problem's originators. Martin Hogbin (talk) 16:50, 6 January 2012 (UTC)


 * Nalebuff's resolution - and he is the only originator with a fairly complete resolution - is that improper priors and/or infinite expectations are stupid. While with a proper prior and finite expectation, it is impossible that E(B|A=a) > a for all a. So in that sense we do not need to mention utility, and in that sense the originators did not envisage it being called into play. Now, there are many arguments against infinite expectations. We don't need to go into this here. But some of the important arguments why infinite expectations are unrealistic have to do with utility, and Nalebuff surely knows those arguments well (he is a mathematical economist). So in the background, arguments about utility are part of the reason why we should not accept infinite expectation values when we are effectively equating monetary value with utility. I recommend you study his paper carefully! Richard Gill (talk) 18:27, 6 January 2012 (UTC)
 * But you do not need utility arguments to show that infinite expectations are not a valid reason for swapping. Twice infinity is infinity. Martin Hogbin (talk) 18:37, 6 January 2012 (UTC)
 * Challenge: find a reliable source on TEP which says this in a convincing way, and put those arguments into the article. You will have to find out if this is a majority opinion or a minority opinion among the reliable sources. The problem is that there are not many articles which give a neutral overview of the whole TEP area. Everyone has their own hobby horse to ride, everyone wants to push their own favourite solution. According to Wikipedia policy we need to refer to such neutral overview writing (secondary or tertiary sources), not to research papers pushing an alleged novel point of view or novel synthesis (primary sources). In my opinion the only articles approaching the "neutral overview" category are those of Nalebuff (1989), Falk and Konold (1992), and Nickerson and Falk (2006). In particular, the other papers (co-)authored by Ruma Falk (which are all very nice, that's not the point!) push personal opinions of the author, which moreover seem to change from year to year. Richard Gill (talk) 08:59, 7 January 2012 (UTC)
 * I guess it is just me who objects to utility based solutions. It seems too easy if we are allowed to adopt any utility function we choose in order to resolve the paradox.  Just assume the ascetic utility of 0 for all sums of money.  There is now never any  reason to exchange - problem solved. Martin Hogbin (talk) 10:28, 7 January 2012 (UTC)
 * It is not easy. You have to explain why expected utility *must* be finite. Maybe the moral is that we should not take any notice of decision theory, anyway. Why is it thought to be rational or optimal to make economic choices on the basis of *expected* utility, or *expected* monetary value? From this point of view iNic could be right: TEP teaches us that conventional decision theory is useless. Keynes agreed. Richard Gill (talk) 17:44, 7 January 2012 (UTC)
 * You seem to have missed my point, which is that if it is open to the solver to propose any utility function that suits them then the problem is trivial: just propose the function where all money has zero utility. Martin Hogbin (talk) 00:10, 8 January 2012 (UTC)
 * Sorry, I did get your point, but took it further. If money is irrelevant then other factors can come into play. But anyway: you can't say that the solution to, e.g., the Broome example is that the utility of money is zero. Because no-one will agree with you. Money has some use, and on the whole, more money has more use. My own personal opinion is that if expectation values are infinite, then they are not useful as a tool for making decisions. Mathematical expectation values represent the average of many, many independent repetitions. At least, if they are finite. Otherwise, not. And anyway, why should we be interested in the average of many many repetitions if we only play a TEP game once? This is the reason why two people can agree to play the Saint Petersburg game together, for a modest entrance fee, but a casino, which would have to allow many many players to play it many many times, won't offer the game. But anyway, all this leads to discussions about the foundations of economic decision theory which have been going on forever and will go on forever. Richard Gill (talk) 08:10, 8 January 2012 (UTC)
 * Agreed, let us call a halt to this discussion. Martin Hogbin (talk) 09:39, 8 January 2012 (UTC)

Playing the Broome game for real
OK cool, so let's play for real. I have tossed a coin according to the Broome distribution and put money accordingly in envelope A and envelope B. The envelopes are ordinary files in this case to make it simple. Please download both files right away. They are locked so you can't open them yet. Now you only have to decide which envelope to open. Please let me know which one you want to open and I will hand over the key so you can open it. iNic (talk) 03:38, 4 January 2012 (UTC)


 * Cool, indeed! I would like to open Envelope A. Richard Gill (talk) 09:15, 4 January 2012 (UTC)

OK please open envelope A using this key: 5436. iNic (talk) 16:46, 4 January 2012 (UTC)


 * Ah ha, 512 mu! So, the other envelope contains 256 mu with probability 3/5 and 1024 mu with probability 2/5. I'm pretty sure I can do nothing with any amount of mu's written on a pdf file, locked or unlocked, but I think that both 256 and 1024 are more attractive numbers than 512. So I would like to exchange my envelope for the other. Then I'll print out the contents and pin it up on my wall. 256=16^2=2^8. So you had to do 8 or 9 biased coin tosses (chance of heads = 1/3) before stopping (first "head"). That is fairly large - only about 1 in 20 times would the amounts in the envelopes be as large as this or larger. Richard Gill (talk) 10:45, 5 January 2012 (UTC)
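For the record, the 3/5 and 2/5 figures quoted here follow from the Broome recipe, under which (in its usual parameterisation) the pair (2^n, 2^(n+1)) is chosen with probability 2^n/3^(n+1). A short Python check (the function and variable names are mine):

```python
from fractions import Fraction

def broome_pair_prob(n):
    # Broome recipe: the pair (2**n, 2**(n+1)) has probability 2**n / 3**(n+1)
    return Fraction(2**n, 3**(n + 1))

# Given that envelope A holds a = 2**n with n >= 1, A came either from the
# pair below (A is its larger amount) or the pair above (A is its smaller
# amount); within a pair, each envelope is named A with probability 1/2.
n = 9                                # a = 512, as in the game above
w_lower = broome_pair_prob(n - 1) / 2
w_upper = broome_pair_prob(n) / 2
p_half = w_lower / (w_lower + w_upper)
p_double = w_upper / (w_lower + w_upper)
print(p_half, p_double)              # 3/5 2/5

a = 2**n
e_b = p_half * Fraction(a, 2) + p_double * (2 * a)
print(e_b == Fraction(11 * a, 10))   # True: E(B | A=a) = 11a/10 > a
```

The same computation gives E(B|A=a) = 11a/10 > a for every observed amount except the smallest possible one, which is exactly what makes the Broome version paradoxical.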

So you want to exchange because you think that 512 is ugly and not because you will gain more on average by switching? Are you inventing a new kind of decision theory now where subjective attractiveness is a key factor? Please give us an account of your new theory. iNic (talk) 12:10, 5 January 2012 (UTC)


 * The key factor is utility, which is personal. In this case I will certainly gain by switching. I make my decisions using my personal utility of the different possible outcomes and my personal appraisal of probabilities of the different possible outcomes. There is also a cost in spending time thinking, so I avoid that. It is not necessarily rational to insist on being rational. I do not find decision theory useful and I don't have an alternative theory. In this particular case I have been sent two locked pdf files each containing the representation of a number and the letters "m", "u". My initial reaction was that the letters stood for the Zen Buddhist concept of Mu (negative). I thought that was a very good joke. Of course they could stand for "monetary unit" but so what, they are just two pdf files, I very much doubt you are going to send me some real money by PayPal. Anyway, I'd much rather pin up one of the documents on my office wall. And because that is going to be how I use the content of my envelope, I'd rather have 256 or 1024 than 512. I think this is a very rational decision on my part. As I have been saying all along, expectation values of amounts of money are not necessarily a good guide to decision making. If you send me the key to Envelope B, I will delete Envelope A from my computer. Richard Gill (talk) 13:13, 5 January 2012 (UTC)

I don't get it. You apply your personal utility function and conclude that you "certainly gain by switching." OK that's fine but then you go on to say that you "do not find decision theory useful" and that "expectation values of amounts of money are not necessarily a good guide to decision making." But if you don't believe in decision theory or find it useful why do you use it at all? And if you think decision theory is just crap why are you devoting so much of your time to a decision theory problem like TEP? You even explicitly say that there is "a cost in spending time thinking, so I avoid that." Well, you are certainly not avoiding spending time (and thus money) thinking about TEP. You also say about your decision to switch to envelope B that "I think this is a very rational decision on my part" while you at the same time say about this situation that it's "not necessarily rational to insist on being rational." It's hard for me to imagine something more contradictory than this. How can it be irrational to be rational in any case? To me this is just postmodernism and I'm clueless there. What "MU" stands for will be evident when you open the other envelope or if you decide to cash in the amount you already have. What do you want to do? Rest assured that I will send you the real money you win. iNic (talk) 12:47, 6 January 2012 (UTC)


 * Good questions. Since I contradict myself a lot, you see that I am not rational. So you need not take any notice of anything I say. I don't use decision theory in the sense of making decisions by finding the action with the optimal expected utility, but I do make decisions by thinking about all possibilities and their consequences. If it is an important decision I'll sleep on it and let my subconscious mind make its choice. I've found it important in life to follow your heart. Even if your heart makes wrong decisions from time to time, you can live with that. But if your intellect makes wrong decisions, it causes later anguish. I find TEP interesting because it touches on the meaning of probability, and on communicating about probability issues with ordinary people, like lawyers, judges or journalists. And that is what I am concentrating on, professionally, at the moment (forensic statistics, DNA evidence, etc.). You keep me intrigued by what MU could mean, and real money is always welcome, so let me stick to my decision (directed by my heart and intuition), to switch. Richard Gill (talk) 18:39, 6 January 2012 (UTC)

Yesterday you stated that the "key factor is utility" while today you are not "making decisions by finding the action with the optimal expected utility." A 180-degree shift of opinion. A couple of days ago you said that the Broome game is "totally unrealistic" (because of its infinite expectation), and that you can't imagine anyone playing the real Broome game with you, with real money. And yet here we are now playing the real Broome game with real money. But now suddenly you say that utility and expected values are not good for anything for you, and never were, as you want us to believe. Well if that is indeed the case, why did you say it was problematic to play the Broome game in the first place? Not even imaginable. The Broome game is only troubling for people who believe in decision theory. Could it be the case that during the days when you believe in utility theory and maximizing expected values you don't believe in the reality of Broome games, and the days when you believe that Broome games can be played for real you don't accept the validity of decision theory? Let's say I now tell you that envelope B for sure contains 256 MU. Would you ignore that information and still just follow your heart? If I instead told you that B contains 256 MU with probability 3/4 and 1024 MU with probability 1/4, would you still switch to B? iNic (talk) 04:17, 7 January 2012 (UTC)


 * Read Keynes, for a good critique of the idea that the best decision is the one which optimizes expected utility. I admit the relevance of utilities (note the plural) and probabilities, but like Keynes, I do not believe that choosing the decision which optimizes my subjective expected utility, is necessarily wise.


 * Aha you are a Keynesian now, cool. I think we are making some progress. How do you think Keynes would have solved the Broome game? Keynes for sure would not say that it's impossible to play the Broome game for real with real money. You say that both probability and utility are relevant in your view, but without any idea how to combine them they are pretty useless taken on their own. Keynes's risk formula can't be used here because you don't risk anything. iNic (talk) 23:10, 8 January 2012 (UTC)


 * I don't at the moment consider our Broome game to be a real game with real money. We have not decided in advance what the currency unit is to be, what is the price I should pay in order to play the game with you, and what are the consequences of defaults (refusing to pay up). I expect (this is where my prior beliefs come in!) that even if the game ends with you paying me real money, the amount will be so small I will not be terribly interested in it, as cash to spend. I'm playing for fun; because I enjoy taking risks. You could say that risk-taking within the action of switching has in itself an extra utility to me. When playing with someone else's money, I am definitely not risk averse; on the contrary!


 * You don't have to pay anything to play. TEP is a once in a lifetime opportunity to get money for free, didn't you know? I will not refuse to pay you, I promise. True, you don't know the currency unit yet but according to ordinary decision theory that is not needed anyway. The math is the same whether it is in US dollars or yen, or anything else. This is the first time you talk about a lower limit. You only talked about the necessity of an upper limit for the amount in the envelopes until now. Lower limits are not considered at all in ordinary decision theory nor in Keynesian risk theory to my knowledge. On the contrary, people are considered to be less and less interested in money the larger the amounts are that are given to them. This is the heart of utility theory. Lower limits are part of your own theory? I think it seems like a reasonable idea. But it is brand new. iNic (talk) 23:10, 8 January 2012 (UTC)


 * I am not troubled by the consequences of the Broome game for decision theory, because I already do not believe that the standard theory of economic decisions is either descriptive or prescriptive. On the one hand ("descriptive"), it's pretty clear that real people do not make economic decisions in the way which economic theory suggests; on the other hand ("prescriptive") it's not clear that this is irrational. The usual axioms of rational economic choice are not particularly compelling, in my opinion. Probably we both agree on this!


 * Some days ago you were not troubled by the Broome game because it was a game that you thought could not be played for real. Now when we are playing it for real you are not troubled by the Broome game because the theory the Broome game proves wrong is worthless anyway. So can we agree now that it is possible to play the Broome game for real?


 * Now regarding TEP: I see it as a mathematical conundrum within a particular mathematical theory of decision making. It needs resolution within this theory. I think that there already exist adequate resolutions.


 * Your adequate resolution right now is that ordinary decision theory is worthless, right? iNic (talk) 23:10, 8 January 2012 (UTC)


 * The Smullyan variant is a paradox within semantics or philosophy. I do not find it very interesting, but anyway, I find it adequately resolved in the literature. I do find it professionally interesting that there is so much controversy about TEP, especially among non-specialists in decision theory. In particular, among amateurs and philosophers. This illustrates how difficult it is for ordinary people to reason with uncertainty. Nothing new there; but TEP offers a good case study. I use it when giving talks about probability and statistics to lawyers. Also the Monty Hall Problem, for the same reason. It helps people realise that notions of "chance" are not universal. The Dutch Supreme Court, and no doubt others too, repeatedly make decisions which show that they have no understanding at all of what they are talking about. Similarly, governments and other policy makers. This is a big problem in society. Finally, I repeat my wish to switch envelopes, because I'm interested to find out what will happen when I'm actually in possession of either 256 or 1024 MU. Richard Gill (talk) 09:24, 7 January 2012 (UTC)

You didn't answer my last two questions. If I tell you that the other envelope contains 256 MU for sure, will you really switch anyway? Let's say your boss hands over your salary in an envelope and gives you the opportunity to take another envelope instead, with someone else's unknown salary. You are free to take the other person's salary instead of your own, but your boss warns you and says that the other salary is only half your salary. Would you still take it instead of your own just for the thrill of it? Please keep in mind that your boss might read your answer here and use it as a trick to cut the university expenses! iNic (talk) 23:10, 8 January 2012 (UTC)


 * I've already said that I'd like to switch. I know that Envelope B is more likely to contain 256 MU than 1024 MU. This is not an issue for me. Richard Gill (talk) 11:01, 9 January 2012 (UTC)

I have some good news. It is equally likely that envelope B contains 256 MU as it contains 1024 MU. The probability is exactly 1/2 for each case. iNic (talk) 12:59, 9 January 2012 (UTC)


 * For me it is not equally likely. I still believe there is some chance that you filled the envelopes according to the Broome recipe (which gives 3/5, 2/5). Anyway, I would still like to open Envelope B. Richard Gill (talk) 17:16, 9 January 2012 (UTC)

I can assure you that they are equally likely. After all, I filled the envelopes so I should know, right? Why don't you believe me? This new information is only to your advantage so there is no reason not to believe it. A more normal reaction would be to be happy that the chance of doubling the amount you have has gone up slightly from 0.4 to 0.5, while the risk of losing half of what you have has decreased by the same amount. So why this disbelief? iNic (talk) 23:06, 9 January 2012 (UTC)
 * As an aside, if I were to bet on the matter, I would bet that iNic has not actually followed the Broome prescription here but has just made up a number. No doubt you can both see why. Martin Hogbin (talk) 09:39, 12 January 2012 (UTC)


 * No, I have followed the Broome prescription. iNic (talk) 10:26, 12 January 2012 (UTC)


 * I don't mind, I just want to open Envelope B and find out what a MU is. Richard Gill (talk) 14:57, 12 January 2012 (UTC)
 * Yes, perhaps you will be able to buy us all a drink. Martin Hogbin (talk) 22:18, 12 January 2012 (UTC)


 * Richard, this is a poor argument for switching to B. If you are only interested in finding out what MU is you may just as well cash in what you already have in envelope A. I have explained this to you before. iNic (talk) 10:32, 13 January 2012 (UTC)


 * I'm most interested in opening Envelope B. I make no claims to be rational or consistent. I do not have an explanation for everything that I choose to do. Richard Gill (talk) 15:25, 15 January 2012 (UTC)


 * Richard, as you don't seem to be so interested in the money or want to explain why you want envelope B instead of A, is it OK if I let Martin take over the game from you from here? iNic (talk) 11:02, 16 January 2012 (UTC)

Ultra Simple Solution
Is there a problem with my ultra simple solution to the paradox, please? Take any set of envelope values, and for each member of the set, calculate how much it would win and lose. For example, consider the set:

{1,2,4,8,16,32}

Denoting wins and losses as {+win, -loss}, the wins and losses for each value are as follows:

1: {+1, n/a},  2: {+2, -1},  4: {+4, -2},  8: {+8, -4},  16: {+16, -8},  32: {n/a, -16}

The wins sum to £31, and the losses sum to -£31. Each of the win/loss outcomes is equally probable, so the expected return is always zero. --New Thought (talk) 16:39, 22 December 2011 (UTC)
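New Thought's tally above can be reproduced with a short script (a sketch of the stated rule, not from any source: holding a value v, switching up to 2v wins +v, and switching down to v/2 loses -v/2, where those partner values exist in the set):

```python
values = [1, 2, 4, 8, 16, 32]
present = set(values)

# Holding v, switching wins +v when 2v is a possible partner,
# and loses -v/2 when v/2 is a possible partner (n/a otherwise).
wins = {v: v for v in values if 2 * v in present}
losses = {v: -(v // 2) for v in values if v // 2 in present}

print(sum(wins.values()), sum(losses.values()))  # 31 -31
```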


 * Basically, for each compatible pair in the set of possible envelopes, the expected win of the small number is equal to the expected loss of its double. --New Thought (talk) 22:27, 22 December 2011 (UTC)


 * Also, if you accept that 0 < envelope < infinity (otherwise, as the article states, you hit the problem that infinity behaves strangely), you get an ultra simple concept to solve the "mystery" - it is simply that amounts high in the finite range have a large expected loss, but no expected gain. I would like to see these explanations in the article, because what they lack in mathematical rigour, they make up for in explanatory power --New Thought (talk) 22:20, 24 December 2011 (UTC)
 * Remember that the objective is not to show that you should not swap (that is quite obvious) but to show exactly where the error lies in the proposed line of reasoning. Martin Hogbin (talk) 00:51, 26 December 2011 (UTC)
 * There is another objective: only to use solutions which have been put forward by reliable sources.
 * Perhaps you'd care to enlighten me as to the extent to which something is completely self-evident before a non reliable source is allowed to say it. I assume that I'd be allowed to say that the prime factors of 1,164,267 are 3^3*13*31*107? If so, why am I forbidden to make statements of equal sophistication about the Two Envelopes Problem? For what it's worth, I once found an error and sent a correction to the "expert" author of a more difficult mathematical problem in Wikipedia. --New Thought (talk) 20:51, 2 January 2012 (UTC)
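(The arithmetic claim in the comment above is indeed of the trivially checkable kind, e.g.:)

```python
# Verify the claimed prime factorisation of 1,164,267.
product = 3**3 * 13 * 31 * 107
print(product)  # 1164267
```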
 * See the rules on "own research", WP:OR and especially WP:CALC. Super-elementary arithmetic is allowed, but nothing more. My opinion, and that of many other mathematicians, is that the rules are much too narrow: they seem to forbid simple logic, elementary algebra, elementary calculus. For years, the Monty Hall Problem article was stuck because simple logical deductions which could not be explicitly sourced in the literature on Monty Hall problem itself, could not be deployed. The problem with a subject like Two envelopes problem is that the sources are written by authors from many different fields: mathematical recreation, mathematics education, philosophy, economics, decision theory, statistics, probability. Different sources disagree strongly what the problem is, let alone its solution. Editors of the page are similarly diverse. And last but not least, readers. All this means that any apparently new ideas tend to be hotly debated and the only way to achieve consensus that something belongs in the article is by showing that it already has a notable position in the literature. Richard Gill (talk) 07:40, 4 January 2012 (UTC)

Where is the error
From step 6 to step 8, taking the swapping argument's E(B) to be "(2A + A/2) /2 = 5A/4" no longer treats envelope A and envelope B correctly as containing two interchangeable amounts, unknown which one is large and which one is small, never fixed but strictly interconvertible: for example A=1 and B=2, or interchanged A=2 and B=1, giving the same total. Correct would be: in this example, envelope B can with equal probability be expected to contain 2A (but only in the case that "A" is the smaller amount of 1, so "2A" = 2) or A/2 (but only in the case that "A" is the larger amount of 2, so "A/2" = 1). So the expected amount in envelope B is (2 + 1) /2, giving "1 1/2". This is exactly the total of both envelopes divided by 2, i.e. (A+B) /2. The same "(A+B) /2" is also the expected amount in envelope A. So A as well as B can be expected to contain 1 : 1 the same amount. No paradox.

If nothing is "asymmetrically fixed" but everything remains completely symmetric, and A and B are strictly interchangeable, the above formula treats A correctly as being either the small amount or the large amount, and so "(2A + A/2) /2" gives "(2 + 1) /2" in this example, i.e. "(A+B) /2", and both envelopes can be expected to contain half of the total (A+B). And if you swap from A to B, or from B to A, you will either gain or lose exactly the same amount in every case, namely the exact difference between the two envelopes. There is no paradox in this symmetric example, which the TEP only "pretends" to present. If there is symmetry, you will never gain double what you can lose, and never lose only half of what you can win. In symmetry, gain and loss will always be identical, 1:1.
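The symmetric claim above is easy to check by simulation (a sketch with an illustrative pair x and 2x; the numbers and setup are assumptions, not from the sources):

```python
import random

random.seed(1)
x = 1  # smaller amount, as in the example A=1, B=2
gains = []
for _ in range(100_000):
    # Random allocation of the two interchangeable amounts to A and B.
    a, b = random.choice([(x, 2 * x), (2 * x, x)])
    gains.append(b - a)  # result of swapping from A to B

print(abs(sum(gains) / len(gains)) < 0.05)  # True: average swap gain is ~0, not A/4
```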


 * But look now at Nalebuff's AliBaba variant: here one envelope is filled first with an arbitrary amount, and this envelope is given to Ali; only thereafter is the second envelope, for Baba, filled, equally likely, with either double or half the already given and fixed amount of Ali's envelope. In this variant envelope A is no longer "interchangeable" with envelope B: as a "fixed amount" it strictly "determines and forces" the contents of the "dependent" envelope B to be either twice the fixed amount of A or half the fixed amount of A. So, only in this very special and "known to be asymmetric" variant with a fixed determining amount in envelope A, swapping from A to B will on average give 5A/4, a one-way relation from "fixed A : B" only, the relation "B : fixed A" in return being "B : 4B/5".

That the contents of envelope B can be expected to be "5A/4" is valid exclusively in this tricky and extremely asymmetric AliBaba variant, where swapping from any "already fixed A" to the "dependent B" will of course always offer a possible gain of double the possible loss, and a possible loss of only half the possible gain. "5A/4" is valid only in this tricky determining/dependent AliBaba variant. Gain and loss in the symmetric configuration, on the other hand, will invariably be 1:1, just the difference between the two envelopes. No paradox.

And, prepared already in steps 2 to 5, the TEP uses this asymmetric "AliBaba configuration" as a trick: from step 6 on it silently, almost unnoticed, passes off a strictly "fixed, determining, forcing" amount in envelope A and a strictly "dependent" 5A/4 amount in envelope B, possibly leading even to infinity, without saying so. Whereas with the correct formula (2"A small" + "A large"/2) /2, with interchangeable A and B, where 2A can only mean "twice the small A" and A/2 can only mean "half the large A", gain and loss will always firmly be 1:1, just the plain difference between the two actual envelopes, without any paradox. Gerhardvalentin (talk) 21:16, 28 December 2011 (UTC)
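The AliBaba asymmetry described above can likewise be sketched in a few lines (illustrative amounts and trial count are assumptions): when A is fixed first and B is filled with double or half of it, the average content of the dependent envelope really is 5A/4.

```python
import random

random.seed(2)
a = 100.0  # Ali's fixed amount (illustrative)
# Baba's envelope gets double or half of a, each with probability 1/2.
b_values = [a * random.choice([2.0, 0.5]) for _ in range(200_000)]

mean_b = sum(b_values) / len(b_values)
print(abs(mean_b - 1.25 * a) < 2)  # True: E(B) is close to 5A/4 = 125
```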


 * Gerhard, please study my "Ancestral TEP" just above here. It is not a solution to TEP, it is a way of understanding the problem. And it is a way of understanding the problem which makes the derivation E(B|A=a)=5a/4 completely rigorous. Our task is not to come up with new solutions of the TEP paradox but to survey the existing proposed solutions, of which there are many. In my humble opinion the multiplicity of solutions comes from the multiplicity of ways to imagine a context within which the argument is placed, and the actual aim of the writer of the argument. I can see 6=3*2 main possibilities. Most of them can be found in the literature. There are 3 different implicitly understood contexts, or probability models. Context 1: the amounts of money in the two envelopes are fixed, say 2 and 4 Euros, and the only randomness is in the random allocation of one of these two amounts to Envelope A. Context 2: the amounts of money in the two envelopes are initially unknown, and our uncertainty about them is described by a (proper) prior probability distribution of, say, possible values of the smaller amount of money. For instance, it could be 1, 2, 3, ... up to 100 Euros, each with equal probability 1/100. Context 3: we have no information at all about the possible amounts of money in the smaller envelope, so we use the standard non-informative Bayesian prior according to which the logarithm of the amount is uniformly distributed between -infinity and +infinity. There are two different implicit aims of the writer. Aim 1: to compute the unconditional expectation of the amount of money in Envelope B. Aim 2: to compute the conditional expectation of the amount of money in Envelope B, given any particular amount, say a, which might be imagined to be in Envelope A. Note that this second aim does not depend on actually looking in Envelope A. In any of the three contexts we can imagine being informed of the amount in Envelope A and thereupon doing the calculation. 
If the result of the calculation is that we would want to switch envelopes whatever the imagined a might be, then the calculation is useful even when we can't look in the envelope: it tells us to switch, anyway. From studying the origins of TEP (two-necktie problem, two-sided cards problem) and knowing the authors of the papers which introduced all these problems, I think we can deduce that the writer is a sophisticated mathematician who is deliberately trying to trick the unwary not-mathematically-sophisticated reader. This implies, for me, that the intended context of the problem is Context 2 (proper Bayesian) or perhaps even Context 3 (improper Bayesian); and that the aim of the writer was to compute the conditional expected amount in Envelope B given any particular imagined amount a in Envelope A. He uses the ambiguity of ordinary language to seduce us to take the conditional probabilities that the second envelope contains twice or half what's in the first, given any particular amount in the first, as 50/50, whatever that amount (in the first envelope) might be. It's a mathematical theorem that this last fact is impossible, at least, with a proper prior distribution over the amounts possible. We are easily seduced, since these conditional probabilities are in fact entirely correct, had we done formal calculations using the traditional improper prior which is conventionally often used to describe total ignorance. Richard Gill (talk) 18:03, 1 January 2012 (UTC)
 * The annoying thing is that we are obliged to do our best to make the paradox stick before we can fully resolve it. Martin Hogbin (talk) 18:41, 1 January 2012 (UTC)
 * Yes, and the annoying thing is that the paradox sticks in many different ways, depending on who is the reader. Different readers see different contexts and different aims. One person's resolution is another persons' nonsense (and probably vice-versa, too). Richard Gill (talk) 09:10, 4 January 2012 (UTC)
 * Cumbersome, yes. The chance that envelope B is the greater amount is exactly 1/2, and that B is the smaller amount is exactly 1/2 also. But no one has ever "forced" envelope B, with equal chance of 1/2, to contain either the double amount or half the amount   "just of envelope A".  No one, and never. That's the sophism. So the "double amount of envelope A" may never have had a chance to exist anywhere, especially if it should range in the "upper region", and "half the amount of envelope A" may never have had a chance to exist anywhere, especially if it should be ranging in the "lower region". Never has had a chance to exist, because no one had ever "forced" them to exist with equal chance of 1/2. And for the reader it is cumbersome  *not*  to be told that only the difference of A and B can be gained or can be lost, exactly the same amount, with equal probability, 1:1. Gerhardvalentin (talk) 11:49, 5 January 2012 (UTC)
 * Gerhard, if the two amounts of money are a priori *equally likely* to be any successive pair of integer powers of 2 Euro, then whatever is in Envelope A, it *is* equally likely that Envelope B contains half or twice that amount. For suppose that there are 8 Euros in Envelope A. It is equally likely that the two envelopes contain 4 and 8, and the larger amount went into envelope A, as that they contain 8 and 16, and the smaller amount went into envelope A. Hence given there is 8 Euro in Envelope A, it is equally likely that B contains 4 or 16. I just used the principle of indifference: if there is no reason to prefer one possibility to another they should be considered equally likely. Well, there are well known problems when applying the principle of indifference to a situation with an infinite number of possibilities. This and the connections to Saint Petersburg paradox are just two problems of infinity which arise when one takes superficially attractive principles too seriously. Richard Gill (talk) 15:49, 5 January 2012 (UTC)
 * Yes, also Keynes sounds right, yes. But here is an example of 10'000 pairs of envelopes: Into the first envelope goes some amount, say 2000 times 4, 2000 times 6, 2000 times 8, 2000 times 10 and 2000 times 12. And the paired "friend" of each of the 10'000 pairs, the second envelope of every pair, equally likely gets double or half the amount of the first envelope (see AliBaba). Then you are SURE that the total of all 10'000 "first envelopes" will be exactly 80'000, and you can expect the total amount of all 10'000 "second envelopes" to be about 100'000. And the TEP argument is right with 5A/4. No doubt. Any two "A" will have one joint "B" with double its amount, and one other joint "B" with half its amount. (But watch out! This is only valid in asymmetry, and only in the one defined asymmetric direction, never back. The other direction gives you 4B/5 only, in asymmetry. Whereas in symmetry both directions are 1:1.) But if, from every pair of two envelopes, you then select one envelope at random and call its amount "A", and call the amount in the unchosen envelope "B", then the total of all 10'000 "A" will be about 90'000 and the total of all "B" will likewise, 1:1, be about 90'000. And the randomly chosen A and their paired friends (A+B) will contain the following amounts (note: symmetry, "A" chosen randomly)
 * pairings (A+B), "A" chosen randomly:
     1 +  1/2      does not exist
     2 +  1        none (although there are 500 pairs of envelopes altogether having one envelope containing "2")
     2 +  4        500 (although there are 1'000 pairs having one envelope containing "2")
     3 +  1 1/2    none
     3 +  6        500
     4 +  2        500
     4 +  8        1'000
     5 +  2 1/2    none (although there are 500 pairs having one envelope containing "5")
     5 + 10        500
     6 +  3        500
     6 + 12        1'000
     8 +  4        1'000
     8 + 16        500
    10 +  5        500
    10 + 20        500
    12 +  6        1'000
    12 + 24        500
    16 +  8        500
    16 + 32        none
    20 + 10        500
    20 + 40        none
    24 + 12        500
    24 + 48        none
    Altogether 10'000 pairs of envelopes ("A" selected at random), containing altogether 180'000
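Gerhard's 10'000-pair tally can be checked deterministically for the fixed first envelopes, and in expectation for the second (a sketch using his illustrative numbers):

```python
# First envelopes: 2000 each of 4, 6, 8, 10 and 12 monetary units.
firsts = [4] * 2000 + [6] * 2000 + [8] * 2000 + [10] * 2000 + [12] * 2000

total_first = sum(firsts)                         # fixed total of first envelopes
expected_second = sum(f * 5 / 4 for f in firsts)  # E[second] = (2f + f/2)/2 = 5f/4

print(total_first, int(expected_second))          # 80000 100000
# With A drawn at random from each pair, A and B play symmetric roles, so
# each side is expected to hold half the grand total of 180'000, i.e. 90'000.
print(int((total_first + expected_second) / 2))   # 90000
```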


 * So, if in symmetry an envelope with contents of "2" or "3" or "5" as per the above example is chosen, there will be no "B" containing the half; but more importantly, the same applies in the other direction with large amounts. In the "lower region" we lack "the half". But in the "upper region", chosen envelopes containing 16, 20 or 24 never can nor will have any "B" containing the double, because such a "B" was never intended ("forced") to exist with any positive chance, and so in the upper region we lack "the double". Those envelopes will exclusively contain the half only, as this example shows. There is a difference, indeed, but never any "equally likely" in the lower and in the upper "region". The principle of indifference is OK, but why not name the obvious reason: the "forced" asymmetric arrangement of "A and B" clearly gives 80:100 (1 : 5/4), as the switching argument of the TEP says, whereas symmetry will never give you double gain against half loss. In symmetry, without a wilful asymmetric arrangement, the argument "double gain and half loss" never works any more; everything is governed by the argument of "the same difference" between the amount of A and the amount of B, 1:1. Just crazy to ignore it. In symmetry, there never "will" be some "2A and A/2 equally likely", as is guaranteed only in known and obvious asymmetry. In symmetry, just offering the difference between A and B as possible loss resp. possible gain, the chance that you will double your gain and halve your loss is not 1:1 but 2/3 to 1/3 only. We should not confound "B will be double or half of A" with the chance to "double or halve the contents of envelope A". And if you put "1" into one envelope and the millionfold amount into the second envelope, then on average in asymmetry you are going to gain about the 500'000-fold by switching to "B". No question. But in symmetry you will win 999'999 or you will lose 999'999 with the same probability, 1:1, and never 1:500'000. 
The expected value has to be based on reliable ground, not on lazy ground: on gaining and losing the actual difference A-B, not on the illusion of an asymmetric arrangement with "expected gain" in switching in one direction only, but never back again, and forth, and back, and forth. Regards, Gerhardvalentin (talk) 19:49, 5 January 2012 (UTC)


 * Gerhard, I disagree. You write a lot of words but you don't address my argument. Suppose we imagine Envelope A containing 512 mu. And suppose we don't know anything about how the envelopes were filled. A priori, the pair of contents (256, 512) and (512,1024) are equally likely. It is equally likely in the first case that the envelope with the larger amount became Envelope A, as that in the second case the envelope with the smaller amount became Envelope A. So as far as I am concerned, if I learnt that Envelope A contained 512 mu, then for me Envelope B is equally likely to contain 256 mu as 1024 mu. The same argument can be written down for any amount which might be in Envelope A. So if I want to increase the amount of mu which I have on average, I should exchange, and I don't even need to look in Envelope A to know this.


 * Richard, I completely disagree. You are a mathematician, and I see quite another aspect. In the paragraph above, three times you wrote the words "equally likely". I strongly have to disagree, for what you say is not specific "enough". A priori, and after you may have seen A to be exactly the amount of 512, to you it may "seem to be equally likely that ...". But never "it is" equally likely that ... (Only in AliBaba it really "is" equally likely, but nowhere else.) And if you had even the slightest interest in it, you would at least try to check it. And if you just wanted to, you could easily have a short-hand proof of which assumption can make any sense and which assumption has to be thrown away once and for all: the assumption "it seems equally likely" or the assumption "it is equally likely". Suppose you know that two envelopes hide positive amounts, one of them twenty times the amount of the other, so the other one only 1/20 of that amount. If you select one of them and reason that swapping to the other will give you (1/2 x 20) + (1/2 x 1/20) = 401/40, i.e. 10 1/40 times your chosen amount, at least then you should distrust the supposition "is equally likely to contain ..." – especially as you know that the same "error" must be applicable to the other envelope, also. Then you don't even need another crosscheck, need no other proof any more: the probability of on average getting the 10.025-fold amount is only given in the known asymmetry of AliBaba, but can never exist in symmetry. It's plainly wrong to cling to "as nothing is known, the probability 'is'". Quite the contrary: as nothing is known, you never can apply any probability. At least you would have to admit and say "if A forces B to equally likely be 2A or A/2", only then one can say that ... - and otherwise NOT. Only if actively FORCED. – Forced or not, that's the crux. No infinity nor googol needed.
 * Pardon me if I clearly say that it is moronic to "suppose" any "equally likely" in symmetry. If you know nothing about it, you are never allowed to build on lazy ground, never allowed to suppose things that are prohibited to be supposed. Regards, Gerhardvalentin (talk) 12:19, 6 January 2012 (UTC)
 * To be correct: The probability that "A" is the smaller amount of "s" (and swapping to "B" will double the smaller amount of "s" to "2s", gaining one additional "s") is exactly 1/2, and the probability that "A" is the larger amount of "2s" (and swapping to "B" will halve the larger amount of "2s" to "s", losing "1s") is 1/2 also. Gain and loss in each case is the same amount "s" in a pair of envelopes containing altogether a total of "3s". This assumption is and remains valid in each and every case. But the additional assumption that at the same time also the probability that amount "A" will be doubled or halved by swapping to "B" of being 1/2 each,  only is valid in the case that "B" had been forced to contain with equal probability "2A" or "A/2". Otherwise "B" only is equally likely to contain  "2s or s",  but never  "2A or A/2".  Once more: "2A or A/2" never is 1:1, except that this is known to actively having been forced. Only "2s or s" is and remains 1:1  in each and every case.  Gerhardvalentin (talk) 14:13, 6 January 2012 (UTC)


 * Gerhard, I was not telling you my own opinions. I was telling you the standard argument from standard subjectivistic probability, including use of the principle of insufficient reason of the great Laplace. So don't be angry with me. You said "suppose you know". But we don't know, we don't know anything. My argument starts with "Suppose we know nothing beyond what has been told us in the problem statement". I am using probability in its subjectivist meaning, not its frequentist meaning. This is important! Please learn about the two common probability concepts. On wikipedia, for instance. Secondly, I gave you a reasonable counter-argument, also well-known in the literature. Richard Gill (talk) 17:36, 6 January 2012 (UTC)


 * There is a resolution, and it has been well known for 20 years (Zabell, Nalebuff ...). It is that this argument implies that all possible integer powers of 2 mu are equally likely, but there are infinitely many of them. Apparently my prior distribution is improper. Such prior beliefs are well known to lead to paradoxical conclusions. For instance, according to this improper prior, the a priori probability that Envelope A contains any amount between 2 to the power minus one googol and 2 to the power plus one googol mu is exactly zero, while the probability that it is infinitely small or infinitely large is 1. According to this improper prior distribution, we expect Envelope A to contain an infinitely large or infinitely small amount of money, with probability 1! With a proper prior distribution of beliefs concerning the contents of the two envelopes, it is impossible that, given A=a, B is equally likely to be 2a or a/2, for every possible a. This is a mathematical theorem which needs a short clean mathematical proof, not a lot of words and hand-waving. Presumably, Zabell had a proof. Samet, Samet and Schmeidler (2004) have a proof, which applies to all then known exchange paradoxes, but it is quite complicated. I have a short and clean proof; it seems not to be known yet, but I have not got it published yet. Richard Gill (talk) 10:33, 6 January 2012 (UTC)
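The theorem Richard refers to has a well-known short sketch (this is the standard argument, not his unpublished proof): write p_k for the prior probability that the smaller amount is 2^k mu; each amount goes into Envelope A with probability 1/2, so

```latex
P\bigl(B = 2a \mid A = a = 2^k\bigr)
  = \frac{\tfrac12\,p_k}{\tfrac12\,p_k + \tfrac12\,p_{k-1}}
  = \frac{p_k}{p_k + p_{k-1}},
\qquad \text{which equals } \tfrac12 \iff p_k = p_{k-1}.
```

If this is to hold for every k, all the p_k must be equal, and then the requirement that the p_k sum to 1 fails over infinitely many k: only an improper prior can make the 50/50 conditional probabilities true for all a.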

Richard, the scale of "1 : 2" is small enough for us to stumble across. I repeatedly offered a short and clean proof by extending that scale from "1 : 2" to "1 : 10" or "1 : 20" or "1 : 100" or even to "1 : 1 million". Given that one of the two envelopes contains the millionfold amount of the other, meaning that the other contains only one millionth of the one, then reasoning that swapping from one randomly selected envelope to the unchosen envelope will give you [1'000'000 + (1 / 1'000'000)] / 2 = 500'000.0000005 times the amount of your first choice shows the inanity and, at the same time, shows where the error of reasoning is. It is absurd to say "equal probability" without having previously ensured that this is actually true, as in the asymmetric AliBaba. Only "there" is it completely valid, but never in symmetry. To be specific: it "has to be true", but moreover you have to KNOW that it is true, and you have to KNOW which envelope contains the determining amount and which one contains the dependent amount. You have to KNOW it. And only under that condition is it valid, and then only for "this special a", in asymmetry, and never "for all a" in symmetry. Moreover: if you don't KNOW it, it NEVER will be given in symmetry for "any" a. Believe it: NEVER in symmetry. There never will exist "one single a". Given A=a, B is equally likely to be 1'000'000a or a/1'000'000, for any possible a? Quite the contrary: for no "a" whatsoever. You may "feel that it seems to be equally likely". But just feeling so does not make it true. And without positively knowing that it is true, "equal probability" is just a phantasm. Regards, Gerhardvalentin (talk) 23:02, 8 January 2012 (UTC)
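Gerhard's 1 : 1'000'000 crosscheck is easy to run (a sketch; the pair (x, 1'000'000x) and the random pick are illustrative assumptions):

```python
import random

random.seed(3)
# The naive switching formula promises this factor for your chosen amount:
naive_factor = (10**6 + 10**-6) / 2  # about 500'000

x = 1.0
gains = []
for _ in range(100_000):
    # Symmetric setup: random allocation of x and 1'000'000x to A and B.
    a, b = random.choice([(x, 1e6 * x), (1e6 * x, x)])
    gains.append(b - a)  # swap from A to B: win or lose 999'999, 1:1

print(abs(sum(gains) / len(gains)) < 20_000)  # True: no 500'000-fold average gain
```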


 * Gerhard, my own theorem says that indeed, it can never be equally likely for all a, if we have a proper prior distribution on the smaller of the two amounts of money. On the other hand, it can be the case if we have a suitable improper prior. You seem to be forbidding people to base probabilities on "feeling". But that is exactly what subjective Bayesian probabilities are all about. Decision theory and economic choice theory is based on assuming that people make choices according to subjective probabilities. Anyway, I am also saying that it can be equally likely, for almost all a. It doesn't make a difference to me if we replace 2 by a million. Finally, the problem is not to argue with me, but to understand the literature. For which you have to understand something about Bayesian probability theory. It's about beliefs, not about relative frequencies. Richard Gill (talk) 14:52, 12 January 2012 (UTC)


 * I have to protect Bayes. I think that Bayes would confirm that – regarding the "scale of A:B" – our belief can only be "X : 2X" and "2X : X", with equal probability. Nothing more. A purported "belief" in some "X : 2X" and "X : X/2" is a clear "never ever" (except for Ali). – So outside Ali, "X : 2X" and "X : X/2" is not Bayes, it's a creation ex nihilo. Gerhardvalentin (talk) 22:44, 12 January 2012 (UTC)

Broome example: explain the paradox
Real example, for Gerhard and Martin. iNic and I are playing this game, at the moment. It's called the Broome example. iNic has got a biased coin which has a 1/3 chance of coming up "Head", and a 2/3 chance of coming up "Tail". He tossed it until he got the first "Head". Suppose he saw r times "Tail" before the first "Head". So r could be 0, 1, 2, ... . With probability 1, r is finite (he doesn't have to toss for ever, because if he did toss the coin for ever, in the long run one third of all the outcomes would be "Heads", so there certainly must be a first time). Then he put 2 to the power r monetary units in one envelope, 2 to the power r+1 in the other. I picked an envelope at random, and looked in it. It contained 512 m.u.'s. Notice that 512 equals 2 to the power 9. So r must have been equal to 8 or 9. If it was 8, I have the envelope with the larger amount, if it was 9, I have the envelope with the smaller amount. I deduce that iNic must have seen 8 or 9 "Tails" before the first "Head". A simple calculation (Exercise for Gerhard and Martin!) shows that with probability 3/5, it was 8, and the other envelope contains 256; with probability 2/5 it was 9, and the other envelope contains 1024. The expected value of the contents of Envelope B is ((2/5)*2+(3/5)*(1/2))*512 = (11/10)*512 > 512. Would you switch? Do you realise that whatever you saw in Envelope A, you would be advised to switch? So there was no need to look in my envelope at all, I would have switched anyway! Please think about this carefully, and explain the paradox. According to Nalebuff, this is the final and most critical variant of the paradox. Richard Gill (talk) 08:47, 7 January 2012 (UTC)
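Richard's 3/5 and 2/5 figures (the "exercise") can be verified by brute force (a sketch of the Broome recipe as stated above; the trial count and seed are arbitrary):

```python
import random

random.seed(4)

def broome_pair():
    # Toss a coin with P(Head) = 1/3 until the first Head; count the Tails.
    r = 0
    while random.random() >= 1 / 3:
        r += 1
    return 2**r, 2**(r + 1)  # the two envelope contents

count_256 = count_1024 = 0
for _ in range(1_000_000):
    small, large = broome_pair()
    a, b = (small, large) if random.random() < 0.5 else (large, small)
    if a == 512:                     # condition on finding 512 in Envelope A
        count_256 += (b == 256)
        count_1024 += (b == 1024)

frac_256 = count_256 / (count_256 + count_1024)
print(abs(frac_256 - 3 / 5) < 0.02)  # True: P(B=256 | A=512) is about 3/5
```

The conditional expectation then comes out as (2/5)*1024 + (3/5)*256 = 563.2 = (11/10)*512, matching the calculation above.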


 * You are not really playing a game since you can only win; you have not paid to participate. How much would you be prepared to pay to enter this game? How will you be sure that iNic will always play by the rules? Will he always be able to? Martin Hogbin (talk) 16:22, 8 January 2012 (UTC)


 * Those are two very good questions, Martin! I too think that the game should be played with real money and with rules agreed on in advance, including an entrance fee. Including agreement on the sanctions if either player defaults. I have earlier argued that if a real casino were to offer this game they would truncate the number of coin tosses, and that the amount of money they would charge as entry fee would depend critically on the truncation level. Truncating the game at so many coin tosses turns it into TEP with finite maximal amounts of money in the envelopes, so any paradox can be resolved. I do think that a truncation level and corresponding entrance fee can be fixed which would a) make the game attractive to a gambler who just wants to play once or twice, and b) at the same time be attractive to the casino in the sense that they have a good chance of making a steady income from the game. Just like other games at a casino, where individual players play against the house (casino=casa=house?). I think the reason that typical casino games are attractive both to the house and to players is because the house plays many many times against many players, so the long run average is all that is important to them, provided they have enough capital that the chance of going bust is really small. On the other hand, individual players have relatively small capital and only play a few times. They get thrills from taking risks, so there is positive utility in playing even if they lose. And if you are poor, the relative utility of a huge amount of money to a small amount is much larger than the relative cash values. That's because once you are rich you can easily get even richer, while if you are poor, you stay poor even if you gain some small amount of money. This is why ordinary people buy lottery tickets. There are similar reasons why ordinary people take out insurance. At the same time, both lottery companies and insurance companies make money. 
They've found win-win situations. On the other hand, if just two people play together, just once, the long run is not important for either person, and both persons' personal utilities will play an important role in determining whether or not they can both agree to some set of rules and play the game together. I am pretty sure that in a once-off situation, expectation values are not particularly important. Richard Gill (talk) 18:33, 8 January 2012 (UTC)
 * I am sure that you see the points that I was making. The Broome version has the same problem as the standard TEP.  Think of any arbitrarily large sum of money and there is a finite chance that it will be required to be put in the first envelope.  The game therefore cannot be played for real unless the proposer has unlimited resources (or is a banker).  The 2/3 ratio between the probabilities of consecutive sums (hence 2/3 and 3/5) is just a diversion. Martin Hogbin (talk) 22:45, 8 January 2012 (UTC)

Anyone want to take this game on
This is for real money. I will guarantee to go into any casino and come out with more money than I went in with, all fairly and properly won according to the rules of the casino. Anyone want to bet me £100 that I cannot do this? We would need to do things properly and have a neutral umpire to hold the funds until the bet is decided. Any takers? Martin Hogbin (talk) 16:22, 8 January 2012 (UTC)
 * Too easy. If you make a single bet on most (but not all) of the numbers on a roulette wheel, you will normally come out ahead. --New Thought (talk) 09:47, 9 January 2012 (UTC)
 * Casinos make money. A small number of people come out of a casino with more money than when they went in, most come out with less, and the casino almost always gains. I don't think the people who came out with more were smarter. I think they were just luckier. So it seems to me safe to bet that the casino wins. But my utility of money is not linear. I can't risk £100, even for you, Martin. I'll happily risk £15. But we didn't talk about how much money you take into the casino. Now it's a theorem (for roulette) that your optimal play, if you want to increase your capital by just £1, is to bet precisely the amounts, at the longest odds, which will give you a net overall gain of £1, at every separate play, till either you achieve your target or go bust. It's clear from this that the more money you have initially, the bigger the chance that you can increase it by £1 before going bust. If you have an initial capital of £N, then at best you have a chance of N/(N+1) of eventually increasing it by exactly one, and a chance of 1/(N+1) of eventually going bust, if you play with the optimal strategy. So someone who has a large capital has a very large chance of being able to increase it at a casino, against a small chance of losing it all. Therefore, before I make a bet with you, Martin, I have to have an idea whether you are very rich or very poor. Because if you are a multi-millionaire, you could go to the casino with £1 million (that would be peanuts for you) and follow the optimal strategy for a net gain of £1. You'd almost certainly manage to do that. The chance you'd lose that million (peanuts for you) is about one in a million. So I would demand that you go into the casino with less than £100. In that situation I'm willing to bet £15 that you won't increase your £100. With large probability, if you lose at your first attempt to increase your £100, I'll eventually win. While if you win the first time, I've obviously lost. So essentially I'm betting against your first bet. 
Richard Gill (talk) 19:03, 8 January 2012 (UTC)
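Richard's N/(N+1) figure is the classical gambler's-ruin probability. A quick simulation (a sketch, assuming an idealised fair even-money game with no zero, where betting one unit per round is the "bold" bet when the target is one unit away) reproduces it for small capitals:

```python
import random

def p_gain_one(capital, trials=100_000, seed=2012):
    """Estimate the chance of turning `capital` into `capital + 1` before
    going bust, betting one unit per round on a fair even-money game."""
    rng = random.Random(seed)
    target = capital + 1
    wins = 0
    for _ in range(trials):
        c = capital
        while 0 < c < target:
            c += 1 if rng.random() < 0.5 else -1
        if c == target:
            wins += 1
    return wins / trials

# Gambler's-ruin theory predicts N/(N+1): e.g. 4/5 for N=4, 9/10 for N=9.
```

For N = 4 the estimate comes out near 0.8 and for N = 9 near 0.9, matching N/(N+1); a real wheel's zero pushes both figures slightly down.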
 * I would be happy to go in with less than £100, and I would guarantee to come out with more than I went in with, and we can repeat the bet as many times as you want. Martin Hogbin (talk) 22:47, 8 January 2012 (UTC)
 * One time is plenty for me. OK, so now we need an umpire. Richard Gill (talk) 17:21, 9 January 2012 (UTC)
 * You might like to look at New Thought's reply above before going any further. It was not what I was going to do but it would work. Martin Hogbin (talk) 17:43, 9 January 2012 (UTC)
 * Indeed, I had not thought enough about this. Brilliant proposal by New Thought. But looking at my calculations again, I already knew that with an initial capital of £100 you have a very big chance of achieving your goal of £101, if you play wisely; maybe something like 97% at roulette. So I'm only willing to bet at interesting odds. For instance, I'd certainly wager £1 against another £100 of yours that you won't succeed in increasing £100. But I don't think you'll accept that. Richard Gill (talk) 19:44, 9 January 2012 (UTC)
 * I was planning to play a Martingale (betting system) on red at roulette. With a minimum stake of £5, I could afford to double up until I reached a £40 bet, giving me (ignoring the zero(s)) a 15/16 chance of coming back with a gain of £5 (but of course a 1/16 chance of losing £75): essentially a fair (except for the casino's edge) odds-on bet of £75 at 1:15. You were going to give me even money on this bet!


 * As I am sure you realise, this is the game you are playing with iNic (and the game the bankers have been playing with our money for the last few years): a win every time is assured, until it all goes horribly wrong. Martin Hogbin (talk) 20:46, 9 January 2012 (UTC)
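Martin's doubling arithmetic checks out exactly. A sketch with exact fractions (assuming, as he does, even-money spins with the zero ignored):

```python
from fractions import Fraction

stakes = [5, 10, 20, 40]        # doubling up from the £5 minimum stake
p_lose_spin = Fraction(1, 2)    # red/black, ignoring the zero(s)

p_bust = p_lose_spin ** len(stakes)   # lose all four spins in a row
total_loss = sum(stakes)              # £75 gone after four losses
gain = stakes[0]                      # any win recoups all losses plus £5

expected_value = (1 - p_bust) * gain - p_bust * total_loss
# p_bust = 1/16, total_loss = 75, expected_value = 0:
# a fair odds-on bet of £75 at 1:15, exactly as stated.
```

The zero(s) would make the expected value slightly negative, which is the casino's edge he mentions.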


 * Your martingale strategy is indeed the optimal strategy according to Lester Dubins's "bold play theorem", according to which, if you need to grow your capital to a certain target at a roulette table, you should every time bet the amount which allows you to reach your target in one go, or, if you don't have enough to do that, bet everything. Indeed this is what bankers have been doing. Every year they got huge bonuses for playing risky games with our money, till the bubble burst, but that didn't hurt the individuals who had been getting richer every year. But it did hurt us. Richard Gill (talk) 15:06, 12 January 2012 (UTC)

No we are not playing a Martingale game. We are playing the Broome variant of TEP which doesn't depend on any theory that can be used for fooling a casino. BTW, contrary to popular belief it is indeed possible to repeatedly play a Martingale game and fool a casino over and over any number of times, without ever running into any disasters. But that is another story. iNic (talk) 22:40, 9 January 2012 (UTC)


 * What then is the maximum sum you might require to put in the first envelope according to the Broome formula? Martin Hogbin (talk) 22:44, 9 January 2012 (UTC)


 * Martin is not fooling the casino. If the casino has thousands of players like Martin, the casino will make a profit. Martin was only fooling me because I did not do my sums right and forgot the bold play theorem. Richard Gill (talk) 15:09, 12 January 2012 (UTC)

What is the maximum number we can encounter for a normally distributed random variable? iNic (talk) 03:25, 10 January 2012 (UTC)


 * There is no upper limit. Martin Hogbin (talk) 09:10, 10 January 2012 (UTC)

OK so please go ahead and prove that the Broome variant of TEP is a Martingale. I still don't get it. iNic (talk) 11:05, 10 January 2012 (UTC)


 * You are the envelope filler and you are following the Broome formula. You toss a (three-sided!) coin and, depending on the fall of the coin, you either double the sum you need to put in and toss again, or stop and put the money in the envelope.
 * How much money do you need to have to guarantee to do this every time? There is no limit. Or to put it another way, whatever sum of money you have available, it will not be enough to guarantee to fulfill the Broome prescription.
 * It is exactly the same as playing a Martingale in a casino. You can only guarantee to win if you have unlimited funds and the house has no limit. Martin Hogbin (talk) 16:48, 10 January 2012 (UTC)
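The filler procedure described above can be sketched directly: keep "doubling and tossing again" with probability 2/3, so the smaller amount is 2^n with probability 2^n/3^(n+1). This is an assumed Python rendering of that procedure, plus the expectation sum that shows why no finite bankroll suffices:

```python
import random

def broome_smaller_amount(rng):
    """One run of the Broome filler: with probability 2/3 double and toss
    again, otherwise stop.  Returns the smaller amount 2^n, which occurs
    with probability (2/3)^n * (1/3) = 2^n / 3^(n+1)."""
    n = 0
    while rng.random() < 2/3:
        n += 1
    return 2 ** n

def expectation_partial_sum(terms):
    """Partial sums of E[smaller amount] = sum over n of 2^n * 2^n/3^(n+1)
    = sum of (1/3)*(4/3)^n -- the terms grow, so the sum diverges."""
    return sum((1/3) * (4/3) ** n for n in range(terms))
```

Each successive term of the expectation sum is 4/3 times the last, so the bankroll needed to cover the filler's expected outlay is unbounded.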

But it's exactly the same situation for the normal distribution or any other distribution with unbounded support. The most common distributions have unbounded support. You have a problem with that? These distributions are used in modeling and prediction of real situations all the time without any disasters. And this without any arbitrary truncations of their support. You seem to see a problem that doesn't exist, and thus doesn't need a fix. iNic (talk) 22:22, 10 January 2012 (UTC)


 * Whether a given distribution is valid or not depends on what claims you are making for it. Your claim was that you could play the Broome game for real.  Clearly you cannot do this. Martin Hogbin (talk) 22:49, 10 January 2012 (UTC)

Of course I can. I'm playing it right now, didn't you know? And you can play it too. Science is very democratic in that sense. If your argument were valid then no one would be able to apply any of the most important probability distributions for real. If you could ban all real uses of these distributions in society to force your argument to become true, society as we know it would fall apart. I'm quite happy that I live in a society where you don't set the rules. iNic (talk) 03:55, 11 January 2012 (UTC)


 * That was exactly my point about the Martingale. I was offering to play the Martingale and win.  I could do it for real - until I ran out of money.  It is no different for you, one day you will not have enough money.  Thus you are not actually playing the Broome game but a truncated version of it.  We have to guess where you will truncate it.


 * Confusing. Yesterday you said that "Clearly you cannot do this." Now you admit that it is possible to play the Broome game. I'm playing the Broome game now with Richard and to do that you say you have to guess where I will truncate it. Please let me know what guess you have made for me. iNic (talk) 11:58, 11 January 2012 (UTC)


 * No, it's quite simple: you are not playing to the Broome formula, you are playing to the truncated Broome formula. We have to guess your truncation point but, so long as there is one, there is no advantage in switching, because I risk having your maximum-value envelope in my hand and thus would lose heavily by switching. Martin Hogbin (talk)


 * Aha, I think I understand your point now. You are making a distinction between actual infinity and potential infinity. Is that correct? The ancient Greeks only accepted potential infinity, not actual infinity. It seems that you have exactly the same view. You can imagine any finite quantity, however big, but you can't imagine an actual infinity. How many points do you think there are on the real line between, say, 0 and 1? iNic (talk) 15:00, 11 January 2012 (UTC)


 * I do not think I am making such a distinction. It is better to say that you will require unlimited funds to play the Broome game and you do not have unlimited funds, even in imaginary money. Martin Hogbin (talk) 17:18, 11 January 2012 (UTC)

Not a pointless game

 * OK, let's say that you participate in a competition where the aim is to collect as many points as possible. You collect points by opening envelopes that have been filled with points according to the Broome distribution. You always have two envelopes to choose from, where one envelope contains twice as many points as the other. You are allowed to look into the envelope you select before you decide if you want to collect the points in the other envelope instead. If the envelope you open only contains one point (1/6 of the cases) then of course you should switch to the other one, which for sure will contain two points. But the most important decisions for you will be when the envelopes contain a huge number of points. There is no upper limit on the number of points that an envelope can contain. You and the other contestants will play the game over and over thousands of times, so a good strategy will for sure beat a bad strategy. When the competition is over the one with the most points will win a nice car. What strategy will you have? If you say that this is an impossible game and refuse to participate you will for sure not win the car. iNic (talk) 21:03, 11 January 2012 (UTC)


 * Looks like I go home with a goat then. To ask for a strategy for an impossible game is pointless. You might as well ask what my strategy would be for a game in which there are two envelopes and each envelope contains twice the sum in the other one. Martin Hogbin (talk) 23:16, 11 January 2012 (UTC)


 * This game is in no way impossible. Why would this be impossible? You're afraid we will run out of points? Don't worry, that will never ever happen. We can invent as many points as we want and need, forever and ever. Or do you postulate that this game is impossible simply because you don't have a good answer? iNic (talk) 00:08, 12 January 2012 (UTC)


 * Yes I am afraid you will run out of money, because when you do I will lose a lot of money, making my decision to swap a disaster.


 * I do not understand what you mean by playing this game 'for real'. If we use real money, or even if we have to write an exact sum of money down in some way, there will come a point when you will run out of money.


 * It is exactly the same as playing a martingale. With real money I could quickly go bust; if we use imaginary money I could probably play for my entire life and win every time, but there would eventually come a time when I would lose. This might take a very long time, but when it happened, it would, on average, wipe out all my winnings. It is no different for the Broome envelope filler. Martin Hogbin (talk) 09:34, 12 January 2012 (UTC)


 * Have you read and understood the point game? I don't think so. No money is involved there. It's a game for collecting points. Not money. No entrance fees. One prize: a car. Don't worry, you won't go bust. It's a free game. Nothing can wipe out your points. But you have to play wisely to get the car in the end. iNic (talk) 10:38, 12 January 2012 (UTC)


 * That is not really the TEP but it makes no difference. My worry now is that the number of points in the second envelope will be so large that it cannot be represented in any way, thus my total becomes null and void or, at best, undefined.  My competitor, who sticks, will get a very large win, which I will miss out on.  Game over for me.  Martin Hogbin (talk)


 * This is exactly TEP. Your idea on how to escape TEP here is very interesting. Which number is so big that, when doubled, it can't even be represented? What about putting the characters "2*" (two times) in front of the number of points in the first envelope? That can always be done. So you don't have to worry about representation at all. You need to come up with something a little bit more creative to escape the paradox. iNic (talk) 17:36, 12 January 2012 (UTC)


 * I do not need to be any more creative. If the first envelope contains the largest sum that can be represented in any way then, by definition, it is not possible to represent twice this sum in any way. Martin Hogbin (talk) 22:09, 12 January 2012 (UTC)


 * Please tell us what the largest number is that can be represented in some way. Graham's number is quite big, but it can nevertheless easily be represented. Numbers so big that they can't even be represented in any way are truly mind-boggling. Anyway, let's say X is this largest number that can be represented. Now, why would "2X" not be a valid representation of twice X? iNic (talk) 01:04, 13 January 2012 (UTC)


 * To answer your last question first. If X is this largest number that can be represented then by definition no larger number than X can be represented. 2X is larger than X so it cannot be represented or it would mean that X was not the largest number that could be represented.


 * Don't you realize that "2X" is the very representation you say can't exist? It's a reductio ad absurdum of your claim. iNic (talk) 00:53, 14 January 2012 (UTC)
 * Of course 2X can exist, but only if X is not the largest number that can be represented. If X is the largest number that can be represented, that means that no larger number than X can be represented, obviously. What else do you think 'the largest number that can be represented' means? There is no reductio ad absurdum here, just a limit: a real number, but a very large one. We assume that larger numbers exist but we have no way of representing them. Martin Hogbin (talk) 18:05, 14 January 2012 (UTC)


 * I completely agree that 'Numbers so big that they can't even be represented in any way is truly mind boggling'. This is why we tend to put them out of our minds, but we have to take them into account in our calculations. Martin Hogbin (talk) 10:15, 13 January 2012 (UTC)


 * How about representing different infinities on paper, as Cantor did? Is that nonsense to you? If so, all math must be nonsense to you, as it all rests upon Cantor's set theory. iNic (talk) 00:53, 14 January 2012 (UTC)


 * This has nothing to do with different infinities. Money and points are both countable.  You must put a definite sum in the envelope, an actual number.


 * No, points are not always countable. The number of points from 0 to 1 on the real line is uncountably infinite, for example. The number of integers is indeed countable: countably infinite. Every kid knows that there is no biggest number, because postulating one leads immediately to contradictions. Your dilemma now is that, according to your own explanation of TEP, admitting that there is no largest number also leads you into contradictions. Sad situation indeed. iNic (talk) 01:37, 16 January 2012 (UTC)
 * Nobody envisages there being transcendental numbers of points in the envelopes and there is no way this can happen with the Broome formula, starting at 1. Martin Hogbin (talk)

St Petersburg

 * As Richard says, this is just the St Petersburg lottery. I can play that with you if you like, how much will you pay me for a ticket? Martin Hogbin (talk) 09:23, 11 January 2012 (UTC)

Broome TEP is not SPP. If Broome TEP were the same as SPP, some of the solution proposals for SPP could be helpful for Richard now when he's trying to find an argument for switching envelopes or not. But none of the ideas developed during the last 300 years for solving SPP is of any help to him when playing Broome TEP. iNic (talk) 11:58, 11 January 2012 (UTC)


 * I disagree. Many of the arguments used to resolve SPP *have been used* to resolve Broome TEP. See Nalebuff. I think that the reason there are a number of *different* arguments is due to the fact that conventional decision theory (or economic choice theory) is neither prescriptive nor descriptive. On the one hand it is not clear that the axioms of rational choice really are rational, nor is it clear that people really do act on them. So it seems a matter of taste how you resolve SPP or Broome TEP, but most people can find a resolution which satisfies them, and it will work for both paradoxes. Richard Gill (talk) 14:34, 12 January 2012 (UTC)

So why don't you use any of these arguments now when you play Broome TEP with me? You ignore all these arguments and say on the contrary that decision theory is useless in practice. You say in effect that you have two different states of mind: either your mind is "within" classical decision theory, and in that case a Broome game can't be played "for real" (whatever "real" means in this state of mind), or your mind has escaped the classical decision theory framework, and now suddenly a Broome game can be played for real (where "real" here obviously must mean something else than in the first case). While your mind is outside the theory you are a free man and can speak out and say that decision theory is just crap, which you have done. The situation is somewhat reminiscent of an atheist trying to solve a theological problem. When solving the theological problem the atheist must of course believe in a god. But why try to solve a theological problem if you are an atheist in the first place? That is just crazy. Leave these problems to those who really believe. Same with decision theory. iNic (talk) 17:36, 12 January 2012 (UTC)


 * If you want to really believe that games like the Broome game involving infinities can be played for real, then there are far more dramatic paradoxes than the TEP. Let us put lumps of gold in the envelopes in place of cash. I can double my gold using just one envelope by cutting it into a finite number of pieces and then reassembling them into two solid pieces, each of the original size. See the Banach–Tarski paradox. Of course you cannot do this in real life, any more than you can play a game requiring an infinite bankroll. Martin Hogbin (talk) 10:36, 13 January 2012 (UTC)

OK, so if you decide to go from point A to point B you have to go half the distance first. When there, you need to traverse half of what is left. And so on, ad infinitum. After an actual infinity of steps you finally reach B. But according to your philosophy of mathematics nothing at all in real life can be actually infinite. If this is correct then it's physically impossible to go from A to B. And as these two points are arbitrary, this proves that you can't move anywhere. All movement must be illusory. I assume you agree. iNic (talk) 00:53, 14 January 2012 (UTC)


 * I am not sure what Zeno's paradox has to do with the TEP. In mathematics we can, with care, have infinite numbers of infinitesimal quantities; that is how calculus works. But you cannot have an infinite quantity of real money, or even a real physical representation of an infinite number. However large your piece of paper or computer memory, you eventually run out of space. We can imagine that numbers larger than the largest that can be physically represented exist, but we cannot, by definition, represent them in the real world. You therefore cannot play a game involving a physical representation of an infinite quantity in the real world. Martin Hogbin (talk) 18:20, 15 January 2012 (UTC)

But I already told you that to simply move from A to B you have to physically traverse infinitely many steps. An actual infinity of steps, not just a potential infinity! Let's transform this mundane everyday physical activity of moving into a game. Let's say you are in a competition where the goal is to come as far from B as possible, on the path from A to B. You are offered one of two tickets that will magically move you to one of the infinitely many steps towards B: $$ \left\{ \frac{1}{2},  \frac{1}{2} + \frac{1}{4}, \frac{1}{2} + \frac{1}{4} + \frac{1}{8}, \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16},  \cdots  \right\}$$. The two tickets go to two adjacent but otherwise randomly chosen steps. You have to pick one of the two offered tickets but you don't know which one will move you the least. So you just take one of the two at random. Let's say you selected the ticket that took you to step 3, which is 1/8 from B. Now you are offered to take the other ticket instead. You know that the other ticket will move you to step 2 or step 4, but you have no clue which is the more likely alternative, so you assign both alternatives probability 1/2. If the other ticket will move you to step 4 you will end up 1/16 from B. If the other ticket will move you to step 2 you will end up 1/4 from B. So either you will move 1/16 closer to B by switching or else you will move 2/16 further away from B. So either you gain 2/16 or else you lose 1/16. Your reward is directly proportional to the distance from B, which means that you can win twice what you can lose. Therefore it makes perfect sense to take the other ticket instead. But if you had taken the other ticket from the beginning, an analogous reasoning would come to the same conclusion: to swap. This version of TEP doesn't require an infinitude of resources that we don't have. It is played on a finite (bounded) section of the real line and we only use trivial and uncontroversial properties of the real numbers. 
iNic (talk) 15:28, 16 January 2012 (UTC)
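The gain/loss arithmetic in the ticket game above can be checked with exact fractions; the 1/2–1/2 weighting of the two alternatives is iNic's assumption, carried over here:

```python
from fractions import Fraction

d = Fraction(1, 8)            # observed distance from B (step 3)
d_nearer = d / 2              # the ticket to step 4: 1/16 from B
d_farther = d * 2             # the ticket to step 2: 1/4 from B

loss = d - d_nearer           # switching may move you 1/16 closer to B
gain = d_farther - d          # ... or 2/16 further from B
expected_other = (d_nearer + d_farther) / 2   # assumed 1/2-1/2 weighting

# gain = 2 * loss, and the other ticket is worth 5/4 of the observed
# distance -- the same 25% edge as the monetary TEP argument.
```

So the reasoning really does reproduce the TEP expectation step on a bounded interval, which is the point of the example.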


 * What is the average value, in distance, of a ticket? Martin Hogbin (talk) 16:05, 16 January 2012 (UTC)

Somewhere between zero and the full distance between A and B. It is for sure not infinite. iNic (talk) 16:39, 16 January 2012 (UTC)


 * There is an infinite number of tickets and a finite distance, so the average distance of a ticket is 0. Paradoxes of infinity (the number of tickets here) abound. That does not mean that everything to do with infinity is a paradox. Martin Hogbin (talk) 23:39, 17 January 2012 (UTC)

I'm baffled. Your new ad hoc escape plan, as you can't blame it on any infinite expectations anymore, is to blame it all on the density of points on the real line! So how can we escape paradoxes when using the real line in the future? Can we use it at all? Maybe we should replace it with a real line consisting only of a finite number of points? How would that be constructed? It's all very funny because this very topic was one of the topics a non-mathematician and a mathematician quarreled about in 1654 that eventually led to the theory of probability. After the quarrel the mathematician complained in a letter to a fellow mathematician about the non-mathematician: "... although he is very clever, he is not a mathematician, which, as you know, is a big flaw." And he continued "He even can't grasp that a mathematical line can be divided indefinitely but instead imagines the line as consisting of a finite number of points." iNic (talk) 10:50, 20 January 2012 (UTC)


 * So am I baffled. I clearly say above, 'There is an infinite number of tickets', thus I agree that there is an infinite number of points in the real line. In your example we are only considering rational numbers, so the number of points is countable. Martin Hogbin (talk) 11:11, 20 January 2012 (UTC)

Yes, they are rational numbers, meaning we are dealing with a countably infinite number of points. That's the smallest infinity there is, and as you have problems with that you probably have problems with infinities of greater cardinality. So can you please tell me your solution to this problem again? I didn't get it. iNic (talk) 12:16, 20 January 2012 (UTC)


 * I do not have any problems with infinity, but I am aware that there are many paradoxes of infinity. If you prefer, all these paradoxes are unresolved, but they are no more interesting than noting that 1/0 = 2/0. In your example the average ticket has zero value. Thus my initial expectation for both envelopes is 0, thus there is no reason to swap. Martin Hogbin (talk) 22:56, 20 January 2012 (UTC)

OK this is really really bad news. Decision theory apparently breaks down, not only when the expected value is infinite or for games that require infinite resources, but also for finite games where the expected value is simply zero! This strikes right at the heart of rational decision theory and game theory. A fair game is defined as a game with zero expected value and for rational decision theory the crucial question is always: "Is the expected value (or utility) for my action above or below zero?" So the question is, can we trust decision theory and game theory at all now? As every finite expected value E(X) > 0 can easily be transformed into an expected value that is E(X') = 0 by a simple shift of the unit of measurement, X' = X – E(X), there are no safe values. iNic (talk) 12:51, 21 January 2012 (UTC)


 * Not really. The point is that the expectation for both envelopes is 0 so, there is no reason to swap even with your transformation. Martin Hogbin (talk) 13:18, 21 January 2012 (UTC)

Sure, based on the obvious symmetry the unconditional expectation is the same (finite) number for both tickets. But we are talking about the conditional expectation now. Conditional on how far you travel using the first ticket, the expected value of the other ticket is larger. Until now, you and others have said that this paradoxical situation can only happen in case the unconditional expectation is infinite. Have you already forgotten? But now we see that this paradoxical situation is possible whatever the unconditional expectation is; it doesn't have to be infinite at all. TEP is thus not related to infinities, which means that we can't blame it on "the strange properties of infinity" anymore. So you need to explain exactly when and why decision theory breaks down, without blaming infinities. iNic (talk) 15:08, 21 January 2012 (UTC)


 * iNic, you are forgetting that we are told that the amounts of money are positive. So we know their expectation values are positive (possibly plus infinity). We cannot shift the unit of measurement. But if you like we can generalize. Theorem. Let A, B be a pair of random variables whose joint distribution is symmetric under exchange of the pair, and suppose that for all a, E(B|A=a) > a. Then E(A) is either plus or minus infinity, or not defined (+infinity-infinity). Proof. I prove the theorem by contradiction. Suppose E(A) is finite. It follows then that 0=E(B)-E(A)=E(B-A)=E(E(B-A|A)). But the assumption E(B|A=a) > a for all a is equivalent to E(B-A|A)>0 with probability 1, and that implies that E(E(B-A|A))>0. We have been led to a contradiction. My assumption that E(A) is finite is therefore invalid. Richard Gill (talk) 14:04, 22 January 2012 (UTC)
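The theorem's hypothesis can be checked exactly for the Broome game itself (a sketch with exact fractions; `p_pair` encodes the Broome prior discussed earlier, and the function names are illustrative):

```python
from fractions import Fraction

def p_pair(n):
    """Broome prior: the pair is (2^n, 2^(n+1)) with probability 2^n / 3^(n+1)."""
    return Fraction(2**n, 3**(n + 1))

def cond_exp_other(n):
    """E(B | A = 2^n), with A equally likely to be either member of its pair."""
    if n == 0:
        return Fraction(2)              # 1 is always the smaller amount
    p_lo = p_pair(n - 1) / 2            # A is the larger of (2^(n-1), 2^n)
    p_hi = p_pair(n) / 2                # A is the smaller of (2^n, 2^(n+1))
    return (p_lo * 2**(n - 1) + p_hi * 2**(n + 1)) / (p_lo + p_hi)

# For every n >= 1 this gives (11/10) * 2^n, so E(B|A=a) > a for all a,
# and by the theorem E(A) cannot be finite.
```

The conditional weights work out to 3/5 on the smaller partner and 2/5 on the larger, giving the well-known 10% expected gain from switching at every observed amount.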

What? Do you really claim that the rewards are not positive in this game? Well, none of the rewards are negative and not a single ticket leads to a reward that is zero. No ticket will take you the full distance from A to B. This means that all tickets indeed give a strictly positive reward. Who fooled you into believing that all rewards being strictly positive implies that the expectation must be strictly positive too? How can you make this embarrassing mistake? I'm baffled. iNic (talk) 08:35, 24 January 2012 (UTC)

Then you claim that we can't shift the unit of measurement. This statement is, if possible, even more strange. Of course we can. This is precisely what we do when we have a game with expected value greater than zero and are told that we have to pay the expected value to play. This shifts the scale so that the game becomes 'fair,' which in the classical theory is the same as having zero expected value. Or are you saying that we are no longer allowed to have games with zero expected value? That would be a fatal stab in the heart of decision theory. I still don't understand your explanation of why the ticket game leads to a paradox. You only present a proof of some theorem no one has asked for. I suppose this theorem of yours is as irrelevant as the theorem that "proved" that the expected value always needs to be infinite, which obviously is wrong. The ticket game reproduces TEP perfectly without having an infinite unconditional expectation. Up until now this was considered mathematically impossible by you and some other scholars, because you had "proved" it. iNic (talk) 08:35, 24 January 2012 (UTC)

As I know that your memory can be a little weak sometimes, this is what Martin asked you on this page 10 days ago:
 * "Is it well-known, or obvious, that in any Broome-like version of the game that if there is an advantage in switching for all finite initial values then the expectation must be infinite?"

Your answer was this:
 * "Well known and obvious to mathematicians. It is mentioned in many of the publications on TEP. And: I just wrote you the complete proof!"

You repeat the useless proof and add a comment in the arrogant style many mathematicians have adopted since the time of Pascal:
 * "Please take the trouble to learn some elementary probability theory so that you can at least read simple mathematical proofs. See my essay on probability notation User:Gill110951/Probability notation."

iNic (talk) 08:56, 24 January 2012 (UTC)

An interesting observation
Take a game where an envelope is filled with a sum F(n) with probability P(n), where these are both functions of n. Consider the case as n tends to infinity. If (P(n+1)F(n+1))/(P(n)F(n)) > 1 then it is always worth switching. If this is the case, however, the expectation must, by the ratio test, be infinite. Martin Hogbin (talk) 09:41, 11 January 2012 (UTC)
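For the Broome scheme the ratio condition above can be checked with exact arithmetic (a sketch, taking P(n) to be the probability that the smaller amount is F(n) = 2^n):

```python
from fractions import Fraction

def P(n):
    """Broome probability that the smaller amount is 2^n."""
    return Fraction(2**n, 3**(n + 1))

def F(n):
    """The smaller amount itself."""
    return 2**n

ratios = [(P(n + 1) * F(n + 1)) / (P(n) * F(n)) for n in range(10)]
# Every ratio equals 4/3 > 1: the terms P(n)F(n) of the expectation sum
# increase without bound, so the expectation is infinite.
```

Since the terms of a convergent series must tend to zero, a constant ratio above 1 rules out any finite expectation.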


 * So if you are given the opportunity to take one of two envelopes filled according to a scheme like that, you would always switch to the other envelope? iNic (talk) 11:58, 11 January 2012 (UTC)

I would first want to be sure that the filler of the envelopes had unlimited cash. Martin Hogbin (talk) 13:51, 11 January 2012 (UTC)


 * OK and if the filler of the envelopes has unlimited cash, would you switch whatever you saw? iNic (talk) 15:00, 11 January 2012 (UTC)

Am I allowed to open my envelope before deciding? Martin Hogbin (talk) 16:28, 11 January 2012 (UTC)


 * Sure. iNic (talk) 21:03, 11 January 2012 (UTC)

And what sum do I see in my envelope when I open it? Martin Hogbin (talk) 23:16, 11 January 2012 (UTC)


 * Some amount compatible with your F(n) and P(n). iNic (talk) 00:08, 12 January 2012 (UTC)


 * Martin, it is not clear to me what you mean by "take a game where an envelope is filled with a sum F(n) with probability P(n)". Are you talking about TEP games? Is P(n) the marginal probability distribution of the amount of money in Envelope A, or the probability that the smaller of the two amounts is F(n)? In the Broome game we have a probability 2^n/3^(n+1) to put 2^n in one envelope, 2^(n+1) in the other (n=0,1,...), thereafter a random envelope is selected and called Envelope A. The probability of finding 2^n in Envelope A, for n>0, is (2^n/3^(n+1) + 2^(n-1)/3^n)/2 = (5/6)·2^(n-1)/3^n. Richard Gill (talk) 14:42, 12 January 2012 (UTC)
 * I am thinking of a Broome-style game and hoping to show that it is only always worth swapping if the expectation is infinite. let me think some more. Martin Hogbin (talk) 18:46, 12 January 2012 (UTC)
 * That is not quite right, more thought required. Martin Hogbin (talk) 23:51, 13 January 2012 (UTC)
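Richard's marginal probability for Envelope A can be verified exactly with Python's `fractions` module (a sketch; `prob_A` is a hypothetical helper implementing the calculation he describes):

```python
from fractions import Fraction

p = lambda n: Fraction(2**n, 3**(n + 1))  # P(smaller amount = 2^n)

def prob_A(n):
    # A holds 2^n if it is the larger of pair n-1 or the smaller of pair n,
    # each with probability 1/2 given the pair
    lower = p(n - 1) if n > 0 else 0
    return Fraction(1, 2) * (p(n) + lower)

for n in range(1, 6):
    claimed = Fraction(5, 6) * Fraction(2**(n - 1), 3**n)
    assert prob_A(n) == claimed   # matches (5/6)·2^(n-1)/3^n for n > 0
print(prob_A(3))  # P(A = 8) = 10/81
```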

But if I open my envelope I see an actual figure. Tell me what I see and I will tell you whether I switch or not. Martin Hogbin (talk) 09:14, 12 January 2012 (UTC)


 * But then you have to define your F(n) and P(n). Please define these and I can tell you what you will see. iNic (talk) 10:28, 12 January 2012 (UTC)

Sorry, I was assuming that we would use the Broome formula. Martin Hogbin (talk) 13:49, 12 January 2012 (UTC)


 * I don't know because I don't know if you will open envelope A or envelope B. But let's say you open envelope A and see 512 MU, same as Richard has seen in his envelope A. He is currently struggling to find a good argument for opening envelope B. If you want to do the same perhaps you can help him out here? iNic (talk) 17:36, 12 January 2012 (UTC)

Having seen 512 and having chosen to suspend reality by accepting that the filler of the envelopes has unlimited cash there is a case for swapping. On the other hand I might still reason that any limit, even an arbitrarily large one, means that there is no advantage in swapping. I think I would swap out of idle curiosity, but I see no paradox. Even having suspended reality the argument for swapping is countered by an equally good argument not to swap. Martin Hogbin (talk) 18:38, 12 January 2012 (UTC)


 * But if I tell you that 512 is for sure not the maximum amount? Will that help? If I in addition tell you that it is equally likely the other envelope has 1024 as 256, would it make your decision even easier? Remember that you say above that if your condition is fulfilled then "it is always worth switching." Now you say that there is an equally good argument for not swapping. Which argument do you have in mind? You only mentioned your argument for switching. iNic (talk) 20:23, 12 January 2012 (UTC)

Yes, if you tell me that 512 is not the maximum amount or that it is equally likely that the other envelope has 1024 as 256 I would swap immediately. Martin Hogbin (talk) 22:05, 12 January 2012 (UTC)


 * OK good, but can you now convince Richard that this is the correct way to argue? iNic (talk) 00:40, 13 January 2012 (UTC)

I do not think Richard will need convincing that if it is equally likely the other envelope has half or twice the sum in the chosen one you should swap. This is Falk's coin tossing version and Nalebuff's Ali-Baba version. No one doubts that you should swap in that case. The point is there is no way to arrange things such that for any possible value that might be in the originally chosen envelope, it is equally likely the other envelope has half or twice the sum in the chosen one (and still maintain the symmetry between the envelopes). Martin Hogbin (talk) 10:03, 13 January 2012 (UTC)


 * Well, this is exactly what I have done. iNic (talk) 01:55, 14 January 2012 (UTC)
 * Not quite. Martin Hogbin (talk) 18:07, 14 January 2012 (UTC)


 * If the amount in one envelope is twice what's in the other, the amounts are positive and finite, and the envelope names (A, B) are assigned completely at random after the envelopes have been filled and closed, then it's impossible that conditional on what is in Envelope A, whatever that might be, Envelope B is equally likely to contain half or twice. This theorem has been known since 1989. There's a short proof in the wikipedia article. It runs as follows. Round down the amounts of money in the two envelopes to integer powers of 2. This keeps their relationship (one is twice the other) intact, and it keeps the symmetry of the situation intact. We might just as well have assumed from the start that only integer powers of 2 are possible amounts of money in either envelope. Let p(n) denote the probability, after this simplification of the problem, that the smaller amount is 2 to the power n. Some easy algebra using the definition of conditional probability and our model assumptions shows that in order that Envelope B is equally likely to contain 2 to the power n+1 or 2 to the power n-1, given that Envelope A contains 2 to the power n, it is necessary that p(n-1)=p(n). Since this holds for any n such that 2 to the power n is a possible amount of money in either envelope, it must hold for all n (positive or negative integers). But there are infinitely many integers. It's not possible to give each of them the same positive probability and still have the probabilities sum to one. Richard Gill (talk) 19:20, 22 January 2012 (UTC)
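The key algebraic step in this proof — that "half or twice equally likely, given A = 2^n" forces p(n-1) = p(n) — can be made concrete with a small sketch (Python; `cond_up` and `broome` are illustrative names, not from the sources):

```python
from fractions import Fraction

# P(B = 2^(n+1) | A = 2^n) = p(n) / (p(n) + p(n-1)), where p(n) is the prior
# probability that the smaller amount is 2^n; it equals 1/2 iff p(n) = p(n-1)
def cond_up(p, n):
    return p(n) / (p(n) + p(n - 1))

broome = lambda n: Fraction(2**n, 3**(n + 1))  # the Broome prior
print(cond_up(broome, 5))  # 2/5, not 1/2: 'equally likely' fails here too
```

For the Broome prior the conditional probability of the larger amount is 2/5 for every n > 0, never the 1/2 that the naive argument assumes.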


 * Then I have done the impossible. Again. iNic (talk) 21:33, 22 January 2012 (UTC)


 * You say you have done the impossible but you give no proof. You say you followed the Broome recipe but at the same time you are saying you didn't. On the other hand I give a proof that you cannot do the impossible, and you do not show me the error of my reasoning. So you may believe in mutually contradictory things at the same time, like Alice in Wonderland (even believe six impossible things before breakfast), but I have no reason to change my own opinion. Richard Gill (talk) 15:13, 25 January 2012 (UTC)


 * Yes isn't it cool? I followed the Broome recipe and yet I managed to increase your chances of getting the better envelope. So what you have to decide now is if you want to cash in your 512 MU or if you want to trade that for 256 MU or 1024 MU with 50% probability each. So what do you do? iNic (talk) 16:36, 25 January 2012 (UTC)


 * I want to switch. Richard Gill (talk) 19:01, 26 January 2012 (UTC)


 * OK why? iNic (talk) 09:42, 27 January 2012 (UTC)


 * Because I want to know what's in Envelope B. And I want to print the pdf and pin it to the wall of my office. Richard Gill (talk) 14:54, 28 January 2012 (UTC)


 * Hmmm I see. You are difficult indeed! Please consider the following situation instead and forget about envelope B for a while. I have tossed a fair coin and depending on the outcome put 256 MU in one envelope and 1024 MU in the other. I have labeled the envelopes HEAD and TAIL and, as before, made the envelopes into downloadable locked files to make it easy. Please download both files right away. Now I ask you: would you prefer to cash in your 512 MU you already own or would you rather go for a random pick of one of the two envelopes labeled HEAD and TAIL instead? Is one of these options more rational given that you want to get as much money as possible? I think Martin knows the answer already. iNic (talk) 01:01, 29 January 2012 (UTC)


 * Trouble is I am not so interested in money and anyway I don't believe that you are going to fix the value of 1 MU till after I've made my choice. So I think that I am rational in being indifferent to the choice, or perhaps more precisely, to letting my choice depend on what others might think extraneous features of the situation. (Remember: it is my choice, not yours; you cannot dictate my utilities and subjective probabilities.) Anyway, in the new situation, since I like taking risks (excitement has utility), I'll go for opening the new HEAD or TAIL envelope. I just tossed a coin and it came up HEADs. So that's what I want. I downloaded both files already. Richard Gill (talk) 11:03, 29 January 2012 (UTC)


 * Even if you are not interested in money you need to pretend that you are in this context. No problem, I can reveal the amount in the envelope you currently have, Envelope A. It contains approximately 1.618 dollars. This means that heads and tails will give you twice or half of this. Still you find no rational argument for not sticking to Envelope A? iNic (talk) 10:29, 30 January 2012 (UTC)


 * I am afraid that I am not rational and for many reasons I have lost patience. I want to open Envelope B, close this discussion, and move on. Write a new version of my own paper, get it published, and look at a new problem. Richard Gill (talk) 13:50, 31 January 2012 (UTC)


 * You are not as irrational as you think. It's easy for you to end this discussion, just tell me if there is a monetary advantage, on average, taking HEADs or TAILs instead of cashing in the 1.618 dollar you already have. Come on, I know you can answer this one properly. iNic (talk) 02:09, 1 February 2012 (UTC)

Role reversal for iNic
I am going to offer you the Broome game. I have filled two envelopes according to the Broome formula and you can pick one. Unfortunately I do not have much money and I can tell you that no envelope contains more than £8. Would you swap whatever sum you saw in your envelope?

You are allowed to play the game many times. Would you win more on average by swapping or sticking? Martin Hogbin (talk)


 * What's the point asking me to play a TEP variant? I'm not denying that TEP can be played, and I never did. There exist optimal strategies for the game you propose, and it is neither to always stick nor always stay. This is a poor TEP variant as it doesn't preserve anything of the paradoxical nature of TEP. So this is indeed a pointless game. iNic (talk) 00:16, 12 January 2012 (UTC)

In what way is it different from the way in which you are playing it?


 * How or in what way could your variant turn into a paradox? No way. There lies the difference. iNic (talk) 10:33, 12 January 2012 (UTC)


 * I agree. There is no TEP paradox here. Martin Hogbin (talk) 13:46, 12 January 2012 (UTC)

I am sure that we will agree on the answers but perhaps you could humour me and answer my two questions above. I have answered your questions. My scenario is undoubtedly possible. You probably know what is coming next but let us see. Martin Hogbin (talk) 09:13, 12 January 2012 (UTC)


 * I think the optimal strategy is to switch when the first envelope is low and stay when it's high. Is it the same as what you have in mind? iNic (talk) 10:33, 12 January 2012 (UTC)

I agree. So your answer to my first question is, 'No, you would not switch regardless of the sum you saw'. What about my next question? You are allowed to play the game many times. Would you win more on average by [always] swapping or [always] sticking? Martin Hogbin (talk) 13:46, 12 January 2012 (UTC)


 * Always swapping or always sticking will get you no advantage in your non-paradoxical version of TEP. iNic (talk) 17:45, 12 January 2012 (UTC)

What about if you were not allowed to look in the envelope (which I think is the standard version of the game)? Would you swap or stick? Martin Hogbin (talk) 13:46, 12 January 2012 (UTC)


 * I would stick. iNic (talk) 17:45, 12 January 2012 (UTC)

So, I have a good night at the casino playing martingales and we repeat the game with a maximum envelope value of £32. I presume you would still stick as there is still nothing to be gained by switching.

What maximum envelope value would you need before you considered swapping to give you an advantage? Martin Hogbin (talk)


 * No maximum value would make me swap. As I don't look in any envelopes I have no idea how big they are anyway. iNic (talk) 20:08, 12 January 2012 (UTC)

So we agree then, however large the envelope filler's bankroll there is no argument for swapping, and no paradox. Martin Hogbin (talk) 22:01, 12 January 2012 (UTC)


 * This is when you don't open any envelope. But if you open the first envelope and have a look there is indeed an argument for swapping, namely when the envelope contains the smallest possible amount. Then the other envelope must contain twice as much. And if you find one of the other possible amounts the other envelope contains 10% more on average, which makes for a perfect argument for switching whatever you will see. But if you know that you will switch whatever you will see you don't have to look at all and just swap directly. But now you are in the same position as when you started. iNic (talk) 00:36, 13 January 2012 (UTC)
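iNic's "10% more on average" figure is the standard conditional-expectation calculation for the Broome prior, and can be reproduced exactly (a hedged sketch in Python; `exp_B_given_A` is an illustrative helper, not from the sources):

```python
from fractions import Fraction

p = lambda n: Fraction(2**n, 3**(n + 1))  # P(smaller amount = 2^n)

def exp_B_given_A(n):
    # E(B | A = 2^n) under the Broome prior
    if n == 0:                        # smallest possible amount: B must be 2A
        return Fraction(2)
    up = p(n) / (p(n) + p(n - 1))     # P(B = 2^(n+1) | A = 2^n) = 2/5
    return up * 2**(n + 1) + (1 - up) * 2**(n - 1)

a = 2**4
print(exp_B_given_A(4), Fraction(11, 10) * a)  # both 88/5: 10% more than a
```

So for every amount above the minimum the other envelope holds 11/10 of it on average, and at the minimum it holds exactly double.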

No, the argument above applies equally if you do not look. Let us start at the beginning again.

I have filled two envelopes according to the Broome formula and you can pick one. Unfortunately I do not have much money and I can tell you that no envelope contains more than £8. You have agreed that for some sums that you might see you would swap and for some sums that you would see you would not swap. It is therefore not true that whatever sum you might see you would swap. You have also agreed that on average there is no gain in swapping. So, even if you do not look there is no argument for swapping. Increasing the envelope filler's bankroll does not change this. Martin Hogbin (talk) 10:23, 13 January 2012 (UTC)

I think my reply above might be the wrong way round but it makes no difference whether you look or not. If you look and see the smallest possible amount then, of course, you swap, and stay swapped. There is no paradox in that.


 * True, you are wrong. Of course it makes a difference if you look or not. In 1/6 of the cases you will detect that you have the smallest amount and will then switch. If you don't look you will miss this opportunity always, which makes for an inferior strategy. If I know which is the maximum amount, or if you can tell us which is the biggest conceivable number, then we know when not to switch. But the player knowing any upper limit is not part of TEP, this is all your invention. And you will never be able to provide the largest conceivable number, that will explode if we add one. That is just a silly idea. So the paradox is still with us, for sure. iNic (talk) 01:35, 14 January 2012 (UTC)
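The "1/6 of the cases" claim, and the per-round gain from looking, can be checked by simulation (a sketch assuming the Broome filling recipe; `broome_pair` is a hypothetical sampler):

```python
import random

rng = random.Random(1)

def broome_pair():
    # Sample pair index n with P(n) = 2^n / 3^(n+1), then shuffle the pair
    n = 0
    while rng.random() < 2/3:
        n += 1
    a, b = 2**n, 2**(n + 1)
    return (a, b) if rng.random() < 0.5 else (b, a)

# Gain of 'look, switch only on seeing the minimum (1)' over always sticking
rounds, gain, saw_min = 10**6, 0, 0
for _ in range(rounds):
    a, b = broome_pair()
    if a == 1:
        saw_min += 1
        gain += b - a            # +1 each time, since b must then be 2
print(saw_min / rounds, gain / rounds)  # both close to 1/6
```

Seeing the minimum happens with probability 1/3 · 1/2 = 1/6, and each such detection is worth exactly one extra unit, so the looker gains 1/6 of a unit per round over the non-looker.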

You still have to tell us exactly what paradox you think is unresolved. You have to give the exact setup and the exact line of reasoning that leads to a paradox. Do that and I will show you where your error lies. Martin Hogbin (talk) 18:10, 14 January 2012 (UTC)


 * All TEP paradoxes are unresolved. So please pick any one you want. I don't mind if we continue to discuss your example above. You contradicted yourself when you tried to explain it. Your argument rested on the assumption that it should not matter if you look in the first envelope or not. Well it does, and you discovered your fault yourself. But you never adjusted your original argument accordingly. So you still haven't shown where the error is in your own TEP version. I think that would be a great start. iNic (talk) 00:59, 16 January 2012 (UTC)


 * I see no unresolved paradoxes. If you claim there are some, it is up to you to give the precise setup and line of reasoning that leads to a paradox. There really is no point in discussing this any further unless you do that. Martin Hogbin (talk) 09:54, 16 January 2012 (UTC)

You must tell me the claimed paradoxical line of reasoning
The point about the TEP is that it proposes a situation and then a line of reasoning that leads to an absurd or paradoxical situation. The objective is to find the error in the proposed line of reasoning. I can see no line of reasoning in the Broome setup that leads to a paradox. If you claim that there is one it is up to you to propose a line of reasoning that leads to a paradox, I will then show you the error in your reasoning. Martin Hogbin (talk) 12:20, 13 January 2012 (UTC)


 * If I can show you a finite setting where the TEP reasoning can be applied, will you be able to see the paradox then? I doubt that you will be able to find the error in the reasoning, but we'll see. iNic (talk) 01:35, 14 January 2012 (UTC)


 * Martin, in the Broome situation the logic error is in Line 8: "This is greater than A, so I gain on average by swapping." The implication "so I gain on average by swapping" can be warranted for finite (unconditional) expectation values but not for infinite expectation values. It's a simple mathematical fact. And pretty obvious. On average your expectation value is infinite already. There is no way to improve your unconditional expectation value. If on the other hand the unconditional expectation value is finite, and you are able by some intervention to increase the conditional expectation under certain conditions which have positive probability to occur, then overall your unconditional expectation is increased if you indeed make this intervention in the situations you have determined as favourable. This is a mathematical theorem, it follows from the law of total probability which leads to the key relation between conditional and unconditional expectation values: E(E(X|Y))=E(X). It's an elementary result from decision theory. You can say that it is the central principle of dynamic programming, encapsulated mathematically in the Bellman equation, and in the method of backward induction. See also Markov decision process. In game theory it is the basic principle for solving multi-stage or sequential games. For instance, the Monty Hall problem has two stages for each player. Suppose in the Monty Hall problem you want to optimize your overall or unconditional chance of getting the car. Then one way to achieve this aim is by choosing the door, at the last stage of the game, which maximizes the conditional probability of getting the car. Richard Gill (talk) 07:55, 14 January 2012 (UTC)


 * I understand that in the Broome example played in theory the (unconditional) expectation is infinite (as is the expectation of the conditional expectation over all values in your envelope). Thus, to say you gain by increasing it by some factor is meaningless. As I said earlier somewhere, twice infinity is infinity. This resolution applies equally to the standard TEP (which I also say somewhere above) so in that respect the Broome distribution adds nothing new to the subject.


 * iNic is claiming to be playing the game for real. My point is that the game cannot be played for real because the envelope filler has to fulfill the infinite expectation of the player and no finite bankroll is sufficient to do this.


 * My real argument with iNic is that I (like you I believe) think that all versions of the TEP paradox have been properly resolved. If anyone claims that there is an unresolved version of the paradox it is up to them to clearly state the exact setup and then the line of reasoning that lead to a paradox. Until iNic, or someone else, does this there is, in my opinion, nothing to resolve.


 * Finally I think it can be shown that for any Broome-like distribution the expectation must be infinite if it is to the player's advantage to swap for any finite sum that they might hold. Martin Hogbin (talk) 16:40, 14 January 2012 (UTC)


 * iNic, please show me the, 'finite setting where the TEP reasoning can be applied'. Martin Hogbin (talk) 18:15, 14 January 2012 (UTC)
 * I'm in agreement with you Martin. In particular, if E(B|A=a) > a for all a, or in common shorthand, E(B|A) > A, it follows by taking expectation values again on both sides of the inequality, that E(B) > E(A) *or* that E(B) = infinity. But we know by symmetry that E(A) = E(B). Hence E(B|A) > A implies E(A) = E(B) = infinity. Here is something amusing about the Broome game (as played on paper, hence exactly). Suppose that many many times a pair of envelopes is filled in this way. Suppose the player (who gets envelope A) switches every time. Then it is the case that when we look at just those occasions where he initially got some amount a, the average of the amount which he got back in return by switching is larger than a. And this is true for every possible a. So it may appear to him that he has done better by the switch. When he switches he gets envelope B. But now we can say exactly the same thing. For whatever amount b in the new envelope, the average of the amount in the other envelope is larger! So, for any initial amount a, he does better on average by switching. At the same time, for any final amount, he is on average worse off by switching. So the Broome example as a piece of pure mathematics is merely a mathematical paradox of infinity. Ordinarily, when you average a set of numbers, it doesn't matter which order you take them. But if we are comparing averages of two sets of positive numbers (which are actually the same set of numbers), both of which average out at infinity, the difference of the averages is not defined, and the average of differences can be made anything we like, depending on which way we order the numbers in the two sets. That takes care of the Broome game as pure logic, ie, separately from any interpretation in the real world. Some people prefer to object to the Broome game because it cannot be played in the real world for real money.
At least, it cannot be played many many times; a casino cannot offer it to players, without some modification first. Of course there is nothing to prevent me and iNic playing it just once.  Two people can play the Saint Petersburg game together, just once, too. But a casino will never offer it, untruncated. Since we have now moved from the world of logic and pure mathematics to the world of real money, there are as many ways to argue about the impossibility of the Saint Petersburg game as there are ideas about how one should mathematically model economic behaviour. There will never ever be one answer. Richard Gill (talk) 18:52, 14 January 2012 (UTC)
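The "amusing" symmetry Richard describes — conditional on A = 8 the other envelope averages more, and conditional on B = 8 it also averages more — shows up clearly in a simulation of the Broome filling (a sketch; `broome_pair` is an illustrative sampler, not from the sources):

```python
import random

rng = random.Random(7)

def broome_pair():
    # Pair (2^n, 2^(n+1)) with probability 2^n/3^(n+1), randomly ordered
    n = 0
    while rng.random() < 2/3:
        n += 1
    a, b = 2**n, 2**(n + 1)
    return (a, b) if rng.random() < 0.5 else (b, a)

got_a8, got_b8 = [], []
for _ in range(10**6):
    a, b = broome_pair()
    if a == 8:
        got_a8.append(b)   # what switching returns when A = 8
    if b == 8:
        got_b8.append(a)   # what switching returns when B = 8
print(sum(got_a8) / len(got_a8))  # about 8.8: B is larger on average given A = 8
print(sum(got_b8) / len(got_b8))  # about 8.8: A is larger on average given B = 8
```

Both conditional averages come out near 8.8 = 11/10 · 8, illustrating that the per-amount advantage points both ways at once.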
 * PS mathematically, the resolution of all known variants of TEP depends on the same small circle of mathematical properties, which I listed in my "unified solution". Still, there is some difference in flavour to the case of finite expectations and the case of infinite expectations. Richard Gill (talk) 18:57, 14 January 2012 (UTC)
 * Is it well-known, or obvious, that in any Broome-like version of the game that if there is an advantage in switching for all finite initial values then the expectation must be infinite? Martin Hogbin (talk) 19:01, 14 January 2012 (UTC)
 * Well known and obvious to mathematicians. It is mentioned in many of the publications on TEP. And: I just wrote you the complete proof! Here it is again:
 * Suppose E(B|A=a) > a for all a, or in conventional shorthand, E(B|A) > A. It follows by taking expectation values again on both sides of the inequality, that either E(B) > E(A) or that E(B) = E(A) = infinity. But we know by symmetry that E(A) = E(B). Hence E(B|A) > A implies E(A) = E(B) = infinity. I use here the standard fact concerning expectation of conditional expectation: E(E(X|Y))=E(X), provided only that E(X) is well defined. And E(X) is well defined if and only if at least one of E( max(X,0) ) and E( min(X,0) ) is finite.
 * Please take the trouble to learn some elementary probability theory so that you can at least read simple mathematical proofs. See my essay on probability notation User:Gill110951/Probability notation. Nalebuff already gave a rather long proof nearly 25 years ago. Later, other authorities gave shorter proofs, including mine. All this is well known and completely standard. In fact Broome TEP is simply a neat textbook example of the situation when one cannot compute expectations via conditional expectations. Define X=min(A,B). In the Broome example, X equals the absolute value of the difference between A and B, and its expectation value is +infinity. A-B equals plus or minus X, with equal probabilities 1/2, independently of the value of X. The expectation value of A-B is not well defined (one finds +infinity -infinity and this cannot be given a sensible value between + and - infinity, inclusive). We see that E(A-B|A) = A-E(B|A) <= -A/10 hence E(E(A-B|A))=-infinity while E(A-B|B) >= B/10 hence E(E(A-B|B))=+infinity. In words, create two tables with rows indexed by values of A and columns indexed by values of B. Place in one of the tables the values of a-b, and in the other table the probabilities that A=a and B=b. Normally speaking, we can compute E(A-B) by multiplying the values by the probabilities and summing. In this case when we do that operation row-wise we get + infinity, when we do it column-wise we get - infinity, and if you do it other ways e.g. diagonalwise you can get anything else you like between +infinity and -infinity. Richard Gill (talk) 11:35, 15 January 2012 (UTC)


 * PS here is a simpler example of the same phenomenon. Fill an infinite table with the values +1 above and on the diagonal, -1 below. The row sums are all +infinity, the column sums are all -infinity, the sum of the row sums is different from the sum of the column sums. Broome-TEP is a little trick based on this simple idea to tease non-mathematicians. Richard Gill (talk) 11:46, 15 January 2012 (UTC)
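Richard's ±1 table can be illustrated on a finite corner (a minimal Python sketch): every row sum heads to +infinity and every fixed column sum heads to −infinity as the table grows, even though any finite corner is internally consistent.

```python
# N x N corner of the infinite table: +1 on and above the diagonal, -1 below
N = 100
table = [[1 if j >= i else -1 for j in range(N)] for i in range(N)]

row_sums = [sum(row) for row in table]
col_sums = [sum(table[i][j] for i in range(N)) for j in range(N)]
print(row_sums[0], col_sums[0])        # 100 and -98: rows positive, columns negative
print(sum(row_sums) == sum(col_sums))  # True for any finite corner ...
# ... but as N grows, each row sum tends to +infinity while each
# fixed column sum tends to -infinity, so the infinite 'totals' disagree
```

Note that on any finite corner the two grand totals still agree; the discrepancy only appears in the infinite limits, which is exactly the point of the example.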
 * Sorry, I forgot about your proof. It was just that I had a different proof for Broome-like distributions. Martin Hogbin (talk) 18:38, 15 January 2012 (UTC)


 * This is some nice mathematics, Richard, but unfortunately it's all totally irrelevant for TEP. In fact, contrary to popular belief TEP has nothing to do with infinities at all. This can be shown in a number of ways. One is to play a game that is a combination of TEP and MHP. Anyone want to play? iNic (talk) 01:14, 16 January 2012 (UTC)

No thanks, I want you to show me any version of the TEP that you claim is unresolved. I have asked you several times to do this and you have failed to do so. I therefore am quite satisfied with my position, and that of Richard and the article, that all versions of the TEP have been satisfactorily resolved. Martin Hogbin (talk) 09:59, 16 January 2012 (UTC)


 * But not even you and Richard can agree upon what should be the correct treatment of the Broome TEP. Your strategy is that there is no need to solve the problem because in your opinion it can be dismissed as a game that can't be played "for real." Only what you think are "real" problems need to be solved. This is not the opinion Richard has. He admits (now) that it is indeed possible to play the Broome TEP. His strategy is instead to believe in the idea that decision theory breaks down when playing games with infinite expectations. So in a way your two ideas on how to "solve" (in reality: dismiss) the Broome TEP are totally opposite to each other. You claim that Broome TEP is a non-problem because no game can be infinite, while Richard claims that the Broome TEP is a non-problem just because it is an infinite game. Neither of you has an idea on how to really solve the problem, you only have two different ideas on how the problem should properly be dismissed. And your two ideas are not only different but contradictory. Your different strategies on how to dismiss the Broome TEP also lead to different real behavior in the real Broome game we are currently playing. You have a clear opinion about why you want to switch to envelope B while Richard doesn't have any good argument at all. This is because according to your view the only risky thing with swapping is if the A envelope happens to contain the maximal amount possible. I have assured you that this is not the case and then you see no risk in swapping. On the contrary, it's advantageous to do that. Richard, on the other hand, has the view that in cases like this, when using a distribution with infinite expectation, decision theory breaks down and you can't possibly have any rational view on what to do. This is why he doesn't want to give any rational arguments whatsoever for why to switch to B. He wants to open envelope B out of sheer curiosity, but that's all. So there you have the example you requested.
A TEP that is still unresolved is right before your eyes. You only have to open your eyes to see it. iNic (talk) 12:12, 16 January 2012 (UTC)


 * There is an "internal" solution of Broome-TEP. One can identify exactly where the apparently logical chain of reasoning breaks down. It's at step 8: "So I gain on average by swapping." That deduction is untrue. One does not gain on average, overall, by swapping. Conditionally, one does gain. It's a fact that if you repeatedly make two random numbers A and B according to the Broome recipe (which is completely symmetric regarding A and B), then on those occasions when, for instance, A=8, B is on average larger; on those occasions when B=8, A is on average larger. That is "paradoxical" in the sense of being counter-intuitive, but intuition often goes astray when dealing with infinite or undefined expectation values. There is also a link to the foundations (philosophy) of economic decision theory and the Saint Petersburg problem. People in that field have an ongoing discussion about utility and infinite expectation values. Infinite expectation values spoil what otherwise would be a pretty theory. So people like to come up with reasons why infinite expectation values should be disallowed. This discussion spills over into the literature on TEP. It provides external reasons why one just doesn't want to discuss Broome TEP. Martin, as a layman, is happy with the external reasoning. As a mathematician I am happy with the internal reasoning. But in any case, it doesn't matter: the question is what the reliable sources say about TEP. That is what wikipedia has to report. We cannot restrict attention to what we think is logical or what we think is realistic. We have to discuss what are the notable ideas out there in the literature, right or wrong. Richard Gill (talk) 11:09, 17 January 2012 (UTC)


 * If I understand you correctly I am happy with both resolutions. The internal resolution is just the fact that twice infinity is infinity, so there is no gain in swapping. For the entirely conceptual game this is fine with me.


 * My argument was with iNic's claiming to play the game and demonstrate the paradox 'for real'. In this case my point was simply that if you play 'for real' you can only play a truncated version of the game, even with numbers in files in place of money, and to prove anything you need to play a very large number of times.


 * The Spencer paper shows a middle resolution that if you try to get round the restrictions of reality the problem is resolved by the fact that it is not clear which envelope contains the most.


 * As sums in the envelope get larger there is a kind of progression of possible resolutions. Small - simple probability, large - simple probability or utility, stupidly large - ambiguity, infinite - infinity. Do you agree? Martin Hogbin (talk) 00:03, 18 January 2012 (UTC)


 * I agree that this is one way to organise some of the discussions of TEP which you can find in the literature. However, though they can be thought provoking and though they link to other fascinating discussions, I think they are superfluous. The problem is to explain where the TEP reasoning breaks down. This depends on the imagined aim of the original writer and the imagined context of the writing. The most plausible context (presently called "second variant" in the article), to me, is that the writer does not look in Envelope A but does have a proper prior distribution of possible values in the pair of envelopes. The writer imagines looking in Envelope A, seeing some amount a there, and imagines what he would then expect to find in Envelope B. His mistake is to believe that given A=a, it would be equally likely that B=2a as that B=a/2, whatever the amount a might be, and whatever his prior beliefs might be. To compute the good conditional probabilities, one needs to refer to the prior distribution. It's a mathematical fact that no proper prior distribution will make half or double equally likely, conditionally, whatever. So, Step 6 is irrelevant and misleading (we need conditional probabilities, not unconditional), and Step 7 is wrong  (we need conditional probabilities, not unconditional). Original TEP is resolved. Now Broome comes along with a new paradox. He shows that it is possible, "in principle", to fill the two envelopes in such a way that E(B|A=a) > a for all a. Step 6 can be replaced by the correct calculation that shows that in this situation, the conditional probabilities that the other envelope is smaller or larger are 3/5 and 2/5, if the amount in Envelope A is larger than the minimum value. Step 7 is replaced by the correct calculation showing that the conditional expectation is larger. Does this imply that one should switch envelopes? 
No, because the usual mathematical argument to justify switching is not available: since E(A) is infinite, we cannot conclude from E(B|A=a) > a for all a that E(B) > E(A). So Step 8 is wrong. Broome TEP is resolved. There is absolutely no need to dress this logical/mathematical analysis up in some kind of story about whether or not we could play the Broome game for real. Moreover, everything I mention here was already said in Nalebuff's paper, and nobody has found mistakes in it or added anything (except that we now have cleaner and shorter proofs of the main mathematical facts). Well... there has been one change: the "red herring" brought up in the philosophy literature, where people have totally misunderstood what TEP was about (presently called "first variant" in the article). Then there have been some minor embellishments and variations, for instance Tom Cover's proof that when you do look in your envelope, you can increase your chance of finishing with the larger envelope by comparing the amount you see with a random probe; and Smullyan's infamous TEP without probability, which is a brain teaser concerning semantics, not probability or mathematics. It's a word game exploiting an ambiguity of ordinary language. TEP is resolved. Discussion about it will never cease. Richard Gill (talk) 13:24, 19 January 2012 (UTC)
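The conditional calculation referred to above can be written out explicitly. This sketch uses the standard Broome filling from the wider literature, in which the pair (2^n, 2^(n+1)) is chosen with probability 2^n/3^(n+1) for n = 0, 1, 2, ...; that parametrization is an assumption, not quoted from this thread.

```latex
% Given A = 2^n with n >= 1, the pair is either (2^{n-1}, 2^n) or (2^n, 2^{n+1}):
P\bigl(B = 2^{n-1} \mid A = 2^n\bigr)
  = \frac{\tfrac12 \cdot 2^{n-1}/3^{n}}
         {\tfrac12 \cdot 2^{n-1}/3^{n} + \tfrac12 \cdot 2^{n}/3^{n+1}}
  = \frac{1}{1 + 2/3} = \frac{3}{5},
\qquad
P\bigl(B = 2^{n+1} \mid A = 2^n\bigr) = \frac{2}{5},
\]
so that, for any observed amount $a > 1$,
\[
E(B \mid A = a) = \frac{3}{5}\cdot\frac{a}{2} + \frac{2}{5}\cdot 2a
                = \frac{11}{10}\,a > a.
```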


 * And how do you solve the TEP above where the expected value is not infinite but zero? None of the ideas on how to dismiss the problem developed so far works in this case. It's in want of a new ad hoc escape plan. True, the discussion about this problem will never end, at least until the true solution is found. iNic (talk) 12:04, 20 January 2012 (UTC)


 * We are told that there is a positive amount of money in both envelopes. The smaller of the two is positive. The expectation value of a positive random variable is (strictly) positive. Richard Gill (talk) 13:30, 22 January 2012 (UTC)


 * No, this is not true in general. iNic (talk) 10:42, 30 January 2012 (UTC)


 * This is always true. If X is a positive random variable, i.e., P(X > 0) > 0, then there is some c > 0 such that P(X > c) > 0 (here I have used the sigma-additivity of probability measures, which gives us continuity properties: P(X > c) must converge to P(X > 0) as c converges to zero from above). It follows that E(X) ≥ c P(X > c) > 0. Richard Gill (talk) 17:02, 31 January 2012 (UTC)
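The argument can be restated as a compact chain of inequalities (a sketch, assuming X ≥ 0 with P(X > 0) > 0):

```latex
\exists\, c > 0 : P(X > c) > 0
\;\Longrightarrow\;
E(X) \;\ge\; E\bigl(X\,\mathbf{1}_{\{X > c\}}\bigr) \;\ge\; c\,P(X > c) \;>\; 0.
```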


 * If you invert your theorem (replace ">" with "<" and "0" with "∞" in all relevant places) you will have a proof that the Broome distribution can't have an infinite expectation. And besides, if you think that ∞ > E(A) > 0 and ∞ > E(B) > 0, how can you explain that E(B) > E(A) as well as E(A) > E(B)? You haven't solved the problem, you only add even more mysteries to it. iNic (talk) 01:06, 1 February 2012 (UTC)

Richard and I have no disagreement.


 * You say it's impossible to play Broome TEP for real while Richard admits that it is possible to play it for real. Maybe you live in different realities? iNic (talk) 16:11, 16 January 2012 (UTC)

You need to be clear on exactly what setup you are talking about. For some reason you do not want to do this. Let me start you off:

The Broome setup?

Are we to take it that the organiser has infinite funds (which is not possible in the real world but which can be considered mathematically), or would you prefer to consider the game that you are playing with Richard, where there is an upper limit on the possible sums in the envelopes?

Does the player look in the envelope before deciding? Martin Hogbin (talk) 12:42, 16 January 2012 (UTC)


 * You are invited to play the Broome game with me if you want. You will win real money. Is that a real enough situation for you? iNic (talk) 16:11, 16 January 2012 (UTC)

You need to be clear on exactly what setup you are talking about. For some reason you do not want to do this. Martin Hogbin (talk) 16:31, 16 January 2012 (UTC)


 * Two people can obviously play the Broome game with numbers written on paper put in envelopes. Once in a trillion trillion trillion times, if the coin-tossing bit is done by a very high quality pseudo-random number generator, it will never stop while the two people are still alive. I doubt however that the numbers would become too large to write on pieces of paper. Playing it for real money can be done too, once or twice. By real money, I mean that the monetary unit is decided in advance. There is now a slightly larger but still probably negligible chance that the amounts are so large that the guy who fills the envelopes defaults. In fact, in this situation, the guy who fills the envelopes would obviously want to charge the other guy for playing. And they should agree whether or not the player's envelope can be opened (and if so, whether a fee should be paid for this). I'm sure two guys could come to some mutually satisfactory arrangement, especially if the monetary unit is rather small (e.g., one dollar cent). The chance of default is so small that they probably won't bother to make formal arrangements in advance to cover this situation. But it might happen. On the other hand, I have argued before that a casino could not offer the game for real money to its clients without some serious pre-agreed truncation, and the entrance fee would depend heavily on the truncation level. Richard Gill (talk) 15:06, 28 January 2012 (UTC)
 * Two people can only play the truncated game for real, just as I can only play a truncated martingale for real. Bill Gates could probably play martingales at small-stakes tables for his entire life without going bust, but this does not alter the fact that the martingale strategy does not work. Similarly, although it might be possible to play the Broome game for a very long time, it cannot be played for long enough to show that there is an advantage in swapping. Martin Hogbin (talk) 23:29, 28 January 2012 (UTC)
 * If you play the game a long, long time, you'll not prove anything. The difference between what Player A ends up with and what Player B ends up with, divided by N, will continue to fluctuate more and more wildly, between more and more extreme positive and negative values, for ever. And it makes no difference whether they never switch, always switch, or whether A looks in his envelope first and decides to switch only according to some rule depending on what he sees there. One can play the Broome game with numbers on a computer for quite a long time. I recommend everyone try it (I recommend the statistical object-oriented working environment R, www.R-project.org). One will quite quickly learn that on all those occasions that Player A had 2 m.u. in his envelope, Player B had 1 m.u. on 3/5 of the occasions and 4 m.u. on 2/5 of the occasions; the average value of what Player B had was therefore 3/5+8/5 = 11/5 = 2.2 m.u. So in the long run, Player A certainly does better *on those occasions* to switch. The same holds for any other particular value: if Player A would switch whenever he has a m.u. in his envelope, he does better on average on those occasions by switching. This therefore also holds if he switches whatever he sees: in the long run, on all those occasions when he first had a in his envelope, on average he always increases it: to 2a if a=1, and otherwise to 1.1a. Yet paradoxically, B has not done badly from all these exchanges, since he can say exactly the same: for any initial amount b in his envelope he has done better, on average, by getting A's envelope. I think this is a paradox in the sense of being very hard to imagine that it could be true. Yet it is true. Richard Gill (talk) 09:38, 29 January 2012 (UTC)
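The experiment described above can be sketched in a few lines. The thread recommends R; the sketch below runs the same Monte Carlo check in Python instead, and the names broome_pair and simulate are made up for illustration. It assumes the standard Broome filling: pair (2^n, 2^(n+1)) with probability 2^n/3^(n+1).

```python
import random

def broome_pair(rng):
    # Draw the pair (2^n, 2^(n+1)) with probability 2^n / 3^(n+1):
    # count successes of a coin that comes up heads with probability 2/3.
    n = 0
    while rng.random() < 2 / 3:
        n += 1
    return 2 ** n, 2 ** (n + 1)

def simulate(trials, seed=0):
    # Record what Player B holds on those rounds where Player A holds 2 m.u.
    rng = random.Random(seed)
    counts = {1: 0, 4: 0}
    for _ in range(trials):
        small, large = broome_pair(rng)
        # Player A gets either envelope of the pair with probability 1/2.
        a, b = (small, large) if rng.random() < 0.5 else (large, small)
        if a == 2:
            counts[b] += 1
    return counts

counts = simulate(200_000)
total = counts[1] + counts[4]
print(counts[1] / total, counts[4] / total)  # close to 3/5 and 2/5
```

With a couple of hundred thousand rounds, the conditional frequencies settle near 3/5 and 2/5, which is the empirical version of the claim that B held 1 m.u. on 3/5 of the occasions when A held 2 m.u.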


 * Very true, and it's strange that it's so very hard to find sources that try to solve this paradox. Most papers put all their efforts into finding arguments for dismissing the problem in one way or another, instead of actually trying to solve it. iNic (talk) 16:19, 30 January 2012 (UTC)


 * You cannot "solve" the Banach-Tarski paradox. You can ban it, by not accepting the axiom of choice. It seems to make people uncomfortable; that's why they look for arguments to dismiss it. Though there is nothing wrong with that: there are good arguments why you do not need to worry about Banach-Tarski in applied mathematics. There are good arguments why one need not worry about the Broome example in real-world economics. Richard Gill (talk) 13:53, 31 January 2012 (UTC)


 * Come on, no version of TEP relies on the axiom of choice, or any of its other equivalent incarnations. The Banach-Tarski paradox is a very well known consequence of the axiom of choice, it's not about "infinity" in general. But if you still have problems understanding the Zeno paradox and the infinity of points involved there, then you are entitled to claim that the finite Broome is as strange and hard to understand as that one. iNic (talk) 01:06, 1 February 2012 (UTC)


 * Sure. If you are a philosopher then you will always be able to find an inadequacy in all existing resolutions of the Zeno paradox, and you will proceed to come up with some new insight into it, which only fellow academic philosophers will understand. Moreover they will go on and write new papers disagreeing with you too, and so on. That's the name of the game. That's the sense in which TEP is never going to be resolved. But it is resolved in mathematics, and it is resolved for ordinary laypersons (i.e. for readers of wikipedia). The trouble with TEP started when it migrated from being a little trick played by professional mathematicians on amateur mathematicians, to being a philosophical problem. I think that Smullyan's "TEP without probability" is actually rather good because it is a new and pure paradox based on the philosophers' reading of original TEP (namely unconditional expectation, everything very simple, just a confusion of naming the same things in different ways, and different things in the same ways). The philosophers thought that TEP was about a mix-up with words. Their resolutions (of type "first resolution") are on these lines. However they are stupid and unconvincing. Smullyan chucked out the probability altogether in order to focus on the verbal mix-up which the philosophers seemed to think was the main point of TEP. Richard Gill (talk) 10:23, 8 February 2012 (UTC)

Paradoxes of infinity
Richard, I think we agree completely but I want to be sure. My view is this:

For all finite versions of the game (no infinite expectations or infinite numbers of envelopes etc) the paradox is fully resolved. The resolutions are as shown on my two envelopes page.

For versions involving infinities there are the usual paradoxes of infinity. These are little more remarkable than noting that 1/0 = 2/0 and considerably less interesting than the Banach-Tarski paradox. They can never be fully resolved, just accepted.

If you agree with this there is one further point that I would like to make. Martin Hogbin (talk) 10:52, 29 January 2012 (UTC)

Large numbers and unprovable theorems
iNic, you might like to try and find a copy of this paper: 'Large numbers and unprovable theorems' Joel Spencer, American Mathematical Monthly, Vol 90, No 10, December 1983. I only have a paper copy so I cannot send you one. However it starts:

''Describe, on a 3 * 5 card, the largest positive integer that you can.'' This is essentially what you need to do in your game with Richard, except that you are using a pdf file rather than a card.

The paper ends:

''To travel beyond [this stage] now depends on what one allows in Generally Accepted Mathematics. There is always the danger that if too much is allowed, the system will become inconsistent and the 3 * 5 card will no longer define an integer. The game of describing the largest integer, when played by experts, lapses into hopeless arguments over legitimacy.''

This is where your game with Richard ultimately ends up. Martin Hogbin (talk) 13:10, 16 January 2012 (UTC)


 * I will put G there: Graham's number. Twice as much is 2G. Will it fit? Aha, OK, it did. Damn! Now it's your turn to come up with a number that does fit on the card but twice the number doesn't. iNic (talk) 16:05, 16 January 2012 (UTC)


 * Read the paper, you are a long way short of winning! Martin Hogbin (talk) 16:19, 16 January 2012 (UTC)


 * I have read the paper now. I could not find a single idea on how to represent larger and larger integers that was incapable of handling a factor of two. On the contrary, such a small increase in size is ridiculously small and trivial for these ever-growing and ever-faster-growing integers. So you still have to show us an example of what you mean. I still maintain that whatever number you come up with, we can double it and we can still write that number on the back of a stamp. iNic (talk) 18:15, 16 January 2012 (UTC)


 * Do you know some physics? Do you know the order of magnitude of the probability that all the oxygen molecules in the room you are in now will spontaneously gather in a part of the room where you aren't? If that continues for a while you will die. I really think you should start to worry about this possible scenario. It's a real threat. If you do the calculations you will see why. iNic (talk) 16:46, 16 January 2012 (UTC)


 * I suppose that iNic sees a different problem with TEP than anybody else has ever seen before. Fortunately, according to just about all reliable sources, just about all known variants of TEP are resolved. Any wikipedia editor's personal opinion is not relevant. We just have to get a decent overview of the actual literature. Some of us editors will be happier reading the academic philosophical literature, some of us will be happier reading the academic mathematical literature, some of us will be happier reading the popular literature. We have to trust one another ("good faith") and all of us have to develop some global understanding of what has been done in the fields where we are less competent. Richard Gill (talk) 18:07, 16 January 2012 (UTC)