Talk:St. Petersburg paradox/Archive 1

Why is it a "paradox"?
One aspect of the paradox that is simply common sense and that may deserve some mention, is a fundamental reason this situation seems like a paradox.

The average value of the game converges to infinity as the number of gameplays goes to infinity.

In simple, common-sense thinking infinity equals "really big."

But the value converges to infinity at a very, very slow pace. In practical terms, that means that the typical value of a game is going to be very, very low for many, many game plays.

Therefore much of the paradox revolves around the contrast between the calculated average value of a gameplay (infinite) and the typical average value of trying the game a few times (a few pennies).

In short, it's a paradox of infinity--what we expect infinity to be (big!) and what it actually is (a continual rising without any upper bound, which can be fast or slow but in this case is very, very slow).

If, for instance, we were to state the paradox in terms of millions of dollars rather than pennies it wouldn't seem so paradoxical (the paradox isn't always stated in terms of "pennies" but it always seems to be stated in terms of some very small unit of currency--seemingly to emphasize the difference between the amount actually won after a few plays, small, and the average value of a play, infinite).

And if the value of the game grew at a much faster pace then the result would not seem so paradoxical. For instance, if a game were designed where the player typically won on the order of $10 after ten plays, $1000 after 20 plays, $100,000 after 30 plays, and so on, growing exponentially, then there would be no sense of paradox. Getting really big really fast is common sense for "approaching infinity". But the St. Petersburg game gets really big only at a very, very slow pace.

Putting all this another way, common sense says that if the average value of a game is infinite, that must mean that a person who plays it a few times is going to win a whole lot of money, because infinity is "really big".

What the mathematics behind the statement "average value is infinite" really means in plain English, though, is simply this: with more and more game plays, the average value of each game play will rise without any upper bound. So it's perfectly reasonable for it to stay small for a long, long time (over many thousands of gameplays) and in fact that is exactly what it does.

A simple computer simulation of the St. Petersburg Paradox shows that over thousands, tens of thousands, and then millions of simulated plays, the average value of each play grows and grows--it grows slowly but inexorably. It grows suddenly by a large amount, then declines slowly, before growing again by a slightly larger amount. What happens, again in layman's terms, is that over many thousands and then millions of plays the "rare" events that have a huge payoff begin to happen with some regularity, and when they do the payoff is so huge that they greatly increase the overall average value.
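A simulation like the one described is easy to reproduce. A minimal sketch in Python, assuming the common payoff convention of $2^(k-1)$ dollars when the first tail appears on toss k (the seed and sample sizes are arbitrary choices):

```python
import random

def st_petersburg_game(rng):
    """Play one game: flip until tails; the pot doubles with each head."""
    payoff = 1
    while rng.random() < 0.5:  # heads with probability 1/2
        payoff *= 2
    return payoff

def running_average(n_games, seed=42):
    """Average payoff per game over n_games plays."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_games):
        total += st_petersburg_game(rng)
    return total / n_games

if __name__ == "__main__":
    for n in (100, 10_000, 1_000_000):
        print(n, running_average(n))
```

Typical runs show the per-game average growing roughly like log2(n)/2: slowly and unsteadily, punctuated by jumps when a rare long run of heads finally lands.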

In sum, much of the paradox lies between what we naively think "infinite average value of a gameplay" ought to mean and what it really does mean.

It might be helpful to show a graph of average value over several thousand or even million gameplays, which would demonstrate both that the average value does rise inexorably and also that it rises slowly. This seems to be a key to the paradox.

Bhugh 23:34, 29 September 2005 (UTC)


 * "The average value of the game converges to infinity as the number of gameplays goes to infinity." This is wrong. Mathematically, the AVERAGE expected value of ONE PLAY of the Game of St. Petersburg is INFINITE. The average, as in the arithmetic mean, as in the expected value, as in your expected per-game return, never deviates. It doesn't go up, it doesn't go down. As you accurately describe, what changes is the typical outcome of the game as compared to the typical outcome from 500 or 20,000 repeated plays of the game averaged together. The median payout of "one game of St. Petersburg" is very small, while the median payout of "20,000 games of St. Petersburg averaged together" is much larger. The average (mean) payouts of those two are both exactly the same (infinite). The St. Petersburg graphic is a bit deceptive in this regard. Since it just shows one sample run, the early points on the graph can't claim to accurately represent any sort of true theoretical average. If you print out a million of those graphics, it is likely that some of them will start out with a huge spike (in the first 50 or so games), and spend the next 20,000 games (the entire rest of the graph) marching downward. The graphic is a typical or possible outcome of 1, 2, 3, ..., 20,000 games -- it does not display the true theoretical mean. Rather, the graphic shows (and the article ought to describe) that the typical (e.g. likely or median) payoff of St. Petersburg is very small ($1.50), whereas the expected (e.g. average or mean) payoff is infinite. The median payout of a single game of St. Petersburg is $1.50 (half of your games pay $1, half pay $2 or more). The median payout of a 2-game-averaged cluster of St. Petersburg is $1.75 (one half pay $1.50 or less, one half pay $2.00 or more average per subgame). The 3-game median payout is $2.00. The TYPICAL (median) value of the supergame (obtained by averaging a cluster of multiple games of St. Petersburg) climbs slowly and inexorably to infinity as the number of subgames in your supergame increases. --Brokenfixer 00:29, 16 January 2006 (UTC) (clarified) --Brokenfixer 00:51, 16 January 2006 (UTC)
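The climb of the cluster median described here can be checked empirically. A rough sketch (the cluster counts and seed are arbitrary choices, and the sampled medians are noisy estimates, not the exact values quoted above):

```python
import random
from statistics import median

def game(rng):
    """One St. Petersburg game paying 2**(k-1) dollars for k tosses."""
    payoff = 1
    while rng.random() < 0.5:
        payoff *= 2
    return payoff

def median_of_cluster_averages(cluster_size, n_clusters, seed=1):
    """Median of the per-game average over many clusters of repeated play."""
    rng = random.Random(seed)
    averages = [
        sum(game(rng) for _ in range(cluster_size)) / cluster_size
        for _ in range(n_clusters)
    ]
    return median(averages)

if __name__ == "__main__":
    for size in (1, 10, 100, 1000):
        print(size, median_of_cluster_averages(size, 201))
```

The median for single games stays down near $1-$2, while the median for 1000-game clusters is several dollars: slow growth of the typical value even though the mean is the same (infinite) in every case.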


 * Original statement: "The average value of the game converges to infinity as the number of gameplays goes to infinity." 
 * Reply: "This is wrong. Mathematically, the AVERAGE expected value of ONE PLAY of the Game of St. Petersburg is INFINITE." 
 * Well, you say "This is wrong" but in fact those two statements are not contradictory. The fact that the "average expected value" has some certain value does not mean that you are going to hit that value every time you play the game.


 * This is true for the Petersburg game but also for other games as well. If the average expected value of a game is $1, it doesn't mean that you are going to win $1 every time. Rather, you are going to win more than $1 sometimes and less than $1 sometimes. After, say, 3 plays it is very unlikely that your actual average value will be $1. It will almost certainly be more or less than $1. After 30 plays your average will (probably) be closer to $1, after 300 plays (probably) closer yet, closer yet after 3000, closer yet after 3 million, and so on. In short, "The average value of the game converges to $1 as the number of gameplays increases."


 * This is just an informal way of explaining what we know as the law of large numbers.


 * Translating this into terms of the Petersburg Paradox: The fact that the average expected value of the game is "infinite" does not mean that we will win an infinitely large prize each time we play.  Rather it means that the more we play, the higher our average winnings will become.  There is no upper limit to the average winnings--they will continue to increase indefinitely and without any upper bound as the number of plays increases.


 * This is, in fact, exactly what happens when the game is played.


 * And in fact it is no paradox at all if you understand what is meant by the term "average expected value" in light of the law of large numbers.


 * What does make the situation seem paradoxical is that informally we think of an "average" as something you would sometimes fall below, sometimes exceed, and sometimes hit right on. The idea that we can approach the average always from below, and always "way" below (because no number, no matter how large, is close to infinity!), is not in accord with our everyday experience.


 * The other aspect that seems paradoxical is the fact that, as pointed out in the main article, the value of the game plays grows so slowly. Naively we expect a game with "infinite" average value to pay out a lot of money fast and for average earnings to grow fast. But there are a lot of other ways to get to infinity. One of them is very slowly and unsteadily, and that is the way the St. Petersburg lottery gets there.


 * If players typically won a million dollars after 10 plays, a billion dollars after 20 plays, and so on, we wouldn't see any paradox. But if typical winnings are $5 after 1,000 plays, $6 after 1 million, $7 after a trillion, and so on--well, the average expected value is still infinite but we're getting there a whole lot slower.


 * And that makes it seem paradoxical.


 * —Preceding unsigned comment added by Bhugh (talk • contribs) 16:38, 11 November 2006


 * A couple paragraphs earlier, I argue that the key to the paradox is that (1) the game cannot be realized in real-world terms (because no real-world agent can commit to a potential $10^googol payout), (2) the expected utility of money is very non-linear, and (3) the naive treatment does not in any way account for the outlandish (indeed infinite) risk of this game -- yet according to modern portfolio theory, risk has demonstrable cost (a real-world high-risk high-expected-value investment can have equal value to a low-risk low-expected-value investment). St. Petersburg's naive mathematical treatment of $100,000,000,000,000 as simply 100,000 copies of $1,000,000,000 is inappropriate. --Brokenfixer 00:29, 16 January 2006 (UTC)


 * The statement that the expected value of the St. Petersburg game is mathematically infinite is highly misleading. The accurate statement is that the expected value of the game grows logarithmically with the bankroll of the opponent. To understand the difference, consider a different problem. Let's say a certain casino introduces a new novelty game called "Leningrad." You analyze the game and find a highly non-obvious strategy that gives the player a 2% edge over the house. (I'm told casinos occasionally do make such mistakes.) Let's say further that the local gaming commission requires casinos that introduce new games to keep them operating and not limit bets or bar players. You propose to sell your secret. What's it worth? A naive analysis would say its worth is infinite. A more refined analysis says it is worth whatever the casino is worth, less one's expenses in playing the game (travel, room and board, bodyguards). Not much different from the St. Petersburg game, right? Wrong! The value of the Leningrad game grows pretty much linearly as the value of the bankroll (the casino company) grows. If the value of the casino doubles, the value of the Leningrad game secret doubles. If the bankroll of the St. Petersburg opponent doubles, however, the value of that game only goes up by 50 cents, or one half the per-toss bet. That is what logarithmic growth means. It's the antithesis of infinite growth.--agr 21:48, 16 January 2006 (UTC)
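The 50-cent claim is easy to verify with a finite sum. A sketch, assuming the banker's payout is capped at the bankroll B, i.e. the payoff is min(2^(k-1), B):

```python
def capped_value(bankroll):
    """Expected value of St. Petersburg when the payout is capped at the bankroll.

    Each round k with 2**(k-1) <= bankroll contributes 2**(k-1) * 2**-k = 0.5;
    the remaining tail of the distribution pays the whole bankroll.
    """
    value, k = 0.0, 1
    while 2 ** (k - 1) <= bankroll:
        value += 0.5
        k += 1
    value += bankroll * 2.0 ** -(k - 1)  # tail: all longer runs pay the cap
    return value

if __name__ == "__main__":
    for m in (10, 20, 40):
        print(2 ** m, capped_value(2 ** m))
```

For a bankroll of exactly 2^m the value works out to (m + 2)/2 dollars, so doubling the bankroll adds exactly 50 cents: logarithmic growth, as stated.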


 * The expected value of the Saint Petersburg game is infinite. That's the whole point. The Saint Petersburg game is used in the context of game theory, analyzed as an abstract or theoretical game. The infinite expected value illustrates that in real-world applications, more sophisticated real-world hypotheses and methods are necessary -- such as taking into consideration the limited bankroll of your opponent. Saint Petersburg game, as written and as used, implicitly assumes an infinite bankroll, which demonstrates the need for bankroll analysis. --Brokenfixer 20:23, 17 January 2006 (UTC)


 * If the St. Petersburg game was merely being used as a cautionary tale to beware of assumptions like an infinite bankroll, I would have no problem. The martingale roulette system is often used in this way as a teaching tool. But there is a vast literature that seems to take the notion of an infinite expected value as the correct interpretation of the game and then seek other reasons why no one seems willing to pay large sums to enter it.  Scholars who I'm sure would dismiss the martingale system as rubbish seem to insist that the St. Petersburg paradox presents deep difficulties. --agr 02:55, 27 January 2006 (UTC)


 * The "Leningrad secret" needs to be converted into a game before comparison can be made. If you convert the Saint Petersburg game into a "secret" (the "Saint Petersburg secret," which is the ability to play the St. Petersburg game against a casino), then the Saint Petersburg secret is worth exactly the same amount as the Leningrad secret -- namely the entire bankroll of the casino that offers the game. It doesn't matter whether the casino offers Leningrad or St. Petersburg -- they go bankrupt either way. Repeated play of any positive-expectation no-loss-risk game will return the entire opponent's bankroll. A single event of the Leningrad game appears to be "you get $1.02 for each $1.00 of your bet, with no house limit on your bet." As written, though, Leningrad is limited by your own bankroll - so if you only had $1 it would take months (working 8-hours/day 5-days/week) to bankrupt a casino. How about making a variant of the Leningrad game, the Serf Leningrad game, where you are allowed to bet as much as you want even if you aren't good for it. The Serf Leningrad game can be restated as: "Flip a coin. Ignore the result and win your opponent's entire bankroll." Meanwhile, consider the Petrograd game: "Flip a coin. Ignore the result and win the logarithm of your opponent's bankroll." Your remarks show that limited-bankroll Saint Petersburg is a randomized variant of the Petrograd game as opposed to the Leningrad game. --Brokenfixer 20:23, 17 January 2006 (UTC)


 * Flip a coin and win the casino's bankroll is vastly different from flip a coin and win the log of the casino's bankroll. Both are infinite for an infinite bankroll, but the former grows large as the bankroll grows large, while the latter never does for any conceivable bankroll in the universe as we know it. --agr 02:55, 27 January 2006 (UTC)

Change to dollars?
I added a section with a more careful math analysis. I'd also like to change the description of the game to dollar bets instead of cent bets. I believe it is hard for readers to think about what they might actually pay to enter the game when it is presented in cents. People routinely pay 50 cents to play an arcade game just for the entertainment value. Any objections? --agr 11:47, 19 December 2005 (UTC)

More about utility
Since the St Petersburg paradox was central to the development of the idea of utility functions, we should elaborate on that a little more, rather than on the not-so-interesting practical restrictions (okay, nobody has infinite money, but if the theorists had stopped there, economic decision theory would have stopped as well, wouldn't it?) If I have time, I'll write a little more about this. Marc 129.132.146.72 13:32, 23 January 2006 (UTC)

I just skimmed through the Stanford page we are linking, and I must say that some of the statements there are at best controversial.

"If someone prefers $1 worth of birds in hand to any value of birds in the bush, then that person needs psychiatric help; this is not a rational decision strategy."

This is typical of their style of argument for dismissing any potential resolution of the paradox. To me it looks like they are really keen on keeping it an unsolvable paradox rather than a paradox for a naive approach to decision theory that has led to important developments later.

In short, I think we can and should easily do better than them...

Marc 129.132.146.72 13:45, 23 January 2006 (UTC)


 * The St Petersburg paradox is certainly of historical importance in the development of utility theory, but it is completely fallacious. There is nothing paradoxical about the unwillingness of people to pay large amounts of money to enter a game that their instincts tell them isn't worth much when those instincts are correct. If the "theorists" (including Bernoulli, who should have known better) had stopped there, better examples which actually illustrate the effect would no doubt have been developed. What is most interesting to me about the St Petersburg paradox is that so many economics and philosophy professors still teach their gullible students that the game's value is "theoretically infinite."  It is a better illustration of a meme than of utility theory.--agr 14:19, 23 January 2006 (UTC)

Slow down, save a life! ;-) We agree, I think, that the St. Petersburg paradox is the starting point of utility theory. (If there are better examples for utility theory, then they should be under utility theory, naturally, so this does not matter here.) A paradox is (in this context) simply a clash between a well-established model and real life. Of course real life describes real life correctly, but so what? Here it is about the (at that time) well-established idea that expected returns describe the value of a game, and a (hypothetical!) situation that clashes with it. ("Theoretically infinite" hence means "by applying the model that decisions are made and should be made according to the expected return only", which of course is wrong, which we know - e.g. - from the St. Petersburg paradox!) The St. Petersburg paradox is hence historically important. If we want an article which is helpful for understanding its point, it neither helps to mumble over it in a mystifying and authoritative way as in the Stanford link, nor does it help to question whether the lottery is feasible for practical reasons. - By the way, our discussion here reflects in some parts the state of the art until the 1950s, before people started to understand utility theory correctly. By now it is so well established in every area of economics, and the St. Petersburg paradox is so widely known, that our article has to reflect this! Marc 84.72.31.59 19:00, 23 January 2006 (UTC)


 * I don't think you can rescue the claim that the expected value is infinite. It simply isn't. It's not a question of practicality. Under the most outlandishly favorable assumptions, the expected value never gets at all big. The most one can say accurately is that many scholars thought it was infinite and developed theories of utility based on the apparent paradox, and that those theories are still considered important, based on other evidence. --agr 00:42, 25 January 2006 (UTC)


 * "What is most interesting to me about the St Petersburg paradox is that so many economics and philosophy professors still teach their gullible students that the game's value is 'theoretically infinite.'" The reason that modern economics and philosophy professors teach that the expected value of the Saint Petersburg Game is infinite is that the expected value of the Saint Petersburg Game is infinite. That was its original design purpose. Your arguments are in support of a contention that the original Saint Petersburg Game only has philosophical and historical relevance rather than current real-world economic implications. For example, this might imply that the underlying theoretical assumptions of the Saint Petersburg Game are so outlandish that they cannot be realized in any real-world terms. --Brokenfixer 17:37, 25 January 2006 (UTC)


 * The original Saint Petersburg Game itself predates modern utility theory, and uses the word money to denote payouts. Saint Petersburg justifies and motivates modern utility theory, which provides a framework to convert money into utility in a non-linear fashion. Lo and behold, assuming non-linear utility of money, Saint Petersburg with "infinite" monetary payoffs yields a finite expected _utility_. But while the economists are busy popping champagne corks, philosophers point out that the additional definitions utterly fail to address the underlying paradox. In modern economic terms, the Saint Petersburg Game is easily rephrased to pay out in utility units (linear by construction) instead of dollars. Thus if $1 is worth 1 hedon to you, but $1,024 is only worth 200 hedons to you, then the Saint Petersburg banker would pay you 1,024 hedons (which might be $500,000 or whatever). In Bernoulli's formulation of the Saint Petersburg Game, utility=money; in modern analysis, a Saint Petersburg Game should be declared to pay out in utility units instead of money. A second issue, the discussion of limited/unlimited bankroll, is swept aside by making the banker a visiting space alien doling out hedons, or having God act as banker. Next, philosophers scramble to argue for or against a universal absolute upper bound (maximum limit) on utility. Anyway, these issues are currently addressed in modern textbooks. Implying that the modern scientific community misinforms a gullible populace should be published as Original Research. --Brokenfixer 17:37, 25 January 2006 (UTC)


 * The Saint Petersburg game has to stop when the opponent's assets can no longer cover the bet. If that isn't obvious to economists and philosophers, it is certainly obvious to the ordinary folk whose unwillingness to pay large sums to enter the game is the nub of the supposed paradox. Because the jackpot grows exponentially, it quickly becomes bigger than the total assets of any conceivable opponent—in just a few dozen plays. The value of the game is half the initial bet times the number of plays. Since the number of possible plays is finite, the value of the game is finite. In fact it only grows logarithmically with the assets of the opponent.


 * I'd love to think this was original research, but it is pretty old and standard math. The Saint Petersburg game is just a variation on the famous Martingale (roulette system) betting strategy: bet a dollar on red; if you lose, double your bet. Keep doing this until you win. The analysis is essentially the same: your expected winning for each play is half the initial bet, and the game can't go on forever because you will exhaust your bankroll if there is a long enough run of black.
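The martingale analysis can be written out explicitly: naively you "always" win $1, but once the finite bankroll and the ruin case are accounted for, the expected profit on a fair coin is exactly zero. A sketch, assuming a $1 initial bet and a bankroll of 2^m - 1 dollars (just enough to cover m successive doublings; these parameters are illustrative assumptions):

```python
def martingale_expected_value(m):
    """Expected profit of a doubling martingale with bankroll 2**m - 1.

    Winning at any round j nets exactly $1 (the stake 2**(j-1) minus the
    2**(j-1) - 1 already lost). With probability 1 - 2**-m some round is won;
    with probability 2**-m all m rounds lose and the whole bankroll is gone.
    """
    p_win = 1 - 2.0 ** -m
    p_ruin = 2.0 ** -m
    bankroll = 2 ** m - 1
    return p_win * 1 - p_ruin * bankroll

if __name__ == "__main__":
    print(martingale_expected_value(10))
```

The near-certain $1 profit is exactly offset by the rare total loss, which is the same structure as St. Petersburg's tiny probabilities of enormous payoffs.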


 * I've edited the mathematical analysis section to make clear that the notion that the game has infinite expected value is still widely accepted in economics and philosophy.--agr 16:10, 26 January 2006 (UTC)


 * Wow, interesting that one can still discuss so much about this old stuff. My colleagues and I are all very amazed... :) Obviously the expected value is infinite by mathematical proof, which does *not* involve any check of the feasibility of the game whatsoever. So, if somebody says "Hey, that's unrealistic, because - limited bankroll, limited time to play - etc.", then I could just answer: "Yes, so what. It's math. It's theory. It ain't reality!" :)


 * Another thing is whether or not it is an unresolved paradox. A paradox is either logical ("This sentence is wrong.") or - as the term is often used as well - a fact derived from reasonable assumptions that obviously conflicts with reality. In the latter case (e.g. here), one might then ask whether the initial assumptions were perhaps not so reasonable. That led, in the case of the St. Petersburg paradox, to the development of utility theory.


 * Now, a more technical remark: "In modern economic terms, the Saint Petersburg Game is easily rephrased to pay out in utility units" This is only true for unlimited utility functions. There is a lot of evidence nowadays that this is not a particularly good assumption. (By the way, bounded utility functions do not have to be constant above a certain value, but can be monotone increasing everywhere. - That is a point that the Stanford encyclopedia got wrong...) There is also another way out: allowing only for lotteries with finite expected return, since that would be a reasonable restriction for real-life lotteries. Then one can prove that sublinear utility functions exclude any occurrence of St. Petersburg-type paradoxes. (That has been proven by Arrow.) So, basically, adapting one of the assumptions of the game to real life makes the lottery disappear without thinking about whether it is feasible. Hence the paradox is gone, since what is left is the pure mathematical fact that there is a lottery (i.e. probability measure) with infinite expectation value. - And that is not so paradoxical, I think...


 * Marc 129.132.146.72 12:10, 27 January 2006 (UTC)

Subsection on expected number of tosses
I have reluctantly removed the well-intentioned subsection, written by Taisuke Maekawa of Japan, because knowing the expected number of tosses does not provide a way to evaluate the game. The expected number of tosses is indeed 2 as he says, but since the value of the pot doubles with each toss, the value of the game is not the expected number of tosses times the initial bet. Here is the deleted section:


In fact, from the problem statement:

Because $2^{k-1}$ dollars are won when the number of tosses is k, one only has to calculate $2^{k-1}$ dollars after figuring out the expected value of k in one game.

Let this expected value be μ. The expected value μ of the number of coin tosses when playing one game is the sum of each possible number of tosses multiplied by the probability of that number occurring, from tossing once to infinitely many times.


 * $$\mu=\sum_{i=1}^\infty \frac{i}{2^i} = 2.$$

From the expected value μ of the number of tosses, the amount of money expected is 2 dollars,

because $2^{\mu - 1} = 2$.

Thus, the amount of money that can be expected is obtained.

--agr 14:19, 23 January 2006 (UTC)


 * Could somebody with a little knowledge of Japanese correct this on the Japanese version as well? I saw the wrong formulas there, but that's as much as I could get ;-) and there was no discussion page...

Marc84.72.31.59 19:19, 23 January 2006 (UTC)

--T.Maekawa 26 Jan 2006

Your criticisms are worthless.

You misunderstand.

Do you understand?


 * Please look at the paper you submitted as a reference http://www.date.hu/efita2003/centre/pdf/170.pdf. On page 868, in the middle of the page, it shows the same calculation that you do, but it says "The average number of runs of a play ... amounts to 2 (SCHEID 1992, p. 1141). This does not imply that the average prize is a_2 = 2^(2-1) = 2." This appears to contradict what you have written. Perhaps this is the misunderstanding.---agr 03:13, 27 January 2006 (UTC)

T.Maekawa 27 Jan 2006

Whatever you write, those who understand the subject know that it is mathematically correct. These are empty arguments. I don't care anymore. Or please show me a proof that the analysis is mathematically incorrect.

Calm down, nothing personal. The proof that it is incorrect has been already given above. Marc 129.132.146.72 11:54, 27 January 2006 (UTC)

Think again. It's simple. T.Maekawa 27 Jan 2006

I thought again. Sorry! Marc129.132.146.72 15:53, 27 January 2006 (UTC)

The "expected frequency of toss" is called the "expected number of tosses" (technical term). It is an elementary fact of probability that the expected number of tosses of a game does not enable one to find the expected value, except under very special conditions ("linearity" of payoff) that do not apply here since the payoff is exponential, not linear. A linear payoff has the form ak+b where k is the number of tosses in a game and a, b are constants. Zaslav 00:52, 30 January 2006 (UTC)
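Zaslav's distinction can be illustrated numerically: the expected number of tosses converges to 2, but the expected payoff E[2^(k-1)] diverges, because expectation does not commute with an exponential payoff. A quick check using truncated sums (truncation at 60 terms is an arbitrary choice):

```python
def expected_tosses(n_terms):
    """Truncated sum of i / 2**i, which converges to 2."""
    return sum(i / 2 ** i for i in range(1, n_terms + 1))

def expected_payoff(n_terms):
    """Truncated sum of 2**(i-1) / 2**i: every term is 1/2, so it diverges."""
    return sum(2 ** (i - 1) / 2 ** i for i in range(1, n_terms + 1))

if __name__ == "__main__":
    # The fallacy computes 2**(E[k] - 1) = 2**(2 - 1) = 2 and calls that the
    # expected prize; the actual E[2**(k-1)] grows by 1/2 with every term.
    print(expected_tosses(60))
    print(expected_payoff(60))
```

With 60 terms the first sum is already 2 to within rounding, while the second is 30.0 and keeps growing without bound: 2^(E[k]-1) and E[2^(k-1)] are very different quantities.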

Yes, but unfortunately Mr. Maekawa seems to be resistant to this (or any other) argument... :( Rieger 07:15, 30 January 2006 (UTC)


 * I removed this section again. Perhaps Maekawa-san can find a colleague with a better command of English who can explain our concerns. --agr 04:03, 8 February 2006 (UTC)

Update of page
I did a major update. In particular I changed the structure: I found it a little bit unnatural to divide between "economical" and "mathematical" analysis, since both use math and just study different solution strategies ("utility" vs. "finite game" vs. "iterated game"). I hope nobody is angry with my changes, and I am very much looking forward to ideas on how to improve the page further! Marc 129.132.146.72 15:50, 27 January 2006 (UTC)

Didn't you see my other answer? I am not angry, but I feel sorry that the other mathematical analysis was deleted. It is really simple and the most proper answer. You deleted it, but this Wikipedia is free to rewrite. It is a pity that there are people such as you. I think I or other people will repost the answer someday. I'm very tired now. T.Maekawa 28 Jan 2006


 * Dear Taisuke, there can be several valuable opinions about almost anything, but not about mathematical computations. There was a flaw in your computation, as has been pointed out to you before. Please do not post this again. Thank you.

Marc 83.77.229.46 20:37, 29 January 2006 (UTC)

There is no flaw, as can be proved with probability theory. I just would like to show the truth. T.Maekawa 8 Feb 2006

I'm not so happy with the two latest changes: (1) In my opinion, the mention of a googolnaire is quite unnecessary and just adds a lengthy paragraph (since the word has to be explained - I didn't know it either). Hence I would strongly recommend deleting it again. (2) The discrepancy between the prediction of the naive expected value decision model and the real-life decisions in the finite St. Petersburg game is being watered down with every change. Yes, it's not infinite here and $3 there, but it's still a factor on the order of 10.


 * The reason for the googolnaire example is to show that the value of the St. Petersburg game is bounded by the laws of physics as well as the laws of macroeconomics, and that therefore the assumption of an infinite casino is unjustifiable. I think that is an important point to make.


 * As for your second concern, if I understand it right, I think the problem is the sentence "In practice, no reasonable person would pay more than a few dollars to enter." Certainly in the context of the infinite casino version, that is a safe thing to say. Any sum offered pales in comparison to an infinite payoff. But in the finite game, it seems to me one must be much more careful. The first obvious question is how much is "a few"? $3? $10? $20? The second question is how do we know? Have there been proper surveys? What information were the participants given if there were?


 * Suppose someone created an actual St. Petersburg Lottery with a million dollar max payout per ticket and auctioned tickets off in large numbers, say on an eBay-style web site. Each ticket would have a serial number. After the auction closed, the tickets would be issued to the winners and then a drawing would be held that would determine the value of each ticket based on the rules of the St. Petersburg game. (There is an easy and safe way to do this using a cryptographic hash function: hash the drawing results with the serial number and use the bits in the hash as the coin toss results.) A proponent of the naive expected value decision model would argue that the selling price of these tickets would be close to their expected value of $10. I'd mortgage my house to buy as many tickets as I could get if they were selling at $3.
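The hash mechanism sketched in the parenthetical might look like this. Everything here (the serial format, the choice of SHA-256, the bit convention for heads/tails) is an illustrative assumption, not anything specified in the discussion:

```python
import hashlib

def ticket_payout(drawing_result: str, serial: str, cap: int = 1_000_000) -> int:
    """Derive a St. Petersburg payout from a public drawing and a ticket serial.

    The hash bits stand in for coin tosses: a 1-bit is heads (pot doubles),
    a 0-bit is tails (game over). The payout is capped at the stated maximum.
    """
    digest = hashlib.sha256(f"{drawing_result}:{serial}".encode()).digest()
    payout = 1
    for byte in digest:
        for shift in range(7, -1, -1):
            if (byte >> shift) & 1:      # heads: double the pot
                payout *= 2
                if payout >= cap:
                    return cap
            else:                        # tails: stop
                return payout
    return min(payout, cap)

if __name__ == "__main__":
    print(ticket_payout("drawing-2006-02", "000123"))
```

Because the hash is fixed once the drawing result is published, anyone can recompute their own ticket's payout, which is what makes the scheme auditable.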


 * What about tickets with a billion dollar cap? They should certainly sell for more than the million dollar tickets. Maybe utility theory would come into play and they would sell below their $15 par, but one would have to do very careful studies to prove there was a discrepancy, much less attribute it to declining utility vs. other explanations (e.g. risk premium).
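The quoted par values can be checked with a short finite sum. A sketch, assuming the ticket's payout is simply capped at the stated maximum:

```python
import math

def capped_ticket_value(cap):
    """Expected value of a St. Petersburg ticket whose payout is capped.

    Rounds whose payoff 2**(k-1) fits under the cap contribute $0.50 each;
    all longer runs pay the cap itself.
    """
    rounds = int(math.log2(cap)) + 1
    return 0.5 * rounds + cap * 2.0 ** -rounds

if __name__ == "__main__":
    print(capped_ticket_value(1_000_000))      # roughly $10.95
    print(capped_ticket_value(1_000_000_000))  # roughly $15.93
```

So a $1 million cap gives a value near $10 and a $1 billion cap near $15, consistent with the figures above: a thousandfold increase in the cap adds only about $5.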


 * In short, I think it is far from obvious how one can use the finite casino St. Petersburg game to disprove the naive expected value decision model. --agr 22:28, 1 February 2006 (UTC)


 * First, please do not mix up St. Petersburg with its iterated variant. ("Buying as many tickets as possible...")
 * Second, there are surveys on how much people would be willing to pay, and it's much less than $10. (I don't remember the number, but it was more like $3.) The point is that there is no reason why the expected return should be THE rational criterion. And the St. Petersburg paradox illustrates this drastically.
 * Third, this thing is a game in the mathematical sense. Practical limitations do not apply. (Disclaimer: I'm an applied mathematician, so I think I'm allowed to say this...! ;-) Hence the distinction between the "real" paradox and the "finite" one.
 * Finally, the googolnaire is not necessary to prove the limitations of a practical implementation of the game, since that's pretty obvious from the second-to-last line (and all the theory before). - Don't you think so? It does not clarify, it just adds a fancy word that needs a lengthy extra explanation.
 * Okay, and now I go redo the changes of our Japanese hobby-mathematician again... :)


 * Let me deal with googolnaire first. It's just an example, of course. The paragraph isn't there to explain the word but to point out that the laws of physics place an upper bound on the value of the game. I think that helps make clear that the assumption of an infinite casino differs from the usual infinite-quantity assumptions one makes in other areas, e.g. electrical engineers assuming insulators have infinite resistance, air-conditioning designers assuming the atmosphere has infinite heat capacity, ichthyologists assuming the ocean is an infinite sink for solutes. These are approximations that produce reasonable results in that doing the same calculations with realistic finite numbers does not materially affect the outcome. Not so with St. Petersburg. Perhaps the next-to-last line should suffice; however, the claim that the infinite value of the infinite casino version has practical implications is so widespread that I think the added example is needed.


 * Yes, you can view the St. Petersburg game as a mathematical exercise, but it is almost always used to prove principles in economics, where practicality is of the essence. The putative paradox is that people do not assign an infinite value to the game. Well, if the value of the game isn't infinite on any practical basis, why is that a paradox?


 * That brings us to the question of valuing the finite casino version. There are two issues here: do people undervalue the game and if so why. I found one paper online with references to some surveys; I'm curious how they were conducted. I accept your point that the iterated game is conceptually different from a single play, though it is far from clear the value of the former is not a good indicator of the value of the latter, particularly in the case of a $1 million cap version. The $1 million version is not that different in its payout characteristics from many state and national lotteries.  --agr 19:33, 5 February 2006 (UTC)


 * Thanks for the reference, I actually know the paper quite well... :) I didn't have a look into the quoted surveys, but it's a given that expected return does not describe people's behavior well, even for finite games. - But let me ask back again: Why should it be rational to play according to expected return? There is no reason (theoretical or descriptive) supporting this. The St. Petersburg Paradox just brings it nicely to the point that the expected return is in fact not a good choice as a decision model, that's all.


 * After all these discussions I would like to work on a new page now. So let's just leave it like it is. The googolnaire doesn't hurt me if you like it and I'm quite happy with the rest now. It's informative, shows many sides of the problem, even more obscure philosophical questions and hence it's a nice article, I guess. - Time to work on something else! Rieger 00:54, 6 February 2006 (UTC)


 * Thanks for your help on this article. Let me just try to briefly answer your question. People's economic behavior is a complex question that deserves rigorous thinking. The St. Petersburg paradox is a poor example to use because the infinite casino version is spurious and it is far from clear that the finite version proves anything at all, since it is so similar to real world lotteries that sell quite well at a premium. --agr 15:32, 7 February 2007 (UTC)

Analysis of number of tosses
I found a flaw in my proof, namely that the coin toss could continue giving heads infinitely many times; this introduces the error of an infinitesimal. But $$\lim_{k \to +0}(\mu-k)=\lim_{k \to +0}(2-k)=2$$ T.Maekawa 2006/04/24


 * I'm sorry Taisuke San, but your infinitesimal argument is not correct. See the discussion in Subsection on expected number of tosses above.--agr 12:29, 24 April 2006 (UTC)

That is clearly correct. But I have another way of thinking about infinitesimals, which is that an infinitesimal surely has a quantity. T.Maekawa 2006/04/24


 * If you have a new way to think about infinitesimals, you must publish it somewhere else. Wikipedia is not for original research.--agr 11:35, 25 April 2006 (UTC)

Why do you write such things? Why do you remove correct answer? You must reconsider what you are doing about this problem. T.Maekawa 2006/04/26


 * Please see No original research --agr 10:30, 26 April 2006 (UTC)

Don't compose graffiti! Why don't you understand the meaning of this problem and Wikipedia's policy? --T.Maekawa 2006/04/28

Accuracy dispute
T.Maekawa insists on including a section in this article titled "Analysis of number of tosses" which claims to show that the expected value of the infinite St. Petersburg game is 2. This result contradicts every written authority on the subject and is mathematically incorrect. Even if it were true, it constitutes original research that should be published elsewhere.--agr 15:43, 28 April 2006 (UTC)

It is foolish that an encyclopedia doesn't have the correct answer. The answer doesn't break the rule of no original research.

There can be various answers for this problem. I just gave the most proper answer. T.Maekawa 2006/04/29


 * Then please cite a source.--agr 18:58, 28 April 2006 (UTC)

I heard that the answer is correct from a professional mathematician. T.Maekawa 2006/05/01


 * Sorry, that does not count as a published source for Wikipedia. But if you can put this person in contact with us, perhaps we can explain what our concern is here and this person can explain it to you. --agr 03:40, 1 May 2006 (UTC)

That is not discussion; it is just contradiction. I will not tell you. T.Maekawa 1 May 2006

You never lose though...
I'm confused as to why this is a paradox. I would rationally agree to pay ANY amount to play this game, because as far as I can tell, it's impossible to lose any money at all playing it. Is this true? Let's say I put down a duodecillion dollars as my entry fee, more money than has ever actually existed anywhere, and the sequence ends on the first toss.

I have lost nothing. I get back my entry fee, I just didn't gain anything.

So if there's a chance of winning, and even of winning big, and effectively ZERO cost to entry, why would I ever balk at putting down all the money I own? I mean, I could get lots back! And my initial stake is completely safe. It's as if I didn't put anything up at all, and this casino is giving away free money.

If somehow it's possible to lose money playing this game, could you explain it? I don't see it in the article... Fieari 18:26, 28 August 2006 (UTC)

It's not possible to lose money in this game, unless you pay an entrance fee to play. Then you stand to lose that entrance fee. The question is: how much would you pay to play?--agr 18:46, 28 August 2006 (UTC)


 * Ah! My confusion was that I assumed that the entrance fee WAS the starting pot.  That's what tripped me up.  Fieari 19:09, 28 August 2006 (UTC)

How is this an Average?

 * $$=\sum_{k=1}^\infty {1 \over 2}=\infty.$$

I'm rather confused about how this is supposed to represent the average outcome. It looks to me like this is just the sum of all possible outcomes. Shouldn't the average be the sum of all the outcomes divided by the number of trials? i.e. shouldn't it look like this:
 * $$=\sum_{k=1}^\infty {1 \over 2k}=undef.$$

or if I've confused how sigma notation works, it might also be this:
 * $$=\dfrac{\sum_{k=1}^\infty {1 \over 2}}{\infty} = undef.$$

Ziiv 09:01, 23 November 2006 (UTC)


 * Hi Ziiv. You should look at the first line of the calculation first; it goes
 * $$E=\frac{1}{2}\cdot 1+\frac{1}{4}\cdot 2 + \frac{1}{8}\cdot 4 + \frac{1}{16}\cdot 8 + \cdots$$
 * To find the average of something that is 1, 2, 3, 4, 5 or 6 with equal probabilities, you say $$\frac{1+2+3+4+5+6}{6}$$, or $$\frac16\times1+\frac16\times2+\frac16\times3+\frac16\times4+\frac16\times5+\frac16\times6$$. Here, each $$\frac16$$ can be interpreted as the probability of the corresponding outcome. The principle in the calculation of E is the same.


 * Some math teachers might object to the result "$$=\infty$$", insisting that it is undefined, but for the purposes of a general encyclopedia, I think it is fine as it stands.--Niels Ø 12:47, 23 November 2006 (UTC)
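The weighted-average calculation above can also be checked numerically: every term (probability times payoff) is exactly 1/2, so the partial sums grow without bound. A small sketch (the function name is made up for illustration):

```python
from fractions import Fraction

def partial_expected_value(n):
    """Sum of the first n terms of E = (1/2)*1 + (1/4)*2 + (1/8)*4 + ...

    Each term (probability times payoff) is exactly 1/2, so the
    partial sum is n/2 -- it grows without bound, hence E is infinite."""
    return sum(Fraction(1, 2 ** k) * 2 ** (k - 1) for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, partial_expected_value(n))   # 5, 50, 500: always n/2
```

Exact rational arithmetic (`Fraction`) avoids any suspicion that floating-point rounding is responsible for the result.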

Okay, I'm confused too. Isn't there supposed to be a k variable inside the sum?
 * $$=\sum_{k=1}^\infty {1 \over 2}=\infty \,.$$

should be
 * $$=\sum_{k=1}^\infty {1k \over 2}=\infty \,.$$

At least I think. The infinite sum of 1/2 is just.... 1/2. Right? mathwhiz 29  04:57, 27 February 2009 (UTC)


 * Uh, no.
 * $$\sum_{k=1}^1 {1 \over 2}={1 \over 2}$$
 * $$\sum_{k=1}^2 {1 \over 2}={1 \over 2}+{1 \over 2}$$
 * $$\sum_{k=1}^3 {1 \over 2}={1 \over 2}+{1 \over 2}+{1 \over 2}$$
 * $$\sum_{k=1}^4 {1 \over 2}={1 \over 2}+{1 \over 2}+{1 \over 2}+{1 \over 2}$$
 * and so forth. You probably shouldn't edit any articles on mathematical subjects. —SlamDiego←T 05:22, 27 February 2009 (UTC)

Silliness
Of Cramer, the article said
 * It is interesting to notice that in considering only gains, rather than the final wealth, he anticipated the key idea of the framing effect, found 250 years later, which nowadays plays an important role in Behavioral finance.

Falling victim to a framing effect is not recognizing framing effects.


 * You may say so, but I think it's a funny side story nevertheless... But maybe we shouldn't mention funny side stories in Wikipedia, guess you're right. Rieger 05:28, 19 February 2007 (UTC)


 * It's a bald fact, no matter whether I say it or not. The point is not to avoid telling the story, but to tell it correctly.  If someone feels that it is important to mention explicitly the notion of framing effects here, that's perfectly fine.  But Cramer didn't here anticipate them.  He was one in a line of victims of framing effects, that line stretching millennia before and centuries after him. —SlamDiego 06:45, 21 February 2007 (UTC)

The article also declared that 1728 was “a couple of years” before 1738. —SlamDiego 23:57, 4 February 2007 (UTC)


 * Isn't it? (looking sillily puzzled) Rieger 05:28, 19 February 2007 (UTC)

“one cannot buy that which is not sold”
I'm not sure what "or by noting that one cannot buy that which is not sold" is supposed to mean in this context. --agr 18:40, 5 February 2007 (UTC)


 * The testimony of supposed experts that people will not buy the lottery is notable, but the primary evidence of paradox is that people indeed do not buy the lottery. Well, the original assumption of a linear utility function would apply both for the buyer and for the seller.  Few would buy it because few would sell it.  The point was made by Samuelson, amongst others, and it is (on my reading) found in the article.  But the summary line seems to have been written confusing that point with an assumption of diminishing marginal utility, which is an orthogonal proposition. —SlamDiego 22:19, 6 February 2007 (UTC)


 * I fail to see this "solution" mentioned in the body of the article at all. Can you please indicate where you see it? I can't find any mention of Samuelson in the article either, nor in the references. The short summary at the top of the article should of course mention only solutions that have been proposed and that are easily detectable in the article that follows. iNic 03:50, 13 March 2007 (UTC)


 * I haven't looked at the body of the article in quite a while, and I don't promise that the solution remains in what is written thereïn; that, or even a misreading of the earlier body on my part, wouldn't mean that the lede should be crippled, but that the body should be expanded, and perhaps that a tag should be placed in the body.  If, in the interim, you wish to put a tag in the body, go ahead.
 * I note that this is yet a different excuse for your bad faith editing. So far, you've demanded (1) impossible condensation, (2) a citation which you yourself could have made with a copy-and-paste, and now (3) that the lede be crippled until the body is not.
 * Scare quotes are intellectually dishonest. A solution doesn't become somehow less a solution by virtue of cheap punctuation.
 * —SlamDiego 22:00, 17 March 2007 (UTC)


 * I think they refer to the argument: as this kind of lottery ticket isn't available on the market anyway, we don't have to bother explaining people's supposed behavior if they could buy them. Even if it's a strange argument (which I think it is), it could be expressed in a clearer way. iNic 01:16, 7 February 2007 (UTC)


 * OK, these comments make the meaning of the words I questioned clearer, but are you saying "this kind of lottery tickets isn't available on the market" because no one has infinite resources or because no one has offered to host the game at some finite price with whatever resources they do have? --agr 05:13, 7 February 2007 (UTC)


 * Both I think. The argument is about pointing out that this is a purely academic and theoretical issue. iNic 11:04, 7 February 2007 (UTC)


 * No, it isn't about finitude of resources at all, and it isn't about claiming that the issue is purely theoretical. Moving the paradox into “a purely academic and theoretical” realm automatically dissolves it.  The paradox is not one of mathematics per se; it is an empirical paradox of behavior.  To separate the propositions from the real world is to engage in the trivial exercise of creating a purely formal system with two baldly incompatible axiomata. —SlamDiego 01:41, 8 February 2007 (UTC)


 * My remarks on utility functions would be irrelevant in the former case. I said that, if the original, implicit presumption of linear utility were true, then the value of the lottery to the seller would be infinite.  So, what do you think that (under that presumption) the reservation price of a rational seller would be? (I have, BTW, actually seen this d_mn'd lottery being sold and bought, albeït not by people whom I regarded as rational or as honest.) —SlamDiego 07:55, 7 February 2007 (UTC)


 * The claim is not that we don't have to bother explaining how people would behave if they could buy it (though it is an implication that such explanations are not much subject to direct empirical test). Rather, the claim is that the paradox of observed behavior dissolves even under the original utility presumption, so long as we apply that presumption consistently (to seller as well as to buyer).  There's nothing particularly strange about it unless one doesn't apply real-world economics to thinking about the economy of the real world. (Anything and everything would be a paradox under the description of some fantasy world.)  As to expressing it more “clearly”, feel free to have at it.  My insistence is not that the presentation be spartan, merely that it actually be correct, which it was not. —SlamDiego 07:55, 7 February 2007 (UTC)


 * OK thank you, but I think it's better that someone who thinks this is a great argument rephrases this sentence, making it clearer to the average reader. I don't think it's a general opinion that every "fantasy world" must contain paradoxes, for example, so the explanation has to be based on some other, more common, intuition. iNic 11:04, 7 February 2007 (UTC)


 * No one has claimed that every fantasy world has paradoxes. Rather, what would be a paradox in the real world is usually just a mere inconsistency in a fantasy. (It is not, for example, a paradox that Charlie Brown is still a child.) A fantasy world in which, ex hypothesi, people both should buy a lottery but do not is about as interesting as one in which unicorns should like cheese but do not.
 * Common intuition is not a protocol of logic, of mathematics, or of science. While it would be good for things to be accessible, it is far more important for them to be correct.
 * —SlamDiego 01:41, 8 February 2007 (UTC)


 * BTW, all of this returns to relevance for the generalized problem identified by Karl Menger. That is to say that boundedness is not the only resolution for the more ferocious St Petersburg paradox. —SlamDiego 10:51, 7 February 2007 (UTC)

The wording in the article does need clarification. I think it is essential to distinguish the finite and infinite cases. Asking a rational person to assume that a casino has infinite resources is arguably nonsensical and "one cannot buy that which is not sold" would apply. If the casino has finite resources then there is no paradox. The expected value is easily computed and quite modest for any conceivable bankroll on Earth. The only question is whether the public would pay the full expected value for a finite game where a profitable payoff is extremely unlikely, would pay less, or whether they would pay a premium to play, as is the case with state lotteries with somewhat comparable structures. The answer probably depends on the skill of the casino's marketing department. --agr 11:52, 7 February 2007 (UTC)


 * I don't disagree. I thought that the article made the distinction, but I also think that the presentation is cluttered and confused.  As for me, I don't really want to invest much in editing this article at this time; I have other fish burning on the range. —SlamDiego 01:41, 8 February 2007 (UTC)

Okay, I want to make a few points explicitly:
 * The St Petersburg paradox is not a paradox of mathematics per se; it is an empirical paradox of behavior. (It is, thus, more akin to the Allais paradox than to Russell's paradox.)
 * There are three classes of relevant evidence here:
 * Introspection
 * Utterance
 * Observed failures to purchase the lottery


 * Introspective evidence is exactly one observation. Joe can tell us things about his introspection, but we only get that utterance, not the introspection itself.  Thine own introspection is the only one upon which thou canst call.
 * Utterance is, by itself, highly fallible evidence. People have said that they would buy X, but then failed to do so; and that they would not buy Y, but then done exactly that. (Thus, if we presume that utterance is an accurate reflection of introspection, introspection too is highly fallible.)
 * What gives weight to our introspections and to the utterances of others about the St Petersburg paradox is exactly that people do not buy the lottery.
 * The classic St Petersburg paradox involves at least three questionable presumptions:
 * That the utility of money is linear.
 * That producers can sell the lottery.
 * That producers would sell the lottery if they could.

—SlamDiego 01:41, 8 February 2007 (UTC)


 * Why do you question whether producers can and would sell the lottery? I have not found any reports of it being done, but surely some professor has offered to play it with his students. I'd be happy to host the game for a suitable fee. Obviously there are practical problems: gaming laws and designing a cheat-proof way to produce the 50-50 outcomes. (I'd use casino dice dropped down a plexiglass labyrinth, with even faces counting as heads and odd faces tails, or one could pick the three head values at random, perhaps before each toss.) But I can't see why a real casino couldn't or wouldn't offer the game. I can even imagine a slot machine version with the die read by a computer after the drop and then mechanically carried back to the top, to be launched again by the player with initial velocity determined by how hard the lever is pulled. Imagine the hype the casino could generate, with so many learned authorities saying the game's value to a player is infinite.


 * Why won't you attend to what I have said repeatedly? The original presumption of the problem was that the utility of money is linear. Unless that is your utility function (or your utility function otherwise imputes infinite utility to the lottery), your offer to sell is irrelevant to the point.


 * Further, since another presumption was exactly that unlimited pay-off which troubles you, any offer on your part would be either not of the classical lottery, or fraudulent. The finite game that you also want to discuss is worth discussing, but it's not that of the original paradox. —SlamDiego 02:43, 8 February 2007 (UTC)


 * Sorry, it wasn't clear to me that you were only discussing the infinite game. Now I understand your points.  --agr 03:01, 8 February 2007 (UTC)


 * Well, [a] that's the classic problem, and [b] if the lottery is finite then the second presumption (that producers can sell the lottery) is also no longer particularly questionable. —SlamDiego 03:13, 8 February 2007 (UTC)


 * It's interesting that there are studies asking what people would pay to enter the game, but I haven't heard of studies that ask what fee people would charge to host it.--agr 02:32, 8 February 2007 (UTC)


 * That lack of investigation is a consequence of the same blindness that prevents people from considering that buyers would be unable to purchase the lottery if the utility of money is linear, or otherwise implies that the utility of the lottery is infinite. —SlamDiego 02:43, 8 February 2007 (UTC)

I deleted the part of the sentence under discussion here as it's evidently very unclear what it should mean. When someone finds a clear way to express all thoughts here in a few words she is free to add that to the article. iNic 10:33, 8 February 2007 (UTC)


 * Your deletion was vandalism, and a last warning has been placed on your user talk page. If you feel that the point can be briefly made more accessible in that first paragraph, then you should attempt an edit to accomplish that much.  But, as you have repeatedly been told, you cannot justify making an article wrong with the point that it is easier to read when wrong. —SlamDiego 22:03, 8 February 2007 (UTC)


 * Ummm, can we calm this down a bit? Please see WP:Assume Good Faith. On the other hand, I think the text in question is too cryptic and needs to be improved, but there is no need to delete it until we agree on a replacement. SlamDiego, is that text a quote from somewhere? Can you provide a citation we can use?--agr 22:59, 8 February 2007 (UTC)


 * One is supposed to begin with an assumption of good faith; one is not obligated to maintain it indefinitely. The text is not a quote, but the only potentially citable source that I have personally encountered was a remark in something by Samuelson.  His remark was hardly more explicit than what I wrote. —SlamDiego 04:00, 9 February 2007 (UTC)

Ooops, seems to be some quite strong feelings involved here! Well that's good! :-) It means this cryptic wording in the article will be clarified soon. BTW, calling someone a vandal that isn't a vandal can get you into trouble as it's a violation of WP:NPA. iNic 23:51, 8 February 2007 (UTC)


 * I wouldn't have called your action “vandalism” if I couldn't make the case. And, as I have said, I have other fish with which to deal.  It is utterly inappropriate for you to willfully damage this (or any other) Wikipedia article with the presumed justification that you can thereby command others to flesh it out. —SlamDiego 04:00, 9 February 2007 (UTC)


 * I don't see how it is vandalism to remove a phrase that has been pointed out as unclear and for which an improvement is searched in an ongoing discussion. I didn't like the phrase either, but I was too lazy to change it, so I was basically just lucky not to mess up with people and not to be called a vandal, looks like... ;-) Rieger 18:36, 21 February 2007 (UTC)


 * Absolutely everything in Wikipedia is unclear in the sense of its meaning not being evident to someone. Anyone has the prerogative to suggest that anything be made more clear, and anyone has the prerogative even to try to make things more clear; but no one is licensed to simply delete content because he or she thinks that it could be clearer, or all of Wikipedia would be deleted. (Because someone would use that license exactly as iNic used the license that he thought that he could claim to have.) And if, at the margin, what keeps you from engaging in actual vandalism is luck, then I hope that you stay lucky. —SlamDiego 19:22, 21 February 2007 (UTC)
 * BTW, you should understand that iNic further insisted that, for the passage to avoid deletion, it must simultaneously be brief and explicitly express every point that had been made in the discussion. His was not a misguided, good-faith edit. —SlamDiego 05:23, 22 February 2007 (UTC)


 * :D Hey, go on! You're truly terrific! This page is getting my daily portion of entertainment! Never expected that of the good old St. Petersburg Paradox... By the way: Thanks to the extended recent discussion we now get a warning message that the discussion page is getting too long. Maybe somebody could clean it up a bit...? Rieger 22:22, 22 February 2007 (UTC)


 * And now you've outed yourself as trolling. —SlamDiego 04:02, 23 February 2007 (UTC)

Dennis L. McKiernan's fair coin
agr, here is a simple 50-50 generator: Take any coin, flip it twice. If it is TH, call that Y; if it is HT, call that N. If it is HH or TT, discard the observation and flip it twice again. —SlamDiego 02:51, 8 February 2007 (UTC)

(Note that, unlike the die (which could have a bias for evens or for odds), this generator has no bias.) —SlamDiego 02:55, 8 February 2007 (UTC)
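This pair trick (widely known as the von Neumann extractor) works for any fixed bias. A minimal sketch; the function names and the 0.9 bias are illustrative:

```python
import random

def fair_bit(biased_flip):
    """Produce an unbiased bit from any biased 0/1 source using the
    pair trick described above: draw twice, keep 10 -> 1 and 01 -> 0,
    and discard 00 or 11. It works for any fixed bias because
    P(1)*P(0) = P(0)*P(1)."""
    while True:
        a, b = biased_flip(), biased_flip()
        if a != b:
            return a   # the surviving pairs 10 and 01 are equally likely

# A heavily biased "coin": heads (1) with probability 0.9.
biased = lambda: 1 if random.random() < 0.9 else 0
bits = [fair_bit(biased) for _ in range(10_000)]
print(sum(bits) / len(bits))   # close to 0.5 despite the 0.9 bias
```

The dice variant mentioned below (keeping only the pairs 6-1 and 1-6) follows the same principle but discards 34 of the 36 possible ordered pairs instead of 2 of 4, hence far more discards.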


 * Even better would be to have an electron and measure if it has spin up or spin down in a specified direction. iNic 10:24, 8 February 2007 (UTC)


 * There are apparently magicians who can control coin tosses. Here is one paper that discusses the topic: http://www-stat.stanford.edu/~susan/papers/headswithJ.pdf. Measuring the spin of an electron requires a fairly complex apparatus. It would not be hard to hide circuitry in such a device that limited the maximum number of consecutive heads. See random number generator attack. The gaming industry has developed a large amount of technology for keeping casino dice fair and auditable. --agr 12:39, 8 February 2007 (UTC)


 * I wouldn't presume that the coin was tossed by hand nor that it was caught in the hand, just as you didn't have such things done with the die. The point is that McKiernan's scheme eliminates bias because
 * prob(H)·prob(T) = prob(T)·prob(H)
 * and, while a scheme based upon the same principle could be implemented with dice (eg, discard everything but pairs 6-1 and 1-6), it will either involve far more discards, or be far more complex. The gaming industry does not produce fair dice; it produces dice whose biases fall within small tolerances. —SlamDiego 21:50, 8 February 2007 (UTC)

This is getting a bit off topic. Your point is well taken that the gaming industry "produces dice whose biases fall within small tolerances" but those tolerances are quite tight and directly testable. If a real game is to be mounted, it has to be simple and easy to understand. Few lay persons are going to understand the McKiernan scheme or have the patience to sit through discarded throws. In any case, when you run your game, you can set the rules any way you want. :) --agr 22:59, 8 February 2007 (UTC)


 * I hardly think that the layperson can better understand bias tolerances than
 * prob(H)·prob(T) = prob(T)·prob(H)
 * and the less that the players really know what game they are playing, the less that the results are meaningful. Perhaps more importantly, unless one truly eliminates bias, one is needlessly replacing the St Petersburg auction with an approximation thereöf. —SlamDiego 04:14, 9 February 2007 (UTC)

On the orthogonality of the point
The Mengerian generalization of the original St. Petersburg problem replaces the original, implicit assumption of a linear utility function with any one of a general class of utility functions, and the original lottery with lotteries which bear the same relation to the new utility functions as did the original lottery to the original function. For each of the utility functions, old and new, a lottery can be found whose utility should (under expected utility maximization) be infinite.

All of those cases involved uncapped material pay-offs, but we can trivially construct a utility function which becomes infinite at a finite material pay-off. For example, if the cap were $1000, such a utility function would be
 * u(x) = tan(πx / $2000)

(Utility functions needn't be monotonic or continuous.)
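Under this (hypothetical) utility function the divergence is immediate: as the pay-off x approaches the $1000 cap from below, the argument of the tangent approaches π/2, so
 * $$\lim_{x\to 1000^-}\tan\left(\frac{\pi x}{2000}\right)=+\infty,$$
and a lottery whose pay-offs merely approach the cap can still carry unbounded expected utility.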

We could find many different lotteries that would not be sold, and hence not bought.

Now, “common sense” may of course tell us that someone could be found to sell a lottery with a cap of $1000, but that (which implicitly rejects the utility function assumed ex hypothesi) is beside the logical point. And, evidently, the sense of a great many people had told them neither that uncapped pay-offs were unrealistic nor that a universal, linear utility function was a dubious proposition. —SlamDiego 05:18, 10 February 2007 (UTC)

Running away…?
Just a suggestion: Don't you think that the discussion at this point is running away a little? So many words about just changing half a line... My suggestion: We just reformulate the sentence to make it clearer and that's it. (Personally, I don't think it's a necessary sentence, btw, but some people think it's necessary, so I left it in.) My suggestions:

Rieger 05:25, 19 February 2007 (UTC)
 * Replace “one cannot buy that which is not sold” by "such type of lotteries in reality could not be offered".
 * Erase: "simply"
 * Everybody calm down. - Hey, it's not an important entry (I wrote some of it, so I hope I don't make anybody angry if I say this...), and it's a quite unimportant little detail of the entry anyway... ;-) So, let's just edit it and then use all our energy for something more important...


 * The discussion was only running away if it wasn't moving to resolution. As it is, it seems to have got there.  I wish that it had got there sooner, but that's not an illustration of it having run away.
 * Your first suggested change seems to have missed the point in question. That point is not that such lotteries could not be offered, but that, even if they could, they would not be offered (and therefore could not be bought).
 * Why erase “simply”? The resolution of Samuelson & alii involves the least new complexity.  It may not be easy for some people, but it is surely simple.
 * After three people invested extended effort in arguing the point, your declaring it to be unimportant while expressing the hope that no one will be offended by your declaration seems rather disingenuous. Although I don't think that there's enough heat left for reïgnition, you shouldn't be throwing gasoline.
 * —SlamDiego 07:37, 20 February 2007 (UTC)


 * What is your basis for saying "even if they could, they would not be offered"? People make foolish offers all the time. And, in any case, the supposed paradox concerns the reaction of ordinary people to a proposed game. Formal surveys have been performed and most people come up with a number they would be willing to pay. I have never heard of any one responding "oh, they would never offer such a lottery." --agr 12:30, 21 February 2007 (UTC)


 * You are not holding onto context. My basis for what I have actually said is that there are limits to the amount of foolishness in this world.  The very reason that this problem was called a “paradox” was because of a presumption of general rationality; if the spread of foolishness in this world were sufficient to wave away seller rationality, then it would be sufficient to wave away buyer rationality, and we could then just say that people wouldn't pay much for the lottery because they were too foolish to pay more.  In the article and elsewhere, I have made a point of saying that few people (rather than no people) buy the lottery.  I have stated in this discussion that I have seen this lottery offered and bought, albeit by people whom I did not regard as particularly rational or honest.  And I have explained above that utterances are not good evidence (“People have said that they would buy X, but then failed to do so; and that they would not buy Y, but then done exactly that.”); that what gives the alleged paradox significance is that in fact we observe few people buying the lottery. (In other words, it is not just the verbal reaction that presents the problem.) The resolution of Samuelson & alii grants the classical presumption of the problem that
 * $$u[(X_1,p_1),(X_2,p_2),...]=u[\sum_{i}(p_i\cdot X_i)]$$
 * but shows that, when applied to the sellers as well as to the buyers, that presumption would explain the lack of observed purchases. —SlamDiego 13:14, 21 February 2007 (UTC)


 * A couple of points: First of all, there is an extensive literature on behavioral economics that is based on surveys. Certainly it is better to observe what people actually do in the market, but that is not always feasible. One could also simply frame the paradox in terms of how people respond to survey questions.


 * Second, can you say more about what you mean when you say "I have seen this lottery offered and bought"? When and where did this happen? Did the offerer claim infinite resources or did he leave it ambiguous? Is this published anywhere?


 * Third, could you provide a citation to the Samuelson resolution you are talking about? It seems to me we should include his explanation in the article with a citation. That would end this discussion.--agr 13:47, 21 February 2007 (UTC)


 * Hmmm… We are back to abuse of the word “couple”. ;-)


 * The fact that economic literature based upon surveys is extensive isn't contrary to anything that I've said. Even a good scientist may find herself in a situation within which the best affordable data isn't very good, and then do the best with it that she can.  In this case, better data than utterances are pretty much freely available.  If you framed a paradox in terms of how people respond to survey questions, then it would be a different paradox.  The Bernoullis and Cramer argued from considerations of ostensible “good sense”; not by reference to having conducted surveys of utterances.


 * I saw the lottery sold in the '90s in San Diego by one person to another. The fact that the contracted pay-out was potentially infinite was repeatedly stated.  The seller did not explicitly claim to have infinite resources, yet the buyer acted as if he believed that he might get just that.  I found the whole thing grotesque.  To the best of my knowledge, no one who participated or witnessed had any desire to write at length about it.


 * I don't recall with certainty where Samuelson made the point, though my guess would be that it was in Foundations of Economic Analysis. His wording was transparent to an economist, but would not be so for the typical Wikipedia user.  In any event, a citation won't make the point more or less valid; we are talking very simple economics here. —SlamDiego 15:21, 21 February 2007 (UTC)
 * I have located one instance of Samuelson making the point — perhaps the instance that I encountered before, though that's really not important — and quoted and cited it below. I don't see that the citation adds much, especially as Samuelson presents himself as merely conveying (rather than inventing) the argument. —SlamDiego 15:52, 21 February 2007 (UTC)

"The discussion was only running away if it wasn't moving to resolution. As it is, it seems to have got there." Well, as it seems to me, you are not quite there yet... ;-) I am sorry if I hurt anybody's feelings, but I always feel a little sorry if I see several pages of discussions that may or may not lead to a change of half a sentence in an article which is, by my surely limited insight, already quite good, whereas other articles are still in a hopeless state, but nobody has time to work on them. I feel even more sorry when I hear words of "edit wars" and "vandalism" used in such a discussion. I then wonder how many excellent authors of Wikipedia have given up on working because of such instances... Sure, there are points where discussions, and even long ones, are necessary. However, in my opinion everybody who gets involved in one should always reflect on whether it is worth his and the others' effort, looking at the potential outcomes and their impact on the article. In this case, MY OPINION is that the effort is getting too much. (Please don't say I'm edit-warring because of this. ;-)

I'll have a look at the article in a month or so and I am curious whether this discussion will have led to fruitful improvements on the article. If so, I would be delighted! Rieger 18:13, 21 February 2007 (UTC)


 * &lt;sarcasm&gt;Congratulations!&lt;/sarcasm&gt; There was enough heat for your gasoline to start a bit of a fire, albeït not much.  Of course, what you've managed to do is to get editing of this article again stirred up, when your professed desire is to have that set aside to get other articles instead edited. —SlamDiego 19:12, 21 February 2007 (UTC)

Samuelson Citation
Samuelson, Paul Anthony, “The St. Petersburg Paradox as a Divergent Double Limit” in International Economic Review v1 (1960) #1 (Jan) p31-7:
 * Paul will never be willing to give as much as Peter will demand for such a contract; and hence the indicated activity will take place at the equilibrium level of zero intensity. Zero cancels out infinity, so to speak.

(Italics in original.) Samuelson expresses this argument as one with which he agrees, but not as his own invention. —SlamDiego 15:46, 21 February 2007 (UTC)


 * Excellent, thanks. We should just use that quote in the article. We don't have to imply it was Samuelson's idea. Perhaps something like "As Paul Samuelson describes it ..."


 * On the San Diego incident, I'm curious, what was the actual result? --agr 17:38, 21 February 2007 (UTC)


 * Sorry for another unqualified remark, but don't you think that the Samuelson quotation is not that brilliant? I mean, even as somebody doing research in this area, I have a hard time deciphering this sentence; moreover, it does not prove its point, but seems merely to be an assumption. A typical reader who wants to get informed will probably only consider this gibberish. So, in my opinion, you either quote more (and hope that then it becomes clearer) or you better let it be... Rieger 18:04, 21 February 2007 (UTC)


 * The "one cannot buy that which is not sold" argument does not seem to me to be a satisfactory resolution of the "paradox", but if we are going to include it in the article it should be from a cited source. I find the quote a clearer statement of that argument than what we have now. If there is a better quote out there, we should use it, of course.--agr 18:41, 21 February 2007 (UTC)


 * You really ought to be satisfied with a proper economic argument about an economic problem. And, as I said below, the cite would just function as argumentum ad verecundiam. —SlamDiego 18:57, 21 February 2007 (UTC)


 * I already stated “His remark was hardly more explicit than what I wrote.” Samuelson's assertion is indeed not a proof; but it is also not an assumption.  It is, rather, a proposition the argument for which would be obvious to almost any economist (however peculiar it might be to some non-economist applying himself to this economic problem), and which argument is nested within the prior discussion above. —SlamDiego 18:57, 21 February 2007 (UTC)


 * I don't oppose Samuelson being quoted, but I don't think that it will serve the readers so much as it will soothe editors who had trouble with the point. Basically, it will function as argumentum ad verecundiam.


 * I don't know what the actual result of the coin tosses was. I saw the contract sold, but not the lottery delivered.  As I said, I found it grotesque. —SlamDiego 18:57, 21 February 2007 (UTC)


 * I don't care about the coin tosses. What was the price they agreed upon? And if you'd indulge my curiosity further, what did you find grotesque? If the offerer promised infinite pay out, it was fraud, but fraudulent transactions must come under the purview of economics, no?--agr 01:26, 22 February 2007 (UTC)


 * I don't recall the price, except that it was something fairly small. (I think that it was under US$100.) The pay-out was not capped, but it wasn't perfectly clear whether the seller was engaged in fraud or in magical thinking. Either would have been appalling in my eyes, and the complementary explanations for the behavior of the buyer likewise involved magical thinking or some darker motive. (I suggest the latter based upon personal knowledge of the buyer.) The fact that some human behaviour falls within the scope of economics doesn't mean that I have the stomach for it, and my personal knowledge of the participants made some sorts of study of them less appealing. —SlamDiego 05:35, 22 February 2007 (UTC)


 * Thanks, again. What's interesting to me is that the seller was willing to offer the game at a relatively low price. The buyer might have overpaid for reasons having nothing to do with his perceived value of the game or he might have been willing to go higher. But the seller could not have believed the game had anything like infinite value. It's only one case, of course, but it does support the survey and introspection data.


 * I do like your term "magical thinking." I think it gets to the heart of what I consider the true puzzle: why so many academicians take the supposed paradox seriously. I suspect the mathematical formalism has an aura of magical power that lets them ignore the underlying logic. One cannot simply sum an infinite series without checking that the terms do not at some point violate a boundary condition of the problem. Failure to do so is a common source of fallacy. The standard example is the gambler's double-the-bet strategy. I think most economists know that strategy and can easily explain why it does not work. The St. Petersburg game is simply a variant on that strategy, and has essentially the same solution, yet elaborate theories are still being created to explain it.  I find that quite remarkable. --agr 16:26, 22 February 2007 (UTC)


 * Yes; whatever the seller thought that he would actually be providing, he didn't value it infinitely. But part of the problem here and in the survey data is that we don't know what people think would actually be transferred.


 * I'd probably be belabouring points of which you are already well aware with most of what I might say in response to your second paragraph. I will note that I see some relation here to the supposed difference between economy of scale and economy of specialization. —SlamDiego 04:38, 23 February 2007 (UTC)

Expected utility is not Utility!
I just noticed that there were some sub-perfect links on the page: Expected Utility Theory (by von Neumann and Morgenstern) is not simply Utility Theory. It deals with lotteries instead of goods and it has to be linked to explain what goes on here, when we use EUT to explain the paradox. So, please do not re-introduce this mistake. (I think it was correct in a previous version...) Rieger 05:16, 19 February 2007 (UTC)


 * This is not a mistake. Expected utility theory does add propositions to utility theory, but one can have an expected utility theory with constant marginal utility — indeed, the original presumption was implicitly just that — or even increasing marginal utility,[2] and the problem then abides.  The key to Bernoulli's resolution was diminishing marginal utility of money. —SlamDiego 07:20, 20 February 2007 (UTC)


 * I would further note that, under an expected utility hypothesis, the utility of a lottery
 * $$L = [(X_1,p_1),(X_2,p_2),...]\!$$
 * can be taken to simply be
 * $$u(L) = \sum_{i} [p_i\cdot u(X_i)]$$
 * And, in real life, goods and services really are just lotteries. The actual effects of the cell phones and cigarettes remain to be seen.
 * —SlamDiego 07:20, 20 February 2007 (UTC)
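The contrast drawn above can be illustrated numerically with the expected-utility formula u(L) = Σ p_i·u(X_i). This is only a sketch; the pay-off convention (2^k with probability 2^-k) and the truncation at 60 terms are illustrative assumptions. With constant marginal utility the sum grows by 1 per term without bound, while with Bernoulli's logarithmic utility it converges (to 2·ln 2).

```python
import math

def expected_utility(lottery, u):
    """u(L) = sum over i of p_i * u(X_i), for a lottery [(X_1, p_1), (X_2, p_2), ...]."""
    return sum(p * u(x) for x, p in lottery)

# First 60 terms of a St. Petersburg lottery: pay-off 2**k with probability 2**-k.
lottery = [(2.0 ** k, 2.0 ** -k) for k in range(1, 61)]

linear = expected_utility(lottery, lambda x: x)  # constant marginal utility: 1 per term, unbounded
log_u = expected_utility(lottery, math.log)      # diminishing marginal utility: converges
print(linear, log_u)
```

Adding more terms raises `linear` without limit but leaves `log_u` essentially unchanged, which is the substance of Bernoulli's resolution (and of Menger's objection: a gamble whose pay-offs grow fast enough in utility terms revives the problem unless utility is bounded).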


 * In the article on expected utility theory risk-averse behavior is mentioned, but in the article on marginal utility, nothing on lotteries and how to handle them is said, as far as I see. Taking the point of an uninformed reader it might be better to point to the more useful reference, although I can understand that you put a lot of effort into the (very nice) marginal utility article and therefore want to link it. But what about if you work on the expected utility article instead and improve it substantially, which should be easy given its pitiful state?
 * But anyway, I'd rather let you keep the modified version than discuss this at length, since - hey - it's not that important either! :) Rieger 18:00, 21 February 2007 (UTC)


 * Merely mentioning risk aversion simply tells the reader that something can be done with expected utility theory to resolve the problem. It doesn't, in fact, tell the reader what that something is.  The term “risk aversion” just runs the reader in a logical circle.  A direct link to the article on risk aversion itself would be better, but that article leaves the shape of the utility function unmotivated except to resolve the problem of gambling, while Bernoulli seems to have an idea of why people should be risk averse beyond avoiding this otherwise avoidable problem.  So, yes, taking the point of an uninformed reader, it would be better to point to the more useful reference, which is to the article on marginal utility.  As to working on the expected utility article, I have many other articles upon which I'd like to work, at least one of which is logically prior to the article on expected utility. —SlamDiego 19:11, 21 February 2007 (UTC)

Elsewhere — of possible interest to some here
I just ran across this abstract:
 * The assumption of bounded utility function resolves the St. Petersburg paradox. The justification for such a bound is provided by Brito, who argues that limited time will bound the utility function. However, a reformulated St. Petersburg game, which is played for both money and time, effectively circumvents Brito's justification for a bound. Hence, no convincing justification for bounding the utility function yet exists.

for “Time, bounded utility, and the St. Petersburg paradox” by Tyler Cowen and Jack High in Theory and Decision v4 (1998) #4 (Nov). —SlamDiego 16:16, 21 February 2007 (UTC)

Cleaning up discussion page
I have to mention it again: This discussion page is getting too long (see warning message), can somebody please clean it up? (If I do it, I'm afraid to be called a vandal or troll...) By the way: see What is a troll for what is a troll and why not to randomly call people a troll. Rieger 20:19, 23 February 2007 (UTC)


 * There is a prescribed method for cleaning up an article's discussion page without erasing any of its content. And you are now trolling with the word “random” and otherwise transparently misrepresenting matters.  If your professed motive — to get people working on other articles — were your actual motive, then you'd cease offering bogus argument. —SlamDiego 07:23, 24 February 2007 (UTC)


 * Your apparent hope to see the discussion page archived is somewhat thwarted by iNic burying new comments deep in old discussion. :-/ —SlamDiego 22:14, 17 March 2007 (UTC)

iNic's bad faith edit
iNic— A citation for the solution that you hate is above, clearly labelled “Samuelson Citation”. It's a bit silly to include a citation, since that's just argumentum ad verecundiam for such a straight-forward solution, but you're free to add a footnote, rather than making the article incorrect. I further note that you've now switched your excuse for this bad faith edit. Before, you instead insisted that any reporting of the solution needed to meet the criteria of being simultaneously brief and capturing all of the content of a lengthy discussion. Now, it's a demand of a citation (where a good faith edit would have just copied-and-pasted from discussion). What will be the excuse in your next attack? —SlamDiego 21:47, 17 March 2007 (UTC)

Great, I note that you claim above that if the article is less than fully correct, then so should be the lede. Your claim is made especially absurd, as “ ” and “  ” tags are available to place in bodies that are incomplete. Quite a few articles at present are nothing but ledes with “ ” tags. For the ad hoc rule that you now attempt to impose to make sense, these would have to be deleted. —SlamDiego 22:09, 17 March 2007 (UTC)


 * Slam, I think you should take a deep breath and try to write about this "solution" by Samuelson in the body of the article in a way that makes it intelligible to others too. Put it in a historical context and explain how it connects to other suggested solutions. The reason I don't write it is just that—that I don't understand the solution. If I understood the reasoning I would have done it long ago. Your wording in the lead about this is both unsourced and very unclear as it stands. Either reason is sufficient for a deletion. Please read the rules of wikipedia. The talk page is not a part of the article itself, and no reader should have to go to the talk page searching for hints about unclear contents in an article. iNic 01:19, 18 March 2007 (UTC)


 * 1. I've repeatedly invited you to expand the explanation if you think that it needs expansion.
 * And I've told you that I can't as I don't understand it. iNic 04:22, 18 March 2007 (UTC)


 * You introduced that claim well after you began attacking the article. —SlamDiego 04:38, 18 March 2007 (UTC)


 * 2. The fact that you don't understand the solution doesn't empower you to delete it.
 * It sure does. Especially as everyone else here finds your reasoning very unclear too. iNic 04:22, 18 March 2007 (UTC)


 * No, not everyone else here finds it unclear. —SlamDiego 04:38, 18 March 2007 (UTC)


 * I have already noted that, by such ad hoc rule, all of Wikipedia would be deleted, because someone will be found for each part, who does not understand it. Perhaps, if you are far less presumptuous in approaching someone else who understands the point, she will either explain it to you so that you can expand things to your satisfaction, or write the expansion for you.
 * If you are unsatisfied with wikipedia rules at large this is the wrong forum for that discussion. You can either try to change the rules or simply stop being an editor. To invent your own rules here is not an option, I'm afraid. iNic 04:22, 18 March 2007 (UTC)


 * There is no Wikipedia rule that reads “iNic must understand a passage or he gets to delete it.” —SlamDiego 04:38, 18 March 2007 (UTC)


 * 3. As I said before, scare quotes are intellectually dishonest. The solution isn't diminished by your placing punctuation around the word “solution”.
 * As long as no one here except you understands this "solution" the quotation marks are quite fitting. iNic 04:22, 18 March 2007 (UTC)


 * agr seems to have understood the solution. What we have is just you and one other fellow complaining, and that other fellow has been caught just-plain trolling. —SlamDiego 04:38, 18 March 2007 (UTC)


 * 4. As you know, I didn't suggest that the reader come to the talk page; I suggested that if you feel that a source is needed, then you copy-and-paste the reference. I don't happen to agree that a citation is necessary, nor will I function as your amanuensis.
 * A citation isn't really necessary, but some text devoted to this "solution" in the body of the article is indeed necessary. Please see WP:LEAD. In addition, at least one source for this section is required. Please see WP:NOR. iNic 04:22, 18 March 2007 (UTC)


 * Again: If you want the citation in the article (because of WP:NOR or for any other reason), then just copy-and-paste the citation from the discussion; this issue is just more of you throwing junk in the hopes that something will stick. And the most that WP:LEAD entitles you to do here is insert a “  ” tag. —SlamDiego 04:38, 18 March 2007 (UTC)


 * (or, of course, to just maintain the point while writing it more clearly.) —SlamDiego 04:46, 18 March 2007 (UTC)


 * (WP:NOR, BTW, is irrelevant because the point isn't novel.  Once attention was again (in the mid 20th Century) drawn to the paradox, many economists made the same point.  It's something that any economist should see on his-or-her own.  But feel free to copy-and-paste the reference into the article, instead of making the article simply wrong.) —SlamDiego 05:08, 18 March 2007 (UTC)


 * —SlamDiego 02:52, 18 March 2007 (UTC)


 * Finally, with an enumerated list to which to reply, there was no good reason for you to insert into the body of my comment. You and the troll have made this discussion page very difficult to read. —SlamDiego 04:38, 18 March 2007 (UTC)

Bad faith “ ” tag
Again, iNic, you know whence you can copy-and-paste a citation if you feel that one is necessary. You didn't put the tag there out of any sincere interest in WP:ATT. Your wiki-lawyering will not work. —SlamDiego 03:23, 19 March 2007 (UTC)


 * Dear Slam, I feel sorry for you, I really do. I wanted to help you understand how to behave here at wikipedia. It shouldn't be that hard to understand; anyone could learn it, I thought. Most of it is plain common sense after all. But you have proven to me, for the first time, that it is indeed possible to spend a lot of time here and yet not getting anything. I sincerely thought that was impossible. iNic 01:58, 22 March 2007 (UTC)


 * Note that, instead of inserting that citation that you claimed to have been so important, you are just lashing out. Do you think that it hurts me to see you reduced to bald personal attack, after your attempts at pirouetting, then vindication, then vengeance have successively failed? —SlamDiego 06:16, 22 March 2007 (UTC)

Summary
agr, I thought that you might find this summary useful:

A lottery can be constructed with an infinite expected pay-off. Few people say that they would pay much for it… …but people often say that they won't buy something and then do… …but indeed few-if-any people buy the lottery. Proposed resolutions:
 * 1 (N Bernoulli): People treat small probabilities as if they are even smaller than they are. Problem: Seems to be at odds with other empirical data (K&T).
 * 2 (Cramer and D Bernoulli): Diminishing marginal utility. Problem (Menger): Other gambles can present an infinite expected utility, unless utility is not merely diminishing but bounded.
 * 3 (you & alii): No one can truly offer the lottery, so no one can buy it. This paradox is set in Neverland.
 * 4 (various economists): If the lottery represents an infinite expected gain to the buyer, then it also represents an infinite expected loss to the seller; hence, no one would sell it. No one could be observed buying it because it would never be for sale. This paradox is set in Neverland. Comment: Mutatis mutandis, this works even if we turn Resolution 2 on its head, and propose creatures for whom finite sums of money have infinite utility.

—SD at 12.72.72.241 09:00, 20 March 2007 (UTC)


 * Thanks. If I understand you correctly, #4 is Samuelson's argument. I'm happy to try and summarize that and include the quote you presented above with the citation you gave. I am on a deadline at the moment, so it may take me a bit to get to it.


 * One thing that bothers me about #4 is that it presumes sellers never make foolish offers. But they do, all the time. So it still seems fair to ask what a buyer would pay if such a mistake were made. --agr 03:00, 22 March 2007 (UTC)


 * The reasoning is akin to the following: If you are a member of the Flat Earth Society you will have problems explaining what will happen if you decide to walk over the edge of the world. However, if you are a rational Flat-Earther you know it's insanely dangerous to be close to the edge, not to mention jumping off the edge. But this makes the whole problem "what happens when you come near the edge" meaningless, in the sense that no Flat-Earther in his right mind would go near the edge anyway. Only stupid Flat-Earthers would even think about it. iNic 03:34, 22 March 2007 (UTC)


 * Craziness would be one explanation, but it's a special case of a more general answer. —SlamDiego 04:10, 22 March 2007 (UTC)


 * agr, please note a couple of things:
 * That the same concern can be raised in reference to Resolutions 2 and 3.
 * That I have been saying “few” rather than “no” pretty consistently most places, including in the set-up and in most of my discussion of this problem. And yet I didn't introduce that qualification in the resolution; in part, that was to keep the sketch simple, but I can make the argument that ex definitione behavior is driven by and reveals actual anticipated utility.  In which case, we say that making the sale reveals that a seller (rightly or wrongly) in fact has failed to impute infinite utility at the time that he or she made the sale.  If he is thus foolish, then the foolishness is located in his utility function.
 * Such nuance can be side-stepped in this article, while keeping Resolution 4 (and 2 and 3!) intact, much as I previously side-stepped it in discussion.
 * —SlamDiego 04:10, 22 March 2007 (UTC)


 * Oh, BTW, please take care not to write in a way that leads people to think that the argument originates with Samuelson. Again, it's an argument that's pretty obvious to economists. —SlamDiego 04:15, 22 March 2007 (UTC)


 * So who invented this "solution" if it wasn't Samuelson? iNic 05:31, 22 March 2007 (UTC)


 * Your scare quotes are still intellectually dishonest. (And your attempt to justify them before was on the basis that I was the only one here who understood the resolution noted by Samuelson.)


 * I doubt that anyone knows who was first to spot it. As I keep saying, it fails to be “original research” because it lacks novelty.  Most economists would see it for themselves without it being pointed-out to them, and any decent economist would get it as soon as it was stated in the most spartan terms.  Economists didn't present this solution sooner simply because they hadn't seen the problem sooner. —SlamDiego 06:01, 22 March 2007 (UTC)

Trial Balloon
Here is an attempt at an explanation to be inserted in the article:

Some economists dismiss the paradox by arguing that even if some entity had infinite resources, the game would never be offered. If the lottery represents an infinite expected gain to the player, then it also represents an infinite expected loss to the host. No one could be observed paying to play the game because it would never be offered. As Paul Samuelson describes the argument:
 * "Paul will never be willing to give as much as Peter will demand for such a contract; and hence the indicated activity will take place at the equilibrium level of zero intensity. Zero cancels out infinity, so to speak." (ref: “The St. Petersburg Paradox as a Divergent Double Limit” in International Economic Review v1 (1960) #1 (Jan) p31-7)

Does this capture the explanation and is it clear enough?--agr 05:08, 22 March 2007 (UTC)


 * Well, it really isn't any more of a dismissal than are the other resolutions. Nor is it less of a resolution than are the other resolutions.


 * (As to it not being a dismissal, I would note that Samuelson, while agreeing with the resolution, still sees the problem as of further interest.)


 * Other than that, my suggestions are just numbingly trivial: Put the cite within a ref element, and don't use quotation marks for a block quotation. ;-)


 * —SlamDiego 05:16, 22 March 2007 (UTC)


 * Oh, and please remember that there were italics in that original quotation. —SlamDiego 05:17, 22 March 2007 (UTC)


 * (I took and take it that Samuelson was using the italics to produce a sort of quasi-quotation.) —SlamDiego 13:23, 22 March 2007 (UTC)


 * Umm, that might be a reason not to include the italics in our article. The intent might be less clear there and an explanation would be awkward.--agr 13:28, 22 March 2007 (UTC)


 * Maybe. I'm in favor of accuracy at the cost of other things.  What about dropping both the italics and the “Zero cancels out infinity, so to speak.”? —SlamDiego 13:58, 22 March 2007 (UTC)


 * (The idea being that the article then doesn't have to distinguish two voices by font.) —SlamDiego 14:04, 22 March 2007 (UTC)


 * I like that idea. The "Zero cancels out..." bit opens another can of worms, in my opinion.--agr 17:02, 22 March 2007 (UTC)


 * I also agree that it opens a can of worms here. —SlamDiego 22:11, 22 March 2007 (UTC)

I've added the explanation to the article.--agr 23:09, 22 March 2007 (UTC)


 * Thank you. It looks acceptable to me. —SlamDiego 23:12, 22 March 2007 (UTC)

That's a very fine solution indeed, thank you Arnold! It puts the solution into the right context at the beginning ("some economists...") and it does not attempt to squeeze the explanation into the front end, but adds it at the end. I also like that the explanation comes before the (somewhat difficult) quotation of Samuelson. I am soooo glad that this discussion came to a good conclusion! Sorry if I was pessimistic about this at some point, and thanks again! Rieger 08:45, 23 March 2007 (UTC)

Error in computation
There was an error in the computation of the expected value for the finite paradox: the case where the casino cannot raise the bid anymore was simply forgotten, although it has a small probability. It looks like we all overlooked this. Nevertheless, please check whether the computation is correct now! Thank you! Rieger 08:57, 23 March 2007 (UTC)


 * I don't think it was overlooked. The assumption in the previous version was that the game ended when the casino could no longer double the pot, with the player then taking home what was in it. Your version has the pot frozen and the coin keeps getting tossed until a tails comes up. That just adds unnecessary complexity to the calculation, in the form of a second, infinite sum that merely captures the fact that the probability of a tails occurring eventually is one. The central problem with the "paradox" is, in my opinion, that people get mesmerized by the math and lose track of what is really going on. I'd rather keep the calculation simple and verbally add the stipulation that the game ends when the casino is out of resources. --agr 04:35, 25 March 2007 (UTC)


 * I agree with your point on avoiding too much math. However, as I see it, the old version would give zero payment once the upper limit is reached, which is not what we want... Or am I overlooking something?


 * Alternatively, one could omit the whole computation and just mention that an upper limit for the prize obviously yields a finite expected value. I guess nobody who's good enough at math to follow the computation would fail to see this by himself anyway... What do you think? Rieger 21:03, 1 April 2007 (UTC)
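For what it's worth, the point that the frozen-pot version merely adds a sum expressing that tails eventually occurs with probability one can be checked numerically. This is only an illustrative sketch: the convention that the player wins 2^k if the first tails comes on toss k, the cap L, and the truncation horizon are all assumptions made here, not anything fixed by the discussion above.

```python
def ev_take_pot(L):
    # Previous version: the game ends when the casino can no longer double.
    # Player wins 2**k if the first tails comes on toss k <= L (prob 2**-k),
    # and takes the capped pot 2**L if all L tosses come up heads (prob 2**-L).
    return sum(2**-k * 2**k for k in range(1, L + 1)) + 2**-L * 2**L

def ev_frozen_pot(L, horizon=200):
    # Rieger's variant: the pot is frozen at 2**L and tossing continues until tails.
    # Player still wins 2**k for k <= L, and 2**L whenever tails comes on a later toss.
    ev = sum(2**-k * 2**k for k in range(1, L + 1))
    ev += sum(2**-k * 2**L for k in range(L + 1, horizon))  # truncated infinite tail
    return ev

print(ev_take_pot(30), ev_frozen_pot(30))  # both come to ~31.0
```

The second sum in `ev_frozen_pot` collapses to 2^L times the probability 2^-L of surviving L tosses, so the two variants have the same finite expected value.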

Finite St. Petersburg lotteries — Capping
A couple of points:
 * 1) An institution could cap the pay-off at some amount well less than the actual limit of its resources.
 * 2) If capping were made an artefact of that resource limit, it could be effected in many ways:
 * The final possible pay-off could indeed be the greatest power-of-two less than the limit of the resources of the casino, with no payoff if the turn corresponding to that power results in 0.
 * The final possible pay-off could indeed be the greatest power-of-two less than the limit of the resources of the casino, with that payoff guaranteed if all prior turns result in 0.
 * The final possible pay-off could instead be the actual limit of the resources, with no payoff if the turn corresponding to the power at-or-just-before results in 0.
 * The final possible pay-off could instead be the actual limit of the resources, with that payoff guaranteed if all prior turns result in 0.

Thus, the particular story examined in “Finite St. Petersburg lotteries” is rather arbitrary. And this story requires the reader to fret with an int/floor function. My suggestion is that the story told be abandoned in favor of simply saying that a lottery could be capped for various reasons (including exhaustion of the casino), and simply noting the value of lotteries capped after n turns, with (and/or without) guaranteed pay-off of the max if all prior turns result in 0. —SlamDiego 02:14, 25 March 2007 (UTC)
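A sketch of such a table is straightforward to generate. The 2^k pay-off convention and the sample cap sizes below are illustrative assumptions only; under that convention each possible toss contributes 1 unit of expected value, and guaranteeing the maximum pay-off when all tosses come up heads contributes exactly 1 more.

```python
def ev_capped(n, guaranteed_max=True):
    # Pay-off 2**k if the first tails comes on toss k (k <= n); if all n tosses
    # come up heads, the player gets the maximum 2**n, or nothing if not guaranteed.
    ev = sum(2**-k * 2**k for k in range(1, n + 1))  # each possible toss adds 1
    if guaranteed_max:
        ev += 2**-n * 2**n                           # adds exactly 1 more
    return ev

print("max tosses  max pay-off      EV (guaranteed)  EV (nothing at cap)")
for n in (10, 20, 30, 40):
    print(f"{n:10d}  {2**n:13d}  {ev_capped(n):15.1f}  {ev_capped(n, False):19.1f}")
```

The table makes the point without any int/floor machinery: even a cap of 2^40 (over a trillion) leaves the expected value at a mere 41 units.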


 * I am not sure why you are introducing the possibility of no payoff. An essential feature of the St. Petersburg game is that the player is guaranteed a monotonically increasing pot. Of course, a wide variety of different games could be constructed, but they have little to do with the St. Petersburg paradox. The finite game should be discussed because it is the only possibility in reality. The so-called infinite game will always terminate because any real casino has finite resources. The purpose of exhibiting the calculation is to show that the expected value can be computed once the bankroll is known and that it is actually quite small for any conceivable bankroll. I agree it might be better to do the alternative where the actual limit of the resources is forfeit. That would add something like 25 cents on average to the expected value. --agr 04:27, 25 March 2007 (UTC)


 * I quite agree that finite games should be discussed — just without arbitrary and avoidable complexity.
 * I introduced the idea of zero payoff here merely for sake of completeness and realism. Not simply default but absconding would obtain in many real life St Pete games. (My expectation for the San Diego game about which you kept asking me is that it would have resulted in the seller fleeing without paying anything had his losses otherwise been high.) I don't want the article to discuss this possibility, I just want it to be seen that there are many stories that the article could tell, each of which is arbitrary and avoidable.
 * My suggestion, then, is that the section have a simpler story, just have the essential equation (without concerning itself with int/floor), and a table of three columns (maximum monetary payoff, maximum number of tosses, &amp; expected monetary payoff). —SlamDiego 05:06, 25 March 2007 (UTC)

I see what you're getting at about absconding or reneging on the bet. It's an important point, but I would treat it separately along with a couple of other practical objections that are side issues. These include:


 * 1) Reneging objection. I always presumed that is why Bernoulli set the problem in a Casino, where there is a strong expectation that wagers will be honored. A Casino that publicly reneged would be ruined anyway. Even if you assume the game host reneges at the point of bankruptcy, the effect on the expected value is minimal (~$0.50).
 * 2) Value bounded by rate of play objection. The game as stated above has an expected value of $0.50 for each potential coin toss. So at best, it is equivalent to getting a job that pays you to toss coins at a rate of $0.50 per toss. At, say, 10 seconds per toss, that works out to $180/hr or $360,000 per year before taxes. A handsome salary, but not infinite. This objection can be partially answered by increasing the first bet or the rate of play, say using an electronic device. Ignoring the finite resource limit, the game can then be made as valuable as you like.
 * 3) The math skills objection. One questionable assumption in the expected utility hypothesis is that any rational player presented with a game has the ability to perform the mathematical computations needed to compute the expected values. This clearly isn't true. One can construct games whose analysis would challenge any mathematician. In particular, most ordinary people, the subject of the "paradox," don't know how to sum infinite series. One answer to this objection is to explain the analysis and then ask the subject to state a price. This in turn can be challenged by questioning why a random person with little math training would give much credibility to math he doesn't really understand. Another solution is to restrict the questioning to people who do have sufficient training.
 * 4) The entertainment value objection. People might be willing to pay on the order of $25 to play just for the novelty. The initial bet can be raised to $10 or $100 to make the proposition more serious.
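The rate-of-play arithmetic in objection 2 can be checked in a few lines; this is only a sketch, assuming 10 seconds per toss and a 2,000-hour work year (the yearly-hours figure is my own assumption, not stated above):

```python
ev_per_toss = 0.50                         # expected value per potential coin toss
seconds_per_toss = 10
tosses_per_hour = 3600 / seconds_per_toss  # 360 tosses per hour
hourly = ev_per_toss * tosses_per_hour     # $180 per hour
yearly = hourly * 2000                     # assuming a 2,000-hour work year
print(hourly, yearly)                      # 180.0 360000.0
```

This reproduces the $180/hr and $360,000-per-year figures quoted in the objection.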

As for the floor function, it is essential to the calculation of the possible number of plays and hence the expected value of a finite game. But it should be backed up by an explanation in words. I'm happy to tackle the above when I can get a bit of breathing room. --agr 04:56, 26 March 2007 (UTC)


 * Oh, I don't think that this article should cover reneging. My point is just that the bet can be capped in multiple ways, and I think that the section shouldn't get hung up on the details of any one way, as I think it presently is.


 * I see the ''floor'' function as not needing explicit presentation even if you want to keep telling the same story. Consider this table:
{| class="wikitable" align=center
! Backer !! Bankroll !! Maximum tosses !! Maximum conforming payoff !! Expected value of lottery
|-
| Friendly game || $100 || 6 || $64 || $3.50
|-
| Millionaire || $1,050,000 || 20 || $1,048,576 || $10.50
|-
| Billionaire || $1,075,000,000 || 30 || $1,073,741,824 || $15.50
|-
| Bill Gates || $51,000,000,000 (2005) || 35 || [ reneges ] || $0.00
|-
| U.S. GDP || $11.7 trillion (2004) || 43 || $8.8 trillion || $22.00
|-
| World GDP || $40.9 trillion (2004) || 45 || $35.2 trillion || $23.00
|-
| Googolnaire || $10<sup>100</sup> || 332 || $8.8 × 10<sup>99</sup> || $166.50
|}
 * Now, does the user really need to confront ''floor'' explicitly to get this? Remember that, while abstraction makes things simpler, for most people it does not make them easier.
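For what it's worth, the table's arithmetic can be reproduced mechanically. This is only a sketch: the $0.50 contributed by each potential toss and the roughly $0.50 added by a guaranteed capped payoff are taken from the discussion above, while the function name is mine for illustration:

```python
from math import floor, log2

def finite_st_petersburg(bankroll):
    # Maximum number of tosses the backer can cover:
    # the largest n with 2**n <= bankroll.
    n = floor(log2(bankroll))
    max_payoff = 2 ** n       # maximum conforming payoff
    # Each potential toss contributes $0.50 to the expectation (per the
    # discussion above); the guaranteed capped payoff adds roughly another $0.50.
    expected_value = n / 2 + 0.5
    return n, max_payoff, expected_value

# The friendly game with a $100 bankroll:
print(finite_st_petersburg(100))   # (6, 64, 3.5)
```

Running this over the bankrolls in the table recovers the same maximum-tosses, maximum-payoff, and expected-value columns, e.g. (20, 1048576, 10.5) for the millionaire and an expected value of $166.50 for the googolnaire.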


 * If ''floor'' is to be included, then I strongly urge that the forward reference be eliminated from the presentation. L should be defined before it is invoked. —SlamDiego 05:41, 26 March 2007 (UTC)


 * I like your suggestions. I'll try to incorporate them when I get some time. Also, thanks to INic for clearing up the name question.--agr 04:24, 27 March 2007 (UTC)


 * I have seen this section only now. (Hence my comment above.) I think Slam's suggestion about the table is fine. However, it then needs to be explained outside the table what 'Maximum conforming payoff' means. (Maybe we can find a better title for this column?) I'd rather take out the floor function as well, since it looks complicated but isn't. Regarding the explicit computation: I still think either we take it out or we leave it with the prize being as in the original and not zero. Rieger 21:15, 1 April 2007 (UTC)


 * One thing that I would point out here, and that the article might mention in passing is that, in most cases, the expectation is going to be approximately the same under the three capping scenarios thus far discussed (quitting at maximum conforming payoff, ordinary bankruptcy, and absconding). But, again, my purpose in mentioning that there are many ways to effect capping is just to argue against preoccupation with the details of any one way. —SlamDiego 04:45, 2 April 2007 (UTC)