Talk:Nash equilibrium/Archive 1

correlated equilibrium more flexible than Nash equilibrium?
All the examples in the current Nash equilibrium article seem to be "one-shot" games (is there a better term?) -- as opposed to repeated games.

Strategies such as Tit for Tat don't work for "one-shot" games.

The Robert Aumann article mentions


 * Aumann's greatest contribution was in the realm of repeated games, which are situations in which players encounter the same situation over and over again.


 * Aumann was the first to define the concept of correlated equilibrium in game theory, which is a type of equilibrium in non-cooperative games that is more flexible than the classical Nash Equilibrium.

Does that mean that


 * Nash equilibrium only applies to one-shot games.
 * correlated equilibrium is used for repeated games.

? If that's true, the article should mention it.

--DavidCary 13:40, 11 October 2005 (UTC)


 * David, I don't really know much about correlated equilibrium (which is why I haven't written the article). From what I understand, those contributions are two different things.  Correlated equilibria are generalizations of Nash equilibria (i.e. every Nash eq. is a correlated eq., but there are some correlated eq. that are not Nash).  I don't know what his contributions to repeated games are.  --best, kevin  · · · Kzollman | Talk · · · 18:34, 11 October 2005 (UTC)

Strategy profile
On this page, the phrase

resulting in strategy profile $$x = (x_1,..,x_n)$$

indicates that a strategy profile is simply a vector of strategies, one for each player. I agree. However, if one follows the link, a strategy profile is defined as something that "identifies, describes, and lastly examines a player's chosen strategy". It is conceivable that someone somewhere defined strategy profile in this way, but the "vector" meaning of the term is much more common. So at least the link, if not the page linked to, is misleading.

Also there is something slightly wrong in the notation concerning strategy profiles on this page itself. You say that

S is the set of strategy profiles.

Okay, but then each element of S is a vector of strategies. Still, you write

$$x_i \in S$$

where $$x_i$$ is a single strategy. I think you want to write

$$S = S_1 \times S_2 \times ... \times S_n$$ is the set of strategy profiles

and

$$x_i \in S_i$$.

Bromille 11:50, 31 May 2006 (UTC)
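Bromille's "vector" reading can be made concrete with a small sketch; the strategy names below are invented purely for illustration:

```python
from itertools import product

# A strategy profile as a vector: S is the Cartesian product of the
# per-player strategy sets S_i, and a profile x is one element of it.
S1, S2 = {"Cooperate", "Defect"}, {"Left", "Right"}
S = set(product(S1, S2))           # S = S_1 x S_2, the set of strategy profiles

x = ("Defect", "Left")             # one strategy profile
assert x in S                      # x is in S (the profile lives in the product)
assert x[0] in S1 and x[1] in S2   # each coordinate x_i is in S_i
print(len(S))  # -> 4
```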


 * The strategy (game theory) article really needs some fixing. I reverted Littlebear1227's definition of strategy profile to the more correct one... Pete.Hurd 13:16, 31 May 2006 (UTC)

"Occurrence" section
There are some issues in the "Occurrence" section.

First, whatever the stated conditions are, they can only be sufficient, not necessary for equilibrium play. To see this, observe that nothing prevents a bunch of completely non-rational agents from playing the Nash equilibrium, not because it is a rational thing to do, but because non-rational agents can do anything they want.

Second, conditions 1. and 5. together say that agents are rational and they all believe that the other agents are also rational. But these are not sufficient conditions on rationality for equilibrium play. A classical counterexample is Rosenthal's centipede, which has a unique Nash equilibrium, as assumed. In this game, if you and I play and I believe you are rational and you believe I am rational, we still might not play the equilibrium if, for instance, I do not believe that you believe that I am rational. This could be the case even if I do believe that you are rational. What is needed to ensure that the equilibrium is played in the case of Rosenthal's centipede is common knowledge of rationality (CKR), which means that

A: I am rational, B: You are rational, C: I believe B, D: You believe A, E: I believe D, F: You believe C, G: I believe F, H: You believe E, etc.. etc..

The difference may seem like splitting hairs but is actually the key to understanding the discrepancy between how Rosenthal's centipede is actually played by rational-seeming people and the equilibrium play.

Reference: Robert Aumann, Backward induction and common knowledge of rationality, Games and Economic Behavior 8 (1995), p. 6--19.

Third, for general games, even if CKR is assumed, this usually only ensures that a correlated equilibrium is played, not a Nash equilibrium. The fact that a unique Nash equilibrium is assumed may make this ok - I'm not sure about this.

Bromille 13:54, 1 June 2006 (UTC)
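A minimal sketch of the backward-induction argument behind the centipede's unique equilibrium; the payoffs below are made up for illustration, not taken from Rosenthal's original game:

```python
# Backward induction on a short centipede game (illustrative payoffs).
# At each node the mover either takes the current split or passes;
# the pot grows each time a player passes. Payoffs are (player 1, player 2)
# if the mover takes at that node; movers alternate 1, 2, 1, 2.
take_payoffs = [(1, 0), (0, 2), (3, 1), (2, 4)]
end_payoffs = (4, 3)  # if everyone passes to the end

def solve(node):
    """Return (p1, p2) payoffs under backward induction from this node."""
    if node == len(take_payoffs):
        return end_payoffs
    mover = node % 2            # 0 -> player 1 moves, 1 -> player 2 moves
    cont = solve(node + 1)      # payoffs if the mover passes
    take = take_payoffs[node]
    # The mover takes whenever taking is at least as good as continuing.
    return take if take[mover] >= cont[mover] else cont

print(solve(0))  # -> (1, 0): player 1 takes at the very first node
```

The chain of "I take at the last node, so you take at the one before it, so..." is exactly the iterated-belief requirement (CKR) described above; without it, real players often pass for several rounds.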


 * Bromille, this is definitely right. I have intended for some time to rewrite this section to incorporate some of these issues.  But you are welcome to beat me to it!  :)  We do have a page on common knowledge (logic) which explains some issues about common knowledge.  --best, kevin [kzollman][talk] 17:51, 1 June 2006 (UTC)

GA Re-Review and In-line citations
Members of the WikiProject Good articles are in the process of doing a re-review of current Good Article listings to ensure compliance with the standards of the Good Article Criteria. (Discussion of the changes and re-review can be found here). A significant change to the GA criteria is the mandatory use of some sort of in-line citation (in accordance with WP:CITE) to be used in order for an article to pass the verification and reference criteria. Currently this article does not include in-line citations. It is recommended that the article's editors take a look at the inclusion of in-line citations as well as how the article stacks up against the rest of the Good Article criteria. GA reviewers will give you at least a week's time from the date of this notice to work on the in-line citations before doing a full re-review and deciding if the article still merits being considered a Good Article or would need to be de-listed. If you have any questions, please don't hesitate to contact us on the Good Article project talk page or you may contact me personally. On behalf of the Good Articles Project, I want to thank you for all the time and effort that you have put into working on this article and improving the overall quality of the Wikipedia project. Agne 05:49, 26 September 2006 (UTC)

Nash equilibria of a game
[Ed: This is an incomplete example that does not illustrate. If I choose 9 and you choose 0, I'm out two bucks. If I choose 9 and you choose 7, you make $9 and I still make $5 - why is this not superior?]

The system of strategies 9-0 is not a Nash equilibrium, because the first player can improve the outcome by choosing 0 instead of 9. The system of strategies 9-7 is not a Nash equilibium, because the first player can improve the outcome by choosing 9 instead of 7. The only Nash equilibrium of this game is 0-0, as stated. AxelBoldt 15:04 Aug 27, 2002 (PDT)
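AxelBoldt's claim can be brute-forced. The payoff rules below are reconstructed from the numbers quoted in this thread (equal picks pay face value; otherwise the lower picker earns low + 2 and the higher picker earns low - 2); they are an assumption, not stated outright in the article text:

```python
from itertools import product

# Assumed rules of the number game: each player names an integer 0..10.
def payoffs(a, b):
    if a == b:
        return (a, a)
    low = min(a, b)
    return (low + 2, low - 2) if a < b else (low - 2, low + 2)

def pure_nash(payoffs, choices):
    """Enumerate pure-strategy Nash equilibria by checking all deviations."""
    eqs = []
    for a, b in product(choices, repeat=2):
        u1, u2 = payoffs(a, b)
        if all(payoffs(a2, b)[0] <= u1 for a2 in choices) and \
           all(payoffs(a, b2)[1] <= u2 for b2 in choices):
            eqs.append((a, b))
    return eqs

print(pure_nash(payoffs, range(11)))  # -> [(0, 0)]
```

Under these rules 0-0 is indeed the only pure-strategy equilibrium, matching the comment above.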


 * Boy, I sure don't understand this. Why is the Nash equilibrium not 10-10?  Don't the players both get ten bucks if they do this?  And the worst that could happen is the other player chooses a smaller number and you get less than ten.  Why would anyone rational do that, when you could both pick 10 and get 10 bucks?  Is there a statement missing from the problem definition? Jdavidb 04:17, 8 Apr 2004 (UTC)

The second player can choose 9 and get 11 while the other one will get only 8. That is why 10-10 is not a Nash equilibrium -- AM

I think the thing to keep in mind here is that it is a competition game. Both players are trying to beat the other's "score". This example would work better, I think, using points instead of dollars. Yes, there is a benefit to both players if they both win some money, but no benefit if they both win some points and they are each trying to beat the other's score. -truthandcoffee /ed: Does anyone else confirm/deny this? I would like to see this example changed from dollars to points, as I think it is much clearer and makes much more sense this way. I'd like some feedback before I go ahead and make the change. --Truthandcoffee 04:27, 13 November 2006 (UTC)

The modified version with 11 Nash equilibria
If the game is modified so that the two players win the named amount if they both choose the same number, and otherwise win nothing, then there are 11 Nash equilibria. Does the Nash equilibrium not suppose that everybody is selfish and rational? So, if you suppose that the other player is selfish, you can safely assume that he/she chooses 10. IMHO that's the only equilibrium. Or does the Nash equilibrium not presuppose selfishness? Andy 12:53, 26 September 2006 (UTC)
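Andy's question can be settled by enumeration: in the modified game no unilateral deviation from a matched pair ever strictly gains (a mismatch pays nothing), so every matched pair, even (0, 0), is a weak Nash equilibrium. Selfishness and rationality alone do not select (10, 10):

```python
from itertools import product

# Modified coordination game: matching picks pay each player that amount,
# mismatches pay nothing.
def payoff(a, b):
    return a if a == b else 0

equilibria = [
    (a, b)
    for a, b in product(range(11), repeat=2)
    if all(payoff(a2, b) <= payoff(a, b) for a2 in range(11))
    and all(payoff(a, b2) <= payoff(a, b) for b2 in range(11))
]
print(len(equilibria))  # -> 11, one per matched pair (0,0) .. (10,10)
```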

Relationship to regulation??
I am curious about the relationship between NE and government regulation. Consider the case of airbags in cars. Without regulation requiring airbags, normal supply and demand will dictate the number of airbags produced. If air bags are mandated by the government, the unit cost will be much lower, and more consumers will choose to purchase cars with airbags and receive the consumer surplus from the purchase. The consumer was able to move to a better outcome through the intervention of a third party. In this case it is hard to say how the auto manufacturer is impacted. It is possible that they benefited because they can now offer new cars with more value at a lower cost than without regulation. More people buy a new car vs. a used car to benefit from the air bag. Measuring if there was a benefit to the auto company is impossible, but let's assume there is. Did the regulation help both parties move out of a NE to a better outcome? 65.198.133.254 20:36, 30 November 2006 (UTC)David Wilson

Nash Equilibria in a payoff matrix
Under the heading specified above, it claims that "...an NxN matrix may have 0 or N Nash Equilibriums." Shouldn't it be 1 or N NE, since at the beginning of the article it says that Nash proved the existence of equilibria for any finite game with any number of players?
 * No. For example the matrix [ 0,1 0,1 ][ 1,0 1,0 ] has no equilibrium. I assume the proof doesn't apply to that matrix because it's not a matrix representing a finite game (but that's just a guess).
 * I do think the sentence is wrong on the other side though, and it should be 0 to NxN equilibriums (note also that it's 0 TO N in the original, not 0 OR N) for the degenerate case of a matrix where all the cells have the same value. Hirudo 06:29, 31 March 2006 (UTC)
 * Every finite game has a Nash equilibrium in its mixed extension. Some games don't have pure strategy Nash equilibria, and I assume that's what is meant by the article.  I will fix it. --best, kevin [kzollman][talk] 07:23, 31 March 2006 (UTC)
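Kzollman's point can be checked on matching pennies, the textbook example of a game with no pure-strategy equilibrium (whether that is the game the bracketed matrix above was meant to encode is only a guess):

```python
# Matching pennies: payoffs (row, column) for strategies H/T.
M = {("H", "H"): (1, -1), ("H", "T"): (-1, 1),
     ("T", "H"): (-1, 1), ("T", "T"): (1, -1)}

# A cell is a pure NE if neither player can strictly gain by deviating.
pure = [
    (r, c) for (r, c) in M
    if all(M[(r2, c)][0] <= M[(r, c)][0] for r2 in "HT")
    and all(M[(r, c2)][1] <= M[(r, c)][1] for c2 in "HT")
]
print(pure)  # -> []  (no pure NE; the mixed extension still has one, at 50/50)
```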

Depth
This section provides a rule for finding NEs. It fails to discuss either the rule's history - who discovered it and when - or why (the reader should believe) it works. While there's a place for pragmatic rules in an encyclopedia, the article should at least indicate where its theoretical underpinnings may be found. yoyo 23:09, 24 December 2006 (UTC)

Notation Question
I have seen in some papers the notation $$x_{-i}$$ used to describe a set of strategies for all players other than $$i$$. This cleans up notation greatly; for example,


 * $$f_i(x^*) \geq f_i(x^*_{1},...,x^*_{i-1},x_i, x^*_{i+1},...,x^*_n) \forall i \forall x_{i} \in S_{i}.$$ (where S_{i} is the set of all admissible strategies for player $$i$$)

becomes


 * $$f_{i}(x^*_{i},x^*_{-i})\geq f_{i}(x_{i},x^*_{-i}) \forall i \forall x_{i} \in S_{i}.$$

This seems more compact and conveys the point more clearly. SanitySolipsism 22:27, 6 January 2007 (UTC)
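The compact condition can be sketched in code: a profile x* is a Nash equilibrium iff no f_i(x_i, x*_{-i}) beats f_i(x*). The 2x2 coordination game below is invented purely to exercise the notation:

```python
from itertools import product

S = [("A", "B"), ("A", "B")]  # S_i: admissible strategies per player

def f(i, profile):
    """Payoff to player i: 1 if the players coordinate, else 0."""
    return 1 if profile[0] == profile[1] else 0

def is_nash(x):
    for i, S_i in enumerate(S):
        for x_i in S_i:
            deviation = x[:i] + (x_i,) + x[i + 1:]   # the profile (x_i, x*_{-i})
            if f(i, deviation) > f(i, x):
                return False
    return True

print([x for x in product(*S) if is_nash(x)])  # -> [('A', 'A'), ('B', 'B')]
```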

Samuel Goldstein
Does anyone know anything about this? How about what journal the article is in, or anything? I couldn't find anything; the IP of the anon who added the information is at U of Toronto, and another, eerily similar IP address claims at User talk:128.100.53.169 to be familiar with Goldstein, although it does not think that it belongs in the lead. As it is unverifiable, I'm still for removing the mention, but loath to do it without mentioning it here, per WP:1RR. Smmurphy(Talk) 17:55, 14 March 2007 (UTC)
 * Yeah, looks zippable to me. If the anon editor reads this: 1. We need the name of the journal the article appears in, and 2. an independent source that makes the claim that Nash stole it from Goldstein. ~ trialsanderrors 19:01, 14 March 2007 (UTC)

Collaboration of the Month
OK, what to do? I put this on my watchlist now. ~ trialsanderrors 17:20, 6 March 2007 (UTC)
 * Here are my vague thoughts. Feel free to disagree or ignore them as you like.
 * The introductory examples (which oddly come after the formal proof sketch) are overly complex and a bit scattered. I would rather have one simple example that makes the idea clear, and then perhaps one or two more complex examples which help to remove potential misunderstandings (like a NE which is weakly dominated).  I started this once, at User:Kzollman/Nash equilibrium, but I'm not sure I'm happy with it.
 * The stability section seems a bit imprecise, but a good idea. Perhaps the major ideas could be made more clear with a discussion of a dynamics, or ESS?
 * I think a discussion of the interpretation of NE is needed. What exactly is NE used for?  I recall a gt-ist here rattling off a bunch of different interpretations for NE (like fixed point in a dynamics of strategy change, prediction for initial play, prediction for stability but not initial play, etc.).  In particular a discussion of its empirical shortcomings would be useful.  Something that mimics the stuff in game theory.
 * The "occurrence" section needs tightening up, and could be more specific. I know Aumann has a paper about the epistemic conditions for NE.  I have it, but probably won't read it until I do so for a class next quarter.  If someone else knows this stuff, I think it would be good to have it.
 * Some discussion of major weakenings/refinements would maybe be nice. But perhaps this would duplicate what solution concept ought to be...  Almost certainly we should discuss the refinements that share the name like strict NE, symmetric NE, whatever.
 * Maybe something about the precursors to Nash. Cournot for example.
 * Phew... That list ended up being bigger than I thought. What do you think?  --best, kevin [kzollman][talk] 05:58, 7 March 2007 (UTC)
 * I agree mostly. What I noticed first is that the article is an A7 candidate because it never bothers to tell us why the Nash equilibrium is important. So a historical relevance/refinements section should be somewhere at the beginning. The proof sketch should move further down; it's not really deletion material, but it's also not the most encyclopedically relevant aspect. On the examples section, I think examples with one, two and no NE's are necessary, and then maybe an example that's not immediately intuitive (maybe an extensive form game?). Also, the relationship of NE to dominance, minimax and Pareto optimality isn't made clear. Lots of work... ~ trialsanderrors 08:51, 9 March 2007 (UTC)


 * Okay, sorry I haven't had much time recently. Given the amount of information that needs to be added, I suggest maybe we put together an expected Table of Contents here before adding a bunch of stuff to the article.  So here's what I think:
 * Definition
 * Intuitive
 * Mathematical
 * Mixed vs. pure strategy NE
 * Examples
 * Simple strict dominance example (Prisoner's dilemma?)
 * Game with Multiple PSNE (stag hunt/battle of the sexes?)
 * Game with no PSNE (matching pennies/rock paper scissors?)
 * History
 * Cournot and John Nash
 * Uses for NE
 * Normative
 * Descriptive
 * Epistemic requirements for NE
 * Refinements
 * Strict
 * Stable
 * Maybe some others
 * Proof
 * How does this sound? --best, kevin [kzollman][talk] 22:18, 15 March 2007 (UTC)
 * Yeah, I thought about that last night too. I'd say the Cournot-Nash history can stay where it is; there isn't really much more to be said about it, other than that the structure looks better than what we currently have. Of course the lead should make a mention of why the concept is important to game theory. ~ trialsanderrors 03:30, 17 March 2007 (UTC)
 * Also, PD is probably not a good example of a simple NE since it's also dominant strategies eq. It might be useful to find an example without dominant strategies. ~ trialsanderrors 03:39, 17 March 2007 (UTC)

Citation style
Hello all - The article currently uses two inconsistent citation style (APA and wiki-ref style). We need to be consistent. I personally prefer APA-inline citations, but I know wikipedia generally seems to be leaning toward the footnote style. Anybody have any druthers? --best, kevin [kzollman][talk] 23:19, 29 April 2007 (UTC)

Why does the term 'governing dynamics' re-direct here?
Governing Dynamics, i.e. the way an object or phenomenon will occur or behave under a given set of circumstances, is not Nash's idea, so why, when one types the phrase into the search box, does one get diverted to his page? It's ludicrous. Governing dynamics is a centuries-old theory which simply refers to the way an object or phenomenon will behave given a set of prevailing circumstances. If anything it is a philosophical rationale more than a scientific theory. By understanding the laws which govern said object in said circumstances, one is essentially able to understand the true nature of all matter and occurrence, and abstracts are erased. The principle of governing dynamics serves to ground an object or occurrence in its pure and absolute state because it is essentially a malleable law, under which nothing is fixed nor finite; that is to say, X will behave as Y under Z, but that does not imply X will always behave as Y, because X is not fixed and will not always be under condition Z. Under condition V, X will behave as A and not Y, because it will reflect the new conditions and parameters it finds itself operating in. Thus, by understanding the laws, the governing dynamics, of a phenomenon, one is ultimately able to understand the true and original nature of the object or phenomenon under inspection, because one essentially has a series or set of laws which encompass all possible existence, and not simply a single law under which an object or phenomenon is expected to eternally function.

I therefore suggest that for accuracy these two pages should be split and a separate page developed which specifically deals with the Law of Governing Dynamics in its historical and philosophical contexts. Andrew Woollock 14:08, 13 July 2007 (UTC)


 * Feel free to create a page at Governing dynamics to replace the redirect. --best, kevin [kzollman][talk] 15:12, 13 July 2007 (UTC)

Good article delist?
This is a notification of an intention to delist this article as a good article. In February this year at Wikipedia_talk:WikiProject_Game_theory, the consensus was that this article is B-Class (at best) from the point of view of WikiProject Game theory. I agree with this assessment, and last month I also rated it as B-Class from the point of view of WikiProject Mathematics. The article has not improved since these assessments, despite being Game theory collaboration of the month for two months.

The lead is inadequate as a summary of the article. The article is missing a history section, which is surely vital in this case (and is partly covered in the lead). The accessibility of the article could be much improved, the references are poor, and citation is both inadequate and inconsistent. I will attempt to improve the article myself over the next day or two, but I don't think I will be able to raise the article to GA standard myself. I hope others will contribute to ensuring that this article meets with criteria, otherwise I will have to delist it. An alternative would be to take the article to Good article review and any editor is welcome to do that. Geometry guy 21:28, 12 July 2007 (UTC)


 * GA review (see here for criteria)


 * 1. It is reasonably well written.
 * a (prose): b (MoS):
 * 2. It is factually accurate and verifiable.
 * a (references): b (citations to reliable sources):  c (OR):
 * 3. It is broad in its coverage.
 * a (major aspects): b (focused):
 * 4. It follows the neutral point of view policy.
 * a (fair representation): b (all significant views):
 * 5. It is stable.
 * 6. It contains images, where possible, to illustrate the topic.
 * a (tagged and captioned): b (lack of images does not in itself exclude GA):  c (non-free images have fair use rationales):
 * 7. Overall:
 * a Pass/Fail:

I think there is consensus that this is not yet a good article. The lead is weak, the prose poor. Citation is extremely weak. Improve and renominate: good luck! Geometry guy 20:35, 14 July 2007 (UTC)

Nontechnical definition
I hope the specialists among us don't mind my attempt at defining Nash equilibrium in a lay person's language. If you are tempted to quibble with my definition, just remember how awfully vague the (non)explanation of Nash equilibrium was in that bar scene in the film of A Beautiful Mind. Such a fundamental and elegant idea deserves to be explained in words everybody can understand. --Rinconsoleao 19:46, 16 July 2007 (UTC)

Math Typo
I have added a comma after the i, 194.223.231.3 15:26, 17 July 2007 (UTC)

Maybe I just don't get this, but...
I think it is weird that the proportion between the players' payoffs is not taken into account. Surely I would, in a two-player game, prefer weakening both myself and my opponent if I weakened him more. Consequently, I would, from the situation A,B (25,40), being player one, change to C (15,5) to force the opponent into also doing C (10,10). That is a truly stable position, in which no player can earn anything by changing strategy. Isn't that actually the only one of the three "stable" combinations that satisfies both stability criteria? —Preceding unsigned comment added by 77.40.128.194 (talk) 21:25, 15 January 2008 (UTC)
 * Well, this is one common problem when people look at normal form game representations: the situation is given for the game at hand, where each player maximizes his or her payoff. Payoff doesn't have to mean money; it could mean pleasure or whatever as well, so if someone is happy with the counter-player not getting that much either, then it should be counted into the payoff (e.g. a 25,40 would be a 0,40 if the player does not want the other player to make 40), but it is not something that should be factored in when calculating a NE. In other words, the example on the page does not necessarily mean the two players are competing to win a game (let's say they are both businessmen selling products on the same market and getting big profits without giving a damn how much the other one makes). Gillis (talk) 23:29, 15 January 2008 (UTC)

Strong/Weak explanation
I've noticed that although the article makes use of the terms strong & weak Nash equilibria, they are not defined anywhere. -- ricmitch 08:17, 21 April 2008 (UTC)
 * Check Gillis (talk) 18:37, 21 April 2008 (UTC)

History of Nash equilibrium. I just changed this section. I can develop it a bit more and add some discussion. The concept of a Nash equilibrium in pure strategies (to use the current name) was clearly in use in oligopoly theory in the 19th century and was well known: the writings of Edgeworth, Hotelling, and Stackelberg, to name a few of the better known names. I do not know for sure, but it is quite possible that Oskar Morgenstern did not know much about this, since it is left out of the Theory of Games and Economic Behavior (as far as I remember). Von Neumann was primarily interested in card games and developed the idea of a mixed strategy (bluffing) in this context. In the Theory of Games, he showed that all zero-sum games possess a mixed strategy equilibrium. The modern form of the Nash equilibrium (where the strategies are in general mixed) was the contribution of Nash: he extended von Neumann's concept to all games with a finite number of actions. There have of course been various extensions since then: in particular, Dasgupta and Maskin provided an existence theorem for the non-finite case (when the strategic variables such as price are continuous) in their 1987 Review of Economic Studies article, and so on, but much of this is secondary. Zosimos101 (talk) 18:04, 25 April 2008 (UTC)

pareto optimality example
Oh well, the bank run might be just as good... feel free to revert me if you think so... but I think a cartel is something more people are likely to be familiar with than a bank run... and in the cartel example all players get a boost in profits, whereas in a bank run only some get more than their share of the value of the bank's assets. Gillis (talk) 21:38, 31 May 2008 (UTC)

Needs citations
Gentlemen, this article is going to get an ugly citation template if you continue without proper citations. This is a serious topic, not pop culture fan-cruft, so we expect better from you. It's been a year since it was shown that this was a problem, yet there are still only two in-line citations. Please try to rectify this as soon as possible! Note, the citations need not be Internet accessible - good, old fashion book citations are more than sufficient. --Dragon695 (talk) 03:01, 8 July 2008 (UTC)
 * am I one of those gentlemen? I have seen that in your "call for a look" on the talk page of User talk:ScienceApologist you mention paranormal fans and homeopaths, in reference to a page in which you can even find a couple of proofs of a mathematical theorem. The definition, the examples given, and the justifications for the importance of the Nash equilibrium concept are to be found in any good book dealing with game theory. The list of references could be improved by adding some more reference books (Myerson, Owen, Binmore, etc.) that already appear in the references of the page devoted to game theory. And these are printed books, even best sellers in the field.
 * To sum up: page can be improved, citations can be added, but it is not (as of now) a meeting point for paranormal fans. --Fioravante Patrone en (talk) 04:58, 8 July 2008 (UTC)


 * Hmm. Tags are largely pointless; based on my knowledge of game theory, the page appears basically correct. It could use copyediting to reduce wordiness. You could add a tag, but do you think that's going to really spur people to cite it inline? It's just going to bother the readers. If you want it cited inline, get one of those books -- if necessary do an interlibrary loan -- then just go down the page citing. It might take you 4-5 hours, but you'd learn something in the process. II  | (t - c) 19:11, 8 July 2008 (UTC)


 * Dragon695, since you have taken an interest in this article and indicated this by chastising its current editors, you should consider adding a few of these references as a good-faith gesture. Antelan talk  00:49, 9 July 2008 (UTC)
 * I have added a few notable game theory textbooks --Fioravante Patrone en (talk) 06:01, 9 July 2008 (UTC)

Stability of strategy
Article states, in the "Stability" section: "Note that stability of the equilibrium is related to, but distinct from, stability of a strategy."

I thought it should be relatively easy to track down the meaning of "stability of a strategy". Unfortunately, a visit to the Stability disambiguation page and the linked Stability (probability) page have left me none the wiser.

So, unless the reader can find some meaning in this phrase, I suggest that leaving the quoted sentence in the article is both unhelpful and potentially discouraging. Should we remove it? yoyo 23:02, 24 December 2006 (UTC)

I think that the current section on stability is rather weak. In general, we know that in a mixed strategy NE, all actions played with a strictly positive probability have the same expected payoff (i.e. if they were played with probability 1 they would yield the same payoff as the equilibrium mixed strategy). Hence if more than one strategy is played in equilibrium it must be unstable in sense 2. Only pure strategy NE can be strict equilibria.  Zosimos101 (talk) 17:41, 25 April 2008 (UTC)
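The equal-payoff property Zosimos101 mentions can be verified on matching pennies' 50/50 mixed equilibrium: every action in the row player's support earns the same expected payoff against the column player's mix.

```python
# Row player's payoffs in matching pennies for strategies H/T.
row_payoff = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}
q = {"H": 0.5, "T": 0.5}   # column player's equilibrium mix

# Expected payoff of each pure action against the opponent's mix.
expected = {a: sum(q[b] * row_payoff[(a, b)] for b in "HT") for a in "HT"}
print(expected)  # -> {'H': 0.0, 'T': 0.0}: both supported actions pay the same
```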


 * Do you suppose that line is referring to Evolutionarily stable strategy? If so, that can/should be clarified. If not, I'm also ignorant of what it means, and unless someone can clear it up with a ref, it should go away. Cretog8 (talk) 02:51, 25 June 2008 (UTC)


 * Having no clarifying response, I pulled that line out. Cretog8 (talk) 03:19, 12 July 2008 (UTC)

The article is written mostly for mathematicians, not for laypeople
I am a layman who is merely interested in mathematics, and whose formal education stopped at more or less high-school level. From reading this article, I do not understand why or how the Nash equilibrium is important, although I am persuaded (by other sources) that it is so. To give you an idea of how mathematically (il)literate I am, I have what I imagine is a reasonably accurate (albeit non-technical) understanding of the significance of Gödel's first incompleteness theorem, although I didn't get it from Wikipedia but from reading various books on the subject. I can only assume that this article faithfully represents Nash's result, but all I can tell you is that it explains the result in terms that mostly mathematicians will understand. As an encyclopedia article, designed to explain the equilibrium and its importance in layman's terms, it is a failure. The lead paragraph helps me to understand the nature of the result, and because I have read a few introductory books about game theory I think I get how it builds upon earlier work, but it doesn't tell me why it matters. Since Nash won the Nobel for this work, I assume the result must be very significant, but all this article seems to do is give a precis of the result that would help a math undergraduate with coursework. It does not explain to non-mathematicians why Nash deserved a Nobel for this work. Sorry to be a pain, but I am not mathematically literate enough to improve the article, since I still don't understand the significance of the result. Lexo (talk) 23:39, 10 July 2008 (UTC)


 * The gist of Nash equilibrium is pretty easy, and that's what most of the article includes, rather than the math. Is that stuff unclear to you, or just the math sections? I'm not sure that there's a simple/intuitive proof or semi-proof of the existence of Nash equilibrium, as there are intuitive semi-proofs of the incompleteness theorem. Cretog8 (talk) 23:53, 10 July 2008 (UTC)


 * I can understand being confused. The article presumes knowledge of some things, introducing terms like "pure strategies", "mixed strategies", and "zero-sum" without really concisely explaining what they are. I've wikilinked a little bit to help you find that context. As for why it matters: that's a damn good question. Economists tend to do theory for theory's sake. It appears that most of the editors to this article are guilty of falling into this trap. You're right that there is no real discussion of the applications of this. It tends to be applied loosely (without a ton of concern for the empirics) to oligopolistic competition. The prisoner's dilemma should probably be particularly highlighted, rather than being placed in parentheses, as large corporations sometimes face a form of prisoner's dilemma when it comes to a price war. It's somewhat amusing in this context, since it implies to me that firms will collude, but most economists don't worry about collusion. It's applied across game theory and in several models, e.g. Bertrand competition. II  | (t - c) 03:24, 11 July 2008 (UTC)


 * I'll add a few points here in hopes of helping your understanding (and in figuring out how to make the article better). A Nash equilibrium is what will happen in a game when two rational agents are playing against each other (and the fact that both agents are rational is common knowledge).  In a way, you can think about a Nash equilibrium as effectively all possible stalemates in the game.  Note that there can be several Nash equilibria for a game, and if there are several, you can't be sure which one the rational agents will play, but they will play one of them.  I don't remember whether Nash actually came up with the original idea of the Nash equilibrium by himself (I swear I remember reading that the idea was floating around among other researchers), but he was definitely the one that proved that every game has at least one of these equilibria.  Nash equilibria frequently (but definitely not always) predict the actions of individuals, firms, governments, etc., and in other cases suggest how to optimally play.  Note that sometimes Nash equilibria can have counter-intuitive results, with interesting artifacts being games like Dollar auction and the Traveler's dilemma.  Another thing to note is that most of the Nash equilibria presented in introductory books are for single-shot games, meaning that once you play the game, you will never encounter the agent(s) you played against ever again.  With repeated games, that is, any time when you may have interactions in the future (which could even be different games), MANY Nash equilibria often emerge, and in some cases uncountably many.  Other limitations are that people, agents, firms, etc. are often boundedly rational, meaning they don't have the information or computational resources to determine the Nash equilibria in time to play the game (finding Nash equilibria is generally NP-hard, but the structure of certain classes of games can make finding the equilibria considerably easier in those cases).
Sometimes, single-shot games can be effectively used to approximate behavior of repeated games if the agents are sufficiently impatient.  Hope this helps.  Halcyonhazard (talk) 07:02, 11 July 2008 (UTC)
 * I appreciate the comment above. And I am waiting for the helpful contribution of Halcyonhazard. Just a point of disagreement (not an irrelevant one). You say: Note that there can be several Nash equilibria for a game, and if there are several, you can't be sure which one the rational agents will play, but they will play one of them. I find this a potentially misleading sentence. The main trouble arising from the multiplicity of Nash equilibria is due to the fact that in general it is meaningless to speak of an "equilibrium strategy" for a player. The damnation of Nash equilibria is that they are strategy profiles. So, in case of multiplicity you cannot say that they will play one of them: in games like the battle of the sexes or of pure coordination, you cannot guarantee that you get as outcome a Nash equilibrium outcome. Hope that was clear. --Fioravante Patrone en (talk) 07:37, 11 July 2008 (UTC)
 * Good catch, Fioravante Patrone en. After re-reading that sentence, I agree that it is potentially dangerously misleading... I was trying to give simple explanations, but I simplified a bit too much in that sentence.  Does anyone else have any further thoughts about what information should go in the intro of the article to make it more accessible? Halcyonhazard (talk) 17:52, 11 July 2008 (UTC)
 * As I read your comments, I begin to despair at the prospect of making a single sensible article. Here's two thoughts. First, it's possible that what it needs is highlighting of the importance of various ideas. Really (as far as I'm concerned) the existing lead is a pretty solid explanation of Nash equilibrium. The concept is straightforward, and the article should make it clear that it's straightforward, rather than leaving people to think they have to grok a lot of other stuff. (Of course, I already know too much, so it might take feedback from people who don't already know NE to decide if the lead makes sense to them.) Second, what's the Nash part, and why do we care? Nash's technical contribution was the mixed-strategy existence proof, but unfortunately (which the article doesn't mention) that's empirically very questionable as a prediction of behavior.
 * Oops, third, and getting to the technical side: I'm not sure how an example like Prisoner's Dilemma should be handled. Yes, it's an interesting game and the solution is a NE, but the concepts behind NE are unnecessary for that solution. I would actually (again, being on the technical side, so coming after other explanations) lean towards pointing out that the Nash ideas aren't necessary as a way of clarifying the ideas. (And I'm pretty sure that I'm going to replace Prisoner's Dilemma with Rock paper scissors in the equilibrium info box.) Cretog8 (talk) 18:25, 11 July 2008 (UTC)
 * When you say what's the Nash part, and why do we care? Nash's technical contribution was the mixed-strategy existence proof..., it was more that Nash proved that every game has at least one of these equilibria described in the article (which may or may not be in pure strategies). John von Neumann is usually credited with coming up with the idea of mixed strategies.  I agree that the PD has a lot of conceptual baggage that might bog down the NE article, but I do think it should be mentioned because it's so important (I once read that prior to 1970, there were 2000 peer-reviewed articles on PD... I can only imagine how many there are now).  I wonder if this article could be more accessible if dominant strategies were briefly explained first (I know it's a separate article), then use iterated strict dominance to show a fairly trivial NE, then show another NE example where the scaffolding of iterated strict dominance doesn't work (this is how I've taught it in my classes, and so far it's worked pretty well).  I also think it might be worth briefly mentioning repeated games and folk theorems, because repeated games might show more applicability to some readers (and the details can be left for the appropriate other articles).  However, in my opinion, I don't think we should get rid of some of the more technical sections, because they are very important to the article. Halcyonhazard (talk) 22:00, 11 July 2008 (UTC)
 * Both seem to be good. Replacing PD with rock-scissors-paper (as has been done by Cretog8). Or re-introducing PD following the path: domination - iterated deletion of dominated strategies - equilibrium. I agree with the didactic use of this path (direct experience too). However, I am not sure that it should be used as the main pathway to Nash equilibrium in this article. After all, rationalizable strategies (and emphasis on iterated elimination...) came some thirty years after Nash equilibrium ;-) Why not insert this in an interesting and useful subsection (with connections to other articles)? For repeated games, some room can be left for them, sure. --Fioravante Patrone en (talk) 23:36, 11 July 2008 (UTC)
 * I'm not comfortable with the domination->...iterated elimination->...Nash progression for an article like this. It makes sense in lectures where you can do lots of back-and-forth, but in a static article, I think NE can and should stand on its own. It really is a nice, clean idea, and I'm still concerned that the nice clean idea gets lost in the surrounding detail. I think that after NE has been explained, showing PD's relationship to it would be great. And I certainly don't think the technical stuff should be removed, just that (and I'm not a good enough writer to know how to do this, or for that matter if it's a problem in the current version) it should be made clear to the reader that they can get the ideas without the math. Cretog8 (talk) 00:17, 12 July 2008 (UTC)
 * I would like to chime in on the ambiguity of the phrase "equilibrium strategy" in reference to the NE. Somebody else has even dead-wiki-linked the term so I have to assume it's a problem in the opening section.  Somebody above was asking if regular folk of reasonable intelligence were having trouble understanding that first paragraph.  I am a person of better than reasonable intelligence but no particular history in game theory and I find it to be confusing.  Maybe game theorists already have these concepts down but I want to ask the following:  Does NE only apply to participants that want an equilibrium because that's what it sounds like right now?  Does NE describe the state/progress of the game or the player's strategy?  The answer to this last I assume is the state/progress but it's not that clear from the article.

This article needs a good "Applications in economics" section, although I'm not really qualified to do it. The real-world application here is rather light. II | (t - c) 00:34, 12 July 2008 (UTC)


 * This discussion has got a bit bogged down. Most of the replies to my original comment are about ways of improving how NE can be explained, but few of them address my main point, which was that it's not clear from the article why Nash deserved to win the Nobel prize for his work.  I don't know whether it was because NE has such useful real-world applications or whether it was just a brilliant piece of work on his part, but then mathematicians have done great work before and have not been awarded the Nobel for it.  Halcyonhazard, in his initial comment above, has come closer than anyone else in suggesting how NE is actually used in real life, but can anyone provide me with a properly sourced example?  I'm not suggesting that the technical stuff be cut, but I think that this article has all the math it needs for the moment; what it requires right now is some history of math.  It reads like an extract from a textbook, when it should be illustrating the context and significance of the result for lay readers. Lexo (talk) 13:43, 14 July 2008 (UTC)
 * as I said: "I am waiting for the helpful contribution of Halcyonhazard." My hope is that he will help this article, since he showed ideas that I liked. This article is not an easy one, since it lies at the heart of game theory and many lines of thought mix there (not to speak of its relevance, or not, for the applied side of game theory). --Fioravante Patrone en (talk) 14:45, 14 July 2008 (UTC)
 * I don't think it is fair for an article on the NE (if any article) to be expected to explain why Nash deserved to win the Nobel. There is quite a discussion, with more than a few people seeming to believe that Nash won it because von Neumann died before he could get it.  On the other hand, some see Nash's work as critical in the turn of economics towards being a science of human behavior (think Becker) and incentives, and away from being a science of resource allocation (think Jevons).  Here is Myerson making that argument.  That discussion, however, might belong more in the history section of the game theory article.  The reason the significance of the Nash equilibrium in terms of applications isn't more fully discussed is because that discussion usually belongs more in articles on the individual games themselves (PD, for example). Smmurphy(Talk) 17:03, 14 July 2008 (UTC)
 * For "why the Nobel", the best source might be . My take on it is that it wasn't the math that won Nash the Nobel, but the concept. Von Neumann's comment, "That's trivial, you know. That's just a fixed point theorem." might be true from a math point of view. (I'm not one to judge.) And lots--I'd say the majority--of game theory whether applied or otherwise, doesn't relate to mixed strategies in finite games. But the idea of an equilibrium in best-responses (while it had been implicit for Cournot and vN&M) is now the core of game theory, including dynamic games, refinements, Bayesian games, whatever.
 * The link above has the quote, "His most important contribution to the theory of non-cooperative games was to formulate a universal solution concept with an arbitrary number of players and arbitrary preferences, i.e., not solely for two-person zero-sum games." To me, that captures much of the essence, since it doesn't dwell on the details (fixed point theorem, simultaneous, finite game of complete information), but on the idea.
 * As to its importance in application, that might be trickier. Would a general list be more helpful, or one or a few specific cases well-explained? Cretog8 (talk) 18:12, 14 July 2008 (UTC)
 * Smmurphy's point is well taken, but on the other hand, an article on (say) an important book or film would be regarded as simply incomplete if it didn't record something of the critical reception and public success (or lack of same) of that book or film. The important criterion here is in fact the simple one of WP:Notability.  I have to reiterate once again that this is not supposed to be simply a textbook explanation of the NE, but an encyclopedia article about it; if it is to be a useful encyclopedia article, it needs to study the subject from more than just the purely game-theoretical angle, on the grounds that a good article is broad in its coverage.  It could be that there is no consensus about how useful or important the NE is; I don't know, not being a mathematician, but as an editor what I do know is that the article should record and reflect that lack of consensus. Lexo (talk) 01:51, 15 July 2008 (UTC)

Formal definition
Section states:

A strategy profile $$x^* \in S$$ is a Nash equilibrium (NE) if no unilateral deviation in strategy by any single player is profitable, that is


 * $$\forall i, f_i(x^*_{i}, x^*_{-i}) \geq f_i(x_{i},x^*_{-i}).$$

but I think that it would be more accurate to say:

A strategy profile $$x^* \in S$$ is a Nash equilibrium (NE) if no unilateral deviation in strategy by any single player is profitable for that player, that is


 * $$\forall i, f_i(x^*_{i}, x^*_{-i}) \geq f_i(x_{i},x^*_{-i}).$$

since some other player could certainly increase his payoff if player $$i$$ (foolishly) changes his strategy. Am I right? -- Obradović Goran (talk) 01:17, 25 July 2008 (UTC)


 * You're right, except that I think "for that player" is implicit. However, being a formal definition, there's no harm in being explicit.
 * Although, now that I read it a bit more carefully, the overall definition needs to change some.
 * $$\forall i,x_{i} : f_i(x^*_{i}, x^*_{-i}) \geq f_i(x_{i},x^*_{-i}).$$
 * ...and even that isn't quite enough, I don't think, without requiring that xi be in i's strategy set, which isn't defined. So it looks like it needs cleanup. Cretog8 (talk) 04:07, 25 July 2008 (UTC)


 * How does this look?
 * Let $$(S, f)$$ be a game, where $$S_i$$ is the strategy set for player $$i$$, $$S = S_1 \times S_2 \times \cdots \times S_n$$ is the set of strategy profiles and $$f = (f_1(x), \ldots, f_n(x))$$ is the payoff function. Let $$x_{-i}$$ be a strategy profile of all players except for player $$i$$. When each player $$i \in \{1, \ldots, n\}$$ chooses strategy $$x_i$$, resulting in the strategy profile $$x = (x_1, \ldots, x_n)$$, player $$i$$ obtains payoff $$f_i(x)$$. Note that the payoff depends on the strategy profile chosen, i.e. on the strategy chosen by player $$i$$ as well as the strategies chosen by all the other players. A strategy profile $$x^* \in S$$ is a Nash equilibrium (NE) if no unilateral deviation in strategy by any single player is profitable for that player, that is


 * $$\forall i,x_i\in S_i : f_i(x^*_{i}, x^*_{-i}) \geq f_i(x_{i},x^*_{-i}).$$


 * quite reasonable, and for sure a relevant improvement. (I deleted an extra " ' " sign above). An additional feature could be the coordination with the definition of game in normal (or strategic) form. There are notational differences that could be polished. You're welcome ;-) --Fioravante Patrone en (talk) 09:28, 26 July 2008 (UTC)


 * Thanks. Silly of me, I didn't think to check out other relevant articles. It would be good to have as much formal stuff in a canonical form in one article and then hidden references to that in other articles. I'm too scatter-brained at the moment to do polishing, but should try to come back to it.
 * I made one additional change in the article, which is to make the math bit:
 * $$\forall i,x_i\in S_i, x_i \neq x^*_{i} : f_i(x^*_{i}, x^*_{-i}) \geq f_i(x_{i},x^*_{-i}).$$
 * That additional $$x_i \neq x^*_{i}$$ is to make it match with the text below about strict inequality.
 * I can't decide which looks uglier, straight html or . The stand-alone math line looks fine, but the inline stuff is horrible. Cretog8 (talk) 05:28, 30 July 2008 (UTC)
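The definition as finalized above lends itself to a mechanical check. Below is a minimal sketch of a brute-force pure-strategy Nash equilibrium test following that formal definition; the function names and the little 2x2 coordination game are illustrative, not from the article:

```python
from itertools import product

def is_nash_equilibrium(payoff, strategy_sets, profile):
    """Formal definition: for every player i and every alternative
    strategy x_i in S_i, a unilateral deviation from the profile x*
    must not raise player i's payoff."""
    for i in range(len(profile)):
        base = payoff(profile)[i]
        for x_i in strategy_sets[i]:
            deviation = profile[:i] + (x_i,) + profile[i + 1:]
            if payoff(deviation)[i] > base:
                return False  # a profitable unilateral deviation exists
    return True

def find_pure_equilibria(payoff, strategy_sets):
    """Enumerate S = S_1 x ... x S_n and keep the profiles that
    satisfy the definition."""
    return [p for p in product(*strategy_sets)
            if is_nash_equilibrium(payoff, strategy_sets, p)]

# Illustration: a coordination game with two pure equilibria.
coord = {('A', 'A'): (2, 2), ('A', 'B'): (0, 0),
         ('B', 'A'): (0, 0), ('B', 'B'): (1, 1)}
equilibria = find_pure_equilibria(coord.get, [['A', 'B'], ['A', 'B']])
# equilibria contains ('A', 'A') and ('B', 'B'); the mismatched
# profiles fail, since either player gains by switching.
```

Note that the enumeration is exponential in the number of players, which echoes the complexity remarks made earlier on this page.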

Usefulness section
I've argued with the editor who added this section. I can definitely understand the impulse, but I think the addition as it is doesn't improve the article. It tries to provide some more context for application of NE. On reviewing the article, I agree it needs that early on, perhaps in the lead, perhaps in an early section. It also attempts to flesh out further the prisoner's dilemma/tragedy-of-the-commons type game application, which could possibly use a little extra development in the prisoner's dilemma section.

On the downside, the addition puts too much emphasis on this specific kind of dilemma game, while NE is much more broadly applicable (and NE isn't needed to solve such games). It's also a bit essayish, with the risk of including POV and inaccuracies (about whether people in fact follow their own best interest, and about what NE assumes about it).

I'm going to pull it out (again), and I think it should remain out until the important ideas can be better integrated, possibly via discussion here. Cretog8 (t/c) 16:33, 7 January 2009 (UTC)


 * I am waiting for an answer by Muinchille1 to the constructive remarks by Cretog8. There is clearly room for integration as just said by Cretog8 --Fioravante Patrone en (talk) 17:28, 7 January 2009 (UTC)


 * I have read the section and the arguments from User:Cretog8 and User:Muinchille1 - I don't want to come off as an oracle or anything, but I agree with Muinchille1 in the part that it can be interpreted as inappropriate to simply remove a section if you are disagreeing with its quality. I also agree with Cretog8 that it should be rewritten to better meet the general standards of Wikipedia - but I know for a fact that one of the most asked questions by any "secondary schoolchild" is "what can this be used for?". So my conclusion is that the essence of Wikipedia is to write and improve - so the section should be there and improved on-site. I think that there is a very long way between saying that the section does not improve the article and saying that it harms the article, and Cretog8 has stated both. Just my "five cents". --Kotu Kubin (talk) 19:06, 7 January 2009 (UTC)


 * Why not call it Applications instead of Usefulness? Then we could very very briefly describe some of the applications and insights associated with some of the games which are more formally defined in the sections that follow... --Rinconsoleao (talk) 09:24, 24 February 2009 (UTC)
 * Alternatively, if that eats up too much space, we could very very briefly discuss the purpose of Nash equilibrium in the introduction (it is a way of analyzing and predicting the outcome of the interaction of several decision makers, where each takes into account the strategies of the others), and then put an Applications subsection following each type of game that is discussed in more detail. --Rinconsoleao (talk) 09:37, 24 February 2009 (UTC)

ROUGH DRAFT of Applications section:
 * The Nash equilibrium concept is used to analyze the outcome of the strategic interaction of several decision makers. In other words, it is a way of predicting what will happen if several people or several institutions are making decisions at the same time, and if the decision of each one depends on the decisions of the others. The simple insight underlying John Nash's idea is that we cannot predict the result of the choices of multiple decision makers if we analyze those decisions in isolation. Instead, we must ask what each decision maker would do, taking into account the decision-making of the others.


 * Nash equilibrium has been used to analyze hostile situations like war and arms races (see Prisoner's dilemma), and also how conflict may be resolved through repeated interaction. It has also been used to study how people with different preferences can cooperate (see Battle of the sexes), and whether they will take risks to achieve a cooperative outcome (see Stag hunt). It has been used to study the adoption of technological standards, and also the occurrence of bank runs and currency crises (see Coordination game). --Rinconsoleao (talk) 10:49, 24 February 2009 (UTC)

COMMENT: I have now added the Applications section. Maybe I should point out that I wrote it with nonspecialists in mind. Economists and game theorists and mathematicians already know "why it's useful", so the people that need to see an Applications section are others. But therefore I have NOT tried hard to distinguish between applications of Nash equilibrium per se and applications of game theory in general. And I have not taken care, in my list of examples of specific applications, to distinguish between examples using Nash equilibrium per se and examples that use refinements or generalizations of Nash equilibrium. --Rinconsoleao (talk) 15:44, 25 February 2009 (UTC)


 * Rinconsoleao, good work on that section, I think it brings a lot more clarity to the article, without sacrificing too much precision. One comment, though: Maybe it's me, but I like reading the History section of an article before anything else (except the lead). Would it look OK to move the applications section below history? Think about it: if it doesn't look strong enough as an independent section below history, maybe it should get rewritten and placed under the lead section. --Forich (talk) 14:34, 28 February 2009 (UTC)


 * Usually I would agree, I like the history section to come first. But in this article, the history section is necessarily somewhat technical, and the purpose of the 'applications' section is to give the ordinary reader some idea what this is about. So I think it's a lot more 'user-friendly' if we put applications before history. --Rinconsoleao (talk) 19:18, 1 March 2009 (UTC)

The "NE" acronym
There is an inconsistent use of the acronym "NE" throughout the article. I encourage us not to use the acronym at all. It's just a two-word long name after all.--Forich (talk) 04:51, 16 March 2009 (UTC)
 * I agree. The acronym is ugly and unnecessary. --Rinconsoleao (talk) 09:21, 16 March 2009 (UTC)

Rock, Paper, Scissors
The infobox states that RPS is an example of a game with a NE. Can someone enlighten me as to what that strategy is supposed to be? Neither this article nor the RPS article mentions it, and all the strategies I can think of offhand will cause an opponent to change strategy... so not sure what is meant here. Gijs Kruitbosch (talk) 23:31, 22 March 2009 (UTC)
 * The equilibrium strategy is to pick one's play at random: 1/3 probability of each option. Algebraist 23:48, 22 March 2009 (UTC)
 * In that case, doesn't every game have an NE? Considering random play a strategy seems... peculiar. Gijs Kruitbosch (talk) 09:31, 23 March 2009 (UTC)


 * Strictly speaking, an equilibrium in which players pick among their strategies randomly (each with probability < 1) is called a "Nash equilibrium in mixed strategies". Nash's theorem says that every game with a finite number of strategies has an equilibrium in mixed strategies. But games with an infinite number of strategies may not have even an NE in mixed strategies. For example, the game where we both simultaneously pick a number and whoever picks a higher number wins has no equilibrium. radek (talk) 09:42, 23 March 2009 (UTC)


 * Furthermore, wouldn't an infinitesimal change in the probabilities of that strategy mean the opponent would be better off making an equivalent infinitesimal change in strategy, thus not living up to the cited rules under the Stability heading? Gijs Kruitbosch (talk) 09:36, 23 March 2009 (UTC)


 * Yes, but for the probabilities to be a mixed NE they have to be such that no player wishes to change their probability distribution given the other player's distribution. For stability - that's going to depend on the game. RPS is an example of a game which is cyclically stable but no "pure strategies" equilibrium is itself stable. It actually describes an evolutionary model of some lizard which cycles between different colors (or was it sizes) in the same way that rock beats scissors beats paper beats rock. radek (talk) 09:42, 23 March 2009 (UTC)


 * Oh yeah, here: . radek (talk) 09:43, 23 March 2009 (UTC)
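The 1/3-1/3-1/3 claim above can be verified numerically: against a uniformly mixing opponent, every pure reply in rock-paper-scissors earns the same expected payoff, so no unilateral change of mixture is profitable. A minimal sketch, assuming the usual win = +1, loss = -1, tie = 0 payoffs:

```python
from fractions import Fraction

# Row player's payoff in rock-paper-scissors: +1 win, -1 loss, 0 tie.
PAYOFF = {
    ('R', 'R'): 0,  ('R', 'P'): -1, ('R', 'S'): 1,
    ('P', 'R'): 1,  ('P', 'P'): 0,  ('P', 'S'): -1,
    ('S', 'R'): -1, ('S', 'P'): 1,  ('S', 'S'): 0,
}
uniform = {move: Fraction(1, 3) for move in 'RPS'}

def expected_payoff(pure, opponent_mix):
    """Expected payoff of a pure strategy against a mixed strategy."""
    return sum(prob * PAYOFF[(pure, move)]
               for move, prob in opponent_mix.items())

payoffs = {move: expected_payoff(move, uniform) for move in 'RPS'}
# Every pure reply earns 0 against the uniform mixture, so every
# mixture of them also earns 0: no deviation from (1/3, 1/3, 1/3)
# is profitable, which is exactly the mixed equilibrium condition.
```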

nash equilibrium vs pareto optimality
Could the authors of the Nash Equilibrium article say something about WHY a NE is not always Pareto Optimal? Is it because the conditions for a NE game are no cooperation between the players and so they can get stuck, not having control over the whole strategy vector but only their own part of it? Clejan (talk) 23:51, 1 June 2009 (UTC)


 * Yes, that's basically it. NE is a solution concept for noncooperative games, so it looks at decisions based only on each decision maker's own interests. Depending on the type of game, each individual's rational decision may lead to a Pareto optimal situation, or to a suboptimal situation (the Prisoner's Dilemma is the most famous example). There is also another branch of game theory called cooperative game theory in which decisions may be made by groups, but NE plays no role in that theory. --Rinconsoleao (talk) 07:35, 2 June 2009 (UTC)


 * That's correct, in a sense. However, I think it's better to think of Nash equilibrium and Pareto optimality as unconnected in any general sense. How they interact in any given game is a property of the game, not how the two concepts relate to each other. Cretog8 (t/c) 16:41, 2 June 2009 (UTC)
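A concrete illustration of the two concepts coming apart is the Prisoner's Dilemma mentioned above. The sketch below uses the standard payoffs (3 each for mutual cooperation, 5 for a lone defector, 0 for the lone cooperator, 1 each for mutual defection), finds the unique pure Nash equilibrium, and checks that it is Pareto-dominated; the helper names are illustrative:

```python
from itertools import product

# Standard prisoner's dilemma payoffs (C = cooperate, D = defect).
PD = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
S = ['C', 'D']

def is_nash(p):
    """No player gains by a unilateral deviation from profile p."""
    for i in range(2):
        for s in S:
            dev = (s, p[1]) if i == 0 else (p[0], s)
            if PD[dev][i] > PD[p][i]:
                return False
    return True

def pareto_dominated(p):
    """Some other profile leaves everyone at least as well off
    and someone strictly better off."""
    return any(all(PD[q][i] >= PD[p][i] for i in range(2)) and
               any(PD[q][i] > PD[p][i] for i in range(2))
               for q in PD if q != p)

equilibria = [p for p in product(S, S) if is_nash(p)]
# The only equilibrium is (D, D), yet (C, C) Pareto-dominates it:
# the equilibrium concept and the optimality concept pull apart.
```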

Joining equilbrium systems in real life
Nash equilibrium may not be a specific goal of the major entities in the economic environment at all times, but the usual state of the economy is much closer to a Nash equilibrium than not.

It will generally not be possible for the actions of an individual to affect the market's equilibrium. After he is in a stable situation, he won't be able to improve his situation by his own actions alone. He must form alliances or take other actions in cooperation with others, in order to improve his situation.

Usually he will form an alliance with one or more of the entities, called employers, that are part of the economy. After making the best decisions he can with the income he has, he will be able to improve his situation further only if he acts in concert with other agents. Even as an employee, he must consult with either his present employer, other employers, or employment counselors in order to improve his situation.

This whole area of economics seems to be woefully unknown. Yet it should be known thoroughly, for millions of new agents join the largely stable economic system each year. Their living standards, even their very lives, depend on a win-win economic relationship with others. How does a new graduate successfully join a stable economy? How does he maintain a sense that he can act effectively in charge of his own fate? Why is knowledge of successful strategies coveted and privileged information, not widely known, as if the market economic system did not actually plan for the number of people entering it each year? Or are there only a few possible ways to use successful entry strategies? —Preceding unsigned comment added by SyntheticET (talk • contribs) 02:12, 1 October 2009 (UTC)

Intransitive Strategies
It seems that this page needs to mention the case when strategies are intransitive. In the number game for example, a comparison of 0 vs. 4 would give an additional equilibrium of (4,4), but choosing 3>4, 3>2, 2>1, and 1>0. This situation seems to apply to several cases such as the Traveler's Dilemma and Centipede game where the Nash equilibrium is not the result of play and not the best strategy. I don't know of any references and am not educated in game theory, but explicitly addressing this could make these cases where the Nash equilibrium is not chosen much less confusing. —Preceding unsigned comment added by Sapiens scriptor (talk • contribs) 03:22, 14 December 2009 (UTC)

"Bomb room" example in "Informal Definition"
Am I the only one who finds the bomb example really confusing? Why would one person kill the other if they have money? (It doesn't even mention at the beginning that the people have money.) Unless I'm missing something totally obvious, I think the example should be modified to be more clear. --Lasindi (talk) 11:15, 5 July 2008 (UTC)


 * The example is in a sense correct, but it's short and quite non-explanatory. It doesn't do a very good job of explaining the concept to someone who is not familiar with it from before, so I suggest we shorten it to not contain the example but only a link to some article that explains non-credible threats.
 * To explain it, you could see it as a sequential-move game with the strategies known. If you draw it up as a tree and in normal form, you'll notice it'll probably have two NE strategy-pairs, i.e. two strategy-pairs where neither player gains anything from deviating (as the strategies are common knowledge). But in fact the strategies that are not subgame perfect contain non-rational moves - or in other words non-credible threats. Non-credible threats are strategies which threaten to do something (such as set off a bomb if the other player does not do something specific), but the threat may prove non-rational when the time of choice comes (the guy notices he loses just as much himself by setting off the bomb... now the other player that is being blackmailed with the bomb may work out a subgame perfect NE for the game and therefore know he is safe to keep his money, if he knows for sure the guy is rational ;)). This became a very fuzzy explanation... I'll look into whether there is a need for another section covering it. Gillis (talk) 11:56, 6 July 2008 (UTC)
 * Okay, now the same point is presented with an understandable example and graph... of course, someone may want to check my hasty work for me. Thanks... Gillis (talk) 20:14, 29 July 2008 (UTC)
 * I don't get the example for non-credible threats. The game isn't explained explicitly - what are the duplets? Should both players choose U/K? Or just one of them? Please clarify...--84.111.29.205 (talk) 20:40, 19 December 2009 (UTC)

Cabal equilibrium
In voting theory, Nash equilibria are much too common, while trembling-hand equilibria are often incalculable. For instance, in the simple election between Gandhi and Hitler, it is a Nash equilibrium for everyone to vote for the universally-despised Hitler, because any one person changing their strategy is not enough to improve the result. Peter de Blanc (not me) has proposed a more restrictive equilibrium, the Cabal equilibrium, in a blog post. Basically, it's an equilibrium where no set of players can strictly improve the result for all of them by simultaneously changing strategies. I believe this is a very useful concept, and wonder if it has been proposed before, or if anyone can find (or create :) citations to establish its WP:Notability. Homunq (talk) 17:24, 28 May 2010 (UTC)


 * I just reread the article, and this is mentioned. It's called a CPNE. Homunq (talk) 17:30, 28 May 2010 (UTC)
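For what it's worth, the voting example above is easy to make concrete. In the hypothetical three-voter sketch below (majority rule, everyone prefers Gandhi), unanimously voting for Hitler is a Nash equilibrium because no single voter can change the outcome alone, yet a two-voter coalition can, which is what the coalition-proof (CPNE) refinement rules out:

```python
def winner(votes):
    """Majority rule between the two candidates."""
    return 'Gandhi' if votes.count('Gandhi') > len(votes) / 2 else 'Hitler'

def payoff(votes):
    """Every voter prefers Gandhi: payoff 1 if Gandhi wins, else 0."""
    return 1 if winner(votes) == 'Gandhi' else 0

everyone_hitler = ['Hitler'] * 3

# Nash check: no single voter gains by switching alone, because
# the other two Hitler votes still form a majority.
unilateral_gains = []
for i in range(3):
    deviation = everyone_hitler.copy()
    deviation[i] = 'Gandhi'
    unilateral_gains.append(payoff(deviation) > payoff(everyone_hitler))
# unilateral_gains == [False, False, False]: the profile is a Nash equilibrium.

# Coalition check: two voters switching together elect Gandhi and make
# every member of the coalition strictly better off.
coalition = ['Gandhi', 'Gandhi', 'Hitler']
```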

Another notation question
In the Proof of Existence section there seem to be some loose ends in the notation. For instance, in the first half of the proof, σ-i is introduced with a mention that it was defined somewhere earlier. I was not able to locate this definition, so even if it is somewhere in the article, it is not easy to find. It doesn't hurt to define all the symbols within the scope of the section.

Also in the next proof I find the same issues with the σ notation, and the u in G=(N,A,u) is also undefined. I find this particularly annoying as I would like to learn the proofs myself. If anyone knows what to do about this, I feel this is an easy fix. -Xian —Preceding unsigned comment added by Dragonflare82 (talk • contribs) 21:14, 20 May 2011 (UTC)

network traffic
Comments should go on the talk page, rather than in the article itself. The current numbers for the network traffic/Braess's Paradox section are correct.


 * 25 via ABD
 * 50 via ABCD
 * 25 via ACD


 * 75 via AB -> travel time is (1.75)
 * 25 via AC -> travel time is (2)
 * 75 via CD -> travel time is (1.75)
 * 25 via BD -> travel time is (2)
 * 50 via BC -> travel time is (0.25)


 * ABD: (A->B)+(B->D) = (1.75)+(2) = 3.75
 * ABCD: (A->B)+(B->C)+(C->D)= (1.75) + (0.25) + (1.75) = 3.75
 * ACD: (A->C)+(C->D) = (2)+(1.75) = 3.75
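The arithmetic can be double-checked mechanically. The sketch below simply encodes the edge travel times listed above and re-derives the route totals (it does not re-derive the times from the driver counts):

```python
# Edge travel times at the equilibrium flows listed above.
edge_time = {
    ('A', 'B'): 1.75, ('A', 'C'): 2.0, ('C', 'D'): 1.75,
    ('B', 'D'): 2.0, ('B', 'C'): 0.25,
}

def route_time(route):
    """Total travel time along a route, summed edge by edge."""
    return sum(edge_time[(a, b)] for a, b in zip(route, route[1:]))

times = {r: route_time(r) for r in ('ABD', 'ABCD', 'ACD')}
# All three routes take 3.75, so no driver gains by switching routes:
# the flow is a Nash equilibrium.
```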

If this still isn't clear, see if the main article on Braess's Paradox is clearer to you. The bit on this page is meant to be short. If the main article is still confusing, then maybe it will need tweaking for clarity. Cretog8 (t/c) 23:03, 18 August 2009 (UTC)
 * Since there have been a couple of "corrections" (fortunately rollbacked), I checked also the numbers, and I found them, as are now in the page, correct. --Fioravante Patrone en (talk) 21:26, 13 October 2009 (UTC)

Thanks, will need to brush up! —Preceding unsigned comment added by 92.9.84.41 (talk) 16:33, 21 August 2009 (UTC)

The article states that "Every driver now has a total travel time of 3.25." As you noted above, in the article's example every driver has a total travel time of 3.75, and removing the B->C route would reduce the total travel time to 3.50. So the numbers aren't right, in the sense that there's a typo that incorrectly says the total travel time in the ABCD model is 3.25. 216.175.89.152 (talk) 18:32, 26 May 2012 (UTC)MMHerbst

Prisoner's dilemma
A few things...

1) The section on Prisoner's dilemma says: each player improves his situation by switching from strategy #1 to strategy #2, no matter what the other player decides. Assuming strategy 1 is cooperate and strategy 2 is defect (and it makes even less sense the other way around) this is not true. The players only improve their situation (originally both cooperating and both get 3) if only one of them makes the switch (then 5 for the improved defector, 0 for the poor cooperator), if both of them make the switch they worsen both their positions (both get only 1). Besides that, the use of 'strategy 1' and 'strategy 2' is confusing, just say 'Cooperate' and 'Defect'.

2) The notation 'C > A > D > B' seems very odd and I still haven't figured out what that is supposed to mean. We had strategy A and B in the coordination game, which this section refers to C (cooperate) and D (defect), but I don't see how they are related. Rewrite for clarity...

3) The formatting for the title is a bit messed up (both IE8 and Firefox 5). The section title should follow the figure for the driving game and be aligned on the left. — Preceding unsigned comment added by 128.244.9.9 (talk) 16:55, 1 July 2011 (UTC)


 * 1) It's meant as: For any given situation ("no matter what the other player decides"), a player improves his own situation by switching to strategy #2.
 * 2) True.
 * 3) Sorry for the cliché but: it's a wiki! --Mudd1 (talk) 13:53, 18 July 2012 (UTC)
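The reading in the first reply above can be checked directly with the payoffs quoted in point 1 (3 each for mutual cooperation, 5 for a lone defector, 0 for the lone cooperator, 1 each for mutual defection); a minimal sketch:

```python
# Payoffs quoted in point 1 above: (row, column) with C = cooperate, D = defect.
PD = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

# Fix the opponent's choice, then compare one's own payoff from
# cooperating (#1) versus defecting (#2).
gains = [PD[('D', other)][0] - PD[('C', other)][0] for other in ('C', 'D')]
# gains == [2, 1]: switching to defect strictly raises one's own payoff
# in both cases, which is the sense of "no matter what the other
# player decides". It does NOT say both switching leaves both better off.
```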