Talk:Stag hunt

suggest another diagram
"But occasionally players who defect against cooperators are punished for their defection. For instance, if the expected punishment is -2, then the imposition of this punishment turns the above prisoner's dilemma into the stag hunt given at the introduction."

It would be helpful to see the result of the -2 applied to the PD diagram - I can't quite understand the argument, but a diagram would really help. —Preceding unsigned comment added by 24.18.225.0 (talk) 06:30, 29 January 2010 (UTC)
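In lieu of a diagram, the transformation can be checked numerically. The sketch below assumes standard prisoner's dilemma payoffs of (C,C)=(2,2), (C,D)=(0,3), (D,C)=(3,0), (D,D)=(1,1); these exact numbers are an assumption, not quoted from the article. Applying the expected punishment of -2 to a player who defects against a cooperator turns the single-equilibrium PD into a game with two equilibria, i.e. a stag hunt:

```python
def nash_equilibria(game):
    """Return the pure-strategy Nash equilibria of a 2x2 game.
    game[(r, c)] = (row player's payoff, column player's payoff)."""
    acts = ("C", "D")
    eqs = []
    for r in acts:
        for c in acts:
            # (r, c) is an equilibrium if neither player gains by deviating
            row_ok = all(game[(r, c)][0] >= game[(r2, c)][0] for r2 in acts)
            col_ok = all(game[(r, c)][1] >= game[(r, c2)][1] for c2 in acts)
            if row_ok and col_ok:
                eqs.append((r, c))
    return eqs

pd = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
      ("D", "C"): (3, 0), ("D", "D"): (1, 1)}
print(nash_equilibria(pd))        # [('D', 'D')] -- the usual PD outcome

punished = dict(pd)
punished[("D", "C")] = (3 - 2, 0)  # defector against a cooperator loses 2
punished[("C", "D")] = (0, 3 - 2)
print(nash_equilibria(punished))  # [('C', 'C'), ('D', 'D')] -- a stag hunt
```

With the punishment applied, both (C,C) and (D,D) are equilibria, matching the structure of the stag hunt given in the article's introduction.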

suggest merge
Because the stag hunt is just one version of a coordination game, I think this article could be merged with coordination game (User:Tabako 13:06, 31 August 2008)

Wolf's Dilemma
Douglas Hofstadter's "Wolf's Dilemma" is essentially the same problem, but with many players (usually around 30) instead of just two. Any thoughts on whether this should be mentioned on this page? I'm inclined to add some lines about it, but I'm not sure what the consensus on noteworthiness would be. --68.102.156.139 22:41, 9 September 2007 (UTC)


 * This is just another name for the same thing, if you're describing it correctly. Rousseau's original example was not supposed to be a 2-player interaction (the idea is that you need a bunch of guys surrounding a stag so that whichever way it runs it's surrounded!) 128.135.98.189 (talk) 04:08, 28 May 2009 (UTC)

Payoff
"The stag hunt differs from the Prisoner's Dilemma in that the greatest potential payoff is both players cooperating, whereas in the prisoners Dilemma, the greatest payoff is in one player cooperating, and the other defecting." This statement is poorly worded. In the PD, each prisoner can only control his/her own choice. Therefore, the greatest payoff comes from defecting, no matter what the other prisoner chooses. That payoff is only maximized if the other prisoner cooperates, but defecting is still better than your alternative choice. Applejuicefool (talk) 17:37, 18 December 2007 (UTC)
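The dominance argument above can be verified directly. The sketch below assumes the standard PD payoff ordering T > R > P > S with illustrative values 3, 2, 1, 0 (these numbers are an assumption, not taken from the article's matrix):

```python
# Standard PD payoff labels: T = temptation, R = reward, P = punishment,
# S = sucker's payoff, with T > R > P > S (assumed values).
T, R, P, S = 3, 2, 1, 0

# My payoff as a function of (my action, opponent's action):
payoff = {("D", "C"): T, ("C", "C"): R, ("D", "D"): P, ("C", "D"): S}

# Defecting is strictly better whichever action the opponent takes,
# i.e. Defect strictly dominates Cooperate:
for their_action in ("C", "D"):
    assert payoff[("D", their_action)] > payoff[("C", their_action)]
print("Defect strictly dominates Cooperate")
```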


 * The statement that starts this comment is incorrect. First, total payoffs are not relevant, since there is no interpersonal comparability of utility. Second, even if you were to drop that assumption, total payoffs could be higher or lower with both players cooperating than with one cooperating and the other defecting. 88.107.226.24 (talk) 12:46, 5 February 2023 (UTC)

Also, the diagram shows 2,8 and 8,2 for the payoffs in which one player hunts the stag and the other defects and hunts a hare... shouldn't those be 1,0 and 0,1, since the hare is worth 1 and hunting the stag and failing is worth 0? Also, if both hunt hares, shouldn't the bottom-right strategy payoffs be 1,1? Just a thought, but maybe I misunderstand. 2601:18F:700:3E7:F962:91D9:54D3:C37C (talk) 01:32, 31 March 2023 (UTC) Solnza

Hume
It's not clear to me that Hume's examples fit here, even if they can plausibly be modeled to fit Nash's definition. Rousseau's point, and the point that the Stag Hunt is normally used to make, is that under certain sorts of satisficing or risk-averse evaluations the rabbit may seem preferable (because the rabbit meets your dietary needs just as well as a share of the stag). Hume's point, though, is that it is never reasonable to, e.g., not row, because none of you will go anywhere. Unless someone objects I'm removing the discussion of Hume. 128.135.98.189 (talk) 04:08, 28 May 2009 (UTC)

"... no longer risk dominant." Really?
Could someone please confirm this (or tell me where I went wrong)?

The article mentions an example of a stag hunt that is not formally one, but that many people would call one nonetheless.

"Often, games with a similar structure but without a risk dominant Nash equilibrium are called stag hunts. For instance if a=2, b=1, c=0, and d=1. While (Hare, Hare) remains a Nash equilibrium, it is no longer risk dominant. Nonetheless many would call this game a stag hunt."

I think that's not correct. For the values a=2, b=1, c=0, and d=1, (Hare, Hare) /is/ in fact still risk dominant, no?

[EDIT]

Never mind. I mixed up risk dominance and maximin. My mistake.

-- Minvogt (talk) 21:33, 1 March 2010 (UTC)
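For anyone else who mixes up the two concepts: the difference can be seen by computing both for the article's example values a=2, b=1, c=0, d=1. This is a sketch, not sourced from the article:

```python
# Payoffs u[my action][their action] for the symmetric example
# a=2 (S,S), c=0 (S vs H), b=1 (H vs S), d=1 (H,H):
u = {"S": {"S": 2, "H": 0}, "H": {"S": 1, "H": 1}}

# Maximin: pick the action with the best worst-case payoff.
maximin = max("SH", key=lambda act: min(u[act].values()))
print(maximin)  # 'H' -- Hare is the maximin choice

# Risk dominance (symmetric 2x2): compare squared deviation losses
# (Nash products) at the two equilibria.
loss_SS = (u["S"]["S"] - u["H"]["S"]) ** 2   # (a-b)^2 = 1
loss_HH = (u["H"]["H"] - u["S"]["H"]) ** 2   # (d-c)^2 = 1
print(loss_SS, loss_HH)  # 1 1 -- a tie: neither strictly risk-dominates
```

So Hare is unambiguously the maximin action, while risk dominance between the two equilibria is a tie at these values, which is exactly where the two notions come apart.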

How many hares?
The initial text suggests that both participants can go hare hunting, scoring one point each. But the more descriptive version at the end of the article suggests that there is only one hare; if one hunter goes for it, the others starve. Which is correct? Stevage 00:52, 9 September 2011 (UTC)

Answer: (edited/corrected from my previous deleted answer) Skyrms' Stag Hunt book, p. 3: "Suppose that hunting hare has an expected payoff of 3, no matter what the other does. (...)". There are enough hares for anyone who wants to hunt them. You may have misunderstood the following point: a hunter gets 0 if they decide to hunt the stag while the other doesn't, regardless of the number of hares available.

Symmetric generic stag hunt?
The matrix labelled "Generic Stag Hunt" is just a generic payoff matrix, but the note that "a>b≥d>c" implies that the generic form should be symmetric. This is not my field -- is the generic Stag Hunt symmetric? Meanwhile, since the matrix listed is completely generic, I'm going to replace it with a symmetric one to match the inequality listed and call it "Generic symmetric Stag Hunt." SamuelRiv (talk) 01:44, 19 August 2015 (UTC)
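For what it's worth, the ordering a > b ≥ d > c is enough to make both (Stag, Stag) and (Hare, Hare) pure Nash equilibria of the symmetric game. A quick check with one instance of the inequality (the specific numbers below are illustrative assumptions):

```python
# One instance satisfying a > b >= d > c:
a, b, c, d = 4, 3, 1, 2

# Symmetric payoffs u[my action][their action]:
# a = (S,S), c = stag vs hare, b = hare vs stag, d = (H,H).
u = {"S": {"S": a, "H": c}, "H": {"S": b, "H": d}}

# (S,S) is an equilibrium: deviating to H against S pays b < a.
assert u["S"]["S"] >= u["H"]["S"]
# (H,H) is an equilibrium: deviating to S against H pays c < d.
assert u["H"]["H"] >= u["S"]["H"]
print("both (S,S) and (H,H) are Nash equilibria")
```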

I also got stuck on "No longer risk dominant"
Since risk dominance is defined in weak form here, we can check it for the symmetric example with a=2, b=1, c=0, and d=1. For (H,H) to dominate (S,S) we need (b-a)² ≥ (c-d)²; since 1 ≥ 1, we should still say that (H,H) risk-dominates (S,S), and the statement "it is no longer risk dominant" is not valid.

Unless we use the strict definition of risk dominance: (b-a)² > (c-d)². Note that in the example in figure 2, (H,H) is also not risk dominant by the strict definition.

The formal definition section helped me a lot because I was looking for some details of this game, but I understand that it is too technical.

Victor Maia (talk) 04:34, 23 June 2019 (UTC)
 * That's your reasoning. What I need is a reliable source. And yes, that was too technical, to the point it reads like meaningless jargon. Chris Troutman (talk) 21:52, 23 June 2019 (UTC)
 * Thanks. I should say that I know a little about this subject, and the formal definition is essentially correct; my point above is only to show that "risk dominant" depends on whether one uses '>' or '≥', and both uses are admissible. In plain language, the Stag Hunt game is characterized as a game with two Nash equilibria. One is better for every player (Pareto_efficiency): each player is better off, and that is the (Stag, Stag) equilibrium. The other equilibrium is less risky: everyone risks less, hence the risk_dominance definition, and that equilibrium is (Hare, Hare). I will search for better references for this section; some of them are in the Wikipedia link (risk_dominance) already used, and another good one is Skyrms' book (already cited in the current article). Victor Maia (talk) 01:43, 24 June 2019 (UTC)
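The '>' versus '≥' distinction discussed above can be checked directly for the example values a=2, b=1, c=0, d=1. A minimal sketch (not sourced from the article):

```python
# Article's example values:
a, b, c, d = 2, 1, 0, 1

# Squared deviation losses (Nash products) at the two equilibria:
nash_product_SS = (a - b) ** 2   # loss from deviating at (S,S) = 1
nash_product_HH = (d - c) ** 2   # loss from deviating at (H,H) = 1

weak_HH_dominates = nash_product_HH >= nash_product_SS    # True:  1 >= 1
strict_HH_dominates = nash_product_HH > nash_product_SS   # False: not 1 > 1
print(weak_HH_dominates, strict_HH_dominates)  # True False
```

So with the weak definition (H,H) still risk-dominates, and with the strict definition it does not, which is exactly the ambiguity in the disputed sentence.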