Talk:Cromwell's rule

Problem with the Bayesian Divergence example
The only reason the second character in this story fails to update her posterior probability is that her prior was ill-formed (and worded in a confusing way, which hurts clarity). While the Beta(0,0) distribution (characterized as "either always or never") can be used as an improper prior, the character in this story did not use it correctly. Better than an improper prior would be a family of mixture priors, as follows:

AliceX thinks the coin-flipper chose the unfair coin with probability X and a fair coin with probability 1 − X. The probability of heads is 0.5 with the fair coin and 1 with the unfair coin.

Alice0 and Alice1 are the characters in this article's story who never update. Any other AliceX, with X strictly between 0 and 1, will update, and the posterior probabilities of all those other Alices will converge to the same value as the coin-flipper flips more consecutive heads (and will all jump to the same posterior as soon as a tail is flipped).
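The updating described above can be sketched in a few lines (my own code, not from the article; the names and setup are assumptions based on the comment's description of AliceX):

```python
# Assumed setup: AliceX starts with prior P(unfair) = X, where the
# unfair coin always lands heads and the fair coin lands heads with
# probability 0.5.

def posterior_unfair(x, flips):
    """Update P(unfair) after a sequence of flips ('H' or 'T')."""
    p = x
    for f in flips:
        if f == 'H':
            like_unfair, like_fair = 1.0, 0.5
        else:  # a tail is impossible for the unfair coin
            like_unfair, like_fair = 0.0, 0.5
        num = like_unfair * p
        den = num + like_fair * (1 - p)
        p = num / den
    return p

# Alice0 and Alice1 never move; every other AliceX climbs toward 1
# as consecutive heads accumulate, and drops to 0 on the first tail.
for x in (0.0, 0.3, 0.7, 1.0):
    print(x, round(posterior_unfair(x, 'HHHHHH'), 4))
```

Note that a tail would leave Alice1's update undefined (division by zero), which is exactly the pathology Cromwell's rule warns against.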

Bad examples with poor wording mislead non-experts and should be purged. — Preceding unsigned comment added by 70.186.132.174 (talk) 22:00, 23 December 2014 (UTC)


 * Fixed. Loraof (talk) 16:08, 21 October 2017 (UTC)

Make Convergence a separate page?
Even if one respects Cromwell's rule, convergence can still be problematic, at least under some interpretations of probability. --Djmarsay (talk) 21:11, 7 January 2024 (UTC)

Clarify impact of rule
It is quite true that Bayesian inference never requires one to change one's mind if one believes that P(H)=1, but if one observes evidence E for which P(E)=0, one might have to. (Compare Probability axioms.) This is important when H is 'the totality of hypotheses considered', in which case one might have to consider another. — Preceding unsigned comment added by Djmarsay (talk • contribs) 16:31, 9 January 2024 (UTC)
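A minimal sketch of the point above (my own code, not from the comment; the function name and numbers are illustrative assumptions): with a dogmatic prior P(H)=1, Bayes' rule pins the posterior at 1 for any evidence with P(E)>0, but evidence with P(E)=0 leaves the update undefined.

```python
# Bayes' rule: P(H|E) = P(E|H) P(H) / P(E), with
# P(E) = P(E|H) P(H) + P(E|not H) P(not H).

def bayes_update(prior_h, like_e_given_h, like_e_given_not_h):
    p_e = like_e_given_h * prior_h + like_e_given_not_h * (1 - prior_h)
    if p_e == 0:
        # P(E) = 0: conditioning on E is undefined, so the axioms
        # give no guidance on how to revise P(H).
        raise ZeroDivisionError("P(E) = 0: Bayes' rule gives no update")
    return like_e_given_h * prior_h / p_e

# A prior of 1 survives any evidence that has positive probability:
print(bayes_update(1.0, 0.01, 0.99))  # prints 1.0
```

Calling it with evidence that is impossible under every hypothesis considered (e.g. `bayes_update(1.0, 0.0, 0.7)`) raises, mirroring the case where one must step outside 'the totality of hypotheses considered'.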