Talk:Gibbs sampling

Stationary Distribution
I'm not an expert, but the phrase 'the stationary distribution' suggests uniqueness to me, yet my intuition says there could be cases where it isn't unique. For example p(x,y)=0 unless |x-y|<1/2 and 1<|x-y|<2, in which case p(x,y)=1.


 * I think there is a typo in your example. If |x-y|<1/2 then it is never the case that |x-y|>1...
 * Anyway you are correct that for some distributions with inaccessible regions of state space, Gibbs sampling does not give a unique stationary distribution. In these cases Gibbs sampling by itself isn't a valid MCMC method. Here's a toy example of discrete x and y taking on values 0 and 1.
 * p(x=0,y=0)=0
 * p(x=0,y=1)=0.5
 * p(x=1,y=0)=0.5
 * p(x=1,y=1)=0
 * Gibbs sampling can never move out of (0,1) or (1,0). However, it does leave the distribution of interest invariant. So Gibbs sampling is always a valid MCMC operator that can be mixed with another operator that ensures ergodicity. Someone should add a lucid description of these issues at some point... 128.40.213.241 13:54, 6 September 2005 (UTC)
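The toy example above can be checked with a short simulation. The sketch below (my own illustration, not from the discussion; the function names are made up) runs a Gibbs chain on the four-state distribution and records which states it visits — starting from (0,1) it never reaches (1,0), and vice versa:

```python
import random

# Joint distribution from the toy example above, states written (x, y):
# p(0,1) = p(1,0) = 0.5; p(0,0) = p(1,1) = 0.
p = {(0, 0): 0.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.0}

def sample_x_given_y(y):
    # p(x | y) is proportional to p(x, y); renormalize over x in {0, 1}.
    w0, w1 = p[(0, y)], p[(1, y)]
    return 0 if random.random() < w0 / (w0 + w1) else 1

def sample_y_given_x(x):
    # p(y | x) is proportional to p(x, y); renormalize over y in {0, 1}.
    w0, w1 = p[(x, 0)], p[(x, 1)]
    return 0 if random.random() < w0 / (w0 + w1) else 1

def gibbs_chain(x, y, steps=1000):
    """Run systematic-scan Gibbs sampling and return the set of visited states."""
    visited = set()
    for _ in range(steps):
        x = sample_x_given_y(y)
        y = sample_y_given_x(x)
        visited.add((x, y))
    return visited

# The chain is trapped in its starting state:
# gibbs_chain(0, 1) only ever visits {(0, 1)}, and gibbs_chain(1, 0) only {(1, 0)}.
```

Each full conditional puts all its mass on one value (e.g. given y=1, x must be 0), so the chain is stuck — yet each step leaves the target distribution invariant, which is exactly the point made above about needing a second operator for ergodicity.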

Proved vs. Proven
Isn't "proven" an adjective, with the more accepted past participle of "prove" being "proved"? Quantling (talk) 11:56, 12 October 2007

I made the change to "proved". Quantling (talk) 17:00, 14 May 2008 (UTC)

Deterministic
OK, Gibbs sampling is stochastic and produces a different result every time, versus EM or variational approximation methods. But the latter techniques generally use a random initialization, which makes the result overall MORE sensitive to initialization than Gibbs sampling run for long enough. — Preceding unsigned comment added by 83.195.171.99 (talk) 12:05, 9 June 2012 (UTC)

Inline references problem
Hi there.

Maybe I'm not completely familiar with Gibbs sampling, but I'm motivated to ask for an inline reference supporting this:

"In its basic version, Gibbs sampling is a special case of the Metropolis–Hastings algorithm. However, in its extended versions (see below), it can be considered a general framework for sampling from a large set of variables by sampling each variable (or in some cases, each group of variables) in turn, and can incorporate the Metropolis–Hastings algorithm (or similar methods such as slice sampling) to implement one or more of the sampling steps."

I thought Gibbs sampling was a particular case of the Metropolis–Hastings algorithm: it is just a particular proposal function which slides along particular directions. No matter how extended it is, it remains a Metropolis–Hastings algorithm. This should be said in the initial overview of the article.
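For what it's worth, the "special case" claim can be verified by a standard calculation (my notation, not the article's): take the Metropolis–Hastings proposal for coordinate j to be the full conditional, q(x' | x) = p(x'_j | x_{-j}), with x' differing from x only in coordinate j. Then

\alpha = \min\left(1, \frac{p(x')\, q(x \mid x')}{p(x)\, q(x' \mid x)}\right) = \min\left(1, \frac{p(x'_j \mid x_{-j})\, p(x_{-j}) \cdot p(x_j \mid x_{-j})}{p(x_j \mid x_{-j})\, p(x_{-j}) \cdot p(x'_j \mid x_{-j})}\right) = 1,

i.e. the acceptance probability is always 1, so the basic Gibbs step is a Metropolis–Hastings step that never rejects.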

Besides, I'm forced to say that this article either lacks inline references or is original research. Much of what is said has no inline references supporting it.