Talk:Von Mises distribution

Untitled
Hello. There must be a multi-dimensional analog of the von Mises distribution. I'm thinking of something like a spray-paint splotch on a basketball; that would be like a 2-d Gaussian blob wrapped onto a 2-sphere. What can be said about an N-d blob on an N-sphere? Have a great day, 64.48.193.123 18:38, 15 January 2006 (UTC)


 * There is; it's called the von Mises–Fisher distribution, and its PDF is basically exp(k*dot(x,mu)) with the appropriate normalization.


 * Take a look at the directional statistics article. In fact, the generalization of the Von Mises distribution is a distribution on the torus. See this article in PNAS and the references therein. Tomixdf (talk) 17:32, 9 October 2008 (UTC)
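A quick numerical check of the spherical case mentioned above (my own sketch, not from the article): for the unit 2-sphere, the standard von Mises–Fisher normalization is C(κ) = κ/(4π sinh κ), and the density can be verified to integrate to 1.

```python
# Von Mises-Fisher density on the 2-sphere: f(x) = C(k) * exp(k * x.mu),
# with normalization C(k) = k / (4*pi*sinh(k)).
import numpy as np
from scipy import integrate

kappa = 3.0
C = kappa / (4 * np.pi * np.sinh(kappa))

# Put mu at the north pole, so x.mu = cos(phi) with phi the polar angle.
# The density is independent of azimuth, so the surface element reduces
# to 2*pi*sin(phi) d(phi).
integrand = lambda phi: C * np.exp(kappa * np.cos(phi)) * 2 * np.pi * np.sin(phi)
total, _ = integrate.quad(integrand, 0, np.pi)
# total should come out as 1 (the density is normalized)
```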

Graphs
The graphs are conventional graphs on a regular xy axis. Would not a circular graph capture more of the spirit and intuitive meaning of this distribution? Cazort (talk) 19:27, 26 November 2007 (UTC)


 * Yes it would! It would be great if someone made one (a rose diagram, for example). It is tricky to superpose rose diagrams, but side-by-side plots with different parameter values would be useful. Also, the related wrapped Cauchy distribution deserves a page, no? It, too, is widely used for circular data and has some nice properties. Maybe I'll give it a shot (though I'm very busy and don't have much experience writing up stats pages). - Eliezg (talk) 17:41, 16 April 2008 (UTC)

Numerically computing cdf
While the CDF equation on the page is correct, it is not suitable for computing the CDF numerically due to instabilities in the ratio of Bessel functions (took me some hours to figure that one out). Wouldn't it be useful to add a comment about that and provide a reference to a paper (G. W. Hill, 1977) that provides a numerically stable solution? I didn't want to mess with the article, as there are probably people who have strong opinions about what's supposed to be in these articles and what isn't. 128.151.80.181 (talk) 15:39, 20 April 2009 (UTC)
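For anyone reading later: one simple way to sidestep the Bessel-function series entirely is to integrate the PDF numerically. A rough Python sketch (mine, not Hill's algorithm), with scipy.stats.vonmises used only as a cross-check:

```python
# Von Mises CDF by direct numerical integration of the PDF, avoiding the
# Bessel-series closed form that can be numerically unstable.
import numpy as np
from scipy import integrate, special, stats

def vonmises_pdf(theta, kappa, mu=0.0):
    """PDF of the von Mises distribution on (-pi, pi]."""
    return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * special.i0(kappa))

def vonmises_cdf(x, kappa, mu=0.0):
    """CDF on (-pi, x], computed by adaptive quadrature."""
    val, _ = integrate.quad(vonmises_pdf, -np.pi, x, args=(kappa, mu))
    return val

# Cross-check against scipy's implementation at a fairly large kappa
for x in (-1.0, 0.0, 1.5):
    assert abs(vonmises_cdf(x, 50.0) - stats.vonmises.cdf(x, 50.0)) < 1e-5
```

Adaptive quadrature is slower than a good series implementation but stays stable for large κ, where the ratio of Bessel sums misbehaves.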

Limiting behavior
In the section 'Limiting behavior' it is said that as k --> infinity the distribution becomes a normal distribution. While this is correct, I think it is somewhat misleading, as said normal distribution would have zero variance. I think it would be more appropriate to say that the distribution becomes a Dirac delta function.


 * What that means is that as k grows larger, the difference between the von Mises and the normal distribution becomes smaller. If you are willing to accept some small error in your calculations, then there is a variance (larger than zero!) below which you can use the normal and von Mises distributions interchangeably. That useful information would be lost if we simply said it tended towards a delta function. PAR (talk) 19:02, 31 December 2009 (UTC)


 * Agreed. But the current formulation is also misleading (because it is not immediately obvious to the uninitiated that the variance goes to zero). Strasburger (talk) 18:39, 12 September 2014 (UTC)


 * I am just going to fix it (user name irchans)  — Preceding unsigned comment added by Irchans (talk • contribs) 16:46, 31 May 2017 (UTC)
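To put a number on the "small error" trade-off discussed above, here is an illustrative Python sketch (my own, not from the article) comparing the von Mises density with a normal density of variance 1/κ:

```python
# Compare the von Mises density with the normal density of variance 1/kappa.
# For kappa = 100 the two agree to within a few percent over +/- 3 sigma.
import numpy as np
from scipy import special, stats

kappa = 100.0
sigma = 1.0 / np.sqrt(kappa)
theta = np.linspace(-3 * sigma, 3 * sigma, 201)

vm = np.exp(kappa * np.cos(theta)) / (2 * np.pi * special.i0(kappa))
normal = stats.norm.pdf(theta, scale=sigma)
max_rel_err = np.max(np.abs(vm - normal) / normal)
# max_rel_err is on the order of a few percent for this kappa
```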

Something's wrong
Something's wrong; either I am, or the article is. Let's say $$\kappa$$=1, $$\mu$$=0. Then I calculate the moments of z to be:


 * $$m_0=I_0(1)/I_0(1)=1$$
 * $$m_1=I_1(1)/I_0(1)=0.44639...$$
 * $$m_2=I_2(1)/I_0(1)=0.10722...$$

and so I calculate the variance of z to be $$m_2-m_1^2=-0.0920439...$$ (negative!), while according to the formula given it's $$1-m_1=0.55361...$$, which by the way is $$m_1+m_2$$. PAR (talk) 00:31, 4 December 2009 (UTC)

OK, that's wrong; the variance of z should be


 * $$\langle zz^*\rangle-\langle z\rangle\langle z^*\rangle=1-\left(\frac{I_1(\kappa)}{I_0(\kappa)}\right)^2$$

I mean, there is something strange about the "circular variance"


 * $$1-\frac{I_1(\kappa)}{I_0(\kappa)}$$

Does anyone know where this definition comes from or how it is used in analysis? It seems unnatural and mathematically intractable, unlike the first definition. I have never seen it used in any analysis of variance, it seems like every reference basically says "oh and by the way the circular variance is…." and then ignores it. Furthermore, if you use the first expression as the variance, then a sample statistic of


 * $$ \frac{N}{N-1}\,(\overline{zz^*}-\overline{z}\overline{z^*})$$

where overlines indicate sample averages, will be an unbiased estimator of the first variance, and this is a familiar kind of expression, similar to the relationship of standard deviation to variance in linear statistics. I'm pretty sure an unbiased estimator of the second expression is fairly crazy. PAR (talk) 18:49, 31 December 2009 (UTC)
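For what it's worth, both quantities above can be checked numerically. A quick Python sketch (my own, not from the article), using a Monte Carlo sample of z = e^(iθ) with κ = 1, μ = 0:

```python
# Check the complex variance <zz*> - <z><z*> = 1 - (I1/I0)^2 against the
# "circular variance" 1 - I1/I0, using simulated von Mises angles.
import numpy as np
from scipy import special, stats

kappa = 1.0
m1 = special.i1(kappa) / special.i0(kappa)   # = 0.44639...

rng = np.random.default_rng(0)
theta = stats.vonmises.rvs(kappa, size=200_000, random_state=rng)
z = np.exp(1j * theta)

# |z| = 1, so <zz*> = 1 exactly and the complex variance is 1 - |<z>|^2
complex_var = np.mean(z * z.conj()) - np.mean(z) * np.mean(z.conj())
assert abs(complex_var.real - (1 - m1**2)) < 0.01

# The "circular variance" instead subtracts the mean resultant length itself
circular_var = 1 - np.abs(np.mean(z))
assert abs(circular_var - (1 - m1)) < 0.01
```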

No use of future tense in math articles, please
I find this page nice and informative. However, could someone sufficiently familiar with the subject eliminate the use of "will"? I think the future tense has nothing to do in maths. Either it is, or it isn't. —Preceding unsigned comment added by 134.76.74.121 (talk) 13:15, 22 February 2010 (UTC)

Von Mises vs Wrapped Gaussian
Can anyone expand on "It may be thought of as a close approximation to the wrapped normal distribution," ? While it is true that as k->infinity the approximation becomes very good, for small k it is not clear. Yes, the shapes of the two distributions look similar, but I wouldn't describe the approximation as "close" for small k, based on looking at the graphs. Are there any bounds on the error that anyone knows about, as a function of k? —Preceding unsigned comment added by 71.199.204.156 (talk) 18:03, 1 January 2011 (UTC)
 * Actually, as k->0, the approximation gets good again. But in between, it's arguably not so good.  —Preceding unsigned comment added by 71.199.204.156 (talk) 18:06, 1 January 2011 (UTC)

I edited the introduction. I am not sure whether the wrapped normal distribution should be in the introduction at all, but if the two are compared, one should be clear about the case in which each is the correct model. The wrapped Gaussian, as the result of adding independent random angle increments (diffusion), is not stationary. Disordered materials with a preferred orientation, on the other hand, are to first order approximated by a diffusion process in a cosine potential; the stationary distribution of this process is the von Mises distribution. I guess the name circular normal distribution reflects the fact that it is the maximum entropy distribution for the circle. 133.65.54.177 (talk) 06:49, 25 August 2011 (UTC)


 * I was wondering what you mean by "not stationary" for the addition of independent angle increments? Also, von Mises is maximum entropy for a fixed value of the mean of z - that is, a fixed value of the mean sine and cosine of the angle measured. As far as comparing the two for various values of κ and σ, 1/κ = σ^2 when κ is large (or σ is small). This does not mean the two can be compared that way when this is not the case; κ and σ are just parameters that happen to have a simple relationship in the limit of small angles. One way to compare the two is to compare the two cases with the same circular variance; the match is closer. I think the best way to compare the two is to match two that have the same entropy - i.e. convey the same uncertainty in information. Again, I would expect the match to be closer. PAR (talk) 20:15, 25 August 2011 (UTC)
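The circular-variance matching suggested above can be made concrete (my own sketch, not from the article): the wrapped normal has mean resultant length e^(-σ²/2), so matching it to a von Mises means setting σ² = -2 ln(I1(κ)/I0(κ)), after which the two densities can be compared directly.

```python
# Match a wrapped normal to a von Mises by equating circular variance,
# then measure the worst-case density difference over the circle.
import numpy as np
from scipy import special

def vonmises_pdf(theta, kappa):
    return np.exp(kappa * np.cos(theta)) / (2 * np.pi * special.i0(kappa))

def wrapped_normal_pdf(theta, sigma, terms=20):
    # Sum the normal density over 2*pi shifts (truncated wrapping series)
    k = np.arange(-terms, terms + 1)
    return np.sum(
        np.exp(-(theta[:, None] + 2 * np.pi * k) ** 2 / (2 * sigma**2)),
        axis=1,
    ) / (sigma * np.sqrt(2 * np.pi))

kappa = 2.0
rho = special.i1(kappa) / special.i0(kappa)   # mean resultant length of the vM
sigma = np.sqrt(-2 * np.log(rho))             # wrapped normal with the same rho

theta = np.linspace(-np.pi, np.pi, 401)
max_abs_diff = np.max(
    np.abs(vonmises_pdf(theta, kappa) - wrapped_normal_pdf(theta, sigma))
)
# The densities are close but visibly different at moderate kappa
```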

Maximum Entropy
The von Mises distribution is the maximum entropy probability distribution on the unit circle for a given circular mean. This fact should be added to the article somewhere, probably in the introduction. —Preceding unsigned comment added by 206.248.176.50 (talk) 23:00, 7 January 2011 (UTC)
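The claim can be illustrated numerically (my own sketch, not from the article): among circular densities with the same first circular moment, the von Mises should have the largest entropy, so a wrapped normal matched to the same mean resultant length should come out slightly lower.

```python
# Compare entropies of a von Mises and a wrapped normal with the same
# first circular moment (mean direction 0, mean resultant length I1/I0).
import numpy as np
from scipy import special

kappa = 2.0
rho = special.i1(kappa) / special.i0(kappa)

# Von Mises entropy in closed form: log(2*pi*I0) - kappa*I1/I0
h_vm = np.log(2 * np.pi * special.i0(kappa)) - kappa * rho

# Wrapped normal with the same rho: rho = exp(-sigma^2/2)
sigma = np.sqrt(-2 * np.log(rho))
theta = np.linspace(-np.pi, np.pi, 4001)
k = np.arange(-10, 11)
p = np.sum(
    np.exp(-(theta[:, None] + 2 * np.pi * k) ** 2 / (2 * sigma**2)), axis=1
) / (sigma * np.sqrt(2 * np.pi))
h_wn = -np.sum(p * np.log(p)) * (theta[1] - theta[0])  # Riemann estimate
# Expect h_vm > h_wn, consistent with the maximum entropy property
```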

Measure of concentration $$\kappa$$ : alternative names
A quick web search revealed 'precision', 'scale', 'concentration parameters' as alternative names for the parameter $$\kappa$$. Can some expert say which names are most common? I think we should mention the most important alternative names. Benjamin.friedrich (talk) —Preceding undated comment added 08:22, 7 January 2020 (UTC)

A recent paper investigated the statistical divergences of the von Mises–Fisher distribution
https://arxiv.org/abs/2202.05192

I'm not quite sure how to correctly add this or else I would have. This paper covers the Rényi entropy as well as several related entropies.

For instance, the Kullback–Leibler divergence of two von Mises distributions looks like

$$\frac{I_1(\kappa)}{I_0(\kappa)}(\kappa - \kappa_0 \cos(\mu - \mu_0)) - \log\left(\frac{I_0(\kappa)}{I_0(\kappa_0)}\right)$$

for two von Mises distributions with parameters $$\mu$$, $$\kappa$$, $$\mu_0$$, and $$\kappa_0$$.

(This form isn't directly mentioned in the paper, but if you break down the more general form that works for all dimensions to the special case, you get this as a result.)

The paper also works out the special cases that arise in the various limits of $$\kappa$$ and $$\kappa_0$$. For instance, if we compare against a uniform distribution, the Kullback–Leibler divergence turns out to be

$$\frac{I_1(\kappa)}{I_0(\kappa)} \kappa - \log(I_0(\kappa)) $$
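Both closed forms can be sanity-checked numerically. A quick Python sketch (my own, not from the paper) that integrates p log(p/q) directly:

```python
# Verify the closed-form KL divergence between two von Mises distributions
# against direct numerical integration of p*log(p/q).
import numpy as np
from scipy import integrate, special

def vm_pdf(theta, mu, kappa):
    return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * special.i0(kappa))

def kl_closed_form(mu, kappa, mu0, kappa0):
    a = special.i1(kappa) / special.i0(kappa)
    return a * (kappa - kappa0 * np.cos(mu - mu0)) - np.log(
        special.i0(kappa) / special.i0(kappa0)
    )

def kl_numeric(mu, kappa, mu0, kappa0):
    f = lambda t: vm_pdf(t, mu, kappa) * np.log(
        vm_pdf(t, mu, kappa) / vm_pdf(t, mu0, kappa0)
    )
    val, _ = integrate.quad(f, -np.pi, np.pi)
    return val

assert abs(kl_closed_form(0.3, 2.0, -0.5, 1.0) - kl_numeric(0.3, 2.0, -0.5, 1.0)) < 1e-6

# Uniform limit: setting kappa0 = 0 (I0(0) = 1) recovers (I1/I0)*kappa - log(I0)
kappa = 2.0
kl_uniform = special.i1(kappa) / special.i0(kappa) * kappa - np.log(special.i0(kappa))
assert abs(kl_uniform - kl_closed_form(0.0, kappa, 0.0, 0.0)) < 1e-12
```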

Kram2301 (talk) 17:52, 14 November 2023 (UTC)