Talk:Rice distribution

Any comment about the cdf will be highly appreciated.

--Lucas Gallindo 18:34, 21 August 2007 (UTC)


 * The cdf looks awesome, a bit like the one for the normal distribution. --WikiSlasher (talk) 07:48, 14 December 2007 (UTC)

Characterization and related distributions use different order for sigma and nu
In the characterization section, the pdf is given as $$f\left(x|\nu,\sigma\right)$$, but in the related distributions section, the variable $$R$$ is taken from the distribution $$Rice\left(\sigma,\nu\right)$$. I think that the $$\nu$$ and $$\sigma$$ should be in the same order in both places to avoid confusion. ChristineInMaryland (talk) 19:08, 28 July 2008 (UTC)


 * I was about to correct this and change to nu rather than v consistently (in the present form, it vacillates between the two). But then I saw the words "cumulative density function".  I don't know what possesses anyone to write such an absurd phrase.  The words "cumulative" and "density" obviously flatly contradict each other.
 * I'll be back. Michael Hardy (talk) 21:39, 28 July 2008 (UTC)

What is this distribution for?
This is clearly a very cumbersome distribution to work with, so it must have been devised for a specific reason. Could anyone explain a little of the thinking behind the distribution? Is it the distribution of some specific process (cf. Poisson), or is it constructed to prove a point (cf. Cauchy distribution or Cantor distribution)? 83.244.153.18 (talk) 16:26, 30 July 2008 (UTC)

I believe the Rice distribution describes the statistics of the lengths of 2D vectors drawn from a 2D Gaussian distribution having non-zero mean. So, imagine a mean vector and a dispersion about the endpoint of that vector described by the 2D Gaussian ... This makes it a generalization of the Rayleigh distribution, which applies only to the dispersion part and assumes a zero mean vector. —Preceding unsigned comment added by 136.177.20.13 (talk) 16:38, 15 January 2009 (UTC)
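The description above is easy to check with a quick simulation: sample the length of a 2D Gaussian vector whose mean has length nu and whose components have standard deviation sigma, and compare the sample mean to the theoretical Rice mean (about 2.27 for nu = 2, sigma = 1). A minimal sketch; the function name `rice_sample` is made up for illustration:

```python
import math
import random

def rice_sample(nu, sigma):
    """Length of a 2D Gaussian vector: mean vector of length nu,
    each component with standard deviation sigma."""
    x = random.gauss(nu, sigma)   # component along the mean vector
    y = random.gauss(0.0, sigma)  # component perpendicular to it
    return math.hypot(x, y)

random.seed(0)
samples = [rice_sample(2.0, 1.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)
# For nu = 2, sigma = 1 the theoretical Rice mean is about 2.27.
```

For nu = 0 this construction degenerates to the Rayleigh case, exactly as described above.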

Marcum Q-Function ref?
The CDF is given in terms of $$Q_1$$, with the note that $$Q_1$$ is the Marcum Q-function, but this appears not to be described on Wikipedia. It is described on MathWorld; should a link be added within the CDF section of the sidebar, and/or within the references? --Ged.R (talk) 14:45, 8 December 2008 (UTC)
 * Yeah, it's probably a good idea at least until a Wikipedia article about it is written. I'd put it as an extra link in the sidebar. --WikiSlasher (talk) 15:46, 29 December 2008 (UTC)
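For reference while no article exists: $$Q_1$$ can be evaluated from its standard series $$Q_1(a,b) = e^{-(a^2+b^2)/2}\sum_{k\ge 0}(a/b)^k I_k(ab)$$, and the Rice CDF is then $$F(x) = 1 - Q_1(\nu/\sigma, x/\sigma)$$. A rough sketch (function names and term counts are illustrative, not from any library):

```python
import math

def bessel_i(n, x, tol=1e-16, max_terms=1000):
    """Modified Bessel function I_n(x), n >= 0, by its ascending series."""
    term = (x / 2.0) ** n / math.gamma(n + 1)
    total = term
    for k in range(1, max_terms):
        term *= (x * x / 4.0) / (k * (k + n))
        total += term
        if term < tol * total:
            break
    return total

def marcum_q1(a, b, terms=60):
    """Marcum Q-function Q_1(a, b) via
    Q_1(a, b) = exp(-(a^2 + b^2)/2) * sum_{k>=0} (a/b)^k I_k(a*b)."""
    s = sum((a / b) ** k * bessel_i(k, a * b) for k in range(terms))
    return math.exp(-(a * a + b * b) / 2.0) * s

def rice_cdf(x, nu, sigma):
    """Rice CDF, F(x) = 1 - Q_1(nu/sigma, x/sigma)."""
    return 1.0 - marcum_q1(nu / sigma, x / sigma)
```

As a sanity check, for nu = 0 this reduces to the Rayleigh CDF $$1 - e^{-x^2/(2\sigma^2)}$$.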

'Rice' or 'Rician'?
Which is correct article title? The normal distribution is sometimes called the 'Gaussian', not the 'Gauss' so I think 'Rician' is more appropriate. -Roger (talk) 20:01, 1 April 2009 (UTC)

Definition of the Laguerre polynomial $$L_{1/2}(x)$$
The Wikipedia page Laguerre polynomials only gives definitions of $$L_n(x)$$ for integer $$n$$. How is $$L_{1/2}(x)$$ defined, as used in the raw moments of the Rice distribution in this article?

Troelspedersen (talk) 11:31, 28 April 2010 (UTC)

Indeed this is very annoying. — Preceding unsigned comment added by 195.115.170.59 (talk) 14:36, 28 September 2011 (UTC)
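For what it's worth, for the Rice moments only one identity is needed: $$L_{1/2}(x) = e^{x/2}\left[(1-x)I_0(-x/2) - x I_1(-x/2)\right]$$, evaluated at $$x \le 0$$. A sketch cross-checking the mean formula $$\mu = \sigma\sqrt{\pi/2}\,L_{1/2}(-\nu^2/(2\sigma^2))$$ against direct numerical integration of the pdf (helper names are illustrative):

```python
import math

def bessel_i(n, x, tol=1e-16, max_terms=1000):
    """Modified Bessel function I_n(x), n >= 0, by its ascending series."""
    term = (x / 2.0) ** n / math.gamma(n + 1)
    total = term
    for k in range(1, max_terms):
        term *= (x * x / 4.0) / (k * (k + n))
        total += term
        if term < tol * total:
            break
    return total

def laguerre_half(x):
    """L_{1/2}(x) = exp(x/2) [(1 - x) I_0(-x/2) - x I_1(-x/2)], for x <= 0."""
    t = -x / 2.0  # t >= 0, so the series above applies directly
    return math.exp(x / 2.0) * ((1.0 - x) * bessel_i(0, t) - x * bessel_i(1, t))

def rice_mean(nu, sigma):
    """Mean of Rice(nu, sigma) via the Laguerre-function formula."""
    return sigma * math.sqrt(math.pi / 2.0) * laguerre_half(-nu * nu / (2.0 * sigma * sigma))

def rice_mean_numeric(nu, sigma, n=4000):
    """Independent check: trapezoid integration of x * pdf(x)."""
    upper = nu + 10.0 * sigma
    h = upper / n
    def integrand(x):
        if x == 0.0:
            return 0.0
        s2 = sigma * sigma
        return x * (x / s2) * math.exp(-(x * x + nu * nu) / (2 * s2)) * bessel_i(0, x * nu / s2)
    return h * (sum(integrand(i * h) for i in range(1, n)) + 0.5 * integrand(upper))
```

Note that $$L_{1/2}(0) = 1$$ under this identity, consistent with $$L_\nu(0) = 1$$ for the Laguerre functions generally.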

The Koay inversion technique
How is this inversion technique discovered? Could someone provide some guidance on this? Thanks. —Preceding unsigned comment added by 68.246.18.196 (talk) 05:55, 14 May 2010 (UTC)


 * Well, it is rather trivial. First notice that the distribution depends only on the value of the ratio θ = ν/σ. Then you see the formula for the variance? If you divide it through by σ² and plug in the definition of the function L1/2 in terms of the Bessel functions (the formula is given in the “Moments” section), then you obtain that the (normalized) variance is given by the expression ξ(θ). Then if you look at the expression for the mean and compare it with the expression for the variance, it can all be written as (Mean² + Var)/σ² = 2 + θ². Combined with the nonlinear equation ξ(θ) = Var/σ², this immediately gives you the formula g(θ) = θ, where the function g is as defined in the article.  //  st pasha  »  17:45, 14 May 2010 (UTC)

As a side note, I'm not quite sure why this “technique” was even included in the article. It is neither standard (the standard estimation method in problems like this is maximum likelihood) nor interesting. This is just a method-of-moments estimator based on the first two moments. The resulting estimator is unduly complicated compared to, say, an estimator based on the second and the fourth moments, which can be found in closed form as the solution to a quadratic equation. //  st pasha  »
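For comparison, the second-and-fourth-moment estimator mentioned here does come out in closed form: with m2 = E[R²] = ν² + 2σ² and m4 = E[R⁴] = ν⁴ + 8σ²ν² + 8σ⁴, one gets ν̂² = sqrt(2·m2² − m4) and σ̂² = (m2 − ν̂²)/2. A sketch (the function name is illustrative):

```python
import math

def moment_estimates(m2, m4):
    """Closed-form estimator from sample moments m2 = mean(R^2), m4 = mean(R^4).
    Since m2 = nu^2 + 2 sigma^2 and m4 = nu^4 + 8 sigma^2 nu^2 + 8 sigma^4,
    the combination 2*m2^2 - m4 equals nu^4 exactly."""
    nu2 = math.sqrt(max(2.0 * m2 * m2 - m4, 0.0))
    sigma2 = (m2 - nu2) / 2.0
    return math.sqrt(nu2), math.sqrt(max(sigma2, 0.0))

# With the exact population moments for nu = 2, sigma = 1 (m2 = 6, m4 = 56),
# the estimator recovers nu = 2 and sigma = 1 exactly.
```

The max(..., 0.0) guards handle sampling noise that would otherwise make the radicands slightly negative.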

Thanks Stpasha for the explanation. Perhaps it is trivial once the inversion has been discovered, but the motivation behind the steps taken in deriving the inversion formula was not clear to me. By the way, the estimator based on the second and the fourth moments is less efficient (in the statistical sense) than the Koay inversion technique. —Preceding unsigned comment added by 108.97.35.195 (talk) 19:08, 14 May 2010 (UTC)


 * This “Koay technique” certainly looks nice and impressive and scientific-y, with all those special functions and fixed-point arguments and iterative schemes. But any statistician will tell you that this is not the “best” technique for this problem. That is, there are methods which give more efficient and faster estimates. For example, the one-step estimator in this case would be relatively simple (it requires computation of special functions only once) and efficient. In particular, it will be as efficient as the MLE and more efficient than the Koay method.  //  st pasha  »  06:46, 15 May 2010 (UTC)

Stpasha, your statement is incorrect. The one-step estimator, if you mean the method which uses the second and the fourth moments, is actually less efficient (CRLB) than the "Koay technique" or the MLE. The "Koay technique" is, as noted in the article, a method of moments, but it uses the first two moments. Any statistician will tell you that estimation done through higher moments is less efficient. In fact, the "Koay technique" is also as efficient as the MLE---I did the test and both are very competitive! —Preceding unsigned comment added by StanfordCommSci (talk • contribs) 04:21, 16 May 2010 (UTC)

Limiting case
I am almost certain that in the limiting case where nu >> sigma, the Rice distribution reduces to a Gaussian, since it is essentially the absolute value of a complex-valued Gaussian random variable with non-zero mean. — Preceding unsigned comment added by 199.46.245.230 (talk) 00:19, 13 June 2012 (UTC)
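This checks out numerically: for nu/sigma large, the Rice pdf is close to a normal pdf with mean sqrt(nu² + sigma²) and standard deviation sigma. A rough comparison on an arbitrary grid (function names, grid and tolerance are illustrative):

```python
import math

def bessel_i(n, x, tol=1e-16, max_terms=1000):
    """Modified Bessel function I_n(x), n >= 0, by its ascending series."""
    term = (x / 2.0) ** n / math.gamma(n + 1)
    total = term
    for k in range(1, max_terms):
        term *= (x * x / 4.0) / (k * (k + n))
        total += term
        if term < tol * total:
            break
    return total

def rice_pdf(x, nu, sigma):
    """Rice pdf: (x/sigma^2) exp(-(x^2 + nu^2)/(2 sigma^2)) I_0(x nu / sigma^2)."""
    s2 = sigma * sigma
    return (x / s2) * math.exp(-(x * x + nu * nu) / (2.0 * s2)) * bessel_i(0, x * nu / s2)

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

nu, sigma = 7.0, 1.0
mu = math.sqrt(nu * nu + sigma * sigma)  # ~ nu when nu >> sigma
# Largest pointwise pdf difference over the grid [5, 9]:
max_diff = max(abs(rice_pdf(x, nu, sigma) - normal_pdf(x, mu, sigma))
               for x in [5.0 + 0.1 * i for i in range(41)])
```

Already at nu/sigma = 7 the two densities agree to within about one percent of the peak height.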