Talk:Cauchy distribution

Estimation for small sample sizes
The article states that the MLE is relatively inefficient for small samples, and cites an article that only discusses estimation of the position parameter. Could somebody clarify or add sources? — Preceding unsigned comment added by 62.20.124.50 (talk) 06:41, 12 April 2012 (UTC)


 * Added one source. Hanspi (talk) 07:36, 2 July 2015 (UTC)

Existence versus definition
I've changed the notations on all moment-related quantities from "not defined" to "does not exist." The definitions of these are the same as they are for everything. The integrals that implement those definitions, however, are not convergent. Therefore, quantities like the moment generating function do not exist, rather than being undefined.

Inverse of distribution for generating random variables
The nice thing about this distribution is that you can generate Cauchy random variables using RND (a uniform random variable on [0,1], available as RND or a similar function in many environments) and the common tangent function (sine over cosine).

Can someone more knowledgeable check the inverse Cauchy CDF?

When I invert the CDF I get something different.


 * Indeed, the inverse CDF has an error, and should be instead x=x0+sigma*tan(pi*(F-1/2)). Anyone mind correcting this?


 * Done. Thank you for your suggestion! When you feel an article needs changing, please feel free to make whatever changes you feel are needed. Wikipedia is a wiki, so anyone can edit any article by simply following the link. You don't even need to log in! (Although there are some reasons why you might like to...) The Wikipedia community encourages you to be bold. Don't worry too much about making honest mistakes; they're likely to be found and corrected quickly. If you're not sure how editing works, check out how to edit a page, or use the sandbox to try out your editing skills. New contributors are always welcome. --MarkSweep ✍ 00:28, 10 September 2005 (UTC)
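For anyone who wants to sanity-check the corrected quantile function numerically, here is a small Python sketch (the helper name `cauchy_variate` is just for illustration). It inverts F(x) = 1/2 + arctan((x - x0)/gamma)/pi and checks that the sample median lands near x0 and the upper quartile near x0 + gamma:

```python
import math
import random

def cauchy_variate(x0=0.0, gamma=1.0):
    """One Cauchy(x0, gamma) draw by inverting the CDF:
    F(x) = 1/2 + arctan((x - x0)/gamma)/pi  =>  x = x0 + gamma*tan(pi*(u - 1/2))."""
    u = random.random()  # uniform on [0, 1)
    return x0 + gamma * math.tan(math.pi * (u - 0.5))

random.seed(0)
sample = sorted(cauchy_variate() for _ in range(100_000))
# For Cauchy(0, 1) the median is 0 and the upper quartile is 1,
# since F(1) = 1/2 + arctan(1)/pi = 3/4.
print(sample[len(sample) // 2])      # should be near 0
print(sample[3 * len(sample) // 4])  # should be near 1
```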

Sample median
The article states "However, the sample median, which is not affected by extreme values, can be used in its place." I think this statement is ambiguous. Does it imply that the sample median can be used as the mean, or that the sample median can be used as the median?. -- Jeff3000 16:32, 29 March 2006 (UTC)


 * In the current version, this is much clearer: use the sample median instead of the sample mean to report the central tendency of a sample from a Cauchy population (and to estimate the location parameter of the population). --MarkSweep (call me collect) 14:32, 24 July 2006 (UTC)
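To illustrate the point numerically, here is a rough Python simulation (names are ad hoc): the sample medians of repeated Cauchy samples cluster tightly around the location parameter, while the sample means stay as spread out as a single observation, no matter the sample size:

```python
import math
import random
import statistics

random.seed(1)

def cauchy(x0, gamma):
    # inverse-CDF sampling: x0 + gamma*tan(pi*(u - 1/2))
    return x0 + gamma * math.tan(math.pi * (random.random() - 0.5))

x0, gamma = 5.0, 2.0
medians, means = [], []
for _ in range(500):
    sample = [cauchy(x0, gamma) for _ in range(1000)]
    medians.append(statistics.median(sample))
    means.append(statistics.fmean(sample))

print(statistics.pstdev(medians))  # small: the median concentrates near x0 = 5
print(statistics.pstdev(means))    # large: the mean of 1000 draws is still Cauchy(5, 2)
```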

Cauchy Equivalent to Student's t-distribution?
I read both here and in print sources (for instance Dagpunar 1988) that a standard Cauchy distribution is a special case of Student's t distribution with one degree of freedom.

How can that be if the Cauchy distribution does not have a defined mean, but S(1) has a mean of zero? I have generated plots and calculated stats for both distributions, and the numbers don't match, either.

There are particularly fast algorithms for generating Cauchy variates, and I would have liked to use that approach for generating S(1) variates, but if the two distributions are really different (as it seems), then I can't do that.

DLC: You are confused in thinking that the Student's t with 1 degree of freedom has any moments. In fact, it does not. The t-distribution generally has moments up to order df - 1.


 * The t distribution has the following pdf:


 * $$f(x) = \frac{\Gamma((\nu+1)/2)}{\sqrt{\nu\pi}\,\Gamma(\nu/2)\,(1+x^2/\nu)^{(\nu+1)/2}}$$


 * Now set $$\nu = 1$$ and simplify:


 * $$f(x) = \frac{\Gamma(1)}{\sqrt{\pi}\, \Gamma(1/2)\, (1+x^2)}$$


 * Look up the values of the Gamma function. It turns out that $$\Gamma(1) = 1$$ and $$\Gamma(1/2) = \sqrt{\pi}$$.  Therefore:


 * $$f(x) = \frac{1}{\pi\, (1+x^2)}$$


 * Which is precisely the pdf of the standard Cauchy distribution. --MarkSweep (call me collect) 17:59, 23 July 2006 (UTC)
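The algebra above can be spot-checked numerically in a few lines of Python (a sketch; `t_pdf` implements the density quoted above):

```python
import math

def t_pdf(x, nu):
    """Student's t density, as given above."""
    return (math.gamma((nu + 1) / 2)
            / (math.sqrt(nu * math.pi) * math.gamma(nu / 2)
               * (1 + x * x / nu) ** ((nu + 1) / 2)))

def cauchy_pdf(x):
    """Standard Cauchy density 1/(pi*(1+x^2))."""
    return 1.0 / (math.pi * (1 + x * x))

for x in [-3.0, -0.5, 0.0, 1.0, 4.2]:
    assert math.isclose(t_pdf(x, 1), cauchy_pdf(x), rel_tol=1e-12)
print("t(1) and the standard Cauchy density agree")
```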

HWHM versus FWHM
Are you sure, that γ is the half width at half maximum (HWHM) and not full width at half maximum (FWHM)? In articles in other languages I found that it is FWHM. ...? 149.220.35.153 09:11, 26 January 2007 (UTC)Len


 * Those other articles are wrong then. Forget about the offset $$x_0$$. The Cauchy distribution is


 * $$f=\frac{1}{\pi}\left(\frac{\gamma}{x^2+\gamma^2}\right)$$


 * It's pretty easy to see that when $$x=0$$, f is at its maximum value of $$1/(\pi\gamma)$$, and when $$x=\gamma$$, f is half of that. So that means the half width at half maximum is $$\gamma$$ and the full width at half maximum will be twice that, or $$2\gamma$$. PAR 15:06, 26 January 2007 (UTC)

Thank you very much. Now I have found where my mistake was. In all the other articles it was written
 * $$ f(x) = \frac{1}{2\pi}\frac{\Gamma}{x^2+\Gamma^2/4}$$

and with that, Gamma is FWHM, of course. 149.220.35.153 13:26, 29 January 2007 (UTC)LEN
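Both conventions can be checked in a couple of lines (a Python sketch; the function names are just for clarity). With the article's form, gamma is the HWHM; with the capital-Gamma form quoted above, Gamma is the FWHM, and the two densities coincide when Gamma = 2*gamma:

```python
import math

def cauchy_hwhm(x, gamma):
    # the article's parametrization: gamma is the half width at half maximum
    return gamma / (math.pi * (x * x + gamma * gamma))

def cauchy_fwhm(x, big_gamma):
    # the parametrization quoted just above: big_gamma is the full width
    return big_gamma / (2 * math.pi * (x * x + big_gamma ** 2 / 4))

gamma = 0.7
peak = cauchy_hwhm(0.0, gamma)
assert math.isclose(cauchy_hwhm(gamma, gamma), peak / 2)      # half max at x = gamma
assert math.isclose(cauchy_fwhm(gamma, 2 * gamma), peak / 2)  # same point when Gamma = 2*gamma
assert math.isclose(cauchy_fwhm(0.35, 2 * gamma), cauchy_hwhm(0.35, gamma))
print("gamma is the HWHM; Gamma = 2*gamma is the FWHM")
```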

In the reference to central limit theorem, it is mentioned that the sample mean (X1+...+Xn)/n has the same distribution as Xi, so central limit thm doesn't apply. But doesn't central limit theorem deal with (X1+..+Xn)/sqrt(n)?

65.219.4.5 01:38, 10 March 2007 (UTC)corecirculator

A little more user-friendly
Just wondering if someone more knowledgeable might be willing to make this page a little more user-friendly for the interested/educated non-mathematician/non-physicist. Perhaps add some more information on how the Cauchy-Lorenz distribution is used in probability and physics, its applications and implications?
 * I agree. Start the article with "The Cauchy-Lorenz distribution is used to find solutions to problems involving forced resonance etc. It is defined as, and has the following properties: . Example problem worked through. History (when conceived, first used, etc.)".--203.214.140.61 13:30, 23 July 2006 (UTC)
 * Or even mention the Cauchy-Lorentz distribution? ;-P  —DIV (128.250.204.118 06:47, 9 July 2007 (UTC))

Is it correct to redirect Lorentzian function to this side???
I was looking for the definition of the Lorentzian function and got a little confused when I was redirected here. OK, the probability function of the Cauchy distribution follows a Lorentzian function (I know that now), but if you don't know that from the beginning you don't know what to look for. For example I was reading a text which said that a focused telescope will receive backscatter as a Lorentzian function of the distance centered around the focus distance. Turning up at a distribution page wasn't really what I expected ...

A "Lorentzian function" Google search gives 70,000 hits, and Gaussian function has a (not so good?) Wikipedia page, so I think the Lorentzian deserves a page of its own. Otherwise I would like to say that the people behind the distribution pages on Wikipedia have done and are doing a great job!!!!

Petter


 * Hi - "Lorentzian function" is just another name for "Cauchy distribution", so it definitely should not have a separate page. I have put in that it is sometimes known as the Lorentzian function. PAR 15:51, 31 May 2007 (UTC)

Lorentzian power-spectrum
An application: In a lot of micro-manipulation techniques (Atomic force microscopes, magnetic tweezers, optical tweezers) a harmonic force holds a probe (AFM tip, magnetic/dielectric microsphere) and the probe is subject to thermal noise (white noise). This results in a Lorentzian power-spectrum for the probe motion. Lorentzian fits to measured power spectra are widely used when calibrating these instruments. —Preceding unsigned comment added by 82.181.221.219 (talk) 18:51, 24 May 2008 (UTC)

Generalized Cauchy distribution
Hi,

The page on ratio distributions mentions a generalized Cauchy distribution (and also the Cauchy distribution with different names for the variables etc.). Could that information be moved here? In particular, given two normally distributed variables X and Y, what does the distribution for X/Y look like? If Y is sufficiently far from (as in: unlikely to be) 0, is there a good approximation for the mean and variance? -Sesse (talk) 22:31, 11 August 2008 (UTC)
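On the specific question of X/Y for independent standard normals: that ratio is standard Cauchy, which a quick simulation illustrates (a sketch; the general case with correlation or nonzero means discussed in the ratio-distribution article is more involved, and the mean/variance still do not exist even when Y is unlikely to be near 0):

```python
import math
import random

random.seed(2)
n = 200_000
ratios = sorted(random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n))

# If X/Y ~ Cauchy(0, 1), the quantile function is tan(pi*(p - 1/2)),
# so the median should be near 0 and the upper quartile near tan(pi/4) = 1.
print(ratios[n // 2])
print(ratios[3 * n // 4])
```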

color graph
I'm copying this from a note someone put in the article:
 * Note: need to define which line is green in the illustration for the people who can't see green. Other colors as well.

C RETOG 8(t/c) 19:43, 7 November 2008 (UTC)

mean is defined
The Cauchy distribution is symmetric. Therefore, the mean is x0. 75.45.17.132 (talk) 20:40, 8 November 2009 (UTC)JH


 * Your argument assumes the mean/expectation exists, i.e. the integral converges. Consider the integral
 * $$\int_{-\infty}^\infty 1\,dx $$
 * The integrand 1 is symmetric about any real number x0. Hence by your argument, the integral evaluates to x0. Take two distinct values of x0. Contradiction. 131.111.213.45 (talk) 01:15, 2 December 2009 (UTC)

Second moment does not exist
Why is there a special section on the second moment, when it is already shown that the first moment does not exist? At least a remark along the lines of "EX^2 is infinite, but we knew that anyway since EX does not exist" seems useful (standard argument using Jensen's inequality for higher order moments as convex functions). —Preceding unsigned comment added by 158.143.76.83 (talk) 13:24, 15 November 2010 (UTC)

The argument above (and in the article) are both not (quite) true. The above states that the mean is undefined, but it can be shown that the mean is for practical purposes zero (or x0 for the general case). However, I see a couple of problems with the second moment calculation as presented. 1 - the logic leading to the integral decomposition makes no sense: int(x^2/(x^2+1))dx does not decompose to int(1)dx - int(1/(x^2+1))dx. I think what the article is trying to say is that you can approximate the integral by int(1)dx minus the spike close to zero, but there is no reasoning given for this rewriting. 2 - the second moment is infinite, because when x is large (+ve or -ve) x^2/(x^2+1) is approximately 1. The integral is strictly less than the approximation of int(1)dx from -inf to +inf. This means the variance is infinite (undefined) — Preceding unsigned comment added by 82.40.165.96 (talk) 14:56, 11 December 2012 (UTC)
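For what it's worth, the divergence itself is easy to exhibit numerically. The identity x^2/(1+x^2) = 1 - 1/(1+x^2) gives a closed form for the truncated integral, and a crude quadrature agrees with it (Python sketch, standard Cauchy):

```python
import math

def truncated_second_moment(a, steps=200_000):
    # midpoint rule for the integral of x^2/(pi*(1+x^2)) over [-a, a]
    h = 2 * a / steps
    total = 0.0
    for i in range(steps):
        x = -a + (i + 0.5) * h
        total += x * x / (math.pi * (1 + x * x))
    return total * h

for a in [10, 100, 1000]:
    closed = 2 * (a - math.atan(a)) / math.pi  # from x^2/(1+x^2) = 1 - 1/(1+x^2)
    print(a, truncated_second_moment(a), closed)
# The truncated integral grows like 2a/pi - 1, without bound.
```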

Forced oscillator equation
It says 'citation needed' regarding the Lorentzian function being a solution of forced oscillator equation. Why not link to the Harmonic Oscillator page, which shows the derivation of the amplitude for sinusoidal driving force? Taking the modulus-squared of this gives the intensity, which is a Lorentzian function as described in this article. 128.223.131.146 (talk) 10:50, 14 November 2009 (UTC) JMH

Cauchy Dirac distribution
The identical formula with infinitesimal gamma gives a Dirac delta function defined by Cauchy in 1827. This is mentioned at the Dirac delta function page. Any thoughts about mentioning this here? Tkuvho (talk) 10:12, 17 February 2010 (UTC)
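In the modern limiting sense this is easy to see numerically: smearing a test function against the Cauchy density with shrinking gamma recovers the value at the centre, as a nascent delta should (a rough Python sketch; for g = cos the exact value of the integral over the whole line is exp(-gamma), by the characteristic function):

```python
import math

def cauchy_pdf(x, gamma):
    return gamma / (math.pi * (x * x + gamma * gamma))

def smeared(g, gamma, a=40.0, steps=200_000):
    # midpoint rule for the integral of g(x)*cauchy_pdf(x, gamma) over [-a, a]
    h = 2 * a / steps
    total = 0.0
    for i in range(steps):
        x = -a + (i + 0.5) * h
        total += g(x) * cauchy_pdf(x, gamma)
    return total * h

for gamma in [1.0, 0.1, 0.01]:
    print(gamma, smeared(math.cos, gamma))  # tends to cos(0) = 1 as gamma -> 0
```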

New section - rename?
Is the new section "Wrapped Cauchy distribution" really about that? Should it actually be renamed "Complex Cauchy distribution"? Should it be put in its own separate article? Melcombe (talk) 09:52, 5 October 2010 (UTC)


 * Wrapped Cauchy, complex Cauchy and Poisson kernel are all equivalent terms referring to the same family of distributions on the unit circle. You can work with the angular variable arg(z) on (0, 2\pi), as is done in the main article on the wrapped Cauchy, but the discontinuity introduces an unnecessary awkwardness, obscuring the fact that all three are really the same.  I'm not inclined to draw distinctions where none exist. And in a related sense, they are not very different from the ordinary Cauchy family on the real line. The Ferguson reference is the one cited in the reference list. (128.135.149.5 (talk) 19:09, 5 October 2010 (UTC))


 * Ah. Good old reliable telepathy. And still lots of supposed result for which there are no citations. Melcombe (talk) 09:22, 7 October 2010 (UTC)


 * The way this section is written now, it is rambling and unclear. E.g., what is $$\theta$$? If $$\theta=\mathrm{arg}(z)$$ then say so. If it's the other definition, say so. If they are identical, then say so. Also, if this section is dealing with the wrapped Cauchy, it should be in the wrapped Cauchy article, with a few words of introduction here. Also, the notation should be consistent with the wrapped Cauchy article. That probably means changing the notation in this article. Changing notation in the wrapped Cauchy article means that the notation in all the wrapped distribution articles should be changed as well, since they are presently consistent. PAR (talk) 21:07, 5 October 2010 (UTC)
 * There is the possibility of instead moving any stuff that isn't there already into the existing article McCullagh's parametrization of the Cauchy distributions, which may better reflect its supposed usefulness. Melcombe (talk) 09:22, 7 October 2010 (UTC)

It is a wrapped Cauchy
The probability density described as wrapped Cauchy does not correspond to the wrapped Cauchy distribution. We can always choose a c and $$\varepsilon$$ such that


 * $$\zeta=c\cos\varepsilon+ic\sin\varepsilon$$

in which case the distribution becomes


 * $$P(\theta)=\frac{1}{2\pi}\,\frac{1-|\zeta|^2}{|e^{i\theta}-\zeta|^2} = \frac{1}{2\pi}\,\frac{c^2-1}{c^2+1+2c\cos(\theta-\varepsilon)}$$

which resembles the wrapped Cauchy distribution:


 * $$P(\theta)=\frac{1}{2\pi}\,\frac{\sinh(\gamma)}{\cosh(\gamma)-\cos(\theta-\mu)}$$

but there is no way $$\sinh(\gamma)=-(c^2+1)/2c$$ because, e.g., $$\sinh(\gamma)$$ contains exponentials of $$\gamma$$ while $$c^2$$ is a rational function of both $$\mu$$ and $$\gamma$$. PAR (talk) 03:53, 7 October 2010 (UTC)


 * You've really made a total mess of this section.
 * If x is real, (x-i)/(x+i) is a complex number of unit modulus.
 * If X~Cauchy($$\mu, \gamma$$), then Z=(X-i)/(X+i) is distributed on the unit circle with density $$(1 - |\theta|^2)/(2\pi|z - \theta|^2)$$ with respect to angular measure $$d\phi$$ at $$z=e^{i\phi}$$.
 * The complex number $$\theta$$ is not arg(z). It is the parameter of the distribution, less than one in modulus, and equal to
 * $$\theta = (\mu + i\gamma - i)/(\mu + i\gamma + i)$$
 * And, contrary to your claim, the wrapped and complex Cauchy families are exactly the same, and the same as the Poisson kernel.
 * 88.210.148.113 (talk) 10:59, 7 October 2010 (UTC)


 * Whoever inserted this section left a real mess. Not only in its appalling formatting and by ignoring the already existing article McCullagh's parametrization of the Cauchy distributions, but more importantly by using the term "wrapped Cauchy distribution" for something that is entirely different from the distribution in the article wrapped Cauchy distribution. I note also that someone editing McCullagh's parametrization of the Cauchy distributions made exactly the same mistake. For now, let's follow McCullagh (as in the paper ref'd to in that article) and use the term circular Cauchy distribution for the non-wrapped thing in this article.

Two points:


 * This section was inserted by anonymous user 69.209.75.73, and is the only contribution under that IP. 128.135.149.5, possibly the same person, added some more, then 88.210.148.113, possibly the same person, is defending it in the discussion page.  I think this person is a new editor, and is not familiar with how to write a coherent article, or how to communicate on the talk page. No problem, I was a new editor once too, but please try to watch how other people do things and learn by imitation. To start with, if these are the same person, could this person please register under some name so that we can talk to a particular person rather than an ever-changing IP address? When you write something on the discussion page, follow it with four tildes "~", and your Wikipedia name will automatically be signed.


 * To 88.210.148.113 - I don't know who you are talking to, but yes, if x is real, (x-i)/(x+i) is a complex number of unit modulus, just as it says in the article. The statement of the probability density is correct as written, but its also the same if you define theta as arg(z) and define zeta as the parameter. It doesn't matter, its just notation. If arg(z) is defined as theta and zeta is defined as the parameter, this new section not only remains consistent, but conforms to the notation in the wrapped Cauchy article, (which is, I agree with Melcombe, now irrelevant). If the given distribution is identical to the wrapped Cauchy distribution, then I have made an error in the above argument. Please point out that error, rather than making an unsupported pronouncement as an anonymous user. PAR (talk) 13:43, 7 October 2010 (UTC)

The transformation $$x\mapsto e^{ix}$$ wraps the real line onto the unit circle with no stretching, so the points $$x\pm 2n\pi$$ are all mapped to $$e^{ix}$$ by wrapping. The transformation $$x\mapsto (x-i)/(x+i)$$ maps the real line onto the unit circle without wrapping. If X is a Gaussian variable, $$e^{iX}$$ is a wrapped Gaussian variable by definition; if X is Cauchy$$(\mu, \gamma)$$, $$(X-i)/(X+i)$$ is a circular Cauchy variable, whereas $$Z=e^{iX}$$ is a wrapped Cauchy variable having the wrapped Cauchy distribution with parameter $$\theta=e^{i \mu - \gamma}$$. This is what wrapping means as I understand it. From the Cauchy characteristic function it follows that $$E(Z^n) = Ee^{inX} = \theta^n$$ are the Fourier coefficients of the density of the wrapped Cauchy, and therefore the wrapped Cauchy coincides with the circular Cauchy, which is the distribution of the circularly transformed variable. The present article is self-contradictory on this point.


 * Ok this is where the problem lies. The wrapped variable in the wrapped Cauchy distribution article is (x mod $$2\pi$$) not $$e^{ix}$$. $$e^{ix}$$ then corresponds to z in the wrapped Cauchy article, but z is not Cauchy distributed, x is, and (x mod $$2\pi$$) is wrapped Cauchy distributed.


 * I know you are a new editor, but could you PLEASE do the following:
 * Create a user name instead of writing anonymously. That way you can use any computer and you can share your computer with other contributors without causing confusion.
 * When you respond to someone, indent your comments with one more colon than the previous person. Just like I did here.
 * Sign your name with four tildes after writing on the talk page
 * Enclose TeX or LaTeX in <math></math> tags rather than bare dollar signs
 * There is no need for carriage returns when writing on a talk page or an article.
 * Never alter what someone has written on the talk page, unless it exactly preserves the original meaning. (i.e. don't change the title of this section).

Thanks PAR (talk) 19:49, 8 October 2010 (UTC)

Central Limit Theorem
"To see that this is true, compute the characteristic function of the sample mean:" Shouldn't this be done for the reader? I don't think Wikipedia should be assigning homework. If the thought is that this is easy enough to see in one's head, then I offer my disagreement. —Preceding unsigned comment added by 71.199.204.156 (talk) 18:13, 6 January 2011 (UTC)
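For the record, the computation the article leaves to the reader is one line once you have the Cauchy characteristic function $$\varphi_X(t) = e^{i x_0 t - \gamma|t|}$$. A sketch:

```latex
\varphi_{\bar{X}}(t)
  = \mathbb{E}\, e^{it\bar{X}}
  = \prod_{j=1}^{n} \varphi_{X_j}\!\left(\tfrac{t}{n}\right)
  = \left( e^{i x_0 t/n - \gamma |t|/n} \right)^{n}
  = e^{i x_0 t - \gamma |t|}
  = \varphi_{X_1}(t),
```

where the product step uses independence of the $$X_j$$; so the sample mean has exactly the same Cauchy($$x_0, \gamma$$) distribution as a single observation.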

Multivariate Cauchy Example
I'm checking out the particular bivariate example given, which is

$$f(x, y; x_0,y_0,\gamma)= { 1 \over \pi } \left[ { \gamma \over ((x - x_0)^2 + (y - y_0)^2 +\gamma^2)^{1.5} } \right]$$ and the opening line of the section: "A random vector $$X = (X_1, X_2, ..., X_k)$$ is said to have the multivariate Cauchy distribution if every linear combination of its components $$Y=a_1 X_1 + ... + a_k X_k$$ has a Cauchy distribution." But in this example, if we look at the line $$y = y_0$$, we get the distribution

$$f(x; x_0,\gamma)= { 1 \over \pi } \left[ { \gamma \over ((x - x_0)^2+\gamma^2)^{1.5} } \right]$$ which clearly differs from the definition of the Cauchy distribution:

$$f(x; x_0,\gamma)= { 1 \over \pi } \left[ { \gamma \over {(x - x_0)^2+\gamma^2 }} \right]$$ It seems something needs to be fixed here, though I don't work with this stuff much so I could easily be overlooking something. Kwvanderlinde (talk) 07:22, 8 November 2011 (UTC)


 * I don't know the answer to this but I suspect this article holds the answer. PAR (talk) 07:39, 8 November 2011 (UTC)


 * The first density above seems to be substituting $$y = y_0$$, and that is not the marginal distribution. It is not clear what argument is being made here, but if the OP is saying that (X,Y) being multivariate Cauchy implies that X should be Cauchy, then it is the marginal distribution that is required, not the conditional and not what is done above. Melcombe (talk) 10:10, 8 November 2011 (UTC)


 * Thank you Melcombe. Your understanding of my argument is spot on. I forgot to account for the fact that the marginal distribution is more involved than just "slicing" the distribution. Kwvanderlinde (talk) 04:57, 9 November 2011 (UTC)


 * To help me understand, does that mean that the value on a line is equal to the integral of the density over a plane perpendicular to that line? PAR (talk) 14:20, 9 November 2011 (UTC)


 * Only in this case, where the quantity whose marginal distribution is required is one of the variables used to define the joint density. Otherwise you would first need to define a new variable to define the "line" and find the joint density involving this variable (and selected others). Once you get a joint density involving the target variable, you "just" need to integrate out the other variables. Melcombe (talk) 14:09, 10 November 2011 (UTC)
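Melcombe's point can be checked numerically: integrating the bivariate density over y at fixed x gives exactly the univariate Cauchy, while the y = y0 slice does not. A Python sketch; note I use the normalizing constant gamma/(2*pi), under which the density integrates to one over the plane, which I am assuming is the intended constant (the 1/pi quoted above appears to be off by a factor of two):

```python
import math

x0, y0, gamma = 0.0, 0.0, 1.0

def bivariate_cauchy(x, y):
    # assumed normalization: gamma/(2*pi) makes the double integral equal 1
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return gamma / (2 * math.pi * (r2 + gamma * gamma) ** 1.5)

def marginal_in_x(x, a=500.0, steps=100_000):
    # midpoint rule for the integral of f(x, y) over y in [-a, a]
    h = 2 * a / steps
    total = 0.0
    for i in range(steps):
        total += bivariate_cauchy(x, -a + (i + 0.5) * h)
    return total * h

def cauchy_pdf(x):
    return gamma / (math.pi * ((x - x0) ** 2 + gamma * gamma))

for x in [0.0, 1.0, 3.0]:
    print(x, marginal_in_x(x), cauchy_pdf(x))  # the marginal matches Cauchy(x0, gamma)
```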

Variance is infinity
Infinity is a shorthand term for a limiting process: If, as x approaches x0, f(x) increases without bound, then the limit of f(x) as x approaches x0 is said to be equal to infinity. (This can be expressed much more precisely, but you get the idea.) For an integral between -x0 and x1 of f(x), where x0 and x1 are positive, the limit of the integral as x0 and x1 go to infinity increases without bound. So the statement that the variance is infinite is correct. PAR (talk) 19:23, 27 September 2012 (UTC)
 * Your statement about the shorthand is correct; this does not, however, apply to the variance for this distribution. Its second moment (around zero) is infinite in this sense. The variance, on the other hand, is defined as the second moment around the mean, which itself is undefined. So although it feels as though one could choose any value as the mean and get an infinite variance (and the mode or median would serve just as well), the fact is that there is no value to serve as the point at which to centre the second moment to determine the variance. This point is made a few times in the article itself; the lead was actually in contradiction with the article content until I changed it. — Quondum 19:51, 27 September 2012 (UTC)
 * Yes, I see what you mean and I agree. PAR (talk) 06:18, 28 September 2012 (UTC)
 * Just thinking about it, if the mean is specified, or assumed to be a real number, then the variance is infinite. PAR (talk) 03:10, 5 October 2012 (UTC)

straight forward calculation of mean?
I have little knowledge as to why the Cauchy distribution is famous and so I may be entirely beside the point. However, in the article under the subtitle MEAN, it is stated that equation (1) may or may not be equal to equation (2), which IMHO is not true. The two equations are true by the definition of the absolute value. Furthermore, it is stated that the difference of two infinities is undefined, which is only true provided you have no information about the infinities. In the present case we do:

MEAN

= integral over x of x*P(x)

= lim (b->infinity) of int(x/(pi*a*  (1+((x-x0)/a)^2)     )) from x0-b to x0+b

= lim (b->infinity) of a/pi *[ (0.5*log(a^2+(x0-x)^2))-x0/a*arctan((x0-x)/a)] from x0-b to x0+b

= lim (b->infinity) of a/pi * [(0.5*log(a^2+b^2))-x0/a*arctan(-b/a) -      (0.5*log(a^2+b^2))-x0/a*arctan(b/a)]

= lim (b->infinity) of x0/pi * [-arctan(-b/a) + arctan(b/a)]

= x0/pi * [-(-pi/2) + pi/2]

= x0

I used http://integrals.wolfram.com for the integration. So I am surely missing something? Could someone please enlighten me? ;-) /Jens — Preceding unsigned comment added by 117.120.16.132 (talk) 06:00, 17 December 2012 (UTC)


 * Even wolframalpha says: (integral does not converge). This is the whole point of the section, which I think it makes well. However, this is not the place for a tutorial on convergence, which is where you are missing something. — Quondum 12:18, 18 December 2012 (UTC)


 * The heck with Wolfram — it cannot be relied on. But it disappoints me to see someone on the talk page acting unkindly to another when it would take only a sentence or two to explain why someone's statement is incorrect.


 * To the unsigned poster: For one thing, your calculus is not correct. But in any case the integral over -∞ < x < ∞ of the function $$f(x) = x/(1+x^2)$$ would have to exist for the mean of the Cauchy density $$1/(\pi(1+x^2))$$ to exist. According to standard definitions of what it means for an integral to exist, the fact that it has an infinite positive area and an infinite negative area is sufficient to conclude that the integral does not exist.


 * Taking the limit of the integral over -c ≤ x ≤ c (for c > 0) as c → ∞ does indeed approach a limit (which is clearly 0 by symmetry); this is called the symmetric integral, or Cauchy principal value, and its existence is interesting, but it has no bearing on whether the original integral just plain exists, in standard terminology. Daqu (talk) 23:02, 4 September 2014 (UTC)
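Daqu's distinction can be made concrete: the symmetric truncation always gives 0, but an asymmetric truncation such as [-a, 2a] converges to log(2)/pi ≈ 0.2206 instead, so the two-sided improper integral has no single value. A quick Python sketch:

```python
import math

def truncated_mean(lo, hi, steps=200_000):
    # midpoint rule for the integral of x/(pi*(1+x^2)) over [lo, hi]
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        total += x / (math.pi * (1 + x * x))
    return total * h

for a in [100.0, 1000.0]:
    print(a, truncated_mean(-a, a))      # symmetric limits: 0 (the principal value)
    print(a, truncated_mean(-a, 2 * a))  # asymmetric limits: near log(2)/pi
```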

Very unhelpful illustration
The upper-right illustration of the Cauchy density function for various values of the parameter $$\gamma$$ is exceptionally unhelpful.

The reason for this uselessness is the utterly ridiculous choice of the y:x aspect ratio, namely 7:100. (Why would anyone want such a random aspect ratio???)

What we need is a 1:1 aspect ratio, for which many easy-to-discover parameter values create a very easy-to-understand, and faithful, depiction of the curve. Daqu (talk) 01:15, 2 September 2014 (UTC)


 * I agree. Right now the illustration fails to convey that each parametrization of the distribution is fat-tailed. We need someone with graphing abilities to redraw it. Loraof (talk) 01:46, 11 August 2017 (UTC)

Explanation of undefined moments gives a misleading approach to mathematical definitions
The section "Explanation of undefined moments" intends to convey that the same intuitive idea can be expressed by different mathematical definitions. However, the wording of this section gives the misleading impression that a mathematical definition can have optional inequivalent interpretations.

The section explains that the mean of the Cauchy distribution does not exist and then begins a discussion of how to assign the distribution a mean. It should be made clear that this attempt to assign the Cauchy distribution a mean does not assign it a value that meets the standard definition of the mean of a random variable. What is being done in the section is to define a property of a random variable that is different from the mean.

In particular the section asserts that "we may take" $$\int_{-\infty}^{\infty} x f(x)\,dx$$ "to mean" $$\lim_{a \rightarrow \infty} \int_{-a}^{a} x f(x)\, dx$$. While it is true that, in different contexts, the same mathematical notation can be used to denote two different mathematical objects, it is not true that the Cauchy principal value is a permissible interpretation of $$\int_{-\infty}^{\infty}$$ in the context where this notation is used to define the mean of a random variable.

Tashiro~enwiki (talk) 22:42, 7 January 2017 (UTC)


 * Agreed - I think it is better now. PAR (talk) 00:00, 8 January 2017 (UTC)

Contradiction?
The section Cauchy distribution says


 * "Although the sample values $$x_i$$ will be concentrated about the central value $$x_0$$, the sample mean will become increasingly variable as more samples are taken, because of the increased likelihood of encountering sample points with a large absolute value. In fact, the distribution of the sample mean will be equal to the distribution of the samples themselves; i.e., the sample mean of a large sample is no better (or worse) an estimator of $$x_0$$ than any single observation from the sample. Similarly, calculating the sample variance will result in values that grow larger as more samples are taken."

Is this a contradiction? Loraof (talk) 16:36, 20 June 2017 (UTC)
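There is no contradiction: the distribution of the sample mean does not change with n, while the sample variance statistic diverges. A quick simulation (Python sketch) showing the quartiles of the sample mean staying put near ±1 for standard Cauchy samples of any size:

```python
import math
import random
import statistics

random.seed(3)

def cauchy():
    # standard Cauchy via the inverse CDF
    return math.tan(math.pi * (random.random() - 0.5))

quartiles = {}
for n in [1, 10, 100]:
    means = sorted(statistics.fmean(cauchy() for _ in range(n))
                   for _ in range(10_000))
    quartiles[n] = (means[len(means) // 4], means[3 * len(means) // 4])
    print(n, quartiles[n])  # quartiles stay near (-1, +1) for every n
```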

External links modified
Hello fellow Wikipedians,

I have just modified 3 external links on Cauchy distribution. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20110930062639/http://www.econ.yorku.ca/cesg/papers/berapark.pdf to http://www.econ.yorku.ca/cesg/papers/berapark.pdf
 * Added archive https://web.archive.org/web/20110816002255/http://faculty.ksu.edu.sa/69424/USEPAP/Coushy%20dist.pdf to http://faculty.ksu.edu.sa/69424/USEPAP/Coushy%20dist.pdf
 * Added archive https://web.archive.org/web/20090914055538/http://www3.stat.sinica.edu.tw/statistica/oldpdf/A7n310.pdf to http://www3.stat.sinica.edu.tw/statistica/oldpdf/A7n310.pdf

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 10:34, 1 August 2017 (UTC)

Removed self-promotion
I removed some apparently self-promotional material to an article in a minor, predatory journal (one owned by Scientific Research Publishing). The user has some form for this elsewhere on Wikipedia e.g. see Talk:Capital asset pricing model. Parts of their addition are referenced to quality articles/journals, though I have removed any original interpretation of them (i.e. what is left on Wikipedia now matches what is stated in the article). Optaldienne (talk) 13:51, 11 November 2017 (UTC)

Comparison with the normal distribution is wrong?
The current section comparing the Cauchy distribution to the normal distribution says that the Cauchy has a higher peak and lower tails. That seems to be the exact opposite of the truth: the Cauchy has much fatter tails away from the center which is why the mean doesn't exist for the Cauchy, and the peak of the standard normal is higher than the standard Cauchy. You can plot the ratio of the standard normal to the standard Cauchy to verify this: The ratio is >1 between about -1.7 and 1.7, while the ratio decays to 0 outside this region, meaning the Cauchy has fatter tails and a lower peak. 128.151.113.25 (talk) 03:34, 23 March 2023 (UTC)
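The IP is right, and the numbers are easy to check with a Python sketch comparing the two standard densities:

```python
import math

def normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cauchy_pdf(x):
    return 1 / (math.pi * (1 + x * x))

print(normal_pdf(0.0), cauchy_pdf(0.0))  # 0.3989... vs 0.3183...: the normal peak is higher
print(normal_pdf(2.0), cauchy_pdf(2.0))  # past roughly 1.7 the Cauchy density is already larger
print(normal_pdf(4.0), cauchy_pdf(4.0))  # in the tail the Cauchy dominates by orders of magnitude
```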

Incomprehensible sentence under Occurrences and applications
From the current page: "The expression for imaginary part of complex electrical permittivity according to Lorentz model is a model VAR (value at risk) producing a much larger probability of extreme risk than Gaussian Distribution."

This above statement doesn't make sense; it unsuccessfully tries to combine electromagnetic theory with statistical risk analysis. I looked through the page history, and it appears that the above sentence was mashed together from two previously distinct bullet points:


 * The expression for imaginary part of complex electrical permittivity according to Lorentz model is a Cauchy distribution.
 * As an additional distribution to model fat tails in computational finance, Cauchy distributions can be used to model VAR (value at risk) producing a much larger probability of extreme risk than Gaussian Distribution.

I propose we unmangle the current sentence back into the prior two bullet points.

-- QB2k (talk) 10:39, 20 October 2023 (UTC)


 * Per Wikipedia's philosophy that editors "be bold," I've gone ahead and made this change.
 * -- QB2k (talk) 19:40, 25 October 2023 (UTC)