Talk:Independence (probability theory)

What does this notation mean?
In the definition of independent rv's the notations [X &le; a] and [Y &le; b] are used. What does that mean? I don't think I've seen it. - Taxman 14:18, Apr 28, 2005 (UTC)


 * It refers to the event of the random variable X taking on a value less than a. --MarkSweep 16:39, 28 Apr 2005 (UTC)


 * I am puzzled. If you didn't understand this notation, what did you think was the definition of independence of random variables? Michael Hardy 19:20, 28 Apr 2005 (UTC)
 * Well actually that meant I didn't understand it. I had learned it in the past and was familiar with the general concept only. I think we always used P(X&le; a) for consistency, but I could be wrong. In any case the switch from the P notation in the section before to this notation could use some explanation for the village idiots like me. :) I'll see what I can do without adding clutter. - Taxman 20:26, Apr 28, 2005 (UTC)


 * It's not a switch in notation at all: it's not two different notations for the same thing; it's different notations for different things. The notation [X &le; a] does not mean the probability that X &le; a; it just means the event that X &le; a.  The probability that X &le; a would still be denoted P(X &le; a) or P[X &le; a] or Pr(X &le; a) or the like. Michael Hardy 20:51, 28 Apr 2005 (UTC)

I just wrote an edit summary that says:


 * No wonder User:taxman was confused: the statement here was simply incorrect. I've fixed it by changing the word "probability" to "event". Michael Hardy 20:58, 28 Apr 2005 (UTC)

Then I looked at the edit history and saw that Taxman was the one who inserted the incorrect language. So that really doesn't explain it. Michael Hardy 20:56, 28 Apr 2005 (UTC)


 * ... and it does not make sense to speak of two probabilities being independent of each other. Events can be independent of each other, and random variables can be independent of each other, but probabilities cannot. Michael Hardy 20:58, 28 Apr 2005 (UTC)

About the role of the probability measure in the independence of events
The definition of independence of events is introduced into probability theory only for a fixed probability measure $$\mathbf{P}$$. This means that two events can be independent with respect to one probability measure and dependent with respect to another. We illustrate this possibility with a simple example based on the Bernoulli scheme, i.e. repeated independent trials, each of which has two outcomes $$S$$ and $$F$$ ('success' and 'failure'). Assume that $$\mathbf{P}(S) = p$$, where $$ 0 \leq p \leq 1,$$ so that $$\mathbf{P}(F) = 1-p.$$ Perform three independent trials and consider the events

$$A=\{ $$ no more than one success $$ \}, $$ $$B=\{ $$ all outcomes identical $$ \}. $$

It is obvious that

$$A=\{ FFF, \ FFS, \ FSF, \ SFF \}, \ \ \ \ $$ $$B=\{ FFF, \ SSS \}.$$

Hence,

$$\mathbf{P}(A)=(1-p)^3+3p(1-p)^2, \ \ \ \mathbf{P}(B)=p^3+(1-p)^3, \ \ \ \mathbf{P}(AB)=(1-p)^3. $$

Now it is easy to see that the equality

$$\mathbf{P}(A)\mathbf{P}(B)=\mathbf{P}(AB) \ \ \ \ \ \ \ \ \ \ \ \ (1) $$

holds only in the trivial cases $$p=0, \ p=1,$$ and also at $$p=1/2,$$ i.e. in the symmetric case.

The conclusion: this means that the "same" events $$A$$ and $$B$$ are independent only at $$p=0, \ p=1, \ p=1/2$$; for all other values of the probability of success they are not independent.
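As a quick check of this conclusion (not part of the original discussion; the helper name independence_gap is mine), one can compute the three probabilities directly from the event lists above, using exact rational arithmetic:

```python
from fractions import Fraction

def independence_gap(p):
    """P(A)P(B) - P(AB) for the three-trial example, with
    A = {FFF, FFS, FSF, SFF} ("no more than one success") and
    B = {FFF, SSS} ("all outcomes identical")."""
    q = 1 - p                      # probability of failure F
    p_A = q**3 + 3 * p * q**2      # zero successes, plus the three single-success outcomes
    p_B = p**3 + q**3              # SSS and FFF
    p_AB = q**3                    # A and B intersect only in FFF
    return p_A * p_B - p_AB

# A and B are independent exactly when the gap vanishes:
for p in (Fraction(0), Fraction(1, 2), Fraction(1), Fraction(1, 3)):
    print(p, independence_gap(p))
```

The gap is zero at p = 0, 1/2, 1 and nonzero elsewhere (for instance -4/81 at p = 1/3), matching the conclusion above.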

Examples of the delusion
Misunderstanding of this fact generates the delusion that the concept of independence of events in probability theory contains something more than the trivial relation (1). One need not go far for examples of this delusion. It is enough to visit the page Statistical independence and look at its preamble:


 * "In probability theory, to say that two events are independent intuitively means that knowing whether one of them occurs makes it neither more probable nor less probable that the other occurs. For example, the event of getting a "1" when a die is thrown and the event of getting a "1" the second time it is thrown are independent. Similarly, when we assert that two random variables are independent, we intuitively mean that knowing something about the value of one of them does not yield any information about the value of the other. For example, the number appearing on the upward face of a die the first time it is thrown and that appearing the second time are independent."

From the point of view of common sense this all seems correct, but not from the point of view of probability theory, where nothing is understood by independence of two events except relation (1). Probably this is the original of the Russian version. The original differs favourably from its Russian calque in one detail: the word intuitively. But this pleasant detail cannot save the position: in probability theory nothing more than relation (1) is put into the concept of independence of events, even at an intuitive level.

The same story holds for almost all 'other language' mirrors of this English page. The Russian page ru:Независимость (теория вероятностей) is a calque of the English original. Here is the preamble of the Russian page:


 * "In probability theory, random events are called independent if knowing whether one of them occurred carries no additional information about the other. Similarly, random variables are called independent if knowing the value taken by one of them gives no additional information about the possible values of the other." [translated from Russian]

Of the eight "mirrors" in different languages, only two have avoided this delusion: the Italians and the Poles, and only because they wrote very short articles containing nothing but relation (1), deftly and successfully avoiding any commentary.

Correction ?
Reading the article, I feel that the paragraphs for the continuous and discrete cases are mixed up in the section "Conditionally independent random variables". The formula with inequalities pertains to the continuous case, while the formula with equalities pertains to the discrete case. 134.157.16.38 08:03, 19 June 2007 (UTC) Francis Dalaudier.

independent vs. uncorrelated
We had better explain the relationship between these two concepts here. They are often confusing for me. Jackzhp (talk) 14:17, 17 April 2008 (UTC)

I remember I wrote something like the following on wiki, but I can not find them: Jackzhp (talk) 16:17, 16 April 2010 (UTC)
 * In some cases, no correlation implies independence, but usually no correlation does not imply independence. Independence, however, always implies no correlation.
 * the cases are binomial (Bernoulli) and Gaussian.


 * Yes, for two two-valued random variables no correlation implies independence. (For three such variables it implies pairwise independence). Also for jointly Gaussian. (But "jointly" is essential.) --Boris Tsirelson (talk) 18:43, 2 April 2011 (UTC)
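A tiny worked instance of the general direction (my own example, not from the thread): take X uniform on {-1, 0, 1} and Y = X², which are uncorrelated but clearly dependent. Note that X here is three-valued, so the two-valued exception mentioned above does not apply:

```python
# Joint law of (X, Y): X uniform on {-1, 0, 1}, Y = X**2, each pair has probability 1/3.
outcomes = [(-1, 1), (0, 0), (1, 1)]

ex  = sum(x for x, _ in outcomes) / 3       # E[X]  = 0
ey  = sum(y for _, y in outcomes) / 3       # E[Y]  = 2/3
exy = sum(x * y for x, y in outcomes) / 3   # E[XY] = 0
cov = exy - ex * ey                         # zero: X and Y are uncorrelated

# ...but not independent: P(X=0, Y=0) = 1/3, while P(X=0)P(Y=0) = 1/9.
joint_00 = 1 / 3
prod_00 = (1 / 3) * (1 / 3)
print(cov, joint_00, prod_00)
```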

External link
Recently added random.org seems to be not very pertinent to independence. The link is already present in the randomness article, where it really belongs. I think it may be deleted from independence (on reflection, it may even create a confusion what independence really is). ptrf (talk) 09:19, 27 February 2009 (UTC)

Independent &sigma;-algebras
Near the end of the Independent &sigma;-algebras section there was the following text:

"Using this definition, it is easy to show that if X and Y are random variables and Y is Pr-almost surely constant, then X and Y are independent, since the σ-algebra generated by an almost surely constant random variable is the trivial σ-algebra {&empty;, &Omega;}."

That proof is correct if Y is always constant but not in the stated case of Pr-almost surely constant. The difficulty is that the definition of σ-algebra generated by a random variable does not exclude probability-zero events (as it must since it is defined irrespective of the probability).

I did a quick fix, but my fix could use some polishing.

--67.87.221.55 (talk) 16:11, 2 April 2011 (UTC)


 * Thank you for the fix. --Boris Tsirelson (talk) 18:37, 2 April 2011 (UTC)

X ≤ a
The article speaks of the probability of event X ≤ a. But wouldn't this mean that P(X≤a) = a/∞, as there isn't stated a maximum value b? Also, this does not take care of the fact that what is less than a, might also be a negative number (since 0 is not denoted as a minimum value), stretching to infinity that way too. In that case, it seems to me that P(X≤a) = (∞+a)/∞. This is confusing, and impossible, as 1) it is a fraction with two infinities (not sure if that's the correct plural), 2) it's larger than one and 3) what is infinity plus a? I might be very wrong now, but this is what it looks like to me. --80.212.16.200 (talk) 15:24, 26 December 2011 (UTC)


 * Yes, you are very wrong now; :-)  see Probability distribution. Boris Tsirelson (talk) 16:23, 26 December 2011 (UTC)

Definition of random variable
Before changing I would like to get someone's opinion on this. I think that the sentence "Two random variables X and Y are independent iff the π-system generated by them are independent;" is lacking in an important way: it silently assumes that X and Y are defined on the same probability space. I suggest simply adding this ("Two random variables X and Y defined on the same probability space are independent ..."). Not only would this make mathematical sense, it could also help (in a distant way) with understanding that one cannot speak about the dependence or independence of, say, the number of pips on each of two rolled dice without constructing a joint probability space. — Preceding unsigned comment added by 193.219.42.50 (talk) 15:41, 4 April 2013 (UTC)


 * First, yes, they are supposed to be defined on the same probability space (which is usually assumed by default in probability theory). However, why start emphasizing it only in that place? The same already holds for two events.
 * Second, the phrase you quote is quite bad for another reason. There is a notion "σ-algebra generated by a random variable", but "π-system generated by a random variable" is a bad neologism; the system of events {X ≤ a} is just one out of many π-systems.
 * Boris Tsirelson (talk) 18:39, 4 April 2013 (UTC)


 * Good point. Should I take your answer as a vote for adding this assumption for events as well as random variables, or do you think that neither is necessary? I also agree with your remark on σ-algebra, we can change it unless someone wants to defend that choice. — Preceding unsigned comment added by 193.219.42.50 (talk) 08:37, 5 April 2013 (UTC)


 * Yes, early in the article note that everything happens on the same prob space (unless indicated otherwise), I think so. Boris Tsirelson (talk) 09:15, 5 April 2013 (UTC)

Crazy terminology
"independent (alternatively statistically independent, stochastically independent, marginally independent or absolutely independent" — really? What could all that mean? What for? Does it really happen "In probability theory", or maybe in other disciplines that use probabilistic notions in their different frames? "Marginally independent" in contrast to what? Can they be "non-marginally independent"? "Non-absolutely independent"? Boris Tsirelson (talk) 11:46, 9 September 2013 (UTC)
 * I agree. Stochastically independent or statistically independent are synonyms not different categories, so "statistical" here may be used in some domains to emphasise the statistical nature. But "absolutely" and "marginally" are uncommon and should not be on top of the page, perhaps somewhere else. Limit-theorem (talk) 00:31, 10 September 2013 (UTC)
 * Actually marginal independence has a different (related) meaning, see here. The redundant word "absolutely" is sometimes added to emphasise the absence of the word "conditionally"; I don't know if there is another meaning. I agree these shouldn't be in the lead, though mentioning them somewhere later would be ok. McKay (talk) 05:33, 10 September 2013 (UTC)
 * To emphasise the absence of the word "conditionally" I'd say "(unconditionally) independent". Boris Tsirelson (talk) 05:42, 10 September 2013 (UTC)

Maybe a section dedicated to implications of independence?
Independent random variables and events have several implications. I was thinking this article could benefit from a few basic implications of independence. Obviously there are more than the following.

One example: if events are mutually independent, then their complements are mutually independent. By the definition of mutual independence, any pair is also independent. Consider
 * $$P(\overline{A_1} \cap \overline{A_2}) = P(\overline{A_1 \cup A_2}) $$
 * $$P(\overline{A_1} \cap \overline{A_2}) = 1 - P(A_1 \cup A_2).$$

Using the addition rule, $$P(A_1 \cup A_2) = P(A_1) + P(A_2) - P(A_1 \cap A_2)$$
 * $$P(\overline{A_1} \cap \overline{A_2}) = 1 - P(A_1) - P(A_2) + P(A_1 \cap A_2).$$

Using the property of independence
 * $$P(\overline{A_1} \cap \overline{A_2}) = 1 - P(A_1) - P(A_2) + P(A_1) P(A_2).$$

Substitute $$P(A_i) = 1 - P(\overline{A_i})$$ on the right side
 * $$P(\overline{A_1} \cap \overline{A_2}) = 1 - (1 - P(\overline{A_1})) - (1-P(\overline{A_2})) + (1-P(\overline{A_1})) (1-P(\overline{A_2})).$$

Simplify
 * $$P(\overline{A_1} \cap \overline{A_2}) = P(\overline{A_1}) P(\overline{A_2}).$$

We will use this recursively to build the next solution, by replacing $$A_2$$ with $$A_2 \cup A_3$$ and using the properties of mutual independence. The original proof remains valid because of mutual independence.

Replace the left side.
 * $$P(\overline{A_1} \cap \overline{A_2 \cup A_3}) = P(\overline{A_1} \cap \overline{A_2} \cap \overline{A_3}).$$

Doing the same to the right side,
 * $$P(\overline{A_1} \cap \overline{A_2 \cup A_3}) = P(\overline{A_1}) P(\overline{A_2} \cap \overline{A_3}).$$

Using the result from the first iteration,
 * $$P(\overline{A_1} \cap \overline{A_2 \cup A_3}) = P(\overline{A_1}) P(\overline{A_2}) P(\overline{A_3}).$$

Combining the new left side with the new right side,
 * $$P(\overline{A_1} \cap \overline{A_2} \cap \overline{A_3}) = P(\overline{A_1}) P(\overline{A_2}) P(\overline{A_3}).$$

The above can easily be extended to an inductive proof.
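As a sanity check of the complement property (my own sketch, with arbitrarily chosen probabilities), one can build the product space of three independent events explicitly and verify the product rule for every sub-family of the complements:

```python
from itertools import product, combinations
from math import prod, isclose

p = (0.2, 0.5, 0.7)            # P(A_i); the A_i are mutually independent by construction
atoms = list(product((0, 1), repeat=3))   # bit b_i = 1 iff A_i occurs

def atom_prob(bits):
    return prod(p[i] if b else 1 - p[i] for i, b in enumerate(bits))

def prob_complements(indices):
    """P of the intersection of the complements A_i' for i in indices."""
    return sum(atom_prob(bits) for bits in atoms
               if all(bits[i] == 0 for i in indices))

# The complements satisfy the product rule for every pair and for the triple:
ok = all(isclose(prob_complements(s), prod(1 - p[i] for i in s))
         for r in (2, 3) for s in combinations(range(3), r))
print(ok)   # True
```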

As a corollary of the above property we can derive an "OR" instead of "AND" for mutually independent events. Start with
 * $$P\left(\bigcap_{i=1}^n {A_i}\right)=\prod_{i=1}^n P({A_i}).$$

Replace all $$A_i$$ with their complements (the complements are mutually independent by the previous result)
 * $$P\left(\bigcap_{i=1}^n \overline{A_i}\right)=\prod_{i=1}^n P(\overline{A_i}).$$

Take the complements
 * $$1 - P\left(\overline{\bigcap_{i=1}^n \overline{A_i}}\right)=\prod_{i=1}^n \left(1 - P(A_i)\right).$$
 * $$1 - P\left(\bigcup_{i=1}^n A_i\right)=\prod_{i=1}^n \left(1 - P(A_i)\right).$$
 * $$P\left(\bigcup_{i=1}^n A_i\right)=1 - \prod_{i=1}^n \left(1 - P(A_i)\right).$$
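A quick numeric check of this "OR" formula (my own sketch; the marginal probabilities are arbitrary):

```python
from itertools import product
from math import prod, isclose

p = (0.3, 0.6, 0.9)   # P(A_i), mutually independent by construction

# P(at least one A_i occurs), computed by enumerating the product space...
union = sum(prod(p[i] if b else 1 - p[i] for i, b in enumerate(bits))
            for bits in product((0, 1), repeat=3) if any(bits))

# ...and by the closed form derived above.
closed = 1 - prod(1 - pi for pi in p)
print(isclose(union, closed))   # True
```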

Even though the properties I derive here are basic, I think it would be best to find an academic source to cite. A source would likely list many other properties. Otherwise this runs the risk of being declared original research. Mouse7mouse9 22:39, 1 May 2015 (UTC) — Preceding unsigned comment added by Mouse7mouse9 (talk • contribs)


 * Yes... All that can be compressed to the joint distribution of (two or more) indicators (of the independent events). The indicators are independent random variables with two values 0, 1 ("binary"); their joint distribution is the product of the marginals; each marginal is just (p, 1-p) (but we have two or more numbers p). For n events we have 2^n points, with their (product) probabilities, and 2^(2^n) events; some of these are more notable than others, of course. Boris Tsirelson (talk) 09:10, 2 May 2015 (UTC)

Temporal independence
Is there a term to describe that some event's probability is independent of time? As an example, the probability of a radioactive atom decaying does not depend on how long it has been waiting. In the case of discrete events, such as dice throws or coin tosses, you say that the events are independent. Seems to me it needs a different term for the continuous case. Gah4 (talk) 21:42, 12 October 2016 (UTC)


 * "probability independent of time" may refer to "stationary process" or "memorylessness". Boris Tsirelson (talk) 06:46, 13 October 2016 (UTC)


 * That is interesting. In quantum mechanics there is the stationary state, which is similar to your stationary process, and which I could have thought of. But it is not a stationary state, because it makes one transition and then no more. (In theory it is reversible, but the probability is pretty much zero.) But I think your memorylessness is what I was asking about: some event happens with probability independent of how long you have waited. Thanks! Gah4 (talk) 07:59, 13 October 2016 (UTC)
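For what it's worth, the memorylessness of the exponential lifetime (the standard model for radioactive decay) can be checked in a few lines; the rate and time values here are arbitrary:

```python
from math import exp, isclose

rate = 0.5                       # decay constant lambda (arbitrary)
surv = lambda t: exp(-rate * t)  # survival function P(T > t) of an exponential lifetime

# Memorylessness: having already survived time s changes nothing,
# P(T > s + t | T > s) = P(T > t).
s, t = 2.0, 3.0
conditional = surv(s + t) / surv(s)
print(isclose(conditional, surv(t)))   # True
```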

Pairwise vs mutual example
The XOR of two unbiased independent random bits is a third unbiased random bit, independent of each of the other two but not of the pair jointly. I'm not saying the (6,9,9,6,4,1,1,4) example is wrong, but it doesn't seem to add anything. --God made the integers (talk) 20:18, 14 January 2017 (UTC)
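The XOR construction can be spelled out exactly (a sketch of the example as described in the comment, not of the article's (6,9,9,6,4,1,1,4) table):

```python
from itertools import product
from fractions import Fraction

# Two fair independent bits and their XOR; the four (x, y) pairs are equally likely.
outcomes = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]

def prob(pred):
    """Probability of an event given as a predicate on (x, y, x^y)."""
    return Fraction(sum(1 for o in outcomes if pred(o)), len(outcomes))

# Pairwise independent: each pair of bits satisfies the product rule...
for i, j in ((0, 1), (0, 2), (1, 2)):
    assert prob(lambda o: o[i] == 1 and o[j] == 1) \
        == prob(lambda o: o[i] == 1) * prob(lambda o: o[j] == 1)

# ...but not mutually independent: all three bits can never be 1 together,
# so P(all three = 1) = 0, not 1/8.
triple = prob(lambda o: o[0] == 1 and o[1] == 1 and o[2] == 1)
print(triple, Fraction(1, 2) ** 3)
```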


 * I agree. Boris Tsirelson (talk) 21:50, 14 January 2017 (UTC)

Terminological revolution?
Sect. 3.3 "Pairwise and mutual independence":
 * "Consider the two probability spaces shown. (..) The first space is pairwise independent because (...)"

I never saw such terminology before. Events, random variables, sigma-algebras can be independent (or not); a probability space can be nonatomic (or not), standard (or not), etc.; it can be a product space. But can it be independent? Did I miss a recent terminological revolution? Boris Tsirelson (talk) 17:41, 13 February 2017 (UTC)

Independent stochastic process?
About Sect. 1.3 "For stochastic processes": is such terminology ("independent stochastic process") standard? In discrete time, I know "process with independent values". In continuous time, I know "process with independent increments" and "white noise". Also, I know that two (or more) stochastic processes may be independent (mutually). But in continuous time, "process with independent values" is a pathological case (it leads to a non-standard probability space), of no practical use (to the best of my knowledge). Boris Tsirelson (talk) 14:19, 20 December 2018 (UTC)

"multiple statements can be made" removed
I have reverted a fresh large addition of text that included some trivial consequences (or traits) of events A and B being independent. While all appear to be true, unless they are written based on some WP:RS on the subject of the article, they are WP:UNDUE here. Since there is a potentially infinite number of true statements of this sort, some source for the added list is absolutely necessary. Dimawik (talk) 05:34, 17 September 2023 (UTC)

History of the notion independence in probability
The article starts with the words "Independence is a fundamental notion in probability". Yet the history of this notion is ignored. It is, in fact, ignored even in presentations of the history of probability. Of course, probabilists had multiplied the probabilities of independent events for centuries, but the formulation remained vague. Two events were called independent when the occurrence of one of them does not affect the probability of the other. The meaning of such a formulation was not clear.

The rigorous definition above, which is now standard in all probability books, was first given in 1908 by Georg Bohlmann in the article "Die Grundbegriffe der Wahrscheinlichkeitsrechnung in ihrer Anwendung auf die Lebensversicherung", Atti del IV. Congr. Int. dei Matem., Rom, Vol. III (1908), 244-278. This article also contains the first example showing that pairwise independence does not imply mutual independence. The proceedings of a World Congress of Mathematicians cannot be called an obscure journal, but apparently this article was forgotten. The rigorous definition of independence became standard when it reappeared as part of the famous axiomatization of probability by Kolmogoroff in 1933. Kolmogoroff credited the definition to S. Bernstein, quoting a publication in Russian which appeared in 1927.

More information about Georg Bohlmann can be found in the online article with the address file:///C:/Users/Ulrich/Documents/Bohlmann%20paper%20in%20JEHPS.pdf. As Bohlmann had ideas about the axiomatization of probability several months before Hilbert gave his talk in Paris, and Hilbert mentioned Bohlmann in a footnote of the published talk, this invites speculation as to whether there was a connection. At that time, Bohlmann was a Privatdozent in Goettingen. Golf-ulk (talk) 14:52, 7 October 2023 (UTC)

I had in mind that somebody else, perhaps the author of this article, would check the information which I provided and then add some lines about the history to the article. I am a Wikipedia beginner and English is not my mother tongue. Do I have to insert the information about the history myself? --Golf-ulk (talk) 15:08, 27 October 2023 (UTC)

Now I have added a section on the history of the concept of "stochastic independence". I have not been able to edit the references. Therefore I give the references here, and hope somebody more experienced can complete the job.

References:

12) Cited according to: Grinstead and Snell's Introduction to Probability. In: The CHANCE Project. Version of July 4, 2006. Website (archived 27 July 2011 in the Internet Archive)

13) Kolmogorov, Andrey (1933). Grundbegriffe der Wahrscheinlichkeitsrechnung (in German). Berlin: Julius Springer. Translation: Kolmogorov, Andrey (1956). Foundations of the Theory of Probability (2nd ed.). New York: Chelsea. ISBN 978-0-8284-0023-7. Archived from the original on 14 September 2018. Retrieved 17 February 2016.

14) S.N. Bernstein, Probability Theory (Russian), Moscow, 1927 (4 editions, latest 1946)

15) G. Bohlmann: Lebensversicherungsmathematik, Encyklopädie der mathematischen Wissenschaften, Bd. I, Teil 2, Artikel I D 4b (1901), 852-917

16) G. Bohlmann: Die Grundbegriffe der Wahrscheinlichkeitsrechnung in ihrer Anwendung auf die Lebensversicherung, Atti del IV. Congr. Int. dei Matem., Rom, Bd. III (1908), 244-278.

17) Ulrich Krengel: On the contributions of Georg Bohlmann to probability theory (PDF; 6.4 MB), Electronic Journal for History of Probability and Statistics, 2011, Internet file:///C:/Users/Ulrich/Documents/Bohlmann%20paper%20in%20JEHPS.pdf

Thank you for your help.

signed by golf-ulk - T

Independence of infinitely many events?
I would add a short paragraph about the case of infinite families of events. GrandiniMath (talk) 21:04, 28 January 2024 (UTC)