Conditional dependence

In probability theory, conditional dependence is a relationship between two or more events that are dependent given the occurrence of a third event. For example, suppose $$A$$ and $$B$$ are two events that each increase the probability of a third event $$C$$ and do not directly affect each other. Then initially (when it has not been observed whether or not the event $$C$$ occurs) $$\operatorname{P}(A \mid B) = \operatorname{P}(A) \quad \text{ and } \quad \operatorname{P}(B \mid A) = \operatorname{P}(B)$$ ($$A \text{ and } B$$ are independent).

But suppose that now $$C$$ is observed to occur. If event $$B$$ also occurs, then the probability of event $$A$$ decreases, because $$B$$ already accounts for the occurrence of $$C$$ and $$A$$ is less needed as an explanation (similarly, the occurrence of $$A$$ decreases the probability of $$B$$). Hence the two events $$A$$ and $$B$$ are now conditionally negatively dependent on each other: given $$C,$$ the probability that each occurs is decreased by the occurrence of the other. We have $$\operatorname{P}(A \mid C \text{ and } B) < \operatorname{P}(A \mid C).$$
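
As a minimal numerical sketch of this effect (the probabilities below are chosen purely for illustration and are not taken from the article), suppose $$A$$ and $$B$$ are independent with $$\operatorname{P}(A) = 0.3$$ and $$\operatorname{P}(B) = 0.4,$$ and $$C$$ occurs exactly when at least one of them occurs. Enumerating the joint distribution shows that, once $$C$$ is known, observing $$B$$ lowers the probability of $$A$$:

```python
from itertools import product

# Illustrative numbers only (not from the article): A and B are independent,
# and C occurs exactly when at least one of A, B occurs.
p_a, p_b = 0.3, 0.4

# Joint distribution over (A, B, C); C is determined by A or B.
joint = {}
for a, b in product([0, 1], repeat=2):
    c = int(a or b)
    joint[(a, b, c)] = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)

def prob(pred):
    """Total probability of the outcomes whose (a, b, c) values satisfy pred."""
    return sum(p for (a, b, c), p in joint.items() if pred(a, b, c))

print(prob(lambda a, b, c: a and b) / prob(lambda a, b, c: b))
# ~0.30: P(A | B) equals P(A), so A and B are unconditionally independent
print(prob(lambda a, b, c: a and c) / prob(lambda a, b, c: c))
# ~0.52: P(A | C); observing C raises the probability of A
print(prob(lambda a, b, c: a and b and c) / prob(lambda a, b, c: b and c))
# ~0.30: P(A | C and B) < P(A | C); B already accounts for C
```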

Conditional dependence of $$A$$ and $$B$$ given $$C$$ is the logical negation of conditional independence $$((A \perp\!\!\!\perp B) \mid C)$$. In conditional independence, two events (which may or may not be independent unconditionally) are independent of each other given the occurrence of a third event.
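
In standard notation, and assuming $$\operatorname{P}(C) > 0,$$ conditional independence of $$A$$ and $$B$$ given $$C$$ means $$\operatorname{P}(A \text{ and } B \mid C) = \operatorname{P}(A \mid C) \operatorname{P}(B \mid C),$$ and conditional dependence of $$A$$ and $$B$$ given $$C$$ holds exactly when this equality fails.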

Example
In essence, the probability assigned to an event depends on the information available about it. For example, let event $$A$$ be 'I have a new phone'; event $$B$$ be 'I have a new watch'; and event $$C$$ be 'I am happy'; and suppose that having either a new phone or a new watch increases the probability of my being happy. Let us assume that the event $$C$$ has occurred, meaning 'I am happy'. Now if another person sees my new watch, they will reason that my new watch makes my happiness more likely, so there is less need to attribute my happiness to a new phone.

To make the example more numerically specific, suppose that there are four possible states $$\Omega = \left\{ s_1, s_2, s_3, s_4 \right\},$$ given in the last four columns of the following table, in which the occurrence of event $$A$$ is signified by a $$1$$ in row $$A$$ and its non-occurrence is signified by a $$0,$$ and likewise for $$B$$ and $$C.$$ That is, $$A = \left\{ s_2, s_4 \right\}, B = \left\{ s_3, s_4 \right\},$$ and $$C = \left\{ s_2, s_3, s_4 \right\}.$$ The probability of $$s_i$$ is $$1/4$$ for every $$i.$$

| Event | $$s_1$$ | $$s_2$$ | $$s_3$$ | $$s_4$$ |
|---|---|---|---|---|
| Probability of state | $$1/4$$ | $$1/4$$ | $$1/4$$ | $$1/4$$ |
| $$A$$ | 0 | 1 | 0 | 1 |
| $$B$$ | 0 | 0 | 1 | 1 |
| $$C$$ | 0 | 1 | 1 | 1 |

and so

$$\operatorname{P}(A) = \operatorname{P}(B) = \tfrac{1}{4} + \tfrac{1}{4} = \tfrac{1}{2} \quad \text{ and } \quad \operatorname{P}(C) = \tfrac{1}{4} + \tfrac{1}{4} + \tfrac{1}{4} = \tfrac{3}{4}.$$

In this example, $$C$$ occurs if and only if at least one of $$A, B$$ occurs. Unconditionally (that is, without reference to $$C$$), $$A$$ and $$B$$ are independent of each other because $$\operatorname{P}(A)$$ (the sum of the probabilities associated with a $$1$$ in row $$A$$) is $$\tfrac{1}{2},$$ while $$\operatorname{P}(A\mid B) = \operatorname{P}(A \text{ and } B) / \operatorname{P}(B) = \tfrac{1/4}{1/2} = \tfrac{1}{2} = \operatorname{P}(A).$$ But conditional on $$C$$ having occurred (the last three columns in the table), we have $$\operatorname{P}(A \mid C) = \operatorname{P}(A \text{ and } C) / \operatorname{P}(C) = \tfrac{1/2}{3/4} = \tfrac{2}{3}$$ while $$\operatorname{P}(A \mid C \text{ and } B) = \operatorname{P}(A \text{ and } C \text{ and } B) / \operatorname{P}(C \text{ and } B) = \tfrac{1/4}{1/2} = \tfrac{1}{2} < \operatorname{P}(A \mid C).$$ Since in the presence of $$C$$ the probability of $$A$$ is affected by the presence or absence of $$B,$$ the events $$A$$ and $$B$$ are mutually dependent conditional on $$C.$$
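
As a quick sketch that recomputes the arithmetic above, the following enumerates the four equally likely states from the table (the variable and function names are only illustrative):

```python
from fractions import Fraction

# The four equally likely states of the table, as (A, B, C) indicator triples.
states = {
    "s1": (0, 0, 0),
    "s2": (1, 0, 1),
    "s3": (0, 1, 1),
    "s4": (1, 1, 1),
}
p_state = Fraction(1, 4)  # each state has probability 1/4

def prob(pred):
    """Total probability of the states whose (a, b, c) indicators satisfy pred."""
    return sum((p_state for (a, b, c) in states.values() if pred(a, b, c)), Fraction(0))

print(prob(lambda a, b, c: a))                                              # P(A) = 1/2
print(prob(lambda a, b, c: a and b) / prob(lambda a, b, c: b))              # P(A | B) = 1/2
print(prob(lambda a, b, c: a and c) / prob(lambda a, b, c: c))              # P(A | C) = 2/3
print(prob(lambda a, b, c: a and b and c) / prob(lambda a, b, c: b and c))  # P(A | C and B) = 1/2
```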