Independence

In probability theory, two events are independent, statistically independent, or stochastically independent[1] if the occurrence of one does not affect the probability of the other.

The concept of independence extends to collections of more than two events: the events are pairwise independent if every pair of them is independent, and mutually independent if every event is independent of every combination of the other events.

Two events

Two events [math]A[/math] and [math]B[/math] are independent (often written as [math]A \perp B[/math] or [math]A \perp\!\!\!\perp B[/math]) if their joint probability equals the product of their probabilities:

[[math]]\mathrm{P}(A \cap B) = \mathrm{P}(A)\mathrm{P}(B).[[/math]]

Why this defines independence becomes clear on rewriting it with conditional probabilities (assuming [math]\mathrm{P}(A)[/math] and [math]\mathrm{P}(B)[/math] are nonzero):

[[math]] \mathrm{P}(A \cap B) = \mathrm{P}(A)\mathrm{P}(B) \Leftrightarrow \mathrm{P}(A\mid B) = \frac{\mathrm{P}(A \cap B)}{\mathrm{P}(B)} = \frac{\mathrm{P}(A)\mathrm{P}(B)}{\mathrm{P}(B)} = \mathrm{P}(A) [[/math]]

and similarly

[[math]]\mathrm{P}(A \cap B) = \mathrm{P}(A)\mathrm{P}(B) \Leftrightarrow \mathrm{P}(B\mid A) = \mathrm{P}(B).[[/math]]

Thus, the occurrence of [math]B[/math] does not affect the probability of [math]A[/math], and vice versa. Although the derived expressions may seem more intuitive, they are not the preferred definition, as the conditional probabilities are undefined if [math]\mathrm{P}(A)[/math] or [math]\mathrm{P}(B)[/math] is 0. Furthermore, the preferred definition makes clear by symmetry that when [math]A[/math] is independent of [math]B[/math], [math]B[/math] is also independent of [math]A[/math].
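
As a concrete check of the definition, the following sketch enumerates a single roll of a fair six-sided die and verifies the multiplication rule for the events "the roll is even" and "the roll is at most 2". The sample space, the event choices, and the helper prob are illustrative assumptions of this example, not standard notation or a library API:

    from fractions import Fraction

    # Sample space: one roll of a fair six-sided die, uniform probability.
    omega = {1, 2, 3, 4, 5, 6}

    def prob(event):
        """Probability of an event (a subset of omega) under the uniform measure."""
        return Fraction(len(event), len(omega))

    A = {2, 4, 6}   # "the roll is even",      P(A) = 1/2
    B = {1, 2}      # "the roll is at most 2", P(B) = 1/3

    # Independence: the joint probability equals the product of the probabilities.
    assert prob(A & B) == prob(A) * prob(B)   # 1/6 == 1/2 * 1/3

Using Fraction keeps the arithmetic exact, so the equality test is reliable in a way a floating-point comparison would not be.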

More than two events

A finite set of events [math]\{A_i\}[/math] is pairwise independent if and only if every pair of events is independent,[2] that is, if and only if for all distinct indices [math]m[/math] and [math]k[/math],

[[math]]\mathrm{P}(A_m \cap A_k) = \mathrm{P}(A_m)\mathrm{P}(A_k).[[/math]]

A finite set of events is mutually independent if and only if every event is independent of any intersection of the other events,[2] that is, if and only if for every [math]n[/math]-element subset [math]\{A_1, \ldots, A_n\}[/math] of the events,

[[math]]\mathrm{P}\left(\bigcap_{i=1}^n A_i\right)=\prod_{i=1}^n \mathrm{P}(A_i).[[/math]]

This is called the multiplication rule for independent events.

Note that this is not a single condition involving only the product of the probabilities of all the single events; the multiplication rule must hold for every subset of the events. For more than two events, a mutually independent set of events is (by definition) pairwise independent, but the converse is not necessarily true, as the sketch below illustrates.
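
To make the distinction concrete, the following sketch (a minimal illustration; the two-coin sample space and the prob helper are assumptions of this example) builds three events from two fair coin flips that are pairwise independent yet not mutually independent: heads on the first flip, heads on the second flip, and the two flips agreeing.

    from fractions import Fraction
    from itertools import combinations, product

    # Sample space: two fair coin flips; each of the 4 outcomes has probability 1/4.
    omega = set(product("HT", repeat=2))

    def prob(event):
        return Fraction(len(event), len(omega))

    A = {w for w in omega if w[0] == "H"}    # heads on the first flip
    B = {w for w in omega if w[1] == "H"}    # heads on the second flip
    C = {w for w in omega if w[0] == w[1]}   # the two flips agree

    # Pairwise independence: the multiplication rule holds for every pair.
    for X, Y in combinations([A, B, C], 2):
        assert prob(X & Y) == prob(X) * prob(Y)   # each pair: 1/4 == 1/2 * 1/2

    # Mutual independence fails: the triple intersection breaks the rule.
    assert prob(A & B & C) != prob(A) * prob(B) * prob(C)   # 1/4 != 1/8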

Examples

Rolling a die

The event of getting a 6 the first time a die is rolled and the event of getting a 6 the second time are independent. By contrast, the event of getting a 6 on the first roll and the event that the sum of the first and second rolls is 8 are not independent: a first roll of 6 leaves only one way (a second roll of 2) to reach a sum of 8, so knowing the first roll changes the probability of the sum.
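
A quick enumeration of all 36 equally likely outcome pairs confirms both claims. As before, the event names and the prob helper are illustrative choices for this sketch:

    from fractions import Fraction
    from itertools import product

    # Sample space: two rolls of a fair die; each of the 36 pairs has probability 1/36.
    omega = set(product(range(1, 7), repeat=2))

    def prob(event):
        return Fraction(len(event), len(omega))

    first_six  = {w for w in omega if w[0] == 6}
    second_six = {w for w in omega if w[1] == 6}
    sum_eight  = {w for w in omega if w[0] + w[1] == 8}

    # Independent: 1/36 == 1/6 * 1/6.
    assert prob(first_six & second_six) == prob(first_six) * prob(second_six)

    # Not independent: 1/36 != 1/6 * 5/36.
    assert prob(first_six & sum_eight) != prob(first_six) * prob(sum_eight)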

Drawing cards

If two cards are drawn with replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are independent. By contrast, if two cards are drawn without replacement, the same two events are not independent, because removing a red card changes the proportion of red cards left in the deck.
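
The same kind of enumeration works here. In the sketch below, the check helper is an illustrative construction, with cards labeled 0 through 51 and the first 26 standing in for the red cards:

    from fractions import Fraction
    from itertools import product

    RED = set(range(26))   # cards 0-25 are red, 26-51 are black

    def check(outcomes):
        """Does the multiplication rule hold for 'red on draw 1' and 'red on draw 2'?"""
        total = len(outcomes)
        def p(event):
            return Fraction(sum(1 for w in outcomes if event(w)), total)
        red1 = lambda w: w[0] in RED
        red2 = lambda w: w[1] in RED
        return p(lambda w: red1(w) and red2(w)) == p(red1) * p(red2)

    pairs = list(product(range(52), repeat=2))

    # With replacement (repeated cards allowed): independent.
    assert check(pairs)

    # Without replacement (the two cards must differ): not independent.
    assert not check([w for w in pairs if w[0] != w[1]])

Without replacement, the joint probability of two red cards is (26/52)(25/51) = 25/102, which differs from the product (1/2)(1/2) = 1/4.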

Notes

  1. Russell, Stuart; Norvig, Peter (2002). Artificial Intelligence: A Modern Approach. Prentice Hall. p. 478. ISBN 0-13-790395-2.
  2. Feller, W. (1971). "Stochastic Independence". An Introduction to Probability Theory and Its Applications. Wiley.
