# Central Limit Theorem

In probability theory, the central limit theorem (CLT) establishes that, in many situations, when independent random variables are summed up, their properly normalized sum tends toward a normal distribution (informally a bell curve) even if the original variables themselves are not normally distributed. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions. This theorem has seen many changes during the formal development of probability theory. Previous versions of the theorem date back to 1811, but in its modern general form, this fundamental result in probability theory was precisely stated as late as 1920, thereby serving as a bridge between classical and modern probability theory.

The central limit theorem has several variants. In its common form, the random variables must be identically distributed. In variants, convergence of the mean to the normal distribution also occurs for non-identical distributions or for non-independent observations, if they comply with certain conditions.

The earliest version of this theorem, that the normal distribution may be used as an approximation to the binomial distribution, is the de Moivre–Laplace theorem.

## Classical CLT

Let $\{X_1, \ldots, X_n\}$ be a random sample of size $n$ — that is, a sequence of independent and identically distributed (i.i.d.) random variables drawn from a distribution of expected value given by $\mu$ and finite variance given by $\sigma^2$. Suppose we are interested in the sample average

$$\bar{X}_n \equiv \frac{X_1 + \cdots + X_n}{n}$$

of these random variables. By the law of large numbers, the sample averages converge almost surely (and therefore also converge in probability) to the expected value $\mu$ as $n\to\infty$. The classical central limit theorem describes the size and the distributional form of the stochastic fluctuations around the deterministic number $\mu$ during this convergence. More precisely, it states that as $n$ gets larger, the distribution of the difference between the sample average $\bar{X}_n$ and its limit $\mu$, when multiplied by the factor $\sqrt{n}$ (that is, $\sqrt{n}(\bar{X}_n - \mu)$), approximates the normal distribution with mean $0$ and variance $\sigma^2$. For large enough $n$, the distribution of $\bar{X}_n$ is close to the normal distribution with mean $\mu$ and variance $\sigma^2/n$. The usefulness of the theorem is that the distribution of $\sqrt{n}(\bar{X}_n - \mu)$ approaches normality regardless of the shape of the distribution of the individual $X_i$. Formally, the theorem can be stated as follows:

Lindeberg–Lévy CLT. Suppose $\{X_1, \ldots, X_n\}$ is a sequence of i.i.d. random variables with $\mathbb{E}[X_i] = \mu$ and $\operatorname{Var}[X_i] = \sigma^2 < \infty$. Then as $n$ approaches infinity, the random variables $\sqrt{n}(\bar{X}_n - \mu)$ converge in distribution to a normal $\mathcal{N}(0, \sigma^2)$:

$$\sqrt{n}\left(\bar{X}_n - \mu\right)\ \xrightarrow{d}\ \mathcal{N}\left(0, \sigma^2\right).$$
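
As a numerical illustration (a minimal Python sketch using NumPy; the Exponential(1) distribution, seed, replication count, and sample sizes are illustrative choices, not part of the theorem), one can simulate the statistic $\sqrt{n}(\bar{X}_n - \mu)$ for a strongly skewed distribution and check that its moments approach those of $\mathcal{N}(0, \sigma^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential(1) variables: mu = 1, sigma^2 = 1, yet the density is
# strongly skewed -- nothing like a bell curve.
mu, sigma = 1.0, 1.0

for n in (2, 10, 500):
    # 20,000 replications of the statistic sqrt(n) * (X_bar_n - mu)
    samples = rng.exponential(scale=1.0, size=(20_000, n))
    z = np.sqrt(n) * (samples.mean(axis=1) - mu)
    # As n grows: mean -> 0, standard deviation -> sigma, skewness -> 0
    skew = np.mean(((z - z.mean()) / z.std()) ** 3)
    print(f"n={n:4d}  mean={z.mean():+.3f}  std={z.std():.3f}  skew={skew:+.3f}")
```

Even though a single Exponential(1) draw is far from bell-shaped, the skewness of the standardized statistic visibly decays toward zero as $n$ grows.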

In the case $\sigma > 0$, convergence in distribution means that the cumulative distribution functions of $\sqrt{n}(\bar{X}_n - \mu)$ converge pointwise to the cdf of the $\mathcal{N}(0, \sigma^2)$ distribution: for every real number $z$,

$$\lim_{n\to\infty} \operatorname{P}\left[\sqrt{n}(\bar{X}_n-\mu) \le z\right] = \lim_{n\to\infty} \operatorname{P}\left[\frac{\sqrt{n}(\bar{X}_n-\mu)}{\sigma} \le \frac{z}{\sigma}\right] = \Phi\left(\frac{z}{\sigma}\right),$$

where $\Phi(z)$ is the standard normal cdf evaluated at $z$. The convergence is uniform in $z$ in the sense that

$$\lim_{n\to\infty}\;\sup_{z\in\mathbb{R}}\;\left|\operatorname{P}\left[\sqrt{n}(\bar{X}_n-\mu) \le z\right] - \Phi\left(\frac{z}{\sigma}\right)\right| = 0,$$

where $\sup$ denotes the least upper bound (or supremum) of the set.
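
The supremum above is precisely the Kolmogorov–Smirnov statistic, so the uniform convergence can be probed empirically. The following sketch (assuming NumPy and SciPy are available; Exponential(1) is again an arbitrary non-normal choice) shows the sup-distance shrinking as $n$ grows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma = 1.0, 1.0  # Exponential(1): mean 1, variance 1

for n in (5, 50, 500):
    samples = rng.exponential(scale=1.0, size=(20_000, n))
    z = np.sqrt(n) * (samples.mean(axis=1) - mu)
    # Kolmogorov-Smirnov statistic: sup_z |empirical cdf of z - Phi(z/sigma)|
    d = stats.kstest(z, "norm", args=(0.0, sigma)).statistic
    print(f"n={n:4d}  sup-distance = {d:.4f}")
```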

## Convergence to the limit

The central limit theorem applies in particular to sums of independent and identically distributed discrete random variables. A sum of discrete random variables is still a discrete random variable, so we are confronted with a sequence of discrete random variables whose cumulative probability distribution function converges towards a cumulative probability distribution function corresponding to a continuous variable (namely that of the normal distribution). This means that if we build a histogram of the realizations of the sum of $n$ independent identical discrete variables, the curve that joins the centers of the upper faces of the rectangles forming the histogram converges toward a Gaussian curve as $n$ approaches infinity; this relation is known as the de Moivre–Laplace theorem. The binomial distribution article details such an application of the central limit theorem in the simple case of a discrete variable taking only two possible values.
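
As a concrete check of the de Moivre–Laplace statement (a sketch assuming SciPy; the parameters $n = 100$ and $p = 1/2$ are illustrative choices), the heights of the binomial histogram bars can be compared directly to the Gaussian density with matching mean and variance:

```python
import numpy as np
from scipy import stats

n, p = 100, 0.5                     # sum of 100 Bernoulli(1/2) variables
mean, var = n * p, n * p * (1 - p)  # binomial mean and variance

k = np.arange(35, 66)               # integer outcomes near the mean
pmf = stats.binom.pmf(k, n, p)      # heights of the histogram bars
gauss = stats.norm.pdf(k, loc=mean, scale=np.sqrt(var))  # Gaussian curve

# The curve through the bar tops nearly coincides with the Gaussian:
print(f"max |pmf - gauss| = {np.abs(pmf - gauss).max():.5f}")
```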

## Density functions

The density of the sum of two or more independent variables is the convolution of their densities (if these densities exist). Thus the central limit theorem can be interpreted as a statement about the properties of density functions under convolution: the convolution of a number of density functions tends to the normal density as the number of density functions increases without bound. Theorems of this type are often called local limit theorems; they require stronger hypotheses than the forms of the central limit theorem given above. See Petrov for a particular local limit theorem for sums of independent and identically distributed random variables.
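
The convolution picture can be made concrete on a grid (a sketch assuming NumPy; the Uniform(0,1) density and the grid step are illustrative choices): repeatedly convolving a discretized density with itself yields curves that approach the normal density with matching mean and variance, in the pointwise sense that local limit theorems describe:

```python
import numpy as np

dx = 0.01
f = np.ones(100)          # Uniform(0, 1) density sampled on a grid of step dx

density = f
for k in range(2, 5):
    # Convolution of densities = density of the sum; the dx factor keeps
    # the discretized result normalized (it integrates to 1).
    density = np.convolve(density, f) * dx
    support = np.arange(len(density)) * dx  # sum of k uniforms lives on [0, k]
    mean, var = k * 0.5, k / 12.0           # k*E[X], k*Var[X]; Var(U(0,1)) = 1/12
    gauss = np.exp(-(support - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    print(f"k={k}: max |convolution - normal density| = {np.abs(density - gauss).max():.4f}")
```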

## Applications and examples

### Simple example

A simple example of the central limit theorem is rolling many identical, unbiased dice. The distribution of the sum (or average) of the rolled numbers will be well approximated by a normal distribution. Since real-world quantities are often the balanced sum of many unobserved random events, the central limit theorem also provides a partial explanation for the prevalence of the normal probability distribution. It also justifies the approximation of large-sample statistics to the normal distribution in controlled experiments.
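
A brief simulation in this spirit (a sketch assuming NumPy; the number of dice and of trials are arbitrary choices) compares the sum of $n$ fair dice to the normal approximation with mean $3.5\,n$ and variance $35n/12$:

```python
import numpy as np

rng = np.random.default_rng(2)

mu, var = 3.5, 35.0 / 12.0   # one fair die: mean 3.5, variance 35/12
n = 30                       # dice summed per trial

sums = rng.integers(1, 7, size=(100_000, n)).sum(axis=1)
z = (sums - n * mu) / np.sqrt(n * var)

print(f"simulated mean {sums.mean():.2f} vs normal approximation {n * mu:.2f}")
print(f"simulated std  {sums.std():.2f} vs normal approximation {np.sqrt(n * var):.2f}")
# For a normal distribution, P(|Z| <= 1) is about 0.683:
print(f"P(|Z| <= 1) = {(np.abs(z) <= 1).mean():.3f}")
```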

### Real applications

Published literature contains a number of useful and interesting examples and applications relating to the central limit theorem. One source states the following examples:

- The probability distribution for total distance covered in a random walk (biased or unbiased) will tend toward a normal distribution, as illustrated in the sketch after this list.
- Flipping many coins will result in a normal distribution for the total number of heads (or equivalently the total number of tails).
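
For the random-walk example above, a short sketch (assuming NumPy; the bias $p = 0.6$ and the step count are illustrative choices) checks the signed position after $n$ steps against the normal approximation $\mathcal{N}(n\mu, n\sigma^2)$, where $\mu = 2p - 1$ and $\sigma^2 = 1 - (2p - 1)^2$ are the per-step mean and variance:

```python
import numpy as np

rng = np.random.default_rng(3)

p, n = 0.6, 1_000                          # biased walk: +1 w.p. p, else -1
mu, var = 2 * p - 1, 1 - (2 * p - 1) ** 2  # per-step mean and variance

# The number of +1 steps is Binomial(n, p); the signed position is
# (+1 steps) - (-1 steps) = 2 * heads - n.
heads = rng.binomial(n, p, size=100_000)
position = 2 * heads - n

# CLT prediction: position is approximately N(n*mu, n*var)
print(f"mean {position.mean():.1f} vs {n * mu:.1f}")
print(f"std  {position.std():.1f} vs {np.sqrt(n * var):.1f}")
```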

From another viewpoint, the central limit theorem explains the common appearance of the "bell curve" in density estimates applied to real-world data. In cases like electronic noise, examination grades, and so on, we can often regard a single measured value as the weighted average of many small effects. Using generalizations of the central limit theorem, we can then see that this would often (though not always) produce a final distribution that is approximately normal.

In general, the more a measurement is like the sum of independent variables with equal influence on the result, the more normality it exhibits. This justifies the common use of this distribution to stand in for the effects of unobserved variables in models like the linear model.