Variance and Moments

Variance is the expectation of the squared deviation of a random variable from its mean; informally, it measures how far a set of (random) numbers is spread out from their mean. The variance has a central role in statistics, where it is used in descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling, among many other areas. This makes it a central quantity in numerous fields such as physics, biology, chemistry, economics, and finance. The variance is the square of the standard deviation and the second central moment of a distribution, and it is often represented by [math]\sigma^2[/math] or [math]\operatorname{Var}(X)[/math].

Definition

The variance of a random variable [math]X[/math] is the expected value of the squared deviation from the mean of [math]X[/math], [math]\mu = \operatorname{E}[X][/math]:

[[math]] \operatorname{Var}(X) = \operatorname{E}\left[(X - \mu)^2 \right]. [[/math]]

This definition encompasses random variables that are generated by processes that are discrete, continuous, neither, or mixed. The variance is also equivalent to the second cumulant of a probability distribution that generates [math]X[/math]. The variance is typically designated as [math]\operatorname{Var}(X)[/math], [math]\sigma^2_X[/math], or simply [math]\sigma^2[/math] (pronounced "sigma squared"). The expression for the variance can be expanded:

[[math]]\begin{align*} \operatorname{Var}(X) &= \operatorname{E}\left[(X - \operatorname{E}[X])^2\right] \\ &= \operatorname{E}\left[X^2 - 2X\operatorname{E}[X] + (\operatorname{E}[X])^2\right] \\ &= \operatorname{E}\left[X^2\right] - 2\operatorname{E}[X]\operatorname{E}[X] + (\operatorname{E}[X])^2 \\ &= \operatorname{E}\left[X^2 \right] - (\operatorname{E}[X])^2 \end{align*} [[/math]]

A mnemonic for the above expression is "mean of square minus square of mean".
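As a quick illustration of this identity, the following sketch estimates both sides from simulated draws; the use of NumPy and of a normal sample with mean 3 and standard deviation 2 are assumptions made purely for the example.

```python
import numpy as np

# Minimal numerical check of Var(X) = E[X^2] - (E[X])^2 on simulated draws.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=1_000_000)   # true variance is 4

mean_of_square = np.mean(x**2)
square_of_mean = np.mean(x)**2

print(mean_of_square - square_of_mean)   # "mean of square minus square of mean"
print(np.var(x))                         # NumPy's population variance, same estimate
```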

Continuous random variable

If the random variable [math]X[/math] represents samples generated by a continuous distribution with probability density function [math]f(x)[/math], then the population variance is given by

[[math]]\operatorname{Var}(X) =\sigma^2 =\int (x-\mu)^2 \, f(x) \, dx\, =\int x^2 \, f(x) \, dx\, - \mu^2[[/math]]

where [math]\mu[/math] is the expected value,

[[math]]\mu = \int x \, f(x) \, dx\, [[/math]]

and where the integrals are definite integrals taken for [math]x[/math] ranging over the range of [math]X[/math].
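For a concrete case, the sketch below evaluates these integrals numerically for an exponential density (an arbitrary choice made for illustration), whose variance is known in closed form to be [math]1/\lambda^2[/math]; SciPy's quad integrator is an assumed dependency.

```python
import numpy as np
from scipy.integrate import quad

# Variance of an exponential(rate) density by numerical integration of the
# definitional formulas; the closed-form answer is 1/rate**2.
rate = 1.5
f = lambda x: rate * np.exp(-rate * x)          # density supported on [0, inf)

mu, _ = quad(lambda x: x * f(x), 0, np.inf)     # E[X]
var, _ = quad(lambda x: (x - mu)**2 * f(x), 0, np.inf)

print(mu, var, 1 / rate**2)                     # var close to 1/rate**2 = 0.444...
```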

If a continuous distribution does not have an expected value, as is the case for the Cauchy distribution, it does not have a variance either. Many other distributions for which the expected value does exist also do not have a finite variance because the integral in the variance definition diverges. An example is a Pareto distribution whose index [math]k[/math] satisfies [math]1 \lt k \leq 2[/math].

Discrete random variable

If the random variable [math]X[/math] is discrete with probability mass function [math]x_1 \mapsto p_1, x_2 \mapsto p_2, \ldots, x_n \mapsto p_n[/math], then

[[math]]\operatorname{Var}(X) = \sum_{i=1}^n p_i\cdot(x_i - \mu)^2,[[/math]]

or equivalently

[[math]]\operatorname{Var}(X) = \sum_{i=1}^n p_i x_i^2 - \mu^2,[[/math]]

where [math]\mu[/math] is the expected value, i.e.

[[math]]\mu = \sum_{i=1}^n p_i\cdot x_i. [[/math]]

(When such a discrete weighted variance is specified by weights whose sum is not 1, then one divides by the sum of the weights.)
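A worked example of the discrete formulas, using a fair six-sided die (a choice made only for concreteness):

```python
# Variance of a discrete random variable from its probability mass function,
# illustrated with a fair six-sided die (values 1..6, each with p = 1/6).
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

mu = sum(p * x for p, x in zip(probs, values))                   # expected value: 3.5
var = sum(p * (x - mu)**2 for p, x in zip(probs, values))        # definition
var_alt = sum(p * x**2 for p, x in zip(probs, values)) - mu**2   # shortcut form

print(mu, var, var_alt)   # 3.5, 2.9166..., 2.9166...
```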

The variance of a set of [math]n[/math] equally likely values can be written as

[[math]] \operatorname{Var}(X) = \frac{1}{n} \sum_{i=1}^n (x_i - \mu)^2, [[/math]]

where [math]\mu[/math] is the expected value, i.e.,

[[math]]\mu = \frac{1}{n}\sum_{i=1}^n x_i [[/math]]

The variance of a set of [math]n[/math] equally likely values can be equivalently expressed, without directly referring to the mean, in terms of squared deviations of all points from each other:[1]

[[math]] \operatorname{Var}(X) = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \frac{1}{2}(x_i - x_j)^2 = \frac{1}{n^2}\sum_i \sum_{j \gt i} (x_i-x_j)^2. [[/math]]
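The identity can be verified directly; the sketch below compares the mean-free pairwise formula with the usual definition on a small data set (the numbers are placeholders chosen for the example).

```python
import numpy as np

# Compare the pairwise squared-deviation formula with the usual population variance.
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
n = len(x)

var_usual = np.mean((x - x.mean())**2)
var_pairwise = sum((x[i] - x[j])**2 for i in range(n) for j in range(n)) / (2 * n**2)

print(var_usual, var_pairwise)   # both equal 4.0 for this data set
```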

Properties

Basic properties

  • Non-negative: [math] \operatorname{Var}(X)\ge 0 [/math]
  • Constant: [math] \operatorname{Var}(c) = 0 \, [/math]
  • Translation: [math] \operatorname{Var}(X + c) = \operatorname{Var}(X)[/math]
  • Scaling: [math] \operatorname{Var}(cX) = c^2 \operatorname{Var}(X) [/math]
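These properties can be spot-checked numerically; the sample and the constant below are arbitrary choices for the sketch, and np.var with its default ddof=0 computes the population-style variance.

```python
import numpy as np

# Numerical spot-check of the basic variance properties listed above.
rng = np.random.default_rng(1)
x = rng.normal(size=200_000)
c = 3.0

print(np.var(x) >= 0)                      # non-negativity
print(np.var(np.full_like(x, c)))          # variance of a constant: 0
print(np.var(x + c), np.var(x))            # translation leaves the variance unchanged
print(np.var(c * x), c**2 * np.var(x))     # scaling multiplies it by c**2
```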

Formulae for the variance

A formula often used for deriving the variance of a theoretical distribution is as follows:

[[math]] \operatorname{Var}(X) =\operatorname{E}(X^2) - (\operatorname{E}(X))^2. [[/math]]

This will be useful when it is possible to derive formulae for the expected value and for the expected value of the square.

Calculation from the CDF

The population variance for a non-negative random variable can be expressed in terms of the cumulative distribution function [math]F[/math] using

[[math]] 2\int_0^\infty u( 1-F(u))\,du - \Big(\int_0^\infty 1-F(u)\,du\Big)^2. [[/math]]

This expression can be used to calculate the variance in situations where the CDF, but not the density, can be conveniently expressed.
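As an example of this route, the sketch below applies the CDF formula to the exponential distribution, whose survival function [math]1 - F(u) = e^{-\lambda u}[/math] is simple to write down; SciPy's quad and the choice of rate are assumptions made for the illustration.

```python
import numpy as np
from scipy.integrate import quad

# Variance of a non-negative random variable computed from its CDF alone,
# using the exponential distribution (1 - F(u) = exp(-rate*u)) as a test case.
rate = 2.0
survival = lambda u: np.exp(-rate * u)      # 1 - F(u)

first, _ = quad(lambda u: u * survival(u), 0, np.inf)
mean, _ = quad(survival, 0, np.inf)         # E[X] = integral of (1 - F(u)) for X >= 0

var = 2 * first - mean**2
print(var, 1 / rate**2)                     # both 0.25
```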

Characteristic property

The second moment of a random variable about a point [math]m[/math] attains its minimum value when [math]m[/math] is the mean, i.e. [math]\operatorname{argmin}_m\,\operatorname{E}\left[(X - m)^2\right] = \operatorname{E}[X][/math]. Conversely, if a continuous function [math]\varphi[/math] satisfies [math]\operatorname{argmin}_m\,\operatorname{E}[\varphi(X - m)] = \operatorname{E}[X][/math] for all random variables [math]X[/math], then it is necessarily of the form [math]\varphi(x) = a x^2 + b[/math], where [math]a \gt 0[/math]. This also holds in the multidimensional case.[2]
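A brute-force illustration of this minimizing property on simulated data (the exponential sample and the candidate grid are arbitrary choices for the sketch):

```python
import numpy as np

# The mean minimizes m -> E[(X - m)^2]; scan a grid of candidate m values.
rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=100_000)    # true mean is 2

candidates = np.linspace(0.0, 4.0, 401)
mse = [np.mean((x - m)**2) for m in candidates]
best = candidates[int(np.argmin(mse))]

print(best, x.mean())                           # both close to 2
```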

Standard Deviation

The standard deviation of a random variable, statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler, though in practice less robust, than the average absolute deviation.[3][4]

A useful property of the standard deviation is that, unlike the variance, it is expressed in the same units as the data. There are also other measures of deviation from the norm, including mean absolute deviation, which provide different mathematical properties from standard deviation.[5]

Identities and mathematical properties

  • Constant: [math] \sigma(c) = 0 [/math]
  • Translation: [math] \sigma(X + c) = \sigma(X) [/math]
  • Scaling: [math] \sigma(cX) = |c| \sigma(X) [/math]

Chebyshev's Inequality

Chebyshev's inequality (also spelled as Tchebysheff's inequality) guarantees that in any probability distribution, "nearly all" values are close to the mean: the precise statement is that no more than [math]1/k^2[/math] of the distribution's values can be more than [math]k[/math] standard deviations away from the mean (or equivalently, at least [math]1 - 1/k^2[/math] of the distribution's values are within [math]k[/math] standard deviations of the mean). In statistics, the rule is often called Chebyshev's theorem. The inequality has great utility because it can be applied to completely arbitrary distributions (unknown except for mean and variance). For example, it can be used to prove the weak law of large numbers.

In practical usage, in contrast to the 68–95–99.7 rule, which applies to normal distributions, under Chebyshev's inequality a minimum of just 75% of values must lie within two standard deviations of the mean and 89% within three standard deviations.[6][7]

Probabilistic statement

Let [math]X[/math] (integrable) be a random variable with finite expected value [math]\mu[/math] and finite non-zero variance [math]\sigma^2[/math]. Then for any real number [math]k\gt0[/math],

[[math]] \operatorname{P}(|X-\mu|\geq k\sigma) \leq \frac{1}{k^2}. [[/math]]

Only the case [math]k \gt 1[/math] is useful. When [math]k \leq 1[/math], the right-hand side satisfies

[[math]] \frac{1}{k^2} \geq 1 [[/math]]

and the inequality is trivial as all probabilities are ≤ 1. As an example, using [math]k = \sqrt{2}[/math] shows that the probability that values lie outside the interval [math](\mu - \sqrt{2}\sigma, \mu + \sqrt{2}\sigma)[/math] does not exceed [math]\frac{1}{2}[/math].

Because it can be applied to completely arbitrary distributions (unknown except for mean and variance), the inequality generally gives a poor bound compared to what might be deduced if more aspects are known about the distribution involved.
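The looseness of the bound is easy to see empirically; the sketch below compares the observed tail probability with [math]1/k^2[/math] for a deliberately heavy-tailed lognormal sample (an arbitrary choice made for the illustration).

```python
import numpy as np

# Empirical check of Chebyshev's bound P(|X - mu| >= k*sigma) <= 1/k^2.
rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)
mu, sigma = x.mean(), x.std()

for k in (1.5, 2.0, 3.0):
    empirical = np.mean(np.abs(x - mu) >= k * sigma)
    print(k, empirical, 1 / k**2)   # observed tail mass vs. the Chebyshev bound
```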

Moments

In mathematics, a moment is a specific quantitative measure, used in both mechanics and statistics, of the shape of a set of points. If the points represent probability density, then the zeroth moment is the total probability (i.e. one), the first moment is the mean, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis. The mathematical concept is closely related to the concept of moment in physics.

For a bounded distribution of mass or probability, the collection of all the moments (of all orders, from 0 to [math]\infty[/math]) uniquely determines the distribution.

Significance of the moments

The [math]n[/math]-th moment of a real-valued continuous function [math]f(x)[/math] of a real variable about a value [math]c[/math] is

[[math]]\mu_n=\int_{-\infty}^\infty (x - c)^n\,f(x)\,dx.[[/math]]

The moment of a function, without further explanation, usually refers to the above expression with [math]c = 0[/math].

For the second and higher moments, the central moments (moments about the mean, with [math]c[/math] being the mean) are usually used rather than the moments about zero, because they provide clearer information about the distribution's shape.

The [math]n[/math]-th moment about zero of a probability density function [math]f(x)[/math] is the expected value of [math]X^n[/math] and is called a raw moment or crude moment.[8] The moments of [math]X[/math] about its mean [math]\mu[/math] are called central moments; these describe the shape of the function, independently of translation.

If [math]f[/math] is a probability density function, then the value of the integral above is called the [math]n[/math]-th moment of the probability distribution. More generally, if [math]F[/math] is a cumulative distribution function of any probability distribution, which may not have a density function, then the [math]n[/math]-th moment of the probability distribution is given by the Riemann–Stieltjes integral

[[math]]\mu'_n = \operatorname{E} \left [ X^n \right ] =\int_{-\infty}^\infty x^n\,dF(x)\,[[/math]]

where [math]X[/math] is a random variable that has this cumulative distribution [math]F[/math], and [math]\operatorname{E}[/math] is the expectation operator or mean.

When

[[math]]\operatorname{E}\left [\left |X^n \right | \right ] = \int_{-\infty}^\infty |x^n|\,dF(x) = \infty,[[/math]]

then the moment is said not to exist. If the [math]n[/math]-th moment about any point exists, so does the [math](n-1)[/math]-th moment (and thus, all lower-order moments) about every point.

The zeroth moment of any probability density function is 1, since the area under any probability density function must be equal to one.
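To make the definitions concrete, the sketch below computes the first four raw moments of the standard normal density by numerical integration; the known values are 0, 1, 0, and 3, and SciPy's quad is an assumed dependency.

```python
import numpy as np
from scipy.integrate import quad

# Raw moments E[X^n] of the standard normal, computed from its density.
pdf = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

for n in range(1, 5):
    moment, _ = quad(lambda x: x**n * pdf(x), -np.inf, np.inf)
    print(n, round(moment, 6))     # 0, 1, 0, 3 up to integration error
```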

Central Moments

Central moments, which are computed in terms of deviations from the mean rather than from zero, are used in preference to ordinary moments because the higher-order central moments relate only to the spread and shape of the distribution, rather than also to its location.

The [math]n[/math]th moment about the mean (or [math]n[/math]th central moment) of a real-valued random variable [math]X[/math] is the quantity

[[math]] \mu_n = \operatorname{E}\left[(X - \operatorname{E}[X])^n\right] [[/math]]

where [math]\operatorname{E}[/math] is the expectation operator. For a continuous probability distribution with probability density function [math]f(x)[/math], the [math]n[/math]th moment about the mean [math]\mu[/math] is[9]

[[math]] \mu_n = \operatorname{E} \left[ ( X - \operatorname{E}[X] )^n \right] = \int_{-\infty}^{+\infty} (x - \mu)^n f(x)\,\mathrm{d} x. [[/math]]

For random variables that have no mean, such as the Cauchy distribution, central moments are not defined.

The first few central moments have intuitive interpretations:

  • The "zeroth" central moment [math]\mu_0[/math] is 1.
  • The first central moment [math]\mu_1[/math] is 0 (not to be confused with the first (raw) moment itself, the expected value or mean).
  • The second central moment [math]\mu_2[/math] is called the variance, and is usually denoted [math]\sigma^2[/math], where [math]\sigma[/math] represents the standard deviation.
  • The third and fourth central moments are used to define the standardized moments, which in turn define skewness and kurtosis, respectively (a numerical sketch follows this list).
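A sample-based sketch of these quantities, using a gamma sample as an arbitrary skewed example (its theoretical skewness is [math]2/\sqrt{2} \approx 1.41[/math] and its non-excess kurtosis is 6):

```python
import numpy as np

# Sample central moments and the standardized third and fourth moments.
rng = np.random.default_rng(4)
x = rng.gamma(shape=2.0, scale=1.0, size=1_000_000)

mu = x.mean()
central = {n: np.mean((x - mu)**n) for n in range(5)}   # mu_0 .. mu_4

sigma = np.sqrt(central[2])
skewness = central[3] / sigma**3        # close to 1.41 for this gamma distribution
kurtosis = central[4] / sigma**4        # close to 6 (non-excess kurtosis)

print(central[0], central[1])           # 1.0 and approximately 0
print(skewness, kurtosis)
```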

Properties

The [math]n[/math]th central moment is translation-invariant, i.e. for any random variable [math]X[/math] and any constant [math]c[/math], we have

[[math]]\mu_n(X+c)=\mu_n(X).\,[[/math]]

For all [math]n[/math], the [math]n[/math]th central moment is homogeneous of degree [math]n[/math]:

[[math]]\mu_n(cX)=c^n\mu_n(X).\,[[/math]]

Only for [math]n \in \{1, 2, 3\}[/math] do we have an additivity property for random variables [math]X[/math] and [math]Y[/math] that are independent:

[[math]]\mu_n(X+Y)=\mu_n(X)+\mu_n(Y).\,[[/math]]
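These three properties can be checked on simulated data; the exponential samples, the constant, and the choice [math]n = 3[/math] below are assumptions made only for the sketch (the additivity check is approximate because it relies on sample estimates).

```python
import numpy as np

# Spot-check translation invariance, degree-n homogeneity, and additivity (n = 3).
rng = np.random.default_rng(5)
x = rng.exponential(scale=1.0, size=2_000_000)
y = rng.exponential(scale=2.0, size=2_000_000)   # independent of x

def central_moment(sample, n):
    return np.mean((sample - sample.mean())**n)

n, c = 3, 2.5
print(central_moment(x + c, n), central_moment(x, n))          # translation invariance
print(central_moment(c * x, n), c**n * central_moment(x, n))   # homogeneity of degree n
print(central_moment(x + y, n),
      central_moment(x, n) + central_moment(y, n))             # additivity for n = 3
```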

Relation to moments about the origin

Sometimes it is convenient to convert moments about the origin to moments about the mean. The general equation for converting the [math]n[/math]th-order moment about the origin to the moment about the mean is

[[math]] \mu_n = \mathrm{E}\left[\left(X - \mathrm{E}\left[X\right]\right)^n\right] = \sum_{j=0}^n {n \choose j} (-1) ^{n-j} \mu'_j \mu^{n-j}, [[/math]]

where [math]\mu[/math] is the mean of the distribution, and the moment about the origin is given by

[[math]] \mu'_j = \int_{-\infty}^{+\infty} x^j f(x)\,dx = \mathrm{E}\left[X^j\right] [[/math]]

For the cases [math]n = 2, 3, 4[/math], which are of most interest because of the relations to variance, skewness, and kurtosis, respectively, this formula becomes (noting that [math]\mu = \mu'_1[/math] and [math]\mu'_0=1[/math]):

[[math]]\mu_2 = \mu'_2 - \mu^2\,[[/math]]

which is commonly written as [math] \operatorname{Var}\left(X\right) = \operatorname{E}\left[X^2\right] - \left(\operatorname{E}\left[X\right]\right)^2[/math],

[[math]]\mu_3 = \mu'_3 - 3 \mu \mu'_2 +2 \mu^3\,[[/math]]
[[math]]\mu_4 = \mu'_4 - 4 \mu \mu'_3 + 6 \mu^2 \mu'_2 - 3 \mu^4.\,[[/math]]

... and so on,[10] following Pascal's triangle, i.e.

[[math]]\mu_5 = \mu'_5 - 5 \mu \mu'_4 + 10 \mu^2 \mu'_3 - 10 \mu^3 \mu'_2 + 4 \mu^5.\,[[/math]]

because [math] 5\mu^4\mu'_1 - \mu^5 \mu'_0 = 5\mu^4\mu - \mu^5 = 5 \mu^5 - \mu^5 = 4 \mu^5[/math].
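The binomial conversion formula is straightforward to code; the sketch below applies it to the exponential distribution with unit rate, whose raw moments are [math]\mu'_j = j![/math], and recovers the central moments 1, 2, 9, and 44.

```python
from math import comb, factorial

# Convert raw moments to central moments with the binomial formula above,
# using the exponential(1) distribution, whose raw moments are E[X^j] = j!.
raw = [factorial(j) for j in range(6)]       # mu'_0 .. mu'_5
mu = raw[1]                                  # the mean, mu = mu'_1 = 1

def central(n):
    return sum(comb(n, j) * (-1)**(n - j) * raw[j] * mu**(n - j) for j in range(n + 1))

print([central(n) for n in range(2, 6)])     # [1, 2, 9, 44]
```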

Symmetric distributions

In a symmetric distribution (one that is unaffected by being reflected about its mean), all odd central moments (when they exist) equal zero, because in the formula for the [math]n[/math]th moment, each term involving a value of [math]X[/math] less than the mean by a certain amount exactly cancels out the term involving a value of [math]X[/math] greater than the mean by the same amount.

Higher moments

High-order moments are moments beyond 4th-order moments. As with variance, skewness, and kurtosis, these are higher-order statistics, involving non-linear combinations of the data, and can be used for description or estimation of further shape parameters. The higher the moment, the harder it is to estimate, in the sense that larger samples are required in order to obtain estimates of similar quality. This is due to the excess degrees of freedom consumed by the higher orders. Further, they can be subtle to interpret, often being most easily understood in terms of lower order moments – compare the higher derivatives of jerk and jounce in physics. For example, just as the 4th-order moment (kurtosis) can be interpreted as "relative importance of tails versus shoulders in causing dispersion" (for a given dispersion, high kurtosis corresponds to heavy tails, while low kurtosis corresponds to heavy shoulders), the 5th-order moment can be interpreted as measuring "relative importance of tails versus center (mode, shoulders) in causing skew" (for a given skew, high 5th moment corresponds to heavy tail and little movement of mode, while low 5th moment corresponds to more change in shoulders).

Sample moments

For all [math]k[/math], the [math]k[/math]-th raw moment of a population can be estimated using the [math]k[/math]-th raw sample moment

[[math]]\frac{1}{n}\sum_{i = 1}^{n} X^k_i[[/math]]

applied to a sample [math]X_1,\ldots,X_n[/math] drawn from the population.

It can be shown that the expected value of the raw sample moment is equal to the [math]k[/math]-th raw moment of the population, if that moment exists, for any sample size [math]n[/math]. It is thus an unbiased estimator. This contrasts with the situation for central moments, whose computation uses up a degree of freedom by using the sample mean. So for example an unbiased estimate of the population variance (the second central moment) is given by

[[math]]\frac{1}{n-1}\sum_{i = 1}^{n} (X_i-\bar X)^2[[/math]]

in which the previous denominator [math]n[/math] has been replaced by the degrees of freedom [math]n-1[/math], and in which [math]\bar X[/math] refers to the sample mean. This estimate of the population moment is greater than the unadjusted observed sample moment by a factor of [math]\frac{n}{n-1},[/math] and it is referred to as the "adjusted sample variance" or sometimes simply the "sample variance".
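A small sketch of these estimators on a simulated sample (the normal draws and sample size are arbitrary choices); np.var's ddof argument selects the divisor.

```python
import numpy as np

# Raw sample moments and the bias-corrected (n - 1 divisor) sample variance.
rng = np.random.default_rng(6)
x = rng.normal(loc=1.0, scale=3.0, size=30)        # small sample, true variance 9

raw_moments = [np.mean(x**k) for k in (1, 2, 3)]   # unbiased estimates of E[X^k]
s2_adjusted = np.var(x, ddof=1)                    # divides by n - 1
s2_plain = np.var(x)                               # divides by n

print(raw_moments)
print(s2_adjusted, s2_plain * len(x) / (len(x) - 1))   # identical values
```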

Notes

  1. Zhang, Yuli; Wu, Huaiyu; Cheng, Lei (June 2012). "Some new deformation formulas about variance and covariance". Proceedings of the 4th International Conference on Modelling, Identification and Control (ICMIC 2012). pp. 987–992.
  2. "Why the variance?" (1998). Statistics & Probability Letters 38 (4): 329–333. doi:10.1016/S0167-7152(98)00041-8. 
  3. Gauss, Carl Friedrich (1816). "Bestimmung der Genauigkeit der Beobachtungen". Zeitschrift für Astronomie und verwandte Wissenschaften 1: 187–197. 
  4. Walker, Helen (1931). Studies in the History of the Statistical Method. Baltimore, MD: Williams & Wilkins Co. pp. 24–25.
  5. Gorard, Stephen. "Revisiting a 90-year-old debate: the advantages of the mean deviation". Department of Educational Studies, University of York.
  6. Kvanli, Alan H.; Pavur, Robert J.; Keeling, Kellie B. (2006). Concise Managerial Statistics. Cengage Learning. pp. 81–82. ISBN 9780324223880.
  7. Chernick, Michael R. (2011). The Essentials of Biostatistics for Physicians, Nurses, and Clinicians. John Wiley & Sons. pp. 49–50. ISBN 9780470641859.
  8. "Raw Moment". MathWorld. http://mathworld.wolfram.com/RawMoment.html
  9. Grimmett, Geoffrey; Stirzaker, David (2009). Probability and Random Processes. Oxford, England: Oxford University Press. ISBN 978-0-19-857222-0.
  10. "Central Moment". MathWorld. http://mathworld.wolfram.com/CentralMoment.html

References

  • Wikipedia contributors. "Variance". Wikipedia. Wikipedia. Retrieved 28 January 2022.
  • Wikipedia contributors. "Standard deviation". Wikipedia. Wikipedia. Retrieved 28 January 2022.
  • Wikipedia contributors. "Moment (mathematics)". Wikipedia. Wikipedia. Retrieved 28 January 2022.

Further reading

  • Spanos, Aris (1999). Probability Theory and Statistical Inference. New York: Cambridge University Press. pp. 109–130. ISBN 0-521-42408-9.