Covariance

Covariance is a measure of how much two random variables change together. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the lesser values, i.e., the variables tend to show similar behavior, the covariance is positive.[1] For example, as a balloon is blown up it gets larger in all dimensions. In the opposite case, when the greater values of one variable mainly correspond to the lesser values of the other, i.e., the variables tend to show opposite behavior, the covariance is negative. For example, if a sealed balloon is squashed in one dimension, it will expand in the other two. The sign of the covariance therefore shows the tendency in the linear relationship between the variables. The magnitude of the covariance is not easy to interpret. The normalized version of the covariance, the correlation coefficient, however, shows by its magnitude the strength of the linear relation.

A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen as a property of the joint probability distribution, and (2) the sample covariance, which serves as an estimated value of the parameter.

Definition

The covariance between two jointly distributed real-valued random variables [math]X[/math] and [math]Y[/math] with finite second moments is defined as[2]

[[math]] \operatorname{cov}(X,Y) = \operatorname{E}{\big[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\big]}, [[/math]]

where [math]\operatorname{E}[X][/math] is the expected value of [math]X[/math], also known as the mean of [math]X[/math]. By using the linearity property of expectations, this can be simplified to

[[math]] \begin{align*} \operatorname{cov}(X,Y) &= \operatorname{E}\left[\left(X - \operatorname{E}\left[X\right]\right) \left(Y - \operatorname{E}\left[Y\right]\right)\right] \\ &= \operatorname{E}\left[X Y - X \operatorname{E}\left[Y\right] - \operatorname{E}\left[X\right] Y + \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right]\right] \\ &= \operatorname{E}\left[X Y\right] - \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right] - \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right] + \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right] \\ &= \operatorname{E}\left[X Y\right] - \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right]. \end{align*} [[/math]]
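To make the identity [math]\operatorname{cov}(X,Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y][/math] concrete, here is a minimal Python sketch (not taken from the cited sources) that evaluates both expressions for a small, made-up discrete joint distribution; the values in xs, ys and the probability table p are arbitrary assumptions chosen only for illustration.

```python
# A minimal numeric illustration of cov(X, Y) = E[XY] - E[X]E[Y]
# for a small, hypothetical discrete joint distribution.
import numpy as np

# joint probabilities p[i, j] = P(X = xs[i], Y = ys[j])  (made-up values)
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([-1.0, 1.0])
p = np.array([[0.10, 0.20],
              [0.20, 0.20],
              [0.05, 0.25]])          # rows: x-values, columns: y-values; entries sum to 1

EX  = np.sum(xs[:, None] * p)                     # E[X]
EY  = np.sum(ys[None, :] * p)                     # E[Y]
EXY = np.sum(xs[:, None] * ys[None, :] * p)       # E[XY]

cov_via_moments    = EXY - EX * EY
cov_via_definition = np.sum((xs[:, None] - EX) * (ys[None, :] - EY) * p)
print(cov_via_moments, cov_via_definition)        # the two expressions agree
```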

Random variables whose covariance is zero are called uncorrelated. Similarly, random vectors whose covariance matrix is zero in every entry outside the main diagonal are called uncorrelated.

The units of measurement of the covariance [math]\operatorname{cov}(X,Y)[/math] are those of [math]X[/math] times those of [math]Y[/math]. By contrast, correlation coefficients, which depend on the covariance, are a dimensionless measure of linear dependence. (In fact, correlation coefficients can simply be understood as a normalized version of covariance.)

Discrete variables

If each variable has a finite set of equal-probability values, [math]x_i[/math] and [math]y_i[/math] respectively for [math]i=1,\dots , n,[/math] then the covariance can be equivalently written in terms of the means [math]\operatorname{E}(X)[/math] and [math]\operatorname{E}(Y)[/math] as

[[math]]\operatorname{cov} (X,Y)=\frac{1}{n}\sum_{i=1}^n (x_i-\operatorname{E}(X))(y_i-\operatorname{E}(Y)).[[/math]]

It can also be equivalently expressed, without directly referring to the means, as[3]

[[math]] \begin{align} \operatorname{cov}(X,Y) &= \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \frac{1}{2}(x_i - x_j)\cdot(y_i - y_j) \\ &= \frac{1}{n^2}\sum_i \sum_{j \gt i} (x_i-x_j)\cdot(y_i - y_j). \end{align} [[/math]]
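The equivalence of these two expressions can be checked numerically. The following sketch uses arbitrary simulated data and NumPy; it is only an illustration, not part of the cited derivation.

```python
# A small check that the two discrete formulas above give the same value
# for equal-probability data points.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10)
y = 2.0 * x + rng.normal(size=10)       # arbitrary, correlated sample values
n = len(x)

cov_means = np.mean((x - x.mean()) * (y - y.mean()))            # first formula
diffs = (x[:, None] - x[None, :]) * (y[:, None] - y[None, :])   # all pairs (i, j)
cov_pairs = 0.5 * diffs.sum() / n**2                            # second formula
print(cov_means, cov_pairs)   # identical up to floating-point rounding
```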

Properties

  • Variance is a special case of the covariance, obtained when the two variables are identical:

[[math]]\operatorname{cov}(X, X) =\operatorname{Var}(X)\equiv\sigma^2(X).[[/math]]

  • If [math]X, Y, W[/math], and [math]V[/math] are real-valued random variables and [math]a, b, c, d [/math] are constants, then the following facts are a consequence of the definition of covariance:

[[math]] \begin{align*} \sigma(X, a) &= 0 \\ \sigma(X, X) &= \sigma^2(X) \\ \sigma(X, Y) &= \sigma(Y, X) \\ \sigma(aX, bY) &= ab\, \sigma(X, Y) \\ \sigma(X+a, Y+b) &= \sigma(X, Y) \\ \end{align*} [[/math]]

and

[[math]] \sigma(aX+bY, cW+dV) = ac\,\sigma(X,W)+ad\,\sigma(X,V)+bc\,\sigma(Y,W)+bd\,\sigma(Y,V). [[/math]]

  • For a sequence [math]X_1, \ldots, X_n [/math] of random variables, and constants [math]a_1,\ldots,a_n[/math], we have

[[math]]\sigma^2\left(\sum_{i=1}^n a_iX_i \right) = \sum_{i=1}^n a_i^2\sigma^2(X_i) + 2\sum_{i,j\,:\,i \lt j} a_ia_j\sigma(X_i,X_j) = \sum_{i,j} {a_ia_j\sigma(X_i,X_j)}. [[/math]]
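As a sanity check of the last identity, the sketch below evaluates both sides for a made-up covariance matrix [math]\Sigma_{ij} = \sigma(X_i,X_j)[/math] and arbitrary constants [math]a_i[/math]; the specific numbers carry no meaning.

```python
# Numeric sketch of the variance-of-a-linear-combination identity above,
# using an arbitrary (symmetric, positive-definite) covariance matrix.
import numpy as np

a = np.array([1.0, -2.0, 0.5])                 # arbitrary constants a_i
Sigma = np.array([[ 2.0, 0.3, -0.5],
                  [ 0.3, 1.0,  0.2],
                  [-0.5, 0.2,  1.5]])          # sigma(X_i, X_j)

# right-hand side: sum over all pairs a_i a_j sigma(X_i, X_j)
var_quadratic_form = a @ Sigma @ a

# the same quantity split into variance terms plus twice the i < j covariance terms
var_split = np.sum(a**2 * np.diag(Sigma)) + 2 * sum(
    a[i] * a[j] * Sigma[i, j] for i in range(3) for j in range(i + 1, 3))

print(var_quadratic_form, var_split)           # equal
```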

Uncorrelatedness and independence

If [math]X[/math] and [math]Y[/math] are independent, then their covariance is zero. This follows because under independence,

[[math]]\operatorname{E}[XY]=\operatorname{E}[X] \operatorname{E}[Y]. [[/math]]

The converse, however, is not generally true. For example, let [math]X[/math] be uniformly distributed in [-1, 1] and let [math]Y = X^2[/math]. Clearly, [math]X[/math] and [math]Y[/math] are dependent, yet [math] \sigma(X,Y) = \operatorname{E}[X^3] - \operatorname{E}[X]\operatorname{E}[X^2] = 0 [/math], since all odd moments of [math]X[/math] vanish by symmetry.

In this case, the relationship between [math]Y[/math] and [math]X[/math] is non-linear, while correlation and covariance are measures of linear dependence between two variables. This example shows that if two variables are uncorrelated, that does not in general imply that they are independent. However, if two variables are jointly normally distributed (but not if they are merely individually normally distributed), uncorrelatedness does imply independence.
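A quick Monte Carlo experiment (with an arbitrary seed and sample size) illustrates this: the simulated covariance and correlation between [math]X[/math] and [math]Y = X^2[/math] are close to zero even though [math]Y[/math] is completely determined by [math]X[/math].

```python
# Monte Carlo sketch of the example above: X uniform on [-1, 1], Y = X^2.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x**2

print(np.cov(x, y, bias=True)[0, 1])    # close to 0 (exactly 0 in the limit)
print(np.corrcoef(x, y)[0, 1])          # also close to 0, despite perfect dependence
```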

Relationship to inner products

Many of the properties of covariance can be extracted elegantly by observing that it satisfies similar properties to those of an inner product:

  • Bilinearity: for constants [math]a[/math] and [math]b[/math], [math]\sigma( aX + bY, Z) = a\,\sigma(X,Z) + b\,\sigma(Y,Z). [/math]
  • Symmetry: [math]\sigma(X,Y) = \sigma(Y,X). [/math]
  • Positive semi-definiteness: [math]\sigma^2(X) = \sigma(X,X) \geq 0[/math], and [math]\sigma(X,X) = 0 [/math] implies that [math]X[/math] is a constant random variable.

As a result, for random variables with finite variance, the inequality

[[math]]|\sigma(X,Y)| \le \sqrt{\sigma^2(X) \sigma^2(Y)} [[/math]]

holds via the Cauchy–Schwarz inequality.

Comments

The covariance is sometimes called a measure of "linear dependence" between the two random variables. That does not mean the same thing as in the context of linear algebra (see linear dependence). When the covariance is normalized, one obtains the Pearson correlation coefficient, which gives the goodness of the fit for the best possible linear function describing the relation between the variables. In this sense covariance is a linear gauge of dependence.

Pearson's product-moment correlation coefficient

The most familiar measure of dependence between two quantities is the Pearson product-moment correlation coefficient, or "Pearson's correlation coefficient", commonly called simply "the correlation coefficient". It is obtained by dividing the covariance of the two variables by the product of their standard deviations. Karl Pearson developed the coefficient from a similar but slightly different idea by Francis Galton.[4]

The population correlation coefficient [math]\rho_{X,Y}[/math] between two random variables [math]X[/math] and [math]Y[/math] with expected values [math]\mu_X[/math] and [math]\mu_Y[/math] and standard deviations [math]\sigma_X[/math] and [math]\sigma_Y[/math] is defined as:

[[math]]\rho_{X,Y}=\mathrm{corr}(X,Y)={\mathrm{cov}(X,Y) \over \sigma_X \sigma_Y} ={E[(X-\mu_X)(Y-\mu_Y)] \over \sigma_X\sigma_Y}[[/math]]

The formula for [math]\rho[/math] can be expressed in terms of uncentered moments. Since

  • [math]\mu_X=\operatorname{E}[X][/math]
  • [math]\mu_Y=\operatorname{E}[Y][/math]
  • [math]\sigma_X^2=\operatorname{E}[(X-\operatorname{E}[X])^2]=\operatorname{E}[X^2]-\operatorname{E}[X]^2[/math]
  • [math]\sigma_Y^2=\operatorname{E}[(Y-\operatorname{E}[Y])^2]=\operatorname{E}[Y^2]-\operatorname{E}[Y]^2[/math]
  • [math]\operatorname{E}[(X-\mu_X)(Y-\mu_Y)]=\operatorname{E}[XY]-\operatorname{E}[X]\operatorname{E}[Y],\,[/math]

the formula for [math]\rho[/math] can also be written as

[[math]]\rho_{X,Y}=\frac{\operatorname{E}[XY]-\operatorname{E}[X]\operatorname{E}[Y]}{\sqrt{\operatorname{E}[X^2]-\operatorname{E}[X]^2}~\sqrt{\operatorname{E}[Y^2]- \operatorname{E}[Y]^2}}.[[/math]]

The Pearson correlation is defined only if both of the standard deviations are finite and nonzero. It is a corollary of the Cauchy–Schwarz inequality that the correlation cannot exceed 1 in absolute value. The correlation coefficient is symmetric: [math]\operatorname{corr}(X,Y)=\operatorname{corr}(Y,X)[/math].

If we have a series of [math]n[/math] measurements of [math]X[/math] and [math]Y[/math] written as [math]x_i[/math] and [math]y_i[/math] for [math]i = 1, \ldots, n [/math], then the sample correlation coefficient can be used to estimate the population Pearson correlation [math]\rho_{X,Y}[/math] between [math]X[/math] and [math]Y[/math]. The sample correlation coefficient is written:

[[math]] \rho_{xy}=\frac{\sum\limits_{i=1}^n (x_i-\bar{x})(y_i-\bar{y})}{ns_x s_y} =\frac{\sum\limits_{i=1}^n (x_i-\bar{x})(y_i-\bar{y})} {\sqrt{\sum\limits_{i=1}^n (x_i-\bar{x})^2 \sum\limits_{i=1}^n (y_i-\bar{y})^2}}, [[/math]]

where [math]\overline{x}[/math] and [math]\overline{y}[/math] are the sample means of [math]X[/math] and [math]Y[/math], and [math]s_x[/math] and [math]s_y[/math] are the sample standard deviations of [math]X[/math] and [math]Y[/math].

This can also be written as:

[[math]] \rho_{xy}=\frac{\sum x_iy_i-n \bar{x} \bar{y}}{n s_x s_y}=\frac{n\sum x_iy_i-\sum x_i\sum y_i}{\sqrt{n\sum x_i^2-(\sum x_i)^2}~\sqrt{n\sum y_i^2-(\sum y_i)^2}}. [[/math]]
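As an illustration, the following sketch computes the sample correlation coefficient directly from the definition and compares it with NumPy's np.corrcoef; the data are simulated with made-up parameters.

```python
# Compute the sample correlation coefficient from its definition and
# cross-check against NumPy's built-in estimate.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(10.0, 2.0, size=200)
y = 0.7 * x + rng.normal(0.0, 1.5, size=200)     # made-up, linearly related data

xbar, ybar = x.mean(), y.mean()
r_manual = np.sum((x - xbar) * (y - ybar)) / np.sqrt(
    np.sum((x - xbar)**2) * np.sum((y - ybar)**2))

print(r_manual, np.corrcoef(x, y)[0, 1])          # the two values agree
```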

Mathematical properties

The Pearson correlation is +1 in the case of a perfect direct (increasing) linear relationship (correlation), −1 in the case of a perfect decreasing (inverse) linear relationship (anticorrelation),[5] and some value between −1 and 1 in all other cases, indicating the degree of linear dependence between the variables. As it approaches zero there is less of a relationship (closer to uncorrelated). The closer the coefficient is to either −1 or 1, the stronger the correlation between the variables.

If the variables are independent, Pearson's correlation coefficient is 0, but the converse is not true because the correlation coefficient detects only linear dependencies between two variables. For example, suppose the random variable [math]X[/math] is symmetrically distributed about zero, and [math]Y = X^2 [/math]. Then [math]Y[/math] is completely determined by [math]X[/math], so that [math]X[/math] and [math]Y[/math] are perfectly dependent, but their correlation is zero; they are uncorrelated. However, in the special case when [math]X[/math] and [math]Y[/math] are jointly normal, uncorrelatedness is equivalent to independence.

The absolute values of both the sample and population Pearson correlation coefficients are less than or equal to 1. Correlations equal to 1 or −1 correspond to data points lying exactly on a line (in the case of the sample correlation), or to a bivariate distribution entirely supported on a line (in the case of the population correlation).

A key mathematical property of the Pearson correlation coefficient is that it is invariant to separate changes in location and scale in the two variables. That is, we may transform [math]X[/math] to [math]a+bX[/math] and transform [math]Y[/math] to [math]c+dY[/math], where [math]a, b, c[/math] and [math]d[/math] are constants with [math]b, d \gt 0 [/math], without changing the correlation coefficient; if [math]b[/math] and [math]d[/math] have opposite signs, the correlation changes sign but not magnitude. (This fact holds for both the population and sample Pearson correlation coefficients.) Note that more general linear transformations do change the correlation.
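This invariance is easy to verify numerically. The sketch below applies affine maps with positive scales to simulated data; the constants a, b, c, d and the data are assumptions chosen only for illustration.

```python
# Numeric illustration of the invariance of the Pearson correlation under
# the separate affine maps X -> a + bX and Y -> c + dY with b, d > 0.
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=500)
y = 3.0 * x + rng.normal(size=500)

a, b, c, d = 5.0, 2.0, -1.0, 0.25               # arbitrary constants, b, d > 0
r_original    = np.corrcoef(x, y)[0, 1]
r_transformed = np.corrcoef(a + b * x, c + d * y)[0, 1]
print(r_original, r_transformed)                 # identical (sign would flip if b*d < 0)
```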

Interpretation

The correlation coefficient ranges from −1 to 1. A value of 1 implies that a linear equation describes the relationship between [math]X[/math] and [math]Y[/math] perfectly, with all data points lying on a line for which [math]Y[/math] increases as [math]X[/math] increases. A value of −1 implies that all data points lie on a line for which [math]Y[/math] decreases as [math]X[/math] increases. A value of 0 implies that there is no linear correlation between the variables.

More generally, note that [math](x_i - \overline{x})(y_i - \overline{y})[/math] is positive if and only if [math]x_i[/math] and [math]y_i[/math] lie on the same side of their respective means. Thus the correlation coefficient is positive if [math]x_i[/math] and [math]y_i[/math] tend to be simultaneously greater than, or simultaneously less than, their respective means, and negative if they tend to lie on opposite sides of their respective means. Moreover, the stronger either tendency is, the larger the absolute value of the correlation coefficient.

Common misconceptions

Correlation and causality

The conventional dictum that "correlation does not imply causation" means that correlation cannot be used to infer a causal relationship between the variables.[6] This dictum should not be taken to mean that correlations cannot indicate the potential existence of causal relations. However, the causes underlying the correlation, if any, may be indirect and unknown, and high correlations also overlap with identity relations (tautologies), where no causal process exists. Consequently, establishing a correlation between two variables is not a sufficient condition to establish a causal relationship (in either direction).

A correlation between age and height in children is fairly causally transparent, but a correlation between mood and health in people is less so. Does improved mood lead to improved health, or does good health lead to good mood, or both? Or does some other factor underlie both? In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be.

Correlation and linearity

The Pearson correlation coefficient indicates the strength of a linear relationship between two variables, but its value generally does not completely characterize their relationship.[7] In particular, if the conditional mean of [math]Y[/math] given [math]X[/math], denoted [math]\operatorname{E}[Y|X][/math], is not linear in [math]X[/math], the correlation coefficient will not fully determine the form of [math]\operatorname{E}[Y|X][/math].

The bivariate normal distribution

In probability theory and statistics, the bivariate normal distribution is a generalization of the one-dimensional (univariate) normal distribution to two dimensions. One definition is that a random vector is said to be bivariate normally distributed if every linear combination of its two components has a univariate normal distribution. The bivariate normal distribution is often used to describe, at least approximately, any set of two (possibly) correlated real-valued random variables each of which clusters around a mean value.

Definitions

Notation and parameterization

The bivariate normal distribution of a two-dimensional random vector [math]\mathbf{X} = (X_1,X_2)^{\mathrm T}[/math] can be written in the following notation:

[[math]] \mathbf{X}\ \sim\ \mathcal{N}(\boldsymbol\mu,\, \boldsymbol\Sigma), [[/math]]

or to make it explicitly known that [math]\mathbf{X}[/math] is two-dimensional,

[[math]] \mathbf{X}\ \sim\ \mathcal{N}_2(\boldsymbol\mu,\, \boldsymbol\Sigma), [[/math]]

with two-dimensional mean vector

[[math]] \boldsymbol\mu = \operatorname{E}[\mathbf{X}] = ( \operatorname{E}[X_1], \operatorname{E}[X_2]) ^ {\mathrm T}, [[/math]]

and [math]2 \times 2[/math] covariance matrix

[[math]] \Sigma_{i,j} = \operatorname{E} [(X_i - \mu_i)( X_j - \mu_j)] = \operatorname{Cov}[X_i, X_j] [[/math]]

such that [math]1 \le i,j \le 2.[/math] The inverse of the covariance matrix is called the precision matrix, denoted by [math]\boldsymbol{Q}=\boldsymbol\Sigma^{-1}[/math].

Standard normal random vector

A real random vector [math]\mathbf{X} = (X_1,X_2)^{\mathrm T}[/math] is called a standard normal random vector if all of its components [math]X_k[/math] are independent and each is a zero-mean unit-variance normally distributed random variable, i.e. if [math]X_k \sim\ \mathcal{N}(0,1)[/math] for all [math]k[/math].[8]:p. 454

Centered normal random vector

A real random vector [math]\mathbf{X} = (X_1,X_2)^{\mathrm T}[/math] is called a centered normal random vector if there exists a deterministic [math]2 \times \ell[/math] matrix [math]\boldsymbol{A}[/math] such that [math]\boldsymbol{A} \mathbf{Z}[/math] has the same distribution as [math]\mathbf{X}[/math] where [math]\mathbf{Z}[/math] is a standard normal random vector with [math]\ell[/math] components.[8]:p. 454

Normal random vector

A real random vector [math]\mathbf{X} = (X_1,X_2)^{\mathrm T}[/math] is called a normal random vector if there exists a random [math]\ell[/math]-vector [math]\mathbf{Z}[/math], which is a standard normal random vector, a vector [math]\boldsymbol\mu[/math], and a [math]2 \times \ell[/math] matrix [math]\boldsymbol{A}[/math], such that [math]\mathbf{X}=\boldsymbol{A} \mathbf{Z} + \boldsymbol\mu[/math].[9]:p. 454[8]:p. 455
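The construction [math]\mathbf{X}=\boldsymbol{A} \mathbf{Z} + \boldsymbol\mu[/math] can be illustrated with a short simulation. The matrix A and vector mu below are arbitrary choices; the empirical covariance of the simulated draws should be close to [math]\boldsymbol{A}\boldsymbol{A}^{\mathrm T}[/math].

```python
# Sketch of the construction above: X = A Z + mu with Z a standard normal
# 2-vector; the empirical covariance of X approaches A @ A.T.
import numpy as np

rng = np.random.default_rng(3)
A  = np.array([[1.0, 0.0],
               [0.8, 0.6]])                    # arbitrary 2 x 2 mixing matrix
mu = np.array([1.0, -2.0])                     # arbitrary mean vector

Z = rng.standard_normal((2, 500_000))          # independent N(0, 1) components
X = A @ Z + mu[:, None]                        # each column is one draw of X

print(np.cov(X))          # empirically close to A @ A.T
print(A @ A.T)            # the theoretical covariance matrix
```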

Density function

The bivariate normal distribution is said to be "non-degenerate" when the symmetric covariance matrix [math]\boldsymbol\Sigma[/math] is positive definite. The distribution has density[10]

[[math]] f(x,y) = \frac{1}{2 \pi \sigma_X \sigma_Y \sqrt{1-\rho^2}} \exp \left( -\frac{1}{2(1-\rho^2)}\left[ \left(\frac{x-\mu_X}{\sigma_X}\right)^2 - 2\rho\left(\frac{x-\mu_X}{\sigma_X}\right)\left(\frac{y-\mu_Y}{\sigma_Y}\right) + \left(\frac{y-\mu_Y}{\sigma_Y}\right)^2 \right] \right) [[/math]]

where [math]\rho[/math] is the correlation between [math]X[/math] and [math]Y[/math] and where [math] \sigma_X\gt0 [/math] and [math] \sigma_Y\gt0 [/math].
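The density formula can be implemented directly. The sketch below codes it as written and, assuming SciPy is available, cross-checks one evaluation against scipy.stats.multivariate_normal; all parameter values are arbitrary.

```python
# Direct implementation of the bivariate normal density above, cross-checked
# against SciPy's multivariate normal (assumed available).
import numpy as np
from scipy.stats import multivariate_normal

def bivariate_normal_pdf(x, y, mu_x, mu_y, sigma_x, sigma_y, rho):
    zx = (x - mu_x) / sigma_x
    zy = (y - mu_y) / sigma_y
    norm = 1.0 / (2.0 * np.pi * sigma_x * sigma_y * np.sqrt(1.0 - rho**2))
    return norm * np.exp(-(zx**2 - 2.0 * rho * zx * zy + zy**2) / (2.0 * (1.0 - rho**2)))

mu_x, mu_y, sigma_x, sigma_y, rho = 0.0, 1.0, 1.0, 2.0, 0.5     # made-up parameters
cov = np.array([[sigma_x**2,              rho * sigma_x * sigma_y],
                [rho * sigma_x * sigma_y, sigma_y**2            ]])

print(bivariate_normal_pdf(0.3, 1.7, mu_x, mu_y, sigma_x, sigma_y, rho))
print(multivariate_normal(mean=[mu_x, mu_y], cov=cov).pdf([0.3, 1.7]))   # same value
```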

Joint normality

Normally distributed and independent

If [math]X[/math] and [math]Y[/math] are normally distributed and independent, this implies they are "jointly normally distributed", i.e., the pair [math](X,Y)[/math] must have a bivariate normal distribution. However, a pair of jointly normally distributed variables need not be independent; they are independent only if they are uncorrelated, [math] \rho = 0[/math].

Two normally distributed random variables need not be jointly bivariate normal

The fact that two random variables [math]X[/math] and [math]Y[/math] both have a normal distribution does not imply that the pair [math](X,Y)[/math] has a joint normal distribution. A simple example is one in which [math]X[/math] has a normal distribution with expected value 0 and variance 1, and [math]Y=X[/math] if [math]|X| \gt c[/math] and [math]Y=-X[/math] if [math]|X| \lt c[/math], where [math]c \gt 0[/math]. There are similar counterexamples for more than two random variables. In general, they sum to a mixture model.
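A short simulation makes the counterexample tangible: with the construction above, [math]Y[/math] is still standard normal, but [math]X+Y[/math] equals zero whenever [math]|X| \lt c[/math], so [math]X+Y[/math] cannot be normal and hence [math](X,Y)[/math] is not jointly normal. The seed, sample size and the choice [math]c = 1[/math] are arbitrary assumptions.

```python
# Simulation of the counterexample: Y = X when |X| > c and Y = -X when |X| < c.
import numpy as np

rng = np.random.default_rng(5)
c = 1.0
x = rng.standard_normal(1_000_000)
y = np.where(np.abs(x) > c, x, -x)

print(y.mean(), y.var())         # close to 0 and 1: Y is (marginally) standard normal
print(np.mean(x + y == 0.0))     # roughly P(|X| < c) ~ 0.68, so X + Y is not normal
```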

Correlations and independence

In general, random variables may be uncorrelated but statistically dependent. But if a random vector has a multivariate normal distribution then any two or more of its components that are uncorrelated are independent. This implies that any two or more of its components that are pairwise independent are independent. But, as pointed out just above, it is not true that two random variables that are (separately, marginally) normally distributed and uncorrelated are independent.

Conditional distribution

The conditional distribution of [math]X_1[/math] given [math]X_2[/math] is[11]

[[math]]X_1\mid X_2=a \ \sim\ \mathcal{N}\left(\mu_1+\frac{\sigma_1}{\sigma_2}\rho( a - \mu_2),\, (1-\rho^2)\sigma_1^2\right), [[/math]]

where [math]\rho[/math] is the correlation coefficient between [math]X_1[/math] and [math]X_2[/math].

Bivariate conditional expectation

By taking the expectation of the conditional distribution [math]X_1\mid X_2[/math] above, the conditional expectation of [math]X_1[/math] given [math]X_2[/math] equals:

[[math]]\operatorname{E}(X_1 \mid X_2=x_2) = \mu_1 + \rho \frac{\sigma_1}{\sigma_2}(x_2 - \mu_2).[[/math]]
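The conditional-mean and conditional-variance formulas can be checked by simulation: draw from a bivariate normal with made-up parameters, keep only draws whose [math]X_2[/math] falls in a narrow window around the conditioning value, and compare the empirical moments of [math]X_1[/math] with the theoretical ones. The window width, seed and parameters below are assumptions for illustration only.

```python
# Simulation check of the conditional mean and variance of X_1 given X_2 = a.
import numpy as np

rng = np.random.default_rng(11)
mu1, mu2, s1, s2, rho = 1.0, -0.5, 2.0, 1.0, 0.6
cov = np.array([[s1**2,         rho * s1 * s2],
                [rho * s1 * s2, s2**2        ]])

samples = rng.multivariate_normal([mu1, mu2], cov, size=2_000_000)
x1, x2 = samples[:, 0], samples[:, 1]

a = 0.3                                          # the conditioning value x_2 = a
window = np.abs(x2 - a) < 0.01                   # approximate the event X_2 = a
print(x1[window].mean())                         # empirical conditional mean
print(mu1 + rho * (s1 / s2) * (a - mu2))         # theoretical conditional mean
print(x1[window].var(), (1 - rho**2) * s1**2)    # conditional variance check
```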

Notes

  1. http://mathworld.wolfram.com/Covariance.html
  2. Oxford Dictionary of Statistics, Oxford University Press, 2002, p. 104.
  3. Zhang, Yuli; Wu, Huaiyu; Cheng, Lei (June 2012). "Some new deformation formulas about variance and covariance". Proceedings of the 4th International Conference on Modelling, Identification and Control (ICMIC 2012). pp. 987–992.
  4. "Thirteen ways to look at the correlation coefficient" (1988). The American Statistician 42 (1): 59–66. doi:10.1080/00031305.1988.10475524. 
  5. Dowdy, S.; Wearden, S. (1983). Statistics for Research. Wiley. ISBN 0-471-08602-9. p. 230.
  6. Aldrich, John (1995). "Correlations Genuine and Spurious in Pearson and Yule". Statistical Science 10 (4): 364–376. doi:10.1214/ss/1177009870. 
  7. Mahdavi Damghani, Babak (2012). "The Misleading Value of Measured Correlation". Wilmott 2012 (1): 64–73. doi:10.1002/wilm.10167. 
  8. Lapidoth, Amos (2009). A Foundation in Digital Communication. Cambridge University Press. ISBN 978-0-521-19395-5.
  9. Gut, Allan (2009). An Intermediate Course in Probability. Springer. ISBN 978-1-441-90161-3.
  10. Prince, Simon J. D. (June 2012). Computer Vision: Models, Learning, and Inference. Cambridge University Press. Section 3.7: "Multivariate normal distribution".
  11. Jensen, J (2000). Statistics for Petroleum Engineers and Geoscientists. Amsterdam: Elsevier. p. 207.

References

  • Wikipedia contributors. "Covariance". Wikipedia. Retrieved 28 January 2022.