# Conditioning

Beliefs depend on the available information. This idea is formalized in probability theory by conditioning. Conditional probabilities, conditional expectations and conditional distributions are treated on two levels: discrete probabilities and probability density functions. Conditioning leads to a non-random result if the condition is completely specified; otherwise, if the condition is left random, the result of conditioning is also random.

## Conditional Expectation

The conditional expectation of a random variable is another random variable equal to the average of the former over each possible "condition". Conditional expectation is also known as conditional expected value or conditional mean.

The concept of conditional expectation can be nicely illustrated through the following example. Suppose we have daily rainfall data (mm of rain each day) collected by a weather station on every day of the ten-year period from Jan 1, 1990 to Dec 31, 1999. The conditional expectation of daily rainfall given the month of the year is the average of daily rainfall over all days of the ten-year period that fall in a given month. This conditional expectation may be viewed either as a function of each day (so, for example, its value for Mar 3, 1992 is the sum of the daily rainfalls on all days that fall in March during the ten years, divided by the number of such days, which is 310) or as a function of the month alone (so, for example, the value for March equals the value in the previous example).

It is important to note the following.

• The conditional expectation of daily rainfall given that the month is March is not monthly rainfall data; that is, it is not the average of the ten monthly total March rainfalls. That number would be 31 times higher.
• The average daily rainfall in March 1992 is not equal to the conditional expectation of daily rainfall given that the month is March, because restricting to 1992 imposes more conditions than just being in March. This shows that the reasoning "we are in March 1992, so I know we are in March, so the average daily rainfall is the March average daily rainfall" is incorrect. Stated differently, although we use the expression "conditional expectation knowing that we are in March", this really means "conditional expectation knowing nothing other than that we are in March".
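As a minimal numerical sketch of the rainfall example (with synthetic data, since the text's actual measurements are hypothetical), the code below generates ten years of daily rainfall, ignoring leap days so that every year has 365 days, and computes the conditional expectation given the month both as one number per month and as a function of each day:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily rainfall (mm) for the 3650 days of 1990-1999,
# ignoring leap days so every year has 365 days.
days_per_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
months = np.repeat(np.arange(12), days_per_month)   # month index of each day in one year
month_of_day = np.tile(months, 10)                  # month index of each of the 3650 days
rainfall = rng.gamma(shape=0.4, scale=5.0, size=month_of_day.size)

# E[rainfall | month]: one number per month, the average over all days
# of the ten years that fall in that month.
cond_exp_by_month = np.array(
    [rainfall[month_of_day == m].mean() for m in range(12)]
)

# Viewed as a function of the day, the conditional expectation assigns
# to each day the average for that day's month.
cond_exp_per_day = cond_exp_by_month[month_of_day]

# March (m = 2) contributes 31 * 10 = 310 days, as in the text.
print((month_of_day == 2).sum())
```

Averaging `cond_exp_per_day` over all days recovers the overall mean rainfall, a first glimpse of the tower property discussed below.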

## Definition

We will give a (semi-)rigorous treatment of conditional expectation. Suppose that $X$ and $Y$ are random variables; we wish to make sense of the conditional expectation of $X$ given $Y$, normally denoted by

[$] \operatorname{E}[X \mid Y]. [$]

As we have seen above with the rainfall example, we wish to make sense of the expectation of $X$ given the information obtained by observing $Y$. One can also think of this as an estimation problem: estimate the expectation of $X$ given the data point $Y$.

### When $Y$ is Discrete

Suppose that $Y$ satisfies the following:

[$] A_k = \{Y = y_k\},\quad \operatorname{P}(A_k) = p_k \gt 0,\quad \sum_{k}p_k = 1. [$]

If we let

[$] 1_{A_k}(\omega) = \begin{cases} 1 & \text{if}\,\,\, Y(\omega) = y_k \\ 0 & \text{otherwise} \end{cases} [$]

then a reasonable definition for the conditional expectation is

[$] \operatorname{E}[X \mid Y] = \sum_k \frac{\operatorname{E}[X \cdot 1_{A_k}] \cdot 1_{A_k}\left(Y\right)}{p_k}. [$]

To see why this definition makes sense, suppose that $Y$ is observed to equal $y_k$; then

[$] \operatorname{E}[X \mid Y] = \operatorname{E}[X \mid y_k] = \frac{\operatorname{E}[X \cdot 1_{A_k}]}{p_k} [$]

and

[] \begin{align} \frac{\operatorname{E}[X \cdot 1_{A_k}]}{p_k} &\approx \sum_{j=-I\,2^n}^{I\,2^n - 1} (j+1)2^{-n}\, \frac{\operatorname{P}[\{j2^{-n} \lt X \leq (j+1)2^{-n}\} \cap A_k ] }{\operatorname{P}(A_k)} \\ & = \sum_{j=-I\,2^n}^{I\,2^n - 1} (j+1)2^{-n}\, \operatorname{P}[j2^{-n} \lt X \leq (j+1)2^{-n} \mid A_k ] \end{align} []

with $I$ and $n$ sufficiently large (we're basically just splitting the interval $[-I, I]$ into pieces of length $2^{-n}$). The crucial property of conditional expectations is the following:

[$] $$\label{cond-exp-discrete-prop1} \operatorname{E}\left[\operatorname{E}[X \mid Y] \cdot 1_{A_j} \right] = \sum_{k}\frac{\operatorname{E}[X \cdot 1_{A_k}]}{p_k}\,\operatorname{E}\left[1_{A_k} \cdot 1_{A_j}\right] = \operatorname{E}\left[X \cdot 1_{A_j} \right],$$ [$]

since $1_{A_k} \cdot 1_{A_j} = 0$ for $k \neq j$ while $\operatorname{E}[1_{A_j} \cdot 1_{A_j}] = p_j$, so only the $k = j$ term survives.

By the linearity of expectation and \ref{cond-exp-discrete-prop1}, we see that

[$] $$\label{cond-exp-discrete-prop2} \operatorname{E}[\operatorname{E}[X \mid Y] \cdot 1_A ] = \operatorname{E}[X \cdot 1_A]$$ [$]

for any event of the form

[$] A = A_{n_1} \cup \ldots \cup A_{n_j}. [$]

In fact, we will see below that property \ref{cond-exp-discrete-prop2} is the defining property of conditional expectation.
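The discrete formula and its defining property can be checked numerically on a toy sample space (two fair dice, which are not from the text but make the calculation exact): take $X$ to be the total and $Y$ the first die, so that $\operatorname{E}[X \mid Y] = Y + 3.5$.

```python
import itertools
import numpy as np

# Toy sample space: two fair dice, all 36 outcomes equally likely,
# with X = total of the dice and Y = first die.
omega = list(itertools.product(range(1, 7), repeat=2))
prob = np.full(len(omega), 1 / 36)
X = np.array([a + b for a, b in omega], dtype=float)
Y = np.array([a for a, b in omega], dtype=float)

# E[X | Y] = sum_k E[X * 1_{A_k}] * 1_{A_k}(Y) / p_k with A_k = {Y = y_k}.
cond_exp = np.zeros(len(omega))
for y_k in np.unique(Y):
    indicator = (Y == y_k).astype(float)       # 1_{A_k}, outcome by outcome
    p_k = (prob * indicator).sum()             # P(A_k) = 1/6
    cond_exp += (prob * X * indicator).sum() * indicator / p_k

# The defining property E[E[X|Y] * 1_A] = E[X * 1_A] for the event
# A = {Y <= 2}, a union of the events {Y = 1} and {Y = 2}.
in_A = (Y <= 2).astype(float)
lhs = (prob * cond_exp * in_A).sum()
rhs = (prob * X * in_A).sum()
print(lhs, rhs)
```

Here `cond_exp` equals `Y + 3.5` outcome by outcome, since the second die averages 3.5 regardless of the first.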

### When $Y$ is Continuous

In the continuous case, things get a little tricky since the event $Y = y$ has probability zero. To explain further, suppose the only thing we knew about the observation $Y$ is that it was contained in the interval $[a,b]$; how would we define the conditional expectation of $X$ given that $Y$ belongs to this interval? Proceeding as in the discrete case, we set

[$] \operatorname{E}[X \mid A] = \frac{\operatorname{E}[X \cdot 1_{A} ]}{\operatorname{P}(A)}\,,\quad A = \{a\leq Y \leq b \} [$]

provided that $\operatorname{P}(A)\gt0$. One natural approach would be to set

[$] \operatorname{E}[X \mid y] = \lim_{\epsilon \rightarrow 0} \operatorname{E}[X \mid A_{\epsilon}]\, , A_{\epsilon} = \{y -\epsilon \leq Y \leq y + \epsilon\} [$]

provided that the limit exists. If $X$ and $Y$ have a continuous joint density $f_{X,Y}(x,y)$ then we have

[] \begin{align*} \operatorname{E}[X \mid y_0] &= \lim_{\epsilon \rightarrow 0} \frac{\int_{y_0-\epsilon}^{y_0 + \epsilon}\int_{-\infty}^{\infty} x\, f_{X,Y}(x,y) \, dx \,dy}{\int_{y_0-\epsilon}^{y_0 + \epsilon}\int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dx \,dy} \\ &= \frac{\lim_{\epsilon \rightarrow 0} (2\epsilon)^{-1} \int_{y_0-\epsilon}^{y_0 + \epsilon}\int_{-\infty}^{\infty} x\, f_{X,Y}(x,y) \, dx \,dy}{\lim_{\epsilon \rightarrow 0} (2\epsilon)^{-1} \int_{y_0-\epsilon}^{y_0 + \epsilon}\int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dx \,dy} \\ &= \frac{\int_{-\infty}^{\infty} x\, f_{X,Y}(x,y_0) \, dx }{\int_{-\infty}^{\infty} f_{X,Y}(x,y_0) \, dx} \\ &= \frac{\int_{-\infty}^{\infty} x\, f_{X,Y}(x,y_0) \, dx }{f_{Y}(y_0)} \end{align*} []

provided that $f_Y(y_0)\gt 0$. Therefore a reasonable definition for the conditional expectation appears to be

[$] \operatorname{E}[X \mid Y] = \frac{\int_{-\infty}^{\infty} x\, f_{X,Y}(x,Y) \, dx }{f_{Y}(Y)}. [$]

With this definition for the conditional expectation, we can show that property \ref{cond-exp-discrete-prop2} also holds in the continuous case for a certain family of events. More precisely, we have

[$] $$\label{cond-exp-cont-prop} \operatorname{E}\left[ \operatorname{E}[X \mid Y] \cdot 1_{A}\right] = \operatorname{E}\left[X\cdot 1_{A}\right]$$ [$]

for any event $A = \{a \leq Y \leq b \}$.
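The ratio formula can be sanity-checked numerically. As a sketch (the bivariate normal example is ours, not from the text): for a standard bivariate normal with correlation $\rho$, the known closed form is $\operatorname{E}[X \mid Y = y_0] = \rho\, y_0$, and a discretized version of $\int x\, f_{X,Y}(x,y_0)\,dx \,/\, f_Y(y_0)$ reproduces it:

```python
import numpy as np

# Numerical check of E[X | Y = y0] = (∫ x f(x, y0) dx) / f_Y(y0) for a
# standard bivariate normal with correlation rho; closed form: rho * y0.
rho = 0.6
y0 = 1.3

def joint_density(x, y):
    """Standard bivariate normal density with correlation rho."""
    norm = 1.0 / (2 * np.pi * np.sqrt(1 - rho**2))
    quad = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return norm * np.exp(-quad / 2)

# Discretize the x-integrals on a fine grid; the tails beyond |x| = 10
# are negligible for this density.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
numerator = (x * joint_density(x, y0)).sum() * dx    # ∫ x f(x, y0) dx
denominator = joint_density(x, y0).sum() * dx        # f_Y(y0)
cond_exp_at_y0 = numerator / denominator

print(cond_exp_at_y0)   # close to rho * y0 = 0.78
```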

### General Formulation

Now suppose that $Y$ is a general random variable. Properties \ref{cond-exp-discrete-prop2} and \ref{cond-exp-cont-prop} motivate the formal and general definition for the expectation of $X$ given $Y$. More precisely, $\operatorname{E}[X \mid Y]$ is defined as the unique random variable depending on $Y$ satisfying

[$] \operatorname{E}\left[ \operatorname{E}[X \mid Y] \cdot 1_{A}\right] = \operatorname{E}\left[X\cdot 1_{A}\right] [$]

for any event $A = \{a \leq Y \leq b \}$. It can be shown that such a random variable always exists and is unique. The uniqueness is in the almost sure sense: two such random variables differ with probability zero. In particular, it follows that the previous definitions for the conditional expectation are consistent with the formal one presented here.

### Properties

We list some basic properties of conditional expectation (without derivation):

• $\operatorname{E}[aX + b \mid Y] = a\operatorname{E}[X \mid Y] + b$
• $\operatorname{E}[\operatorname{E}[X \mid Y]] = \operatorname{E}[X]$
• $\operatorname{E}[g(Y)\mid Y] = g(Y)$ for any continuous function $g$
• $\operatorname{E}[X \mid Y] = \operatorname{E}[X]$ when $X$ is independent of $Y$
• $\operatorname{E}[X_1 \mid Y] \leq \operatorname{E}[X_2 \mid Y]$ almost surely when $X_1 \leq X_2$ almost surely.
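The tower property $\operatorname{E}[\operatorname{E}[X \mid Y]] = \operatorname{E}[X]$ can be verified exactly on a small discrete example of our own choosing: $Y$ uniform on $\{0,1,2\}$ and, given $Y = y$, $X$ uniform on $\{0,\ldots,y\}$, so $\operatorname{E}[X \mid Y = y] = y/2$.

```python
import numpy as np

# Exact check of E[E[X | Y]] = E[X]: Y is uniform on {0, 1, 2} and,
# given Y = y, X is uniform on {0, ..., y}.
y_values = np.array([0, 1, 2])
p_y = np.full(3, 1 / 3)                      # P(Y = y)

cond_exp_given_y = y_values / 2              # E[X | Y = y] = y / 2

# Tower property: average the conditional expectations over Y.
e_x_via_tower = (p_y * cond_exp_given_y).sum()

# Direct computation of E[X] from the joint distribution
# P(X = x, Y = y) = P(Y = y) / (y + 1).
e_x_direct = sum(
    p * x / (y + 1)
    for y, p in zip(y_values, p_y)
    for x in range(y + 1)
)

print(e_x_via_tower, e_x_direct)   # both 0.5
```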

## Conditional Variance

We can consider a very natural random variable

[$] $$\label{cond-variance} \operatorname{Var}[X \mid Y] = \operatorname{E}\left[(X - \operatorname{E}[X \mid Y])^2 \mid Y \right]$$ [$]

called the conditional variance of $X$ given $Y$.

### Properties

Here are a few basic properties of the conditional variance (without derivation):

• $\operatorname{Var}[cX \mid Y] = c^2 \operatorname{Var}[X \mid Y]$
• $\operatorname{Var}[X \mid Y] = \operatorname{E}[X^2 \mid Y] - \operatorname{E}[X \mid Y]^2$
• $\operatorname{Var}[X] = \operatorname{E}\left[\operatorname{Var}[X \mid Y]\right] + \operatorname{Var}\left[\operatorname{E}[X \mid Y]\right]$
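The last property, the law of total variance, can be checked exactly on the same kind of toy dice space used earlier (our example, not the text's): $X$ the total of two fair dice and $Y$ the first die, for which both $\operatorname{E}[\operatorname{Var}[X \mid Y]]$ and $\operatorname{Var}[\operatorname{E}[X \mid Y]]$ equal $35/12$.

```python
import itertools
import numpy as np

# Exact check of Var[X] = E[Var[X|Y]] + Var[E[X|Y]] on two fair dice
# with X = total and Y = first die (all 36 outcomes equally likely).
outcomes = np.array(list(itertools.product(range(1, 7), repeat=2)))
X = (outcomes[:, 0] + outcomes[:, 1]).astype(float)
Y = outcomes[:, 0]

# Conditional mean and variance of X within each level of Y. Since Y is
# uniform, plain averages over the six levels give the outer E and Var.
cond_mean = np.array([X[Y == y].mean() for y in range(1, 7)])
cond_var = np.array([X[Y == y].var() for y in range(1, 7)])

total_var = X.var()                              # Var[X] = 35/6
decomposition = cond_var.mean() + cond_mean.var()

print(total_var, decomposition)   # both 35/6 = 5.8333...
```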

### Minimizing Mean Square Error

Suppose that $X$ has a finite second moment ($\operatorname{E}[X^2] \lt \infty$) and consider the following minimization problem:

[$] $$\label{least-squares} \min_{Z} \operatorname{E}[(X-Z)^2] \quad \text{over all } Z \text{ depending on } Y.$$ [$]

The solution to \ref{least-squares} can be thought of as the best estimate of $X$ based on the information provided by $Y$. It turns out that the solution to \ref{least-squares} is unique (almost surely) and equals the conditional expectation of $X$ given $Y$:

[$] \operatorname{E}\left[(X-\operatorname{E}[X \mid Y])^2\right] \leq \operatorname{E}\left[(X-Z)^2\right] [$]

for any random variable $Z$ that depends on $Y$.

To be mathematically precise, $Z$ needs to be measurable with respect to the sigma-algebra generated by $Y$. For our purposes, you can think of $Z$ as a suitable (say, continuous) function of $Y$.
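A Monte Carlo sketch makes this concrete, under a hypothetical model of our own: $X = Y^2 + \text{noise}$ with the noise independent of $Y$, so that $\operatorname{E}[X \mid Y] = Y^2$. The conditional mean should achieve a smaller empirical mean square error than both the best constant and the best straight line in $Y$:

```python
import numpy as np

# Hypothetical model: X = Y**2 + noise, noise independent of Y,
# so E[X | Y] = Y**2.
rng = np.random.default_rng(1)
y = rng.normal(size=200_000)
x = y**2 + rng.normal(scale=0.5, size=y.size)

def mse(z):
    """Empirical mean square error of the estimate z for x."""
    return np.mean((x - z) ** 2)

coeffs = np.polyfit(y, x, deg=1)        # least-squares straight line in y
mse_cond = mse(y**2)                    # the conditional expectation
mse_linear = mse(np.polyval(coeffs, y))
mse_const = mse(x.mean())               # ignores Y entirely

print(mse_cond, mse_linear, mse_const)  # mse_cond is the smallest
```

Here `mse_cond` hovers near the noise variance $0.25$, while the linear and constant estimates cannot capture the quadratic dependence and do much worse.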

## Conditional Probability Distribution

Given two jointly distributed random variables $X$ and $Y$, the conditional probability distribution of $X$ given $Y$ is the probability distribution of $X$ when $Y$ is known to be a particular value; in some cases the conditional probabilities may be expressed as functions containing the unspecified value $y$ of $Y$ as a parameter. In case that both $X$ and $Y$ are categorical variables, a conditional probability table is typically used to represent the conditional probability. The conditional distribution contrasts with the marginal distribution of a random variable, which is its distribution without reference to the value of the other variable.

If the conditional distribution of $X$ given $Y$ is a continuous distribution, then its probability density function is known as the conditional density function. The properties of a conditional distribution, such as the moments, are often referred to by corresponding names such as the conditional mean and conditional variance.

More generally, one can refer to the conditional distribution of a subset of a set of more than two variables; this conditional distribution is contingent on the values of all the remaining variables, and if more than one variable is included in the subset then this conditional distribution is the conditional joint distribution of the included variables.

### Relation to conditional expectation

We define the conditional distribution of $X$ given $Y$ using conditional expectation:

[$] F_{X}(x \mid Y) = \operatorname{P}(X \leq x \mid Y) = \operatorname{E}[1_{A} \mid Y]\,,\quad A = \{X \leq x\}. [$]

### When Y is Discrete

When $Y$ is a discrete random variable, the conditional distribution of $X$ given $Y = y$ is given by (see When $Y$ is Discrete):

[] \begin{align} F_{X}(x \mid y) &= \begin{cases} \frac{\operatorname{E}[1_{X\leq x} \cdot 1_{Y=y}]}{\operatorname{P}(Y=y)} & \operatorname{P}(Y=y) \gt 0 \\ 0 & \text{otherwise} \end{cases} \\ &= \begin{cases} \frac{\operatorname{P}(X\leq x \, \cap \, Y = y)}{\operatorname{P}(Y=y)} & \operatorname{P}(Y=y) \gt 0 \\ 0 & \text{otherwise} \end{cases} \end{align} []
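As a numerical sketch of the discrete formula (on our toy dice space, not from the text): with $X$ the total of two fair dice and $Y$ the first die, given $Y = y$ the total is uniform on $\{y+1,\ldots,y+6\}$, so the conditional CDF climbs in steps of $1/6$.

```python
import itertools
import numpy as np

# Conditional CDF via F_X(x | y) = P(X <= x, Y = y) / P(Y = y) on two
# fair dice with X = total and Y = first die.
outcomes = np.array(list(itertools.product(range(1, 7), repeat=2)))
X = outcomes[:, 0] + outcomes[:, 1]
Y = outcomes[:, 0]

def cond_cdf(x, y):
    p_y = np.mean(Y == y)                        # P(Y = y) = 1/6
    return np.mean((X <= x) & (Y == y)) / p_y    # joint over marginal

# Given Y = 2, the total is uniform on {3, ..., 8}.
cdf_given_2 = [cond_cdf(x, 2) for x in range(3, 9)]
print(cdf_given_2)   # 1/6, 2/6, ..., 6/6
```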

### Continuous distributions

Suppose that $X$ and $Y$ have a continuous joint density $f_{X,Y}(x,y)$ then (see When $Y$ is Continuous)

[$] $$\label{cond-prob-continuous} F(x_0 \mid Y) = \frac{\int_{-\infty}^{x_0} \, f_{X,Y}(x,Y) \, dx }{f_{Y}(Y)}.$$ [$]

We also see from \ref{cond-prob-continuous} that the distribution $F(x |Y)$ will have a density function, called the conditional density of $X$ given $Y$, given by

[$] f_{X}(x \mid Y) = \frac{f_{X,Y}(x,Y)}{f_Y(Y)}. [$]

## Bayes' Formulas

### The Joint Distribution

We can express the joint distribution function in terms of conditional and marginal distributions:

[$] F_{X,Y}(x,y) = \int_{-\infty}^{x} F_{Y}(y \mid z ) \, dF_{X}(z) = \int_{-\infty}^{y} F_{X}(x \mid z ) \, dF_{Y}(z). [$]

In particular, we obtain a simple expression for the marginal distribution of $Y$

[$] F_{Y}(y) = \int_{-\infty}^{\infty} F_{Y}(y \mid z ) \, dF_{X}(z) [$]

and for conditional probabilities for events:

[$] $$\label{bayes-cond-prob} \operatorname{P}(X \leq x \mid Y \leq y) = \frac{\int_{-\infty}^x F_Y(y \mid z ) \, dF_{X}(z)}{\int_{-\infty}^{\infty} F_Y(y \mid z ) \, dF_{X}(z)}.$$ [$]

In particular, if $X$ is discrete, taking the values $x_1,\ldots,x_n$, then \ref{bayes-cond-prob} translates into

[$] \operatorname{P}(X \leq x \mid Y \leq y) = \frac{\sum_{x_i \leq x} \operatorname{P}(Y \leq y \mid X = x_i)\,\operatorname{P}(X = x_i)}{\sum_{i=1}^{n}\operatorname{P}(Y \leq y \mid X = x_i)\,\operatorname{P}(X = x_i)} [$]

which is consistent with Bayes' formula presented in conditional probability.
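The discrete form of Bayes' formula can be checked on a toy mixture of our own: $X$ picks one of two biased coins (equally likely), and $Y$ is the number of heads in a single flip of the chosen coin. Each conditional probability is weighted by $\operatorname{P}(X = x_i)$:

```python
# Toy mixture: X picks one of two biased coins, Y counts heads in one flip.
p_x = {0: 0.5, 1: 0.5}        # P(X = x_i)
p_heads = {0: 0.2, 1: 0.7}    # P(Y = 1 | X = x_i)

def p_y_le(y, x):
    """P(Y <= y | X = x) for one Bernoulli flip."""
    return 1.0 if y >= 1 else 1.0 - p_heads[x]

# P(X <= 0 | Y <= 0): weight each conditional by P(X = x_i).
numerator = sum(p_y_le(0, x) * p_x[x] for x in p_x if x <= 0)
denominator = sum(p_y_le(0, x) * p_x[x] for x in p_x)
posterior = numerator / denominator

print(posterior)   # 0.4 / 0.55
```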

### The Marginal and Conditional Densities

If $X$ and $Y$ have a joint density function, then we can express one marginal density as an average of the conditional densities:

[$] f_{X}(x) = \int_{-\infty}^{\infty}f_X(x \mid y ) f_{Y}(y) \, dy. [$]

Furthermore, one conditional density can be expressed in terms of the other as follows:

[$] f_{Y}(y \mid X ) = \frac{f_X(X \mid y) f_{Y}(y)}{f_{X}(X)} = \frac{f_X(X \mid y) f_{Y}(y)}{\int_{-\infty}^{\infty}f_{X}(X \mid y) f_{Y}(y) \, dy}. [$]
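The density form of Bayes' formula can also be verified numerically. As a sketch with a Gaussian example of our own: if $Y \sim N(0,1)$ and $X \mid Y = y \sim N(y, \sigma^2)$, the posterior $Y \mid X = x$ is known to be $N\!\left(x/(1+\sigma^2),\, \sigma^2/(1+\sigma^2)\right)$, and discretizing the formula above reproduces its mean:

```python
import numpy as np

# Bayes for densities: prior f_Y times likelihood f_X(x_obs | y),
# normalized by the integral over y.
sigma2 = 0.5
x_obs = 1.0

y = np.linspace(-8.0, 8.0, 4001)
dy = y[1] - y[0]
prior = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)                    # f_Y(y)
likelihood = (np.exp(-(x_obs - y) ** 2 / (2 * sigma2))
              / np.sqrt(2 * np.pi * sigma2))                      # f_X(x_obs | y)

posterior = prior * likelihood
posterior /= posterior.sum() * dy    # divide by ∫ f_X(x_obs|y) f_Y(y) dy

posterior_mean = (y * posterior).sum() * dy
print(posterior_mean)   # close to x_obs / (1 + sigma2) = 2/3
```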
