Maximum Likelihood Estimation

In general, for a fixed set of data and underlying statistical model, the method of maximum likelihood selects the set of values of the model parameters that maximizes the likelihood function. Intuitively, this maximizes the "agreement" of the selected model with the observed data, and for discrete random variables it indeed maximizes the probability of the observed data under the resulting distribution.

Principles

Suppose there is a sample [math]X_1,\ldots,X_n[/math] of [math]n[/math] independent and identically distributed observations, coming from a distribution with an unknown probability density function [math] f_0 [/math]. It is surmised, however, that the function [math]f_0[/math] belongs to a certain family of distributions [math] \{f(\cdot | \theta) : \theta \in \Theta \} [/math] (where [math]\theta[/math] is a vector of parameters for this family), so that [math] f_0 = f(\cdot | \theta_0) [/math]. The value [math]\theta_0[/math] is unknown and is referred to as the true value of the parameter vector. It is desirable to find an estimator [math]\hat\theta[/math] which would be as close to the true value [math]\theta_0[/math] as possible. The observed variables [math]X_i[/math] and the parameter [math]\theta[/math] may each be vectors.

To use the method of maximum likelihood, one first specifies the joint density function for all observations. For an independent and identically distributed sample, this joint density function is

[[math]] \begin{equation} f(x_1,x_2,\ldots,x_n\mid\theta) = f(x_1\mid \theta)\times f(x_2\mid\theta) \times \cdots \times f(x_n\mid \theta). \end{equation} [[/math]]

Now we look at this function from a different perspective by considering the observed values [math]X_1,\ldots,X_n [/math] to be fixed "parameters" of this function, whereas [math]\theta[/math] will be the function's variable and allowed to vary freely; this function will be called the likelihood:

[[math]] \mathcal{L}(\theta\,;X_1,\ldots,X_n) = f(X_1,\ldots,X_n\mid\theta) = \prod_{i=1}^n f(X_i\mid\theta). [[/math]]

In practice it is often more convenient to work with the logarithm of the likelihood function, called the log-likelihood:

[[math]] \ln\mathcal{L}(\theta\,;\,X_1,\ldots,X_n) = \sum_{i=1}^n \ln f(X_i\mid\theta), [[/math]]

or the average log-likelihood:

[[math]] L_n(\theta) = \frac1n \ln\mathcal{L}. [[/math]]

The method of maximum likelihood estimates [math]\theta_0[/math], the true parameter of the distribution from which the sample is drawn, by finding a value of [math]\theta[/math] that maximizes [math]L_n(\theta)[/math]. This method of estimation defines a maximum-likelihood estimator (MLE) of [math]\theta_0[/math]:

[[math]] \begin{equation} \{ \hat\theta_\mathrm{mle}\} \subseteq \{ \underset{\theta\in\Theta}{\operatorname{arg\,max}}\ L_n(\theta\,;\,x_1,\ldots,x_n) \}, \end{equation} [[/math]]

if a maximum exists. The MLE is the same regardless of whether we maximize the likelihood or the log-likelihood function, since the logarithm is a monotonically increasing function.
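As a simple illustration (not part of the original exposition), consider independent and identically distributed observations from an exponential distribution with density [math]f(x\mid\theta) = \theta e^{-\theta x}[/math]. The average log-likelihood and its maximizer are

[[math]] L_n(\theta) = \frac1n \sum_{i=1}^n \ln\!\left(\theta e^{-\theta X_i}\right) = \ln\theta - \theta \bar X_n, \qquad \frac{\mathrm{d}}{\mathrm{d}\theta} L_n(\theta) = \frac1\theta - \bar X_n = 0 \quad\Longrightarrow\quad \hat\theta_\mathrm{mle} = \frac{1}{\bar X_n}, [[/math]]

where [math]\bar X_n[/math] is the sample mean; the second derivative [math]-1/\theta^2[/math] is negative, so this stationary point is indeed a maximum.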

For many models, a maximum likelihood estimator can be found as an explicit function of the observed data. For many other models, however, no closed-form solution to the maximization problem is known or available, and an MLE has to be found numerically. For some problems, there may be multiple estimates that maximize the likelihood. For other problems, no maximum likelihood estimate exists (meaning that the log-likelihood function increases without attaining the supremum value).
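The following sketch illustrates both situations for a normal model, comparing the closed-form MLEs (the sample mean and the 1/n sample variance) with a numerical maximization of the log-likelihood. It assumes NumPy and SciPy are available; the simulated data and starting values are purely illustrative.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
# Simulated i.i.d. observations (illustrative values only).
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=500)
# Closed-form MLEs for the normal model: sample mean and 1/n sample variance.
mu_hat = x.mean()
sigma_hat = np.sqrt(np.mean((x - mu_hat) ** 2))
# Numerical MLE: minimize the negative log-likelihood over (mu, log sigma);
# parametrizing by log sigma keeps the scale parameter positive.
def neg_log_likelihood(params):
    mu, log_sigma = params
    return -np.sum(norm.logpdf(x, loc=mu, scale=np.exp(log_sigma)))
result = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="BFGS")
mu_num, sigma_num = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)   # closed-form estimates
print(mu_num, sigma_num)   # numerical estimates; should agree closely

The two sets of estimates agree up to the tolerance of the optimizer, as the equivalence between analytic and numerical maximization of the same log-likelihood predicts.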

In the exposition above, it is assumed that the data are independent and identically distributed. The method can be applied however to a broader setting, as long as it is possible to write the joint density function [math]f(x_1,\ldots,x_n | \theta)[/math] and its parameter [math]\theta[/math] has a finite dimension which does not depend on the sample size [math]n[/math].

Properties

Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter-value.[1] However, like other estimation methods, maximum-likelihood estimation possesses a number of attractive limiting properties:

  • Consistency: the sequence of MLEs converges in probability to the value being estimated.
  • Asymptotic normality: as the sample size increases, the distribution of the MLE tends to the Gaussian distribution with mean [math]\theta_0[/math] and covariance matrix equal to the inverse of the Fisher information matrix.
  • Efficiency, i.e., it achieves the Cramér–Rao lower bound when the sample size tends to infinity. This means that no consistent estimator has lower asymptotic mean squared error than the MLE (or other estimators attaining this bound).
  • Second-order efficiency after correction for bias.

Under certain conditions, the maximum likelihood estimator is consistent. Consistency means that, given a sufficiently large number of observations [math]n[/math], the value of [math]\theta_0[/math] can be recovered with arbitrary precision. In mathematical terms this means that as [math]n[/math] goes to infinity the estimator [math]\hat\theta[/math] converges in probability to the true value:

[[math]] \hat\theta_\mathrm{mle}\ \xrightarrow{p}\ \theta_0. [[/math]]

Under slightly stronger conditions, the estimator converges almost surely (or strongly) to [math]\theta_0[/math]:

[[math]] \hat\theta_\mathrm{mle}\ \xrightarrow{\text{a.s.}}\ \theta_0. [[/math]]
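A small simulation can make consistency concrete. The sketch below, which assumes NumPy and uses an illustrative exponential model with true rate [math]\theta_0 = 2[/math], shows the estimation error of the MLE [math]\hat\theta = 1/\bar X_n[/math] shrinking as the sample size grows.

import numpy as np
rng = np.random.default_rng(1)
theta0 = 2.0  # illustrative true rate
for n in [10, 100, 1_000, 10_000, 100_000]:
    x = rng.exponential(scale=1.0 / theta0, size=n)  # Exponential(rate theta0) sample
    theta_hat = 1.0 / x.mean()                       # MLE of the rate
    print(n, abs(theta_hat - theta0))                # error shrinks as n grows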

Asymptotic normality

In a wide range of situations, maximum likelihood parameter estimates exhibit asymptotic normality; that is, they equal the true parameter value plus a random error that is approximately normal given sufficient data, and whose variance decays as [math]1/n[/math]:

[[math]] \sqrt{n}\cdot(\hat\theta_\mathrm{mle} - \theta_0)\ \xrightarrow{D}\ N (0,I( \theta_0) ^{-1}) [[/math]]

where [math]N[/math] denotes the Normal distribution and [math]I(\theta_0)[/math] is the Fisher information matrix.
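The following simulation sketch (assuming NumPy, with an illustrative exponential model) checks this statement: for an exponential distribution with rate [math]\theta_0[/math] the Fisher information is [math]I(\theta_0) = 1/\theta_0^2[/math], so [math]\sqrt{n}(\hat\theta_\mathrm{mle} - \theta_0)[/math] should be approximately normal with mean 0 and variance [math]\theta_0^2[/math].

import numpy as np
rng = np.random.default_rng(2)
theta0, n, replications = 2.0, 2_000, 5_000  # illustrative values
errors = []
for _ in range(replications):
    x = rng.exponential(scale=1.0 / theta0, size=n)  # Exponential(rate theta0) sample
    theta_hat = 1.0 / x.mean()                       # MLE of the rate
    errors.append(np.sqrt(n) * (theta_hat - theta0))
errors = np.array(errors)
print(errors.mean())  # close to 0
print(errors.var())   # close to theta0**2 = 1 / I(theta0) = 4.0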

Fisher Information

The Fisher information is a way of measuring the amount of information that an observable random variable [math]X[/math] carries about an unknown parameter [math]\theta[/math] of a distribution that models [math]X[/math]. Formally, it is the variance of the score, or the expected value of the observed information. The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates.

Definition

The Fisher information is a way of measuring the amount of information that an observable random variable [math]X[/math] carries about an unknown parameter [math]\theta[/math] upon which the probability of [math]X[/math] depends. The probability function for [math]X[/math], which is also the likelihood function for [math]\theta[/math], is a function [math]f(X;\theta)[/math]; it is the probability mass (or probability density) of the random variable [math]X[/math] conditional on the value of [math]\theta[/math]. The partial derivative with respect to [math]\theta[/math] of the natural logarithm of the likelihood function is called the score.
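For instance (an illustrative example not in the original text), for a single Bernoulli observation with success probability [math]\theta[/math],

[[math]] f(x;\theta) = \theta^x (1-\theta)^{1-x}, \qquad \frac{\partial}{\partial\theta} \log f(x;\theta) = \frac{x}{\theta} - \frac{1-x}{1-\theta}. [[/math]]

Taking expectations with [math]\operatorname{E}[X] = \theta[/math] gives [math]\theta/\theta - (1-\theta)/(1-\theta) = 0[/math], in line with the proposition below.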

Proposition (Expected value of score)

Under certain regularity conditions,[2] the first moment of the score is 0.

Show Proof

[[math]] \begin{align*} \operatorname{E} \left[\left. \frac{\partial}{\partial\theta} \log f(X;\theta)\right|\theta \right] &= \operatorname{E} \left[\left. \frac{\frac{\partial}{\partial\theta} f(X;\theta)}{f(X; \theta)}\right|\theta \right] \\ &= \int \frac{\frac{\partial}{\partial\theta} f(x;\theta)}{f(x; \theta)} f(x;\theta)\; \mathrm{d}x \\ &= \int \frac{\partial}{\partial\theta} f(x;\theta)\; \mathrm{d}x \\ &= \frac{\partial}{\partial\theta} \int f(x; \theta)\; \mathrm{d}x \\ &= \frac{\partial}{\partial\theta} \; 1 \\ &= 0. \end{align*} [[/math]]

The second moment is called the Fisher information:

[[math]] \mathcal{I}(\theta)=\operatorname{E} \left[\left. \left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2\right|\theta \right] = \int \left(\frac{\partial}{\partial\theta} \log f(x;\theta)\right)^2 f(x; \theta)\; \mathrm{d}x\,, [[/math]]

where, for any given value of [math]\theta[/math], the expression E[...|[math]\theta[/math]] denotes the conditional expectation over values for [math]X[/math] with respect to the probability function [math]f(x;\theta)[/math] given [math]\theta[/math]. Note that [math]0 \leq \mathcal{I}(\theta) \lt \infty[/math]. A random variable carrying high Fisher information implies that the absolute value of the score is often high. The Fisher information is not a function of a particular observation, as the random variable [math]X[/math] has been averaged out. Since the expectation of the score is zero, the Fisher information is also the variance of the score.
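Continuing the Bernoulli illustration (again, not part of the original text), the Fisher information of a single observation with success probability [math]\theta[/math] is

[[math]] \mathcal{I}(\theta) = \operatorname{E}\left[\left(\frac{X}{\theta} - \frac{1-X}{1-\theta}\right)^2\right] = \theta\cdot\frac{1}{\theta^2} + (1-\theta)\cdot\frac{1}{(1-\theta)^2} = \frac{1}{\theta(1-\theta)}. [[/math]]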

Proposition

If [math]\log f(x;\theta)[/math] is twice differentiable with respect to [math]\theta[/math], then, under certain regularity conditions, the Fisher information may also be written as[3]

[[math]] \mathcal{I}(\theta) = - \operatorname{E} \left[\left. \frac{\partial^2}{\partial\theta^2} \log f(X;\theta)\right|\theta \right]\,. [[/math]]

Show Proof

[[math]] \frac{\partial^2}{\partial\theta^2} \log f(X;\theta) = \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)} \;-\; \left( \frac{\frac{\partial}{\partial\theta} f(X;\theta)}{f(X; \theta)} \right)^2 = \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)} \;-\; \left( \frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2 [[/math]]
and

[[math]] \operatorname{E} \left[\left. \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)}\right|\theta \right] = \int \frac{\frac{\partial^2}{\partial\theta^2} f(x;\theta)}{f(x; \theta)} f(x;\theta)\; \mathrm{d}x = \int \frac{\partial^2}{\partial\theta^2} f(x;\theta)\; \mathrm{d}x = \frac{\partial^2}{\partial\theta^2} \int f(x; \theta)\; \mathrm{d}x = \frac{\partial^2}{\partial\theta^2} \; 1 = 0. [[/math]]

Thus, the Fisher information is the negative of the expectation of the second derivative with respect to [math]\theta[/math] of the natural logarithm of [math]f[/math]. Information may be seen to be a measure of the "curvature" of the support curve near the maximum likelihood estimate of [math]\theta[/math]. A "blunt" support curve (one with a shallow maximum) would have a low negative expected second derivative, and thus low information; while a sharp one would have a high negative expected second derivative and thus high information.
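A quick Monte Carlo check of this equivalence is sketched below for the Bernoulli model used in the examples above; it assumes NumPy, and the parameter value is illustrative. The variance of the score and the negative mean of the second derivative of the log-density both estimate [math]1/(\theta(1-\theta))[/math].

import numpy as np
rng = np.random.default_rng(3)
theta = 0.3  # illustrative parameter value
x = rng.binomial(n=1, p=theta, size=200_000).astype(float)
score = x / theta - (1.0 - x) / (1.0 - theta)                     # d/dtheta log f(x;theta)
second_derivative = -x / theta**2 - (1.0 - x) / (1.0 - theta)**2  # d^2/dtheta^2 log f(x;theta)
print(score.var())                 # ~ 1 / (theta * (1 - theta)) ≈ 4.76
print(-second_derivative.mean())   # ~ the same value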

Information is additive, in that the information yielded by two independent experiments is the sum of the information from each experiment separately:

[[math]] \mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_Y(\theta). [[/math]]

This result follows from the elementary fact that if random variables are independent, the variance of their sum is the sum of their variances. In particular, the information in a random sample of size [math]n[/math] is [math]n[/math] times that in a sample of size 1, when observations are independent and identically distributed.
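In the Bernoulli illustration above, for example, a sample of [math]n[/math] independent trials carries

[[math]] \mathcal{I}_n(\theta) = n\,\mathcal{I}(\theta) = \frac{n}{\theta(1-\theta)}. [[/math]]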

Reparametrization

The Fisher information depends on the parametrization of the problem. If [math]\theta[/math] and [math]\eta[/math] are two scalar parametrizations of an estimation problem, and [math]\theta[/math] is a continuously differentiable function of [math]\eta[/math], then

[[math]]{\mathcal I}_\eta(\eta) = {\mathcal I}_\theta(\theta(\eta)) \left( \frac{{\mathrm d} \theta}{{\mathrm d} \eta} \right)^2[[/math]]

where [math]{\mathcal I}_\eta[/math] and [math]{\mathcal I}_\theta[/math] are the Fisher information measures of [math]\eta[/math] and [math]\theta[/math], respectively.[4]
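As an illustration (not drawn from the cited source), take [math]\theta[/math] to be the rate of an exponential distribution, for which [math]{\mathcal I}_\theta(\theta) = 1/\theta^2[/math], and reparametrize by the mean [math]\eta = 1/\theta[/math], so that [math]\theta(\eta) = 1/\eta[/math]. Then

[[math]] {\mathcal I}_\eta(\eta) = {\mathcal I}_\theta\!\left(\tfrac{1}{\eta}\right) \left( \frac{{\mathrm d} \theta}{{\mathrm d} \eta} \right)^2 = \eta^2 \cdot \frac{1}{\eta^4} = \frac{1}{\eta^2}, [[/math]]

which agrees with computing the Fisher information in the mean parametrization directly.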

References

  1. Pfanzagl (1994, p. 206)
  2. Suba Rao. "Lectures on statistical inference" (PDF).
  3. Lehmann & Casella, eq. (2.5.16).
  4. Lehmann & Casella, eq. (2.5.11).
