# Maximum Likelihood Estimation

In general, for a fixed set of data and underlying statistical model, the method of maximum likelihood selects the set of values of the model parameters that maximizes the likelihood function. Intuitively, this maximizes the "agreement" of the selected model with the observed data, and for discrete random variables it indeed maximizes the probability of the observed data under the resulting distribution.

## Principles

Suppose there is a sample [math]X_1,\ldots,X_n[/math] of [math]n[/math] independent and identically distributed observations, coming from a distribution with an unknown probability density function [math] f_0 [/math]. It is, however, surmised that the function [math]f_0[/math] belongs to a certain family of distributions [math] \{f(\cdot | \theta) : \theta \in \Theta \} [/math] (where [math]\theta[/math] is a vector of parameters for this family), so that [math] f_0 = f(\cdot | \theta_0) [/math]. The value [math]\theta_0[/math] is unknown and is referred to as the *true value* of the parameter vector. It is desirable to find an estimator [math]\hat\theta[/math] which would be as close to the true value [math]\theta_0[/math] as possible. The observations [math]X_i[/math] and the parameter [math]\theta[/math] may each be vectors.

To use the method of maximum likelihood, one first specifies the joint density function for all observations. For an independent and identically distributed sample, this joint density function is

[[math]] f(x_1, x_2, \ldots, x_n \mid \theta) = f(x_1 \mid \theta) \, f(x_2 \mid \theta) \cdots f(x_n \mid \theta). [[/math]]

Now we look at this function from a different perspective by considering the observed values [math]X_1,\ldots,X_n[/math] to be fixed "parameters" of this function, whereas [math]\theta[/math] will be the function's variable, allowed to vary freely; this function will be called the likelihood:

[[math]] L_n(\theta) = L(\theta \,; x_1, \ldots, x_n) = \prod_{i=1}^n f(x_i \mid \theta). [[/math]]

In practice it is often more convenient to work with the logarithm of the likelihood function, called the **log-likelihood**:

[[math]] \ell_n(\theta) = \ln L_n(\theta) = \sum_{i=1}^n \ln f(x_i \mid \theta), [[/math]]

or the average log-likelihood:

[[math]] \hat\ell_n(\theta) = \frac{1}{n} \ell_n(\theta). [[/math]]
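As a small, hypothetical illustration (the coin-flip data below are invented for this sketch), the log-likelihood of a Bernoulli sample can be evaluated directly in Python:

```python
import math

def bernoulli_log_likelihood(sample, theta):
    """Log-likelihood sum_i ln f(x_i | theta) for i.i.d. Bernoulli data,
    with f(1 | theta) = theta and f(0 | theta) = 1 - theta."""
    return sum(math.log(theta if x == 1 else 1.0 - theta) for x in sample)

sample = [1, 0, 1, 1, 0, 1, 1, 1]   # hypothetical coin flips: 6 heads in 8
print(bernoulli_log_likelihood(sample, 0.75))
print(bernoulli_log_likelihood(sample, 0.50))
```

Evaluating at [math]\theta = 0.75[/math] (the sample mean) gives a higher log-likelihood than at [math]\theta = 0.5[/math], in line with the idea of maximizing agreement with the observed data.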

The method of maximum likelihood estimates [math]\theta_0[/math], the true parameter of the distribution from which the sample is drawn, by finding a value of [math]\theta[/math] that maximizes [math]L_n(\theta)[/math]. This method of estimation defines a **maximum-likelihood estimator** (**MLE**) of [math]\theta_0[/math]:

[[math]] \hat\theta \in \arg\max_{\theta \in \Theta} L_n(\theta \,; x_1, \ldots, x_n), [[/math]]

if a maximum exists. The MLE is the same regardless of whether we maximize the likelihood or the log-likelihood function, since the logarithm is monotonically increasing.

For many models, a maximum likelihood estimator can be found as an explicit function of the observed data. For many other models, however, no closed-form solution to the maximization problem is known or available, and an MLE has to be found numerically. For some problems, there may be multiple estimates that maximize the likelihood. For other problems, no maximum likelihood estimate exists (meaning that the log-likelihood function increases without attaining the supremum value).
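When no closed form is available, the maximization is carried out numerically. As a sketch (the exponential model, data, and helper names here are illustrative, not from the text), one can minimize the negative log-likelihood with a simple derivative-free search and compare against the exponential distribution's known closed-form MLE [math]\hat\lambda = 1/\bar{x}[/math]:

```python
import math

def neg_log_likelihood(lam, data):
    # Exponential density f(x | lam) = lam * exp(-lam * x) for x >= 0,
    # so ln f(x | lam) = ln(lam) - lam * x.
    return -sum(math.log(lam) - lam * x for x in data)

def ternary_search_min(f, lo, hi, iters=200):
    # Derivative-free minimizer; valid here because the exponential
    # negative log-likelihood is convex (hence unimodal) in lam.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

data = [0.5, 1.2, 0.3, 2.0, 0.9]   # hypothetical observations
lam_hat = ternary_search_min(lambda l: neg_log_likelihood(l, data), 1e-6, 10.0)
closed_form = len(data) / sum(data)   # analytic MLE: 1 / sample mean
print(lam_hat, closed_form)
```

The numerical and closed-form answers agree to high precision, illustrating that for this model the optimization merely reproduces the explicit solution.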

In the exposition above, it is assumed that the data are independent and identically distributed. The method can, however, be applied in a broader setting, as long as it is possible to write the joint density function [math]f(x_1,\ldots,x_n | \theta)[/math] and the parameter [math]\theta[/math] has a finite dimension that does not depend on the sample size [math]n[/math].

## Properties

Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter-value.^{[1]} However, like other estimation methods, maximum-likelihood estimation possesses a number of attractive limiting properties:

- Consistency: the sequence of MLEs converges in probability to the value being estimated.
- Asymptotic normality: as the sample size increases, the distribution of the MLE tends to the Gaussian distribution with mean [math]\theta_0[/math] and covariance matrix equal to the inverse of the Fisher information matrix.
- Efficiency, i.e., it achieves the Cramér–Rao lower bound when the sample size tends to infinity. This means that no consistent estimator has lower asymptotic mean squared error than the MLE (or other estimators attaining this bound).
- Second-order efficiency after correction for bias.

Under certain conditions, the maximum likelihood estimator is consistent: given a sufficiently large number of observations [math]n[/math], it is possible to find the value of [math]\theta_0[/math] with arbitrary precision. In mathematical terms this means that as [math]n[/math] goes to infinity the estimator [math]\hat\theta[/math] converges in probability to the true value:

[[math]] \hat\theta \ \xrightarrow{p}\ \theta_0. [[/math]]

Under slightly stronger conditions, the estimator converges almost surely (or *strongly*) to [math]\theta_0[/math]:

[[math]] \hat\theta \ \xrightarrow{\text{a.s.}}\ \theta_0. [[/math]]
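Consistency can be illustrated by simulation. In this hypothetical sketch, Bernoulli samples of increasing size are drawn from a known [math]\theta_0[/math], and the MLE (the sample mean) approaches it:

```python
import random

random.seed(0)
theta0 = 0.3   # hypothetical true Bernoulli parameter

def mle(n):
    # For i.i.d. Bernoulli data the MLE is simply the sample mean.
    return sum(random.random() < theta0 for _ in range(n)) / n

errors = [abs(mle(n) - theta0) for n in (100, 10_000, 1_000_000)]
print(errors)   # the estimation error tends to shrink as n grows
```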

### Asymptotic normality

In a wide range of situations, maximum likelihood parameter estimates exhibit asymptotic normality: they equal the true parameter plus a random error that is approximately normal (given sufficient data), with variance decaying as [math]1/n[/math]:

[[math]] \sqrt{n} \left( \hat\theta - \theta_0 \right) \ \xrightarrow{d}\ N \left( 0,\, I(\theta_0)^{-1} \right), [[/math]]

where [math]N[/math] denotes the Normal distribution and [math]I(\theta_0)[/math] is the *Fisher information* matrix.
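A simulation sketch of this claim (all parameters hypothetical): for Bernoulli observations the Fisher information is [math]I(\theta) = 1/(\theta(1-\theta))[/math], so the theorem predicts that [math]\sqrt{n}(\hat\theta - \theta_0)[/math] has approximate variance [math]\theta_0(1-\theta_0)[/math]:

```python
import math
import random
import statistics

random.seed(1)
theta0, n, reps = 0.3, 2_000, 1_000

zs = []
for _ in range(reps):
    # MLE of a Bernoulli parameter is the sample mean.
    theta_hat = sum(random.random() < theta0 for _ in range(n)) / n
    zs.append(math.sqrt(n) * (theta_hat - theta0))

# Variance predicted by asymptotic normality: 1 / I(theta0) = theta0 * (1 - theta0)
predicted = theta0 * (1 - theta0)
print(statistics.pvariance(zs), predicted)   # the two should be close
```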

## Fisher Information

The **Fisher information** is a way of measuring the amount of information that an observable random variable [math]X[/math] carries about an unknown parameter [math]\theta[/math] of a distribution that models [math]X[/math]. Formally, it is the variance of the score, or the expected value of the observed information. The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates.

### Definition

The probability function for [math]X[/math], which is also the likelihood function for [math]\theta[/math], is a function [math]f(X;\theta)[/math]; it is the probability mass (or probability density) of the random variable [math]X[/math] conditional on the value of [math]\theta[/math]. The partial derivative with respect to [math]\theta[/math] of the natural logarithm of the likelihood function is called the **score**. Under certain regularity conditions,^{[2]} it can be shown that the first moment of the score is 0.

The second moment of the score is called the Fisher information:

[[math]] \mathcal{I}(\theta) = \operatorname{E} \left[ \left. \left( \frac{\partial}{\partial\theta} \ln f(X;\theta) \right)^2 \right| \theta \right], [[/math]]

where, for any given value of [math]\theta[/math], the expression E[...|[math]\theta[/math]] denotes the conditional expectation over values for [math]X[/math] with respect to the probability function [math]f(x;\theta)[/math] given [math]\theta[/math]. Note that [math]0 \leq \mathcal{I}(\theta) \lt \infty[/math]. A random variable carrying high Fisher information implies that the absolute value of the score is often high. The Fisher information is not a function of a particular observation, as the random variable [math]X[/math] has been averaged out. Since the expectation of the score is zero, the Fisher information is also the variance of the score.
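For a concrete check of this definition, the Bernoulli distribution is small enough to compute the score's moments exactly by summing over [math]x \in \{0, 1\}[/math]; this sketch confirms that the mean of the score is zero and the second moment equals the analytic value [math]1/(\theta(1-\theta))[/math]:

```python
def fisher_information_bernoulli(theta):
    """Fisher information from the definition: the first and second
    moments of the score, taken over X in {0, 1}."""
    def score(x):
        # d/dtheta of ln f(x; theta), with f(1; theta) = theta, f(0; theta) = 1 - theta
        return x / theta - (1 - x) / (1 - theta)
    probs = {0: 1 - theta, 1: theta}
    mean_score = sum(p * score(x) for x, p in probs.items())
    second_moment = sum(p * score(x) ** 2 for x, p in probs.items())
    return mean_score, second_moment

mean, info = fisher_information_bernoulli(0.3)
print(mean)   # the score has expectation (essentially) zero
print(info)   # matches the analytic value 1 / (theta * (1 - theta))
```

Because the mean of the score is zero, the second moment returned here is also its variance, as the text notes.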

If [math]\ln f(x;\theta)[/math] is twice differentiable with respect to [math]\theta[/math], then, under certain regularity conditions, the Fisher information may also be written as^{[3]}

[[math]] \mathcal{I}(\theta) = -\operatorname{E} \left[ \left. \frac{\partial^2}{\partial\theta^2} \ln f(X;\theta) \right| \theta \right]. [[/math]]

Thus, the Fisher information is the negative of the expectation of the second derivative with respect to [math]\theta[/math] of the natural logarithm of [math]f[/math]. Information may therefore be seen as a measure of the "curvature" of the support curve near the maximum likelihood estimate of [math]\theta[/math]. A "blunt" support curve (one with a shallow maximum) has a small negative expected second derivative, and thus low information; a sharp one has a large negative expected second derivative, and thus high information.

Information is additive, in that the information yielded by two independent experiments is the sum of the information from each experiment separately:

- [[math]] \mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_Y(\theta). [[/math]]

This result follows from the elementary fact that if random variables are independent, the variance of their sum is the sum of their variances. In particular, the information in a random sample of size [math]n[/math] is [math]n[/math] times that in a sample of size 1, when observations are independent and identically distributed.

### Reparametrization

The Fisher information depends on the parametrization of the problem. If [math]\theta[/math] and [math]\eta[/math] are two scalar parametrizations of an estimation problem, and [math]\theta[/math] is a continuously differentiable function of [math]\eta[/math], then

[[math]] \mathcal{I}_\eta(\eta) = \mathcal{I}_\theta(\theta(\eta)) \left( \frac{d\theta}{d\eta} \right)^2, [[/math]]

where [math]{\mathcal I}_\eta[/math] and [math]{\mathcal I}_\theta[/math] are the Fisher information measures of [math]\eta[/math] and [math]\theta[/math], respectively.^{[4]}
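As a worked check of this rule (using the exponential distribution as a hypothetical example): in the rate parametrization [math]\mathcal{I}_\theta(\theta) = 1/\theta^2[/math], and reparametrizing by the mean [math]\eta = 1/\theta[/math] gives [math]\theta(\eta) = 1/\eta[/math] with [math]d\theta/d\eta = -1/\eta^2[/math], so the formula yields [math]\mathcal{I}_\eta(\eta) = \eta^2 \cdot 1/\eta^4 = 1/\eta^2[/math], matching the direct computation in the mean parametrization:

```python
def info_rate(lam):
    # Fisher information of Exp(rate=lam) in the rate parametrization: 1 / lam^2
    return 1.0 / lam ** 2

def info_mean_via_formula(eta):
    # theta(eta) = 1/eta, so dtheta/deta = -1/eta^2; apply the reparametrization rule
    theta = 1.0 / eta
    dtheta_deta = -1.0 / eta ** 2
    return info_rate(theta) * dtheta_deta ** 2

eta = 2.5   # hypothetical mean parameter
print(info_mean_via_formula(eta))   # equals 1 / eta^2, the direct computation
print(1.0 / eta ** 2)
```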

## Notes

- Pfanzagl (1994, p. 206)
- Suba Rao. "Lectures on statistical inference" (PDF).
- Lehmann & Casella, eq. (2.5.16).
- Lehmann & Casella, eq. (2.5.11).

## References

- Wikipedia contributors. "Maximum likelihood estimation". *Wikipedia*. Retrieved 31 July 2021.