# Statistical Models

## Mack-Method

The Mack chain ladder method is a statistical method to estimate developmental factors in the chain ladder method. The method assumes the following:

• Distinct rows of the array/matrix $C_{i,j}$ are independent.
• $\operatorname{E}[C_{i,j+1} | C_{i,0},\ldots,C_{i,j}] = f_j C_{i,j}$ with $f_j$ a constant.
• $\operatorname{Var}[C_{i,j+1} | C_{i,0},\ldots,C_{i,j}] = \sigma_{j}^2 C_{i,j}$ with $\sigma_j$ a constant.

The goal of the Mack-method is to estimate the factors $f_j$ using the observable $C_{i,j}$. The estimators, denoted $\hat{f}_j$, then become selected age-to-age factors. The estimators are defined as follows:

[$] \hat{f}_j = \frac{\sum_{i=0}^{I-j-1}C_{i,j+1}}{\sum_{i=0}^{I-j-1}C_{i,j}}. [$]

They have the following desirable properties:

• $\hat{f}_j$ is an unbiased estimator for $f_j$: $\operatorname{E}[\hat{f}_j] = f_j$.
• The estimator $\hat{f}_j$ is a minimum variance estimator in the following sense:

[$] \hat{f}_j = \underset{X \in S_j}{\operatorname{argmin}} \operatorname{Var}[ X \mid A_j ],\quad S_j = \Big\{\sum_{i=0}^{I-j-1}w_i\, C_{i,j+1}/C_{i,j} \;\Big|\; \sum_{i=0}^{I-j-1}w_i = 1\Big\} [$]

with $A_j = \cup_{i=0}^{I-j}\{C_{i,0},\ldots,C_{i,j}\}$ the claims information available through development period $j$.
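As a minimal numerical sketch, the volume-weighted factors $\hat{f}_j$ can be computed directly from a hypothetical cumulative triangle, with `np.nan` marking the unobserved lower cells:

```python
import numpy as np

# Hypothetical 3x3 cumulative claims triangle C[i, j]: rows are accident
# years i, columns are development periods j; np.nan marks unobserved cells.
C = np.array([
    [100.0, 150.0, 175.0],
    [110.0, 170.0, np.nan],
    [120.0, np.nan, np.nan],
])

def mack_factors(C):
    """Volume-weighted age-to-age factors: sum of column j+1 over sum of
    column j, restricted to rows where both cells are observed."""
    J = C.shape[1] - 1
    f = np.empty(J)
    for j in range(J):
        rows = ~np.isnan(C[:, j + 1])   # accident years with both cells known
        f[j] = C[rows, j + 1].sum() / C[rows, j].sum()
    return f

f_hat = mack_factors(C)
print(f_hat)  # [320/210, 175/150]
```

The masking logic extends to triangles of any size, since each factor only uses the accident years for which both development periods are observed.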

## Bühlmann-Straub Credibility Model

Recalling the Bornhuetter-Ferguson method, the expected ultimate claim $\mu_i$ for accident year $i$ can be unknown. In the Bühlmann-Straub Credibility model, we estimate $\mu_i$ using Bühlmann-Straub credibility estimates applied to the developmental triangle $C_{i,j}$. To implement this method, we recall the Bühlmann-Straub credibility model assumptions when applied to the incremental claim triangle $X_{i,j} = C_{i,j+1}-C_{i,j}$:

1. $\operatorname{E}[X_{i,j}\,|\,\Theta_i] = \gamma_j\mu(\Theta_i)$
2. $\operatorname{Var}[X_{i,j}\,|\,\Theta_i] = \gamma_j\sigma^2(\Theta_i)$
3. $\Theta_i$ are independent and identically distributed
4. Conditional on $\Theta_i$, the $X_{i,j}$, $j = 0,\ldots,J$, are independent
5. $\sum_{j}\gamma_j = 1$

The $\gamma_j$ can be interpreted as relative exposure levels that depend on the developmental year $j$. In order to implement this method, we need to specify the relative exposure levels in advance or estimate them using, say, [[guide:|the standard estimation techniques]].

Notice that because the $\Theta_i$ are assumed to be identically distributed, the unconditional expected ultimate claim amount is the same from accident year to accident year. The Bühlmann-Straub Credibility model generally allows the exposure levels to depend on both indices $i,j$. For stochastic reserving, we typically assume that the relative exposure levels $\gamma_j$ are the same from accident year to accident year, but the base exposure level can vary by accident year. A typical base exposure is the earned premium for the accident year: if $P_i$ denotes the earned premium for accident year $i$, then we scale the data to $C_{i,j}/P_i$ to obtain claims per unit of exposure. The estimate of the ultimate claim for accident year $i$ is then simply $P_i$ multiplied by the ultimate claim estimate obtained from the scaled data.

**Bühlmann-Straub Credibility Model**
1. Estimate the developmental factors $f_j$ using the volume weighted method (Mack method) and set $\hat{\beta}_j = \prod_{k=j}^{J-1}\hat{f}_k^{-1}$, the estimated proportion of ultimate claims emerged through development period $j$ (so that $\hat{\beta}_J = 1$).
2. Set $\hat{\gamma}_j = \hat{\beta}_j - \hat{\beta}_{j-1}$ for $j\gt0$ and $\hat{\gamma}_0 = \hat{\beta}_0$.
3. The estimate for the ultimate claims for accident year $i$ equals
[$]\hat{C}^{\textrm{BS2}}_{i,J} = \hat{\beta}_{I-i}\hat{C}_{i,J} + (1-\hat{\beta}_{I-i})\hat{C}^{\textrm{BS}}_{i,J}[$]
where $\hat{C}_{i,J}$ is the estimate for ultimate claims for accident year $i$ based on the developmental factors $\hat{f}_j$ and $\hat{C}^{\textrm{BS}}_{i,J}$ is the Bühlmann-Straub Credibility estimate for $\mu_i$.
4. The Bühlmann-Straub Credibility estimate for $\mu_i$ equals
[$]\hat{C}^{\textrm{BS}}_{i,J} = Z_i\hat{C}_{i,J} + (1-Z_i)\hat{\mu}[$]
where
[$]Z_i = \frac{\hat{\beta}_{I-i}}{\hat{\beta}_{I-i} + \hat{v}/\hat{a}}.[$]
Here $\hat{v}$ is an estimate for $\operatorname{E}[\sigma^2(\Theta_i)]$, and $\hat{a}$ is an estimate for $\operatorname{Var}[\mu(\Theta_i)]$.
5. Compute the estimates $\hat{\mu} = \overline{C}$, $\hat{a}$, and $\hat{v}$ using the formulas below:
[$]\begin{align*}m_i = \hat{\beta}_{I-i}, & \,\, s_i^2 = \frac{1}{I-i}\sum_{j=0}^{I-i}\hat{\gamma}_j \left(\frac{X_{i,j}}{\hat{\gamma}_j} - \hat{C}_{i,J}\right)^2 \\ m = \sum_i m_i, &\,\, \overline{C} = \frac{\sum_{i=0}^I C_{i,I-i}}{m} \\ \hat{v} = \frac{1}{I}\sum_{i=0}^{I-1} s_i^2, &\, \, \hat{a} = \frac{\sum_{i=0}^Im_i(\hat{C}_{i,J}-\overline{C})^2 - I\hat{v}}{m - \frac{1}{m}\sum_{i=0}^I m_i^2}.\end{align*}[$]
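The steps above can be sketched end-to-end on a hypothetical triangle. This is a minimal illustration, not a reference implementation: it assumes $\hat{\beta}_j = \prod_{k=j}^{J-1}\hat{f}_k^{-1}$ (the chain-ladder reported proportion, with $\hat{\beta}_J = 1$), takes $\hat{\mu} = \overline{C}$, and takes the increments to be $X_{i,0} = C_{i,0}$ and $X_{i,j} = C_{i,j} - C_{i,j-1}$:

```python
import numpy as np

# Hypothetical 3x3 cumulative triangle; np.nan marks unobserved cells.
C = np.array([
    [100.0, 150.0, 175.0],
    [110.0, 170.0, np.nan],
    [120.0, np.nan, np.nan],
])
I = J = 2

# Step 1: volume-weighted (Mack) development factors f_hat_j.
f = np.array([C[: I - j, j + 1].sum() / C[: I - j, j].sum() for j in range(J)])

# Assumed definition: beta_j = proportion of ultimate emerged by period j.
beta = np.ones(J + 1)
for j in range(J - 1, -1, -1):
    beta[j] = beta[j + 1] / f[j]

# Step 2: incremental exposure levels gamma_j.
gamma = np.diff(beta, prepend=0.0)

# Chain-ladder ultimates C_hat_{i,J} = C_{i,I-i} / beta_{I-i}.
diag = np.array([C[i, I - i] for i in range(I + 1)])
beta_diag = beta[::-1]                       # beta_{I-i} for i = 0, ..., I
C_cl = diag / beta_diag

# Increments X_{i,0} = C_{i,0}, X_{i,j} = C_{i,j} - C_{i,j-1}.
Xinc = np.diff(C, prepend=0.0, axis=1)

# Step 5: structural parameters mu_hat (= C_bar), v_hat, a_hat.
m_i = beta_diag
m = m_i.sum()
mu_hat = diag.sum() / m
s2 = np.array([
    sum(gamma[j] * (Xinc[i, j] / gamma[j] - C_cl[i]) ** 2
        for j in range(I - i + 1)) / (I - i)
    for i in range(I)                        # i = 0, ..., I-1
])
v_hat = s2.mean()
a_hat = ((m_i * (C_cl - mu_hat) ** 2).sum() - I * v_hat) / (m - (m_i ** 2).sum() / m)

# Step 4: credibility factors and Buhlmann-Straub estimates.
Z = beta_diag / (beta_diag + v_hat / a_hat)
C_bs = Z * C_cl + (1 - Z) * mu_hat

# Step 3: final credibility-weighted estimates of the ultimate claims.
C_bs2 = beta_diag * C_cl + (1 - beta_diag) * C_bs
print(C_bs2)
```

For the oldest accident year ($\hat{\beta}_{I-i} = 1$) the final estimate collapses to the chain-ladder ultimate, as the formula in step 3 requires.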

## Poisson Model

The Poisson model assumes that the incremental claim counts $Y_{i,j+1} = N_{i,j+1}-N_{i,j}$, with $Y_{i,0} = N_{i,0}$, have the following properties:

• $Y_{i,j}$, $j = 0,\ldots, J$, are independent random variables for all $i$.
• $Y_{i,j}$ is Poisson distributed with mean $\gamma_j \mu_i$ where $\sum_j \gamma_j = 1$.

Under this model, it is a simple matter to check that the sequence $N_{i,j}$ satisfies the classical assumptions for the Bornhuetter-Ferguson method:

• $\mu_i = \operatorname{E}[N_{i,J}]$ for all $0 \leq i \leq I$
• $N_{i,J} - N_{i,j}$ is independent of $N_{i,j}$ for any $0 \leq j \lt J$
• $\operatorname{E}[N_{i,j}] = F_j\,\mu_i$, where $F_j = \sum_{k=0}^j \gamma_k$
• $\operatorname{Var}[N_{i,j}] = F_j \operatorname{Var}[N_{i,J}]$

Following the Bornhuetter-Ferguson method, the projection for $N_{i,J}$ equals

[$] \hat{N}_{i,J} = N_{i,I-i} + (1 - F_{I-i})\,\mu_i. [$]

When the parameters $\mu_i$ and $\gamma_j$ are unknown, they can be estimated using maximum likelihood estimation.

When $\mu_i$ are known, the maximum likelihood estimators for $\gamma_j$ yield estimated development factors
[$]\hat{f}_j = \frac{\sum_{k=0}^{j+1}\hat{\gamma}_k}{\sum_{k=0}^j\hat{\gamma}_k}[$]
that are equal to those obtained via the Mack-method.
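With known parameters, the Bornhuetter-Ferguson projection reduces to a few lines. The numbers below are hypothetical, and the unreported fraction is taken to be $1 - F_{I-i}$, where $F_{I-i}$ is the expected reported proportion on the latest diagonal:

```python
import numpy as np

# Hypothetical known parameters for a 3x3 count triangle (I = J = 2).
mu = np.array([180.0, 200.0, 220.0])      # expected ultimate counts mu_i
gamma = np.array([0.5, 0.3, 0.2])         # incremental pattern, sums to 1
N_diag = np.array([165.0, 155.0, 108.0])  # observed diagonal N[i, I - i]

I = len(mu) - 1
F = np.cumsum(gamma)                      # F_j = gamma_0 + ... + gamma_j

# Bornhuetter-Ferguson: observed counts plus the expected unreported part.
N_ult = N_diag + (1.0 - F[I - np.arange(I + 1)]) * mu
print(N_ult)  # [165.0, 195.0, 218.0]
```

Note that the oldest accident year ($F_J = 1$) is left at its observed value, while the newest year receives the largest expected-unreported load.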

## Overdispersed Poisson Model (ODP)

Unlike the Poisson model, the overdispersed Poisson model does not specify the full probability distribution of the incremental claim count developmental triangle $Y_{i,j}$; only its first two moments are assumed. The overdispersed model is a special case of a generalized linear model:

• $Y_{i,j}$, $j = 0,\ldots, J$, are independent random variables, for all $i, j$
• $\operatorname{E}[Y_{i,j}] = \gamma_je^{c+ \alpha_i + \beta_j}$, where $c$ is a constant, $\alpha_0 = 0$, $\beta_0 = 0$, and $\sum_{j}\gamma_j = 1$.
• $\operatorname{Var}[Y_{i,j}] = \phi \operatorname{E}[Y_{i,j}]$, where $\phi \gt 0$ is the dispersion parameter.

### Generalized Linear Models

In a generalized linear model (GLM), each outcome $Y$ of the dependent variables is assumed to be generated from a particular distribution in an exponential family, a large class of probability distributions that includes the normal, binomial, Poisson and gamma distributions, among others. The mean, $\mu$, of the distribution depends on the independent variables, $\mathbf{X}$, through:

[$]\operatorname{E}[Y|\mathbf{X}] = \mu = g^{-1}(\mathbf{X}\boldsymbol{\beta}) [$]

where $\operatorname{E}[Y|\mathbf{X}]$ is the expected value of $Y$ conditional on $\mathbf{X}$; $\mathbf{X}\boldsymbol{\beta}$ is the linear predictor, a linear combination of unknown parameters $\boldsymbol{\beta}$; $g$ is the link function. In this framework, the variance is typically a function, $V$, of the mean:

[$] \operatorname{Var}[Y|\mathbf{X}] = \operatorname{V}(g^{-1}(\mathbf{X}\boldsymbol{\beta})). [$]

The unknown parameters, $\boldsymbol{\beta}$, are typically estimated with maximum likelihood, maximum quasi-likelihood, or Bayesian techniques.

### Model components

The GLM consists of three elements:

1. A particular distribution for modeling $Y$ from among those which are considered exponential families of probability distributions,
2. A linear predictor $\eta = \mathbf{X} \boldsymbol{\beta}$, and
3. A link function $g$ such that $\operatorname{E}[Y \mid \mathbf{X}] = \mu = g^{-1}(\eta)$.

### Probability distribution

An overdispersed exponential family of distributions is a generalization of the exponential family and of the exponential dispersion model, and includes those families of probability distributions, parameterized by $\theta$ and $\phi$, whose density functions $f$ can be expressed in the form

[$] f_Y(y \mid \theta, \phi) = h(y,\phi) \exp \left(\frac{b(\theta)T(y) - A(\theta)}{d(\phi)} \right). \,\![$]

The dispersion parameter, $\phi$, typically is known and is usually related to the variance of the distribution. The functions $h(y,\phi)$, $b(\theta)$, $T(y)$, $A(\theta)$, and $d(\phi)$ are known. Many common distributions are in this family, including the normal, exponential, gamma, Poisson, Bernoulli, and (for fixed number of trials) binomial, multinomial, and negative binomial.

If $b(\theta)$ is the identity function, then the distribution is said to be in canonical form (or natural form). Note that any distribution can be converted to canonical form by rewriting $\theta$ as $\theta'$ and then applying the transformation $\theta = b(\theta')$. It is always possible to express $A(\theta)$ in terms of the new parametrization, even if $b(\theta')$ is not a one-to-one function; see the comments in the page on exponential families. If, in addition, $T(y)$ is the identity and $\phi$ is known, then $\theta$ is called the canonical parameter (or natural parameter) and is related to the mean through

[$] \mu = A'(\theta). \,\![$]

Under this scenario, the variance of the distribution can be shown to be[1]

[$] A''(\theta) d(\phi). \,\![$]
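As a concrete check, the Poisson distribution with mean $\lambda$ fits this template. Writing

[$] f_Y(y \mid \theta) = \frac{\lambda^y e^{-\lambda}}{y!} = \frac{1}{y!}\exp\left(y\log\lambda - \lambda\right), \,\![$]

we can read off $\theta = \log\lambda$, $b(\theta) = \theta$, $T(y) = y$, $A(\theta) = e^{\theta}$, $d(\phi) = 1$, and $h(y,\phi) = 1/y!$. Since $b$ and $T$ are identities, $\theta$ is the canonical parameter, and the formulas above give $\mu = A'(\theta) = e^{\theta} = \lambda$ and $\operatorname{Var}[Y] = A''(\theta)\,d(\phi) = \lambda$, as expected for the Poisson distribution.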

### Linear predictor

The linear predictor is the quantity which incorporates the information about the independent variables into the model. The symbol η (Greek "eta") denotes a linear predictor. It is related to the expected value of the data through the link function.

$\eta$ is expressed as linear combinations (thus, "linear") of unknown parameters $\boldsymbol{\beta}$. The coefficients of the linear combination are represented as the matrix of independent variables $\boldsymbol{X}$. $\eta$ can thus be expressed as $\eta = \mathbf{X}\boldsymbol{\beta}.\,$

### Link function

The link function provides the relationship between the linear predictor and the mean of the distribution function. There are many commonly used link functions, and their choice is informed by several considerations. There is always a well-defined canonical link function which is derived from the exponential of the response's density function. However, in some cases it makes sense to try to match the domain of the link function to the range of the distribution function's mean, or use a non-canonical link function for algorithmic purposes, for example Bayesian probit regression.

When using a distribution function with a canonical parameter $\theta$, the canonical link function is the function that expresses $\theta$ in terms of $\mu$, i.e. $\theta = b(\mu)$. For the most common distributions, the mean $\mu$ is one of the parameters in the standard form of the distribution's density function, and then $b(\mu)$ is the function as defined above that maps the density function into its canonical form. When using the canonical link function, $b(\mu) = \theta = \mathbf{X}\boldsymbol{\beta}$, which allows $\mathbf{X}^{\rm T} \mathbf{Y}$ to be a sufficient statistic for $\boldsymbol{\beta}$.

Following is a table of several exponential-family distributions in common use and the data they are typically used for, along with the canonical link functions and their inverses (sometimes referred to as the mean function, as done here).

**Common distributions with typical uses and canonical link functions**

| Distribution | Support of distribution | Typical uses | Link name | Link function, $\mathbf{X}\boldsymbol{\beta}=g(\mu)\,\!$ | Mean function |
|---|---|---|---|---|---|
| Normal | real: $(-\infty,+\infty)$ | Linear-response data | Identity | $\mathbf{X}\boldsymbol{\beta}=\mu\,\!$ | $\mu=\mathbf{X}\boldsymbol{\beta}\,\!$ |
| Exponential, Gamma | real: $(0,+\infty)$ | Exponential-response data, scale parameters | Negative inverse | $\mathbf{X}\boldsymbol{\beta}=-\mu^{-1}\,\!$ | $\mu=-(\mathbf{X}\boldsymbol{\beta})^{-1}\,\!$ |
| Inverse Gaussian | real: $(0,+\infty)$ | | Inverse squared | $\mathbf{X}\boldsymbol{\beta}=\mu^{-2}\,\!$ | $\mu=(\mathbf{X}\boldsymbol{\beta})^{-1/2}\,\!$ |
| Poisson | integer: $0,1,2,\ldots$ | count of occurrences in fixed amount of time/space | Log | $\mathbf{X}\boldsymbol{\beta}=\ln(\mu)\,\!$ | $\mu=\exp(\mathbf{X}\boldsymbol{\beta})\,\!$ |

### GLMs in Stochastic Reserving

The generalized linear models applicable to a triangle of random variables $Y_{i,j}$ are usually restricted to GLMs satisfying the following properties:

• $\operatorname{E}[Y_{i,j}] = \mu_{i,j}$
• $\operatorname{Var}[Y_{i,j}] = \phi \mu_{i,j}^p$, where $p \geq 0$
• $\eta_{i,j} = g(\mu_{i,j}) = \mathbf{X}_{i,j}\boldsymbol{\beta}$

The following table lists three families of distributions satisfying the properties above:

| Distribution | Support of distribution | Link function, $\mathbf{X}\boldsymbol{\beta}=g(\mu)\,\!$ | Mean function | $p$ | $\phi$ |
|---|---|---|---|---|---|
| Normal | real: $(-\infty,+\infty)$ | $\mathbf{X}\boldsymbol{\beta}=\mu\,\!$ | $\mu=\mathbf{X}\boldsymbol{\beta}\,\!$ | 0 | $\sigma^2$ |
| Exponential, Gamma | real: $(0,+\infty)$ | $\mathbf{X}\boldsymbol{\beta}=-\mu^{-1}\,\!$ | $\mu=-(\mathbf{X}\boldsymbol{\beta})^{-1}\,\!$ | 2 | $\phi$ |
| Poisson | integer: $0,1,2,\ldots$ | $\mathbf{X}\boldsymbol{\beta}=\ln(\mu)\,\!$ | $\mu=\exp(\mathbf{X}\boldsymbol{\beta})\,\!$ | 1 | 1 |

The overdispersed Poisson model, introduced above, is a special case with log-link function

[$]\mathbf{X}_{i,j}\boldsymbol{\beta} = \log(\gamma_j) +c + \alpha_i + \beta_j = \log(\mu_{i,j})[$]

with $p = 1$ and a free dispersion parameter $\phi \gt 0$. To ensure the parameters are identifiable, we set $\alpha_0$ and $\beta_0$ to zero.

### Estimation of Model Parameters

Suppose we have a random sample $y_1,\ldots,y_n$ where $y_i$ is sampled from a distribution with density function

[$] f(y; \, \theta_i, \phi) = \exp\left(\frac{y\theta_i - b(\theta_i)}{\phi/p_i} + c(y,\phi)\right). [$]

The log-likelihood function for the random sample equals

[$]\label{glm-log-lik} l = \sum_{i=1}^n \frac{y_i\theta_i - b(\theta_i)}{\phi/p_i} + c(y_i,\phi).[$]

If we assume a canonical link function of the form $\theta_i = g(\mu_i)$, where $g$ denotes the link function, then the log-likelihood function depends on the unknown parameters $\boldsymbol{\beta}$. We estimate these unknown parameters by maximum likelihood. The maximum likelihood estimates can be found using an iteratively reweighted least squares algorithm, Newton's method with updates of the form:

[$] \boldsymbol\beta^{(t+1)} = \boldsymbol\beta^{(t)} + \mathcal{J}^{-1}(\boldsymbol\beta^{(t)}) u(\boldsymbol\beta^{(t)}), [$]

where $\mathcal{J}(\boldsymbol\beta^{(t)})$ is the observed information matrix (the negative of the Hessian matrix) and $u(\boldsymbol\beta^{(t)})$ is the score function; or a Fisher's scoring method:

[$] \boldsymbol\beta^{(t+1)} = \boldsymbol\beta^{(t)} + \mathcal{I}^{-1}(\boldsymbol\beta^{(t)}) u(\boldsymbol\beta^{(t)}), [$]

where $\mathcal{I}(\boldsymbol\beta^{(t)})$ is the Fisher information matrix. When the canonical link function is in effect, the following algorithm is equivalent to the Fisher scoring algorithm given above:

**MLE estimates for GLM: iterative least squares method**

We iterate the following algorithm:

1. Create the sequence
[$]z_i = \hat{\eta}_i + (y_i - \hat{\mu}_i)\frac{d\eta_i}{d\mu_i}[$]
where $\hat{\eta}_i = g(\hat{\mu}_i)$ and $\hat{\mu}_i$ is the current best estimate for $\mu_i.$
2. Compute the weights
[$]w_i = \frac{p_i}{b''(\theta_i) \left(\frac{d\eta_i}{d\mu_i}\right)^2}.[$]
3. Estimate $\boldsymbol{\beta}$ using weighted least-squares regression where $z_i$ is the dependent variable, $\mathbf{x}_i$ are the predictor variables, and $w_i$ are the weights:
[$]\hat{\boldsymbol{\beta}} = (X^T W X)^{-1}X^TWz, \, W_{ii} = w_i. [$]
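A minimal sketch of the iterative least squares method on simulated data, for a Poisson regression with the canonical log link. The data and model are hypothetical, and the prior weights $p_i$ are taken to be $1$, so that $b''(\theta_i) = \mu_i$, $d\eta_i/d\mu_i = 1/\mu_i$, and hence $w_i = \mu_i$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: Poisson responses with a log (canonical) link.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
beta_true = np.array([1.0, 0.5])
y = rng.poisson(np.exp(X @ beta_true))

# Iterative least squares: for the log link, w_i = mu_i and
# z_i = eta_i + (y_i - mu_i) / mu_i.
beta = np.zeros(X.shape[1])
for _ in range(50):
    eta = X @ beta
    mu = np.exp(eta)
    z = eta + (y - mu) / mu
    WX = X * mu[:, None]                        # diag(w) @ X
    beta = np.linalg.solve(X.T @ WX, WX.T @ z)  # weighted least squares

print(beta)
```

At the fixed point, $\hat{\boldsymbol{\beta}}$ satisfies $X^{\rm T}(y - \mu) = 0$, which is exactly the first-order maximum likelihood condition for a canonical link.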

For the overdispersed Poisson model we do not assume anything about the distribution of the random sample beyond its first two moments; consequently, we cannot obtain maximum likelihood estimates. Instead we obtain quasi-maximum likelihood estimates by maximizing the log-likelihood function \ref{glm-log-lik} associated with the log-link function $\eta_i = g(\mu_i) = \log(\mu_i)$. In this special case, we have $b''(\theta_i) = \mu_i$ and $\frac{d\eta_i}{d\mu_i} = 1/\mu_i$.

**QMLE estimates for overdispersed Poisson model**

We iterate the following algorithm:

1. Create the sequence
[$]z_i = \log(\hat{\mu}_i) + \frac{y_i - \hat{\mu}_i}{\hat{\mu}_i}[$]
where $\hat{\mu}_i$ is the current best estimate for $\mu_i.$
2. Estimate $\boldsymbol{\beta}$ using weighted least-squares regression where $z_i$ is the dependent variable and $\mathbf{x}_i$ are the predictor variables:
[$]\hat{\boldsymbol{\beta}} = (X^T W X)^{-1}X^TWz, \, W_{ii} = \hat{\mu}_i. [$]
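The scheme above can be sketched on a small hypothetical triangle. A classical consequence, and a useful numerical check, is that the fitted overdispersed Poisson ultimates coincide with the chain-ladder ultimates obtained from the volume-weighted factors:

```python
import numpy as np

# Hypothetical 3x3 cumulative triangle; increments y = C[i,j] - C[i,j-1].
C = np.array([
    [100.0, 150.0, 175.0],
    [110.0, 170.0, np.nan],
    [120.0, np.nan, np.nan],
])
I = J = 2

# Observed incremental cells (i + j <= I) and their values.
cells = [(i, j) for i in range(I + 1) for j in range(J + 1) if i + j <= I]
y = np.array([C[i, j] - (C[i, j - 1] if j > 0 else 0.0) for i, j in cells])

# Design matrix for log mu_{ij} = c + alpha_i + beta_j with alpha_0 = beta_0 = 0.
def design_row(i, j):
    x = np.zeros(1 + I + J)
    x[0] = 1.0                # intercept c
    if i > 0:
        x[i] = 1.0            # alpha_i
    if j > 0:
        x[I + j] = 1.0        # beta_j
    return x

X = np.array([design_row(i, j) for i, j in cells])

# Quasi-likelihood IRLS for the log link, started at mu = y.
mu, eta = y.copy(), np.log(y)
for _ in range(100):
    z = eta + (y - mu) / mu
    WX = X * mu[:, None]
    b = np.linalg.solve(X.T @ WX, WX.T @ z)
    eta = X @ b
    mu = np.exp(eta)

# Ultimates: observed diagonal plus fitted future increments.
ult = np.array([
    C[i, I - i] + sum(np.exp(design_row(i, j) @ b) for j in range(I - i + 1, J + 1))
    for i in range(I + 1)
])
print(ult)
```

For this triangle the volume-weighted factors are $\hat{f}_0 = 320/210$ and $\hat{f}_1 = 175/150$, and the fitted ultimates reproduce the chain-ladder projections $170\,\hat{f}_1$ and $120\,\hat{f}_0\hat{f}_1$.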

## References

1. McCullagh & Nelder 1989, Chapter 2.

## Wikipedia References

• Wikipedia contributors. "Generalized linear model". Wikipedia, The Free Encyclopedia. Retrieved 18 February 2023.