Linear regression, the ridge regression estimator, and eigenvalue shrinkage

High-throughput techniques measure many characteristics of a single sample simultaneously. The number of characteristics $p$ measured may easily exceed ten thousand. In most medical studies the number of samples $n$ falls short of the number of characteristics measured, i.e., $p \gt n$. The resulting $(n \times p)$-dimensional data matrix $\mathbf{X}$:

[$] \begin{eqnarray*} \mathbf{X} & = & \left( X_{\ast,1} \, | \, \ldots \, | \, X_{\ast,p} \right) \, \, \, = \, \, \, \left( \begin{array}{c} X_{1,\ast} \\ \vdots \\ X_{n,\ast} \end{array} \right) \, \, \, = \, \, \, \left( \begin{array}{ccc} X_{1,1} & \ldots & X_{1,p} \\ \vdots & \ddots & \vdots \\ X_{n,1} & \ldots & X_{n,p} \end{array} \right) \end{eqnarray*} [$]

from such a study contains a larger number of covariates than samples. When $p \gt n$ the data matrix $\mathbf{X}$ is said to be high-dimensional, although no formal definition exists.

In this chapter we adopt the traditional statistical notation for the data matrix. An alternative notation is $\mathbf{X}^{\top}$ (rather than $\mathbf{X}$), which is employed in the field of (statistical) bioinformatics. In $\mathbf{X}^{\top}$ the columns comprise the samples rather than the covariates. The case for the bioinformatics notation stems from practical arguments: a spreadsheet is designed to have more rows than columns, and when $p \gt n$ the traditional notation yields a spreadsheet with more columns than rows, which is impractical to display once $p \gt 10000$. Nonetheless, in this chapter we stick to the conventional statistical notation of the data matrix, as all mathematical expressions involving $\mathbf{X}$ are then in line with those of standard textbooks on regression.

The information contained in $\mathbf{X}$ is often used to explain a particular property of the samples involved. In applications in molecular biology $\mathbf{X}$ may contain microRNA expression data from which the expression levels of a gene are to be described. When the gene's expression levels are denoted by $\mathbf{Y} = (Y_{1}, \ldots, Y_n)^{\top}$, the aim is to find the linear relation $Y_i = \mathbf{X}_{i, \ast} \bbeta$ from the data at hand by means of regression analysis. Regression is, however, frustrated by the high-dimensionality of $\mathbf{X}$ (illustrated in Section The ridge regression estimator and at the end of Section Constrained estimation). These notes discuss how regression may be modified to accommodate the high-dimensionality of $\mathbf{X}$. First, linear regression is recapitulated.

Linear regression

Consider an experiment in which $p$ characteristics of $n$ samples are measured. The data from this experiment are denoted $\mathbf{X}$, with $\mathbf{X}$ as above. The matrix $\mathbf{X}$ is called the design matrix. Additional information of the samples is available in the form of $\mathbf{Y}$ (also as above). The variable $\mathbf{Y}$ is generally referred to as the response variable. The aim of regression analysis is to explain $\mathbf{Y}$ in terms of $\mathbf{X}$ through a functional relationship like $Y_i = f(\mathbf{X}_{i,\ast})$. When no prior knowledge on the form of $f(\cdot)$ is available, it is common to assume a linear relationship between $\mathbf{X}$ and $\mathbf{Y}$. This assumption gives rise to the linear regression model:

[$] \begin{eqnarray} \label{form.linRegressionModel} Y_{i} & = & \mathbf{X}_{i,\ast} \, \bbeta + \varepsilon_i \, \, \, = \, \, \, \beta_1 \, X_{i,1} + \ldots + \beta_{p} \, X_{i, p} + \varepsilon_i. \end{eqnarray} [$]

In model (\ref{form.linRegressionModel}) $\bbeta = (\beta_1, \ldots, \beta_p)^{\top}$ is the regression parameter. The parameter $\beta_j$, $j=1, \ldots, p$, represents the effect size of covariate $j$ on the response. That is, for each unit change in covariate $j$ (while keeping the other covariates fixed) the observed change in the response equals $\beta_j$. The second summand on the right-hand side of the model, $\varepsilon_i$, is referred to as the error. It represents the part of the response not explained by the functional part $\mathbf{X}_{i,\ast} \, \bbeta$ of the model (\ref{form.linRegressionModel}). In contrast to the functional part, which is considered to be systematic (i.e. non-random), the error is assumed to be random. Consequently, $Y_{i}$ need not equal $Y_{i'}$ for $i \not= i'$, even if $\mathbf{X}_{i,\ast}= \mathbf{X}_{i',\ast}$. To complete the formulation of model (\ref{form.linRegressionModel}) we need to specify the probability distribution of $\varepsilon_i$. It is assumed that $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$ and that the $\varepsilon_{i}$ are independent, i.e.:

[$] \begin{eqnarray*} \mbox{Cov}(\varepsilon_{i}, \varepsilon_{i'}) & = & \left\{ \begin{array}{lcc} \sigma^2 & \mbox{if} & i = i', \\ 0 & \mbox{if} & i \not= i'. \end{array} \right. \end{eqnarray*} [$]

The randomness of $\varepsilon_i$ implies that $Y_i$ is also a random variable. In particular, $Y_i$ is normally distributed, because $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$ and $\mathbf{X}_{i,\ast} \, \bbeta$ is a non-random scalar. To specify the parameters of the distribution of $Y_i$ we need to calculate its first two moments. Its expectation equals:

[$] \begin{eqnarray*} \mathbb{E}(Y_i) & = & \mathbb{E}(\mathbf{X}_{i, \ast} \, \bbeta) + \mathbb{E}(\varepsilon_i) \, \, \, = \, \, \, \mathbf{X}_{i, \ast} \, \bbeta, \end{eqnarray*} [$]

while its variance is:

[$] \begin{eqnarray*} \mbox{Var}(Y_i) & = & \mathbb{E} \{ [Y_i - \mathbb{E}(Y_i)]^2 \} \, \, \, = \, \, \, \mathbb{E} ( Y_i^2 ) - [\mathbb{E}(Y_i)]^2 \\ & = & \mathbb{E} [ ( \mathbf{X}_{i, \ast} \, \bbeta)^2 + 2 \, \varepsilon_i \, \mathbf{X}_{i, \ast} \, \bbeta + \varepsilon_i^2 ] - ( \mathbf{X}_{i, \ast} \, \bbeta)^2 \, \, \, = \, \, \, \mathbb{E}(\varepsilon_i^2 ) \\ & = & \mbox{Var}(\varepsilon_i ) \, \, \, = \, \, \, \sigma^2. \end{eqnarray*} [$]

Hence, $Y_i \sim \mathcal{N}( \mathbf{X}_{i, \ast} \, \bbeta, \sigma^2)$. This formulation (in terms of the normal distribution) is equivalent to the formulation of model (\ref{form.linRegressionModel}), as both capture the assumptions involved: the linearity of the functional part and the normality of the error.
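The equivalence of the two formulations can be illustrated numerically. The following sketch (in Python with NumPy, on simulated data with hypothetical values for $n$, $p$, $\bbeta$ and $\sigma$) draws many replicates of $\mathbf{Y}$ for a fixed design and checks that the empirical mean and variance of $Y_1$ match $\mathbf{X}_{1,\ast} \, \bbeta$ and $\sigma^2$:

```python
import numpy as np

# Simulated illustration: for fixed X and beta, Y_i = X_i beta + eps_i
# has distribution N(X_i beta, sigma^2). All values below are hypothetical.
rng = np.random.default_rng(1)
n, p = 5, 3
X = rng.normal(size=(n, p))          # fixed design matrix
beta = np.array([1.0, -2.0, 0.5])    # 'true' regression parameter
sigma = 0.7

# draw many replicates of the same experiment
reps = 200_000
eps = rng.normal(scale=sigma, size=(reps, n))
Y = X @ beta + eps                   # each row is one replicate of Y

# empirical moments of Y_1 across replicates vs. their model counterparts
print(Y[:, 0].mean(), X[0] @ beta)   # both approximate E(Y_1) = X_1 beta
print(Y[:, 0].var(), sigma**2)       # both approximate Var(Y_1) = sigma^2
```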

Model (\ref{form.linRegressionModel}) is often written in a more condensed matrix form:

[$] \begin{eqnarray} \mathbf{Y} & = & \mathbf{X} \, \bbeta + \vvarepsilon, \label{form.linRegressionModelinMatrix} \end{eqnarray} [$]

where $\vvarepsilon = (\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n)^{\top}$ is distributed as $\vvarepsilon \sim \mathcal{N}(\mathbf{0}_{n}, \sigma^2 \mathbf{I}_{nn})$. As above, model (\ref{form.linRegressionModelinMatrix}) can be expressed as a multivariate normal distribution: $\mathbf{Y} \sim \mathcal{N}(\mathbf{X} \, \bbeta, \sigma^2 \mathbf{I}_{nn})$.

Model (\ref{form.linRegressionModelinMatrix}) is a so-called hierarchical model. This terminology emphasizes that $\mathbf{X}$ and $\mathbf{Y}$ are not on a par: they play different roles in the model, the former being used to explain the latter. In model (\ref{form.linRegressionModel}) $\mathbf{X}$ is referred to as the explanatory or independent variable, while $\mathbf{Y}$ is generally referred to as the response or dependent variable.

The covariates, the columns of $\mathbf{X}$, may themselves be random. To apply the linear model they are temporarily assumed fixed. The linear regression model is then to be interpreted as $\mathbf{Y} \, | \, \mathbf{X} \sim \mathcal{N}(\mathbf{X} \, \bbeta, \sigma^2 \mathbf{I}_{nn})$.

Example (Methylation of a tumor-suppressor gene)

Consider a study which measures the gene expression levels of a tumor-suppressor gene (TSG) and two methylation markers (MM1 and MM2) on 67 samples. A methylation marker is a gene that promotes methylation. Methylation refers to the attachment of a methyl group to a nucleotide of the DNA. In case this attachment takes place in or close to the promoter region of a gene, it complicates the transcription of the gene. Methylation may thus down-regulate a gene. This mechanism also works in the reverse direction: removal of methyl groups may up-regulate a gene. A tumor-suppressor gene is a gene that halts the progression of the cell towards a cancerous state.

The medical question associated with these data is: do the expression levels of the methylation markers affect the expression levels of the tumor-suppressor gene? To answer this question we may formulate the following linear regression model:

[$] \begin{eqnarray*} Y_{i, \texttt{{\tiny tsg}}} & = & \beta_0 + \beta_{\texttt{{\tiny mm1}}} X_{i, \texttt{{\tiny mm1}}} + \beta_{\texttt{{\tiny mm2}}} X_{i, \texttt{{\tiny mm2}}} + \varepsilon_i, \end{eqnarray*} [$]

with $i = 1, \ldots, 67$ and $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$. The interest focuses on $\beta_{\texttt{{\scriptsize mm1}}}$ and $\beta_{\texttt{{\scriptsize mm2}}}$. A non-zero value of at least one of these two regression parameters indicates that there is a linear association between the expression levels of the tumor-suppressor gene and those of the methylation markers.

Prior knowledge from biology suggests that $\beta_{\texttt{{\scriptsize mm1}}}$ and $\beta_{\texttt{{\scriptsize mm2}}}$ are both non-positive. High expression levels of the methylation markers lead to hyper-methylation, in turn inhibiting the transcription of the tumor-suppressor gene. Vice versa, low expression levels of MM1 and MM2 are (via hypo-methylation) associated with high expression levels of TSG. Hence, a negative concordant effect between MM1 and MM2 (on one side) and TSG (on the other) is expected. Of course, the methylation markers may affect expression levels of other genes that in turn regulate the tumor-suppressor gene. The regression parameters $\beta_{\texttt{{\scriptsize mm1}}}$ and $\beta_{\texttt{{\scriptsize mm2}}}$ then reflect the indirect effect of the methylation markers on the expression levels of the tumor-suppressor gene.

The linear regression model (\ref{form.linRegressionModel}) involves two unknown parameters, $\bbeta$ and $\sigma^2$, which need to be learned from the data. They are estimated by means of likelihood maximization. Recall that $Y_i \sim \mathcal{N}( \mathbf{X}_{i,\ast} \, \bbeta, \sigma^2)$ with corresponding density: $f_{Y_i}(y_i) = (2 \, \pi \, \sigma^2)^{-1/2} \, \exp[ - (y_i - \mathbf{X}_{i,\ast} \, \bbeta)^2 / 2 \sigma^2 ]$. The likelihood thus is:

[$] \begin{eqnarray*} L(\mathbf{Y}, \mathbf{X}; \bbeta, \sigma^2) & = & \prod_{i=1}^n (\sqrt{2 \, \pi} \, \sigma)^{-1} \, \exp[ - (Y_i - \mathbf{X}_{i, \ast} \, \bbeta)^2 / 2 \sigma^2 ], \end{eqnarray*} [$]

in which the independence of the observations has been used. Because the logarithm is a strictly increasing function, maximization of the likelihood coincides with maximization of the logarithm of the likelihood (called the log-likelihood). Hence, to obtain the maximum likelihood (ML) estimates of the parameters it suffices to find the maximum of the log-likelihood. The log-likelihood is:

[$] \begin{eqnarray*} \mathcal{L}(\mathbf{Y}, \mathbf{X}; \bbeta, \sigma^2) & = & \log[ L(\mathbf{Y}, \mathbf{X}; \bbeta, \sigma^2) ] \, \, \, = \, \, \, -n \, \log(\sqrt{2 \, \pi} \, \sigma) - \tfrac{1}{2} \sigma^{-2} \sum\nolimits_{i=1}^n (Y_i - \mathbf{X}_{i, \ast} \, \bbeta)^2. \end{eqnarray*} [$]

After noting that $\sum_{i=1}^n (Y_i - \mathbf{X}_{i, \ast} \, \bbeta)^2 = \| \mathbf{Y} - \mathbf{X} \, \bbeta \|^2_2 \, \, \, = \, \, \, (\mathbf{Y} - \mathbf{X} \, \bbeta)^{\top} \, (\mathbf{Y} - \mathbf{X} \, \bbeta)$, the log-likelihood can be written as:

[$] \begin{eqnarray*} \mathcal{L}(\mathbf{Y}, \mathbf{X}; \bbeta, \sigma^2) & = & -n \, \log(\sqrt{2 \, \pi} \, \sigma) - \tfrac{1}{2} \sigma^{-2} \, \| \mathbf{Y} - \mathbf{X} \, \bbeta \|^2_2. \end{eqnarray*} [$]

In order to find the maximum of the log-likelihood, take its derivative with respect to $\bbeta$:

[$] \begin{eqnarray*} \frac{\partial }{\partial \, \bbeta} \mathcal{L}(\mathbf{Y}, \mathbf{X}; \bbeta, \sigma^2) & = & - \tfrac{1}{2} \sigma^{-2} \, \frac{\partial }{\partial \, \bbeta} \| \mathbf{Y} - \mathbf{X} \, \bbeta \|^2_2 \, \, \, = \, \, \, \sigma^{-2} \, \mathbf{X}^{\top} (\mathbf{Y} - \mathbf{X} \, \bbeta). \end{eqnarray*} [$]

Equating this derivative to zero gives the estimating equation for $\bbeta$:

[$] \begin{eqnarray} \label{form.normalEquation} \mathbf{X}^{\top} \mathbf{X} \, \bbeta & = & \mathbf{X}^{\top} \mathbf{Y}. \end{eqnarray} [$]

Equation (\ref{form.normalEquation}) is called the normal equation. Pre-multiplication of both sides of the normal equation by $(\mathbf{X}^{\top} \mathbf{X})^{-1}$ now yields the ML estimator of the regression parameter: $\hat{\bbeta} = (\mathbf{X}^{\top} \mathbf{X})^{-1} \, \mathbf{X}^{\top} \mathbf{Y}$, in which it is assumed that $(\mathbf{X}^{\top} \mathbf{X})^{-1}$ exists.
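The normal equation can be verified numerically. A minimal sketch (Python/NumPy, on simulated data with hypothetical dimensions): solving $\mathbf{X}^{\top} \mathbf{X} \, \bbeta = \mathbf{X}^{\top} \mathbf{Y}$ directly reproduces the answer of a standard least-squares routine.

```python
import numpy as np

# Simulated low-dimensional data (n > p, full-rank design; values hypothetical)
rng = np.random.default_rng(2)
n, p = 50, 4
X = rng.normal(size=(n, p))
beta = np.array([2.0, 0.0, -1.0, 0.5])
Y = X @ beta + rng.normal(scale=0.3, size=n)

# solve the normal equations X^T X beta = X^T Y directly ...
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# ... and compare with a standard least-squares routine
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))
```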

Along the same lines one obtains the ML estimator of the residual variance. Take the partial derivative of the log-likelihood with respect to $\sigma$:

[$] \begin{eqnarray*} \frac{\partial }{\partial \, \sigma} \mathcal{L}(\mathbf{Y}, \mathbf{X}; \bbeta, \sigma^2) & = & - n \, \sigma^{-1} + \sigma^{-3} \| \mathbf{Y} - \mathbf{X} \, \bbeta \|^2_2. \end{eqnarray*} [$]

Equating the right-hand side to zero and solving for $\sigma^2$ yields $\hat{\sigma}^2 = n^{-1} \| \mathbf{Y} - \mathbf{X} \, \bbeta \|^2_2$. In this expression $\bbeta$ is unknown and the ML estimate of $\bbeta$ is plugged in.

With explicit expressions of the ML estimators at hand, we can study their properties. The expectation of the ML estimator of the regression parameter $\bbeta$ is:

[$] \begin{eqnarray*} \mathbb{E}(\hat{\bbeta}) & = & \mathbb{E}[ (\mathbf{X}^{\top} \mathbf{X})^{-1} \, \mathbf{X}^{\top} \mathbf{Y}] \, \, \, = \, \, \, (\mathbf{X}^{\top} \mathbf{X})^{-1} \, \mathbf{X}^{\top} \mathbb{E}( \mathbf{Y}) \, \, \, = \, \, \, (\mathbf{X}^{\top} \mathbf{X})^{-1} \, \mathbf{X}^{\top} \mathbf{X} \, \bbeta \, \, \, \, \, = \, \, \, \bbeta. \end{eqnarray*} [$]

Hence, the ML estimator of the regression coefficients is unbiased.

The variance of the ML estimator of $\bbeta$ is:

[$] \begin{eqnarray*} \mbox{Var}(\hat{\bbeta}) & = & \mathbb{E} \{ [\hat{\bbeta} - \mathbb{E}(\hat{\bbeta})] [\hat{\bbeta} - \mathbb{E}(\hat{\bbeta})]^{\top} \} \\ & = & \mathbb{E} \{ [(\mathbf{X}^{\top} \mathbf{X})^{-1} \, \mathbf{X}^{\top} \mathbf{Y} - \bbeta] \, [(\mathbf{X}^{\top} \mathbf{X})^{-1} \, \mathbf{X}^{\top} \mathbf{Y} - \bbeta]^{\top} \} \\ & = & (\mathbf{X}^{\top} \mathbf{X})^{-1} \, \mathbf{X}^{\top} \, \mathbb{E} ( \mathbf{Y} \, \mathbf{Y}^{\top} ) \, \mathbf{X} \, (\mathbf{X}^{\top} \mathbf{X})^{-1} - \bbeta \, \bbeta^{\top} \\ & = & (\mathbf{X}^{\top} \mathbf{X})^{-1} \, \mathbf{X}^{\top} \, ( \mathbf{X} \, \bbeta \, \bbeta^{\top} \, \mathbf{X}^{\top} + \sigma^2 \, \mathbf{I}_{nn} ) \, \mathbf{X} \, (\mathbf{X}^{\top} \mathbf{X})^{-1} - \bbeta \, \bbeta^{\top} \\ & = & \bbeta \, \bbeta^{\top} + \sigma^2 \, (\mathbf{X}^{\top} \mathbf{X})^{-1} - \bbeta \, \bbeta^{\top} \\ & = & \sigma^2 \, (\mathbf{X}^{\top} \mathbf{X})^{-1}, \end{eqnarray*} [$]

in which we have used that $\mathbb{E} (\mathbf{Y} \mathbf{Y}^{\top}) = \mathbf{X} \, \bbeta \, \bbeta^{\top} \, \mathbf{X}^{\top} + \sigma^2 \, \mathbf{I}_{nn}$. From $\mbox{Var}(\hat{\bbeta}) = \sigma^2 \, (\mathbf{X}^{\top} \mathbf{X})^{-1}$, one obtains an estimate of the variance of the estimator of the $j$-th regression coefficient: $\widehat{\mbox{Var}}(\hat{\beta}_j ) = \hat{\sigma}^2 \, [(\mathbf{X}^{\top} \mathbf{X})^{-1}]_{jj}$. This may be used to construct a confidence interval for the estimates or to test the hypothesis $H_0: \beta_j = 0$. In the latter case $\hat{\sigma}^2$ should not be the maximum likelihood estimator, as it is biased; it is then replaced by the residual sum-of-squares divided by $n-p$ rather than $n$.
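The variance formula can be checked against repeated experiments. The sketch below (Python/NumPy, simulated data, all constants hypothetical) keeps the design fixed, re-draws the errors many times, and compares the spread of the resulting estimates with $\sigma^2 \, [(\mathbf{X}^{\top} \mathbf{X})^{-1}]_{jj}$:

```python
import numpy as np

# Monte Carlo check of Var(beta_hat) = sigma^2 (X^T X)^{-1} on simulated data
rng = np.random.default_rng(3)
n, p, sigma = 40, 3, 0.5
X = rng.normal(size=(n, p))            # design held fixed across replicates
beta = np.array([1.0, -1.0, 2.0])

XtX_inv = np.linalg.inv(X.T @ X)
model_var = sigma**2 * np.diag(XtX_inv)   # model-based Var(beta_hat_j)

# re-estimate beta on many replicated data sets
reps = 50_000
eps = rng.normal(scale=sigma, size=(reps, n))
beta_hats = (XtX_inv @ X.T @ (X @ beta + eps).T).T   # one estimate per row

print(beta_hats.var(axis=0))   # empirical variances ...
print(model_var)               # ... match the model-based ones
```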

The prediction of $Y_i$, denoted $\widehat{Y}_i$, is the expected value of $Y_i$ according to the linear regression model (with its parameters replaced by their estimates). The prediction of $Y_i$ thus equals $\mathbb{E}(Y_i; \hat{\bbeta}, \hat{\sigma}^2) = \mathbf{X}_{i, \ast} \hat{\bbeta}$. In matrix notation the prediction is:

[$] \begin{eqnarray*} \widehat{\mathbf{Y}} & = & \mathbf{X} \, \hat{\bbeta} \, \, \, = \, \, \, \mathbf{X} \, (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} \mathbf{Y} \, \, \, := \, \, \, \mathbf{H} \mathbf{Y}, \end{eqnarray*} [$]

where $\mathbf{H}$ is the hat matrix, as it ‘puts the hat’ on $\mathbf{Y}$. Note that the hat matrix is a projection matrix, i.e. $\mathbf{H}^2 = \mathbf{H}$, since

[$] \begin{eqnarray*} \mathbf{H}^2 & = & \mathbf{X} \, (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} \mathbf{X} \, (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} \, \, \, = \, \, \, \mathbf{X} \, (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top}. \end{eqnarray*} [$]

Thus, the prediction $\widehat{\mathbf{Y}}$ is an orthogonal projection of $\mathbf{Y}$ onto the space spanned by the columns of $\mathbf{X}$.

With $\hat{\bbeta}$ available, estimates of the errors $\varepsilon_i$, dubbed the residuals, are obtained via:

[$] \begin{eqnarray*} \hat{\vvarepsilon} & = & \mathbf{Y} - \widehat{\mathbf{Y}} \, \, \, = \, \, \, \mathbf{Y} - \mathbf{X} \, \hat{\bbeta} \, \, \, = \, \, \, \mathbf{Y} - \mathbf{X} \, (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} \mathbf{Y} \, \, \, = \, \, \, [ \mathbf{I}_{nn} - \mathbf{X} \, (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} ] \, \mathbf{Y}. \end{eqnarray*} [$]

Thus, the residuals are a projection of $\mathbf{Y}$ onto the orthogonal complement of the space spanned by the columns of $\mathbf{X}$. The residuals are to be used in diagnostics, e.g. checking of the normality assumption by means of a normal probability plot.
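The projection properties of the hat matrix are easy to verify numerically. A small sketch (Python/NumPy, on a simulated full-rank design with hypothetical dimensions):

```python
import numpy as np

# Check that H is an orthogonal projection and that the residuals are
# orthogonal to the column space of X (simulated data)
rng = np.random.default_rng(4)
n, p = 20, 3
X = rng.normal(size=(n, p))
Y = rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T   # the hat matrix
Y_hat = H @ Y                          # the prediction
residuals = Y - Y_hat                  # the residuals

print(np.allclose(H @ H, H))             # idempotent: H^2 = H
print(np.allclose(H, H.T))               # symmetric (orthogonal projection)
print(np.allclose(X.T @ residuals, 0))   # residuals orthogonal to span(X)
```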

For more on the linear regression model confer the monograph of [1].

The ridge regression estimator

If the design matrix is high-dimensional, the covariates (the columns of $\mathbf{X}$) are super-collinear. Recall that collinearity in regression analysis refers to the event of two (or more) covariates being strongly linearly related. Consequently, super-collinear covariates span a subspace of lower dimension than their number suggests. If the design matrix $\mathbf{X}$, which contains the collinear covariates as columns, is (close to) rank deficient, it is (almost) impossible to separate the contributions of the individual covariates. The uncertainty with respect to which covariate is responsible for the variation explained in $\mathbf{Y}$ is often reflected in the fit of the linear regression model by large standard errors of the estimates of the regression parameters corresponding to the collinear covariates and, consequently, is usually accompanied by large values of the estimates themselves.

Example (Collinearity)

The flotillins (the FLOT-1 and FLOT-2 genes) have been observed to regulate the proto-oncogene ERBB2 in vitro [2]. One may wish to corroborate this in vivo. To this end we use gene expression data of a breast cancer study, available as a Bioconductor package: breastCancerVDX. From this study the expression levels of probes interrogating the FLOT-1 and ERBB2 genes are retrieved. For clarity of the illustration the FLOT-2 gene is ignored. After centering, the expression levels of the first ERBB2 probe are regressed on those of the four FLOT-1 probes. The R-code below carries out the data retrieval and analysis.

# load packages
library(Biobase)
library(breastCancerVDX)

# load the vdx breast cancer study
data(vdx)

# ids of probes interrogating the FLOT1 gene
idFLOT1 <- which(fData(vdx)[,5] == 10211)

# ids of probes interrogating the ERBB2 gene
idERBB2 <- which(fData(vdx)[,5] == 2064)

# get expression levels of probes mapping to the FLOT1 gene, then center
X <- t(exprs(vdx)[idFLOT1,])
X <- sweep(X, 2, colMeans(X))

# get expression levels of probes mapping to the ERBB2 gene, then center
Y <- t(exprs(vdx)[idERBB2,])
Y <- sweep(Y, 2, colMeans(Y))

# regression analysis
summary(lm(formula = Y[,1] ~ X[,1] + X[,2] + X[,3] + X[,4]))

# correlation among the covariates
cor(X)


Prior to the regression analysis, we first assess whether there is collinearity among the FLOT-1 probes through evaluation of the correlation matrix. This reveals a strong correlation ($\hat{\rho} = 0.91$) between the second and third probes. All other cross-correlations do not exceed 0.20 (in absolute value). Hence, there is collinearity among the columns of the design matrix in the to-be-performed regression analysis.

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.0000     0.0633  0.0000   1.0000
X[, 1]        0.1641     0.0616  2.6637   0.0081 **
X[, 2]        0.3203     0.3773  0.8490   0.3965
X[, 3]        0.0393     0.2974  0.1321   0.8949
X[, 4]        0.1117     0.0773  1.4444   0.1496
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.175 on 339 degrees of freedom
Multiple R-squared:  0.04834,	Adjusted R-squared:  0.03711
F-statistic: 4.305 on 4 and 339 DF,  p-value: 0.002072


The output of the regression analysis above shows the first probe to be significantly associated with the expression levels of ERBB2. The collinearity of the second and third probes reveals itself in the standard errors of their effect sizes: for these probes the standard error is much larger than those of the other two. This reflects the uncertainty in the estimates: the regression analysis has difficulty deciding to which covariate the explained proportion of variation in the response should be attributed. The large standard errors of these effect sizes propagate to the testing, as the Wald test statistic is the ratio of the estimated effect size to its standard error. Collinear covariates are thus less likely to pass the significance threshold.
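The inflation of standard errors by collinearity does not depend on the particulars of these data. The following sketch (Python/NumPy, fully simulated data, not the breastCancerVDX study) contrasts the model-based standard errors $\sigma \sqrt{[(\mathbf{X}^{\top} \mathbf{X})^{-1}]_{jj}}$ of two strongly correlated covariates with those of two uncorrelated ones:

```python
import numpy as np

# Simulated illustration: collinear covariates inflate standard errors
rng = np.random.default_rng(5)
n = 200
z = rng.normal(size=n)
x1 = z + 0.1 * rng.normal(size=n)      # corr(x1, x2) close to 1
x2 = z + 0.1 * rng.normal(size=n)
x3 = rng.normal(size=n)                # x3 and x4 uncorrelated
x4 = rng.normal(size=n)

def se(X, sigma=1.0):
    """Model-based standard errors sigma * sqrt([(X^T X)^{-1}]_{jj})."""
    return sigma * np.sqrt(np.diag(np.linalg.inv(X.T @ X)))

se_collinear = se(np.column_stack([x1, x2]))
se_orthogonal = se(np.column_stack([x3, x4]))
print(se_collinear)    # substantially larger ...
print(se_orthogonal)   # ... than these
```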

The case of two (or more) covariates being perfectly linearly dependent is referred to as super-collinearity. The rank of a high-dimensional design matrix is at most $n$: $\mbox{rank}(\mathbf{X}) \leq n$. Consequently, the dimension of the subspace spanned by the columns of $\mathbf{X}$ is smaller than or equal to $n$. As $p \gt n$, this implies that the columns of $\mathbf{X}$ are linearly dependent. Put differently, a high-dimensional $\mathbf{X}$ suffers from super-collinearity.

Example (Super-collinearity)

Consider the design matrix:

[$] \begin{eqnarray*} \mathbf{X} & = & \left( \begin{array}{rrr} 1 & -1 & 2 \\ 1 & 0 & 1 \\ 1 & 2 & -1 \\ 1 & 1 & 0 \end{array} \right) \end{eqnarray*} [$]

The columns of $\mathbf{X}$ are linearly dependent: the first column is the element-wise sum of the other two. The rank (more correctly, the column rank) of a matrix is the dimension of the space spanned by its column vectors. Hence, the rank of $\mathbf{X}$ equals the number of linearly independent columns: $\mbox{rank}(\mathbf{X}) = 2$.
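The rank claim of the example can be confirmed numerically (Python/NumPy):

```python
import numpy as np

# The design matrix of the example: column 1 equals column 2 plus column 3
X = np.array([[1, -1,  2],
              [1,  0,  1],
              [1,  2, -1],
              [1,  1,  0]])

print(np.linalg.matrix_rank(X))                  # the rank is 2
print(np.allclose(X[:, 0], X[:, 1] + X[:, 2]))   # the linear dependence
```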

Super-collinearity of an $(n \times p)$-dimensional design matrix $\mathbf{X}$ implies\footnote{If the (column) rank of $\mathbf{X}$ is smaller than $p$, there exists a non-trivial $\mathbf{v} \in \mathbb{R}^p$ such that $\mathbf{X} \mathbf{v} = \mathbf{0}_{n}$. Multiplication of this equality by $\mathbf{X}^{\top}$ yields $\mathbf{X}^{\top} \mathbf{X} \mathbf{v} = \mathbf{0}_{p}$. As $\mathbf{v} \not= \mathbf{0}_{p}$, this implies that $\mathbf{X}^{\top} \mathbf{X}$ is not invertible.} that the rank of the $(p \times p)$-dimensional matrix $\mathbf{X}^{\top} \mathbf{X}$ is smaller than $p$ and, consequently, that it is singular. A square matrix that does not have an inverse is called singular. A matrix $\mathbf{A}$ is singular if and only if its determinant is zero: $\mbox{det}(\mathbf{A}) = 0$.

Example (Singularity)

Consider the matrix $\mathbf{A}$ given by:

[$] \begin{eqnarray*} \mathbf{A} & = & \left( \begin{array}{rr} 1 & 2 \\ 2 & 4 \end{array} \right) \end{eqnarray*} [$]

Clearly, $\mbox{det}(\mathbf{A}) = a_{11} a_{22} - a_{12} a_{21} = 1 \times 4 - 2 \times 2 = 0$. Hence, $\mathbf{A}$ is singular and its inverse is undefined.

As $\mbox{det}(\mathbf{A})$ equals the product of the eigenvalues $\nu_j$ of $\mathbf{A}$, the matrix $\mathbf{A}$ is singular if one (or more) of its eigenvalues is zero. To see this, consider the spectral decomposition of $\mathbf{A}$: $\mathbf{A} = \sum_{j=1}^p \nu_j \, \mathbf{v}_j \, \mathbf{v}_j^{\top}$, where $\mathbf{v}_j$ is the eigenvector corresponding to $\nu_j$. The inverse of $\mathbf{A}$ is obtained by taking the reciprocal of the eigenvalues: $\mathbf{A}^{-1} = \sum_{j=1}^p \nu_j^{-1} \, \mathbf{v}_j \, \mathbf{v}_j^{\top}$. The right-hand side is undefined if $\nu_j = 0$ for any $j$.

Example (Singularity, continued)

Revisit Example. Matrix $\mathbf{A}$ has eigenvalues $\nu_1 = 5$ and $\nu_2 = 0$. According to the spectral decomposition, the inverse of $\mathbf{A}$ would be: $\mathbf{A}^{-1} = \tfrac{1}{5} \, \mathbf{v}_1 \, \mathbf{v}_1^{\top} + \tfrac{1}{0} \, \mathbf{v}_2 \, \mathbf{v}_2^{\top}$. This expression is undefined, as the second summand on the right-hand side involves division by zero.
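The example also previews the Moore-Penrose inverse used below: it takes reciprocals of the non-zero eigenvalues only and simply drops the zero one. A quick check (Python/NumPy):

```python
import numpy as np

# The singular matrix of the example; its eigenvalues are 5 and 0
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

eigvals, eigvecs = np.linalg.eigh(A)
print(np.allclose(sorted(eigvals), [0, 5]))   # eigenvalues 0 and 5

# Moore-Penrose inverse built from the spectral decomposition: only the
# non-zero eigenvalue contributes a summand (1/5) v v^T
v = eigvecs[:, np.argmax(eigvals)]            # eigenvector of eigenvalue 5
A_plus = (1 / 5) * np.outer(v, v)
print(np.allclose(A_plus, np.linalg.pinv(A)))
```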

In summary, the columns of a high-dimensional design matrix $\mathbf{X}$ are linearly dependent and this super-collinearity causes $\mathbf{X}^{\top} \mathbf{X}$ to be singular. Now recall the ML estimator of the parameter of the linear regression model: $\hat{\bbeta} = (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} \mathbf{Y}$. This estimator is only well-defined if $(\mathbf{X}^{\top} \mathbf{X})^{-1}$ exits. Hence, when $\mathbf{X}$ is high-dimensional the regression parameter $\bbeta$ cannot be estimated.

Above only the practical consequence of high-dimensionality is presented: the expression $( \mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} \mathbf{Y}$ cannot be evaluated numerically. But the problem arising from the high-dimensionality of the data is more fundamental. To appreciate this, consider the normal equations: $\mathbf{X}^{\top} \mathbf{X} \bbeta = \mathbf{X}^{\top} \mathbf{Y}$. The matrix $\mathbf{X}^{\top} \mathbf{X}$ is of rank (at most) $n$, while $\bbeta$ is a vector of length $p$. Hence, while there are $p$ unknowns, the system of linear equations from which these are to be solved effectively comprises $n$ degrees of freedom. If $p \gt n$, the vector $\bbeta$ cannot be determined uniquely from this system of equations. To make this more specific, let $\mathcal{U}$ be the (at most) $n$-dimensional subspace of $\mathbb{R}^p$ spanned by the rows of $\mathbf{X}$ and let the (at least) $(p-n)$-dimensional space $\mathcal{V}$ be the orthogonal complement of $\mathcal{U}$, i.e. $\mathcal{V} = \mathcal{U}^{\perp}$. Then, $\mathbf{X} \mathbf{v} = \mathbf{0}_{n}$ for all $\mathbf{v} \in \mathcal{V}$. So, $\mathcal{V}$ is the non-trivial null space of $\mathbf{X}$. Consequently, as $\mathbf{X}^{\top} \mathbf{X} \mathbf{v} = \mathbf{X}^{\top} \mathbf{0}_{n} = \mathbf{0}_{p}$, the solution of the normal equations is:

[$] \begin{eqnarray*} \hat{\bbeta} & = & ( \mathbf{X}^{\top} \mathbf{X})^{+} \mathbf{X}^{\top} \mathbf{Y} + \mathbf{v} \qquad \mbox{for all } \mathbf{v} \in \mathcal{V}, \end{eqnarray*} [$]

where $\mathbf{A}^{+}$ denotes the Moore-Penrose inverse of the matrix $\mathbf{A}$ (adopting the notation of [3]). For a square symmetric matrix, the generalized inverse is defined as:

[$] \begin{eqnarray*} \mathbf{A}^{+} & = & \sum\nolimits_{j=1}^p \nu_j^{-1} \, I_{\{ \nu_j \not= 0 \} } \, \mathbf{v}_j \, \mathbf{v}_j^{\top}, \end{eqnarray*} [$]

where $\mathbf{v}_j$ are the eigenvectors of $\mathbf{A}$ (and are not -- necessarily -- an element of $\mathcal{V}$). The solution of the normal equations is thus only determined up to an element of the non-trivial space $\mathcal{V}$, and there is no unique estimator of the regression parameter. With this in mind, the ridge estimator can be thought of as altering the normal equations such that they admit a unique solution for $\bbeta$.

To arrive at a unique regression estimator for studies with rank deficient design matrices, the minimum least squares estimator may be employed.

Definition [4]

The minimum least squares estimator of the regression parameter minimizes the sum-of-squares criterion and is of minimum length. Formally, $\hat{\bbeta}_{\mbox{{\tiny MLS}}} = \arg \min_{\bbeta \in \mathbb{R}^p} \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2$ such that $\| \hat{\bbeta}_{\mbox{{\tiny MLS}}} \|_2^2 \leq \| \bbeta \|_2^2$ for all $\bbeta$ that minimize $\| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2$.

If $\mathbf{X}$ is of full rank, the minimum least squares regression estimator coincides with the least squares/maximum likelihood one, as the latter is the unique minimizer of the sum-of-squares criterion and, thereby, automatically also the minimizer of minimum length. When $\mathbf{X}$ is rank deficient, $\hat{\bbeta}_{\mbox{{\tiny MLS}}} = (\mathbf{X}^{\top} \mathbf{X})^+ \mathbf{X}^{\top} \mathbf{Y}$. To see this, recall from above that $\| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2$ is minimized by $(\mathbf{X}^{\top} \mathbf{X})^+ \mathbf{X}^{\top} \mathbf{Y} + \mathbf{v}$ for all $\mathbf{v} \in \mathcal{V}$. The length of these minimizers is:

[$] \begin{eqnarray*} \| (\mathbf{X}^{\top} \mathbf{X})^+ \mathbf{X}^{\top} \mathbf{Y} + \mathbf{v} \|_2^2 & = & \| (\mathbf{X}^{\top} \mathbf{X})^+ \mathbf{X}^{\top} \mathbf{Y} \|_2^2 + 2 \mathbf{Y}^{\top} \mathbf{X} (\mathbf{X}^{\top} \mathbf{X})^+ \mathbf{v} + \| \mathbf{v} \|_2^2, \end{eqnarray*} [$]

which, as $(\mathbf{X}^{\top} \mathbf{X})^+ \mathbf{X}^{\top} \mathbf{Y}$ lies in the space spanned by the rows of $\mathbf{X}$ while $\mathbf{v} \in \mathcal{V}$ is orthogonal to that space, equals $\| (\mathbf{X}^{\top} \mathbf{X})^+ \mathbf{X}^{\top} \mathbf{Y} \|_2^2 + \| \mathbf{v} \|_2^2$. Clearly, any non-trivial $\mathbf{v}$, i.e. $\mathbf{v} \not= \mathbf{0}_p$, results in $\| (\mathbf{X}^{\top} \mathbf{X})^+ \mathbf{X}^{\top} \mathbf{Y} \|_2^2 + \| \mathbf{v} \|_2^2 \gt \| (\mathbf{X}^{\top} \mathbf{X})^+ \mathbf{X}^{\top} \mathbf{Y} \|_2^2$ and, thus, $\hat{\bbeta}_{\mbox{{\tiny MLS}}} = (\mathbf{X}^{\top} \mathbf{X})^+ \mathbf{X}^{\top} \mathbf{Y}$.
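This argument can be illustrated numerically. A sketch (Python/NumPy, simulated high-dimensional data with hypothetical $n \lt p$): the pseudo-inverse solution attains the least-squares fit, and adding a null-space vector $\mathbf{v}$ leaves the fit unchanged while lengthening the estimate.

```python
import numpy as np

# Simulated high-dimensional setting (p > n)
rng = np.random.default_rng(8)
n, p = 5, 8
X = rng.normal(size=(n, p))
Y = rng.normal(size=n)

# minimum least squares estimator via the Moore-Penrose inverse
beta_mls = np.linalg.pinv(X.T @ X) @ X.T @ Y

# a non-trivial element v of the null space of X (last right-singular vector)
_, _, Vt = np.linalg.svd(X)
v = Vt[-1]                      # X @ v = 0 (up to rounding)
beta_alt = beta_mls + v         # an alternative minimizer

print(np.allclose(X @ beta_mls, X @ beta_alt))              # same fit ...
print(np.linalg.norm(beta_alt) > np.linalg.norm(beta_mls))  # ... but longer
```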

An alternative (and related) estimator of the regression parameter $\bbeta$ that avoids the use of the Moore-Penrose inverse and is able to deal with (super-)collinearity among the columns of the design matrix is the ridge regression estimator proposed by [5]. It essentially consists of an ad-hoc fix to resolve the (near-)singularity of $\mathbf{X}^{\top} \mathbf{X}$: [5] propose to simply replace $\mathbf{X}^{\top} \mathbf{X}$ by $\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp}$ with $\lambda \in [0, \infty)$. The scalar $\lambda$ is a tuning parameter, henceforth called the penalty parameter for reasons that will become clear later. The ad-hoc fix resolves the singularity, as it adds a positive definite matrix, $\lambda \mathbf{I}_{pp}$, to a positive semi-definite one, $\mathbf{X}^{\top} \mathbf{X}$, making the sum a positive definite matrix (Lemma 14.2.4 of [3]), which is invertible.

Example (Super-collinearity, continued)

Recall the super-collinear design matrix $\mathbf{X}$ of Example. Then, for (say) $\lambda = 1$:

[$] \begin{eqnarray*} \mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp} & = & \left( \begin{array}{rrr} 5 & 2 & 2 \\ 2 & 7 & -4 \\ 2 & -4 & 7 \end{array} \right). \end{eqnarray*} [$]

The eigenvalues of this matrix are 11, 7, and 1. Hence, $\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp}$ has no zero eigenvalue and its inverse is well-defined.
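A numerical confirmation of this example (Python/NumPy):

```python
import numpy as np

# Recompute X^T X + lambda I for the example design matrix and lambda = 1,
# and check that its eigenvalues are indeed 11, 7 and 1
X = np.array([[1, -1,  2],
              [1,  0,  1],
              [1,  2, -1],
              [1,  1,  0]])
M = X.T @ X + np.eye(3)

print(M.astype(int))   # the matrix displayed above
print(np.allclose(sorted(np.linalg.eigvalsh(M)), [1, 7, 11]))
```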

With the ad-hoc fix for the singularity of $\mathbf{X}^{\top} \mathbf{X}$, [5] proceed to define the ridge regression estimator:

[$] \begin{eqnarray} \label{form.ridgeRegressionEstimator} \hat{\bbeta}(\lambda) & = & (\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-1} \mathbf{X}^{\top} \mathbf{Y}, \end{eqnarray} [$]

for $\lambda \in [0, \infty)$. Clearly, for strictly positive $\lambda$ this is a well-defined estimator (Question discusses the consequences of negative values of $\lambda$), even if $\mathbf{X}$ is high-dimensional. However, each choice of $\lambda$ leads to a different ridge regression estimate. The set of all ridge regression estimates $\{ \hat{\bbeta}(\lambda) \, : \, \lambda \in [0, \infty) \}$ is called the solution path or regularization path of the ridge estimator.

Example (Super-collinearity, continued)

Recall the super-collinear design matrix $\mathbf{X}$ of Example. Suppose that the corresponding response vector is $\mathbf{Y} = (1.3, -0.5, 2.6, 0.9)^{\top}$. The ridge regression estimates for, e.g. $\lambda = 1, 2$, and $10$ are then: $\hat{\bbeta}(1) = (0.614, 0.548, 0.066)^{\top}$, $\hat{\bbeta}(2) = (0.537, 0.490, 0.048)^{\top}$, and $\hat{\bbeta}(10) = (0.269, 0.267, 0.002)^{\top}$. The full solution path of the ridge estimator is shown in the left-hand side panel of Figure.
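These estimates can be verified numerically. Below is a minimal sketch in Python (numpy); the design matrix is not repeated in this section, so the one used here is an assumption, chosen to be consistent with the quantities reported above (its first column equals the sum of the other two, rendering the columns super-collinear):

```python
import numpy as np

# Assumed design matrix: first column = sum of the other two (super-collinear),
# consistent with the matrix X^T X + I displayed above.
X = np.array([[1., -1.,  2.],
              [1.,  0.,  1.],
              [1.,  2., -1.],
              [1.,  1.,  0.]])
y = np.array([1.3, -0.5, 2.6, 0.9])

def ridge(X, y, lam):
    """Ridge regression estimator: (X^T X + lam * I)^{-1} X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

for lam in (1, 2, 10):
    print(lam, np.round(ridge(X, y, lam), 3))
# lam = 1  -> [0.614 0.548 0.066]
# lam = 2  -> [0.537 0.49  0.048]
# lam = 10 -> [0.269 0.267 0.002]
```

Note that `np.linalg.solve` is preferred over explicitly inverting $\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp}$, for numerical stability.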

Figure: Left panel: the regularization path of the ridge estimator for the data of Example. Right panel: the ‘maximum likelihood fit’ $\widehat{Y}$ and the ‘ridge fit’ $\widehat{Y}(\lambda)$ (the dashed-dotted red line) to the observation $Y$ in the (hyper)plane spanned by the covariates.

Low-dimensionally, when $\mathbf{X}^{\top} \mathbf{X}$ is of full rank, the ridge regression estimator is linearly related to its maximum likelihood counterpart. To see this define the linear operator $\mathbf{W}_{\lambda} = (\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-1} \mathbf{X}^{\top} \mathbf{X}$. The ridge estimator $\hat{\bbeta}(\lambda)$ can then be expressed as $\mathbf{W}_{\lambda} \hat{\bbeta}$ for:

[$] \begin{eqnarray*} \mathbf{W}_{\lambda} \hat{\bbeta} & = & \mathbf{W}_{\lambda} (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} \mathbf{Y} \\ & = & ( \mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp} )^{-1} \mathbf{X}^{\top} \mathbf{X} (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} \mathbf{Y} \\ & = & ( \mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp} )^{-1} \mathbf{X}^{\top} \mathbf{Y} \\ & = & \hat{\bbeta}(\lambda). \end{eqnarray*} [$]

The linear operator $\mathbf{W}_{\lambda}$ thus transforms the maximum likelihood estimator of the regression parameter into its ridge regularized counterpart. High-dimensionally, no such linear relation between the ridge and the minimum least squares regression estimators exists (see Exercise).
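Low-dimensionally this linear relation is readily verified. A minimal sketch with hypothetical data (the dimensions and seed are arbitrary choices):

```python
import numpy as np

# Hypothetical low-dimensional data: n > p, X of full column rank.
rng = np.random.default_rng(1)
n, p = 10, 3
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
lam = 2.0

XtX = X.T @ X
beta_ml = np.linalg.solve(XtX, X.T @ y)                       # ML estimator
beta_ridge = np.linalg.solve(XtX + lam * np.eye(p), X.T @ y)  # ridge estimator

# The linear operator W_lam maps the ML estimator onto the ridge estimator.
W = np.linalg.solve(XtX + lam * np.eye(p), XtX)
print(np.allclose(W @ beta_ml, beta_ridge))                   # True
```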

With an estimate of the regression parameter $\bbeta$ available one can define the fit. For the ridge regression estimator the fit is defined analogously to the ML case:

[$] \begin{eqnarray*} \widehat{\mathbf{Y}}(\lambda) & = & \mathbf{X} \hat{\bbeta}(\lambda) \, \, \, = \, \, \, \mathbf{X} (\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-1} \mathbf{X}^{\top} \mathbf{Y} \, \, \, := \, \, \, \mathbf{H}(\lambda) \mathbf{Y}. \end{eqnarray*} [$]

For the ML regression estimator the fit can be understood as a projection of $\mathbf{Y}$ onto the subspace spanned by the columns of $\mathbf{X}$. This is depicted in the right panel of Figure, where $\widehat{Y}$ is the projection of the observation $Y$ onto the covariate space. The projected observation $\widehat{Y}$ is orthogonal to the residual $\varepsilon = Y - \widehat{Y}$. This means the fit is the point in the covariate space closest to the observation: the covariate space contains no point that better explains the observation. Compare this to the ‘ridge fit’, plotted as a dashed-dotted red line in the right panel of Figure. The ‘ridge fit’ is a curve, parameterized by $\{ \lambda : \lambda \in \mathbb{R}_{\geq 0} \}$, on which each point corresponds to the intersection of the regularization path $\hat{\bbeta}(\lambda)$ with the vertical line $x=\lambda$. The ‘ridge fit’ $\widehat{Y}(\lambda)$ runs from the ML fit $\widehat{Y} = \widehat{Y} (0)$ to an intercept-only fit in which the covariates do not contribute to the explanation of the observation. From the figure it is obvious that for any $\lambda \gt 0$ the ‘ridge residuals’ $Y - \widehat{Y}(\lambda)$ are not orthogonal to the fit $\widehat{Y}(\lambda)$ (confer Exercise). Hence, the ad-hoc fix of the ridge regression estimator resolves its non-existence in the face of super-collinearity, but yields a ‘ridge fit’ that is not optimal in explaining the observation. Mathematically, this is because the fit $\widehat{Y}(\lambda)$ corresponding to the ridge regression estimator is not a projection of $Y$ onto the covariate space (confer Exercise).
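The non-projection property can be checked numerically: the ridge hat matrix $\mathbf{H}(\lambda)$, unlike its ML counterpart, is not idempotent. A minimal sketch with hypothetical data:

```python
import numpy as np

# Hypothetical low-dimensional data: n > p, X of full column rank.
rng = np.random.default_rng(2)
n, p = 10, 3
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
lam = 2.0

H_ml = X @ np.linalg.solve(X.T @ X, X.T)                      # ML hat matrix
H_ridge = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T) # ridge hat matrix

print(np.allclose(H_ml @ H_ml, H_ml))            # True: a projection (idempotent)
print(np.allclose(H_ridge @ H_ridge, H_ridge))   # False: not a projection
# Consequently the ridge residuals are not orthogonal to the ridge fit:
print((y - H_ridge @ y) @ (H_ridge @ y))         # nonzero
```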

Eigenvalue shrinkage

The effect of the ridge penalty is also studied from the perspective of singular values. Let the singular value decomposition of the $n \times p$-dimensional design matrix $\mathbf{X}$ be:

[$] \begin{eqnarray*} \mathbf{X} & = & \mathbf{U}_x \mathbf{D}_x \mathbf{V}_x^{\top}. \end{eqnarray*} [$]

In the above, $\mathbf{D}_x$ is an $(n \times p)$-dimensional block matrix. Its upper left block is a $(\mbox{rank}(\mathbf{X}) \times \mbox{rank}(\mathbf{X}))$-dimensional diagonal matrix with the singular values on the diagonal. The remaining blocks, zero in number if $p=n$ and $\mathbf{X}$ is of full rank, one if $\mbox{rank}(\mathbf{X}) = n$ or $\mbox{rank}(\mathbf{X}) = p$, and three if $\mbox{rank}(\mathbf{X}) \lt \min \{ n, p \}$, are of appropriate dimensions and comprise zeros only. The matrix $\mathbf{U}_x$ is an $(n \times n)$-dimensional matrix whose columns contain the left singular vectors (denoted $\mathbf{u}_i$), and $\mathbf{V}_x$ is a $(p \times p)$-dimensional matrix whose columns contain the right singular vectors (denoted $\mathbf{v}_i$). The columns of $\mathbf{U}_x$ and $\mathbf{V}_x$ are orthonormal: $\mathbf{U}_x^{\top} \mathbf{U}_x = \mathbf{I}_{nn} = \mathbf{U}_x\mathbf{U}_x^{\top}$ and $\mathbf{V}_x^{\top} \mathbf{V}_x = \mathbf{I}_{pp} = \mathbf{V}_x\mathbf{V}_x^{\top}$.

The maximum likelihood estimator, which is well-defined if $n \gt p$ and $\mbox{rank}(\mathbf{X})=p$, can then be rewritten in terms of the SVD-matrices as:

[$] \begin{eqnarray*} \hat{\bbeta} & = & (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} \mathbf{Y} \qquad \qquad \qquad \, \, \, \, \, = \, \, \, (\mathbf{V}_x \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{U}_x \mathbf{D}_x \mathbf{V}_x^{\top})^{-1} \mathbf{V}_x \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{Y} \\ & = & (\mathbf{V}_x \mathbf{D}_x^{\top} \mathbf{D}_x \mathbf{V}_x^{\top})^{-1} \mathbf{V}_x \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{Y} \, \, \, = \, \, \, \mathbf{V}_x (\mathbf{D}_x^{\top} \mathbf{D}_x)^{-1} \mathbf{V}_x^{\top} \mathbf{V}_x \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{Y} \\ & = & \mathbf{V}_x (\mathbf{D}_x^{\top} \mathbf{D}_x)^{-1} \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{Y}. \end{eqnarray*} [$]

The block structure of $\mathbf{D}_x$ implies that $(\mathbf{D}_x^{\top} \mathbf{D}_x)^{-1} \mathbf{D}_x^{\top}$ is a $(p \times n)$-dimensional matrix with the reciprocals of the nonzero singular values on the diagonal of its upper left block. Similarly, the ridge regression estimator can be rewritten in terms of the SVD-matrices as:

[$] \begin{eqnarray} \nonumber \hat{\bbeta} (\lambda) & = & (\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-1} \mathbf{X}^{\top} \mathbf{Y} \\ \nonumber & = & (\mathbf{V}_x \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{U}_x \mathbf{D}_x \mathbf{V}_x^{\top} + \lambda \mathbf{I}_{pp})^{-1} \mathbf{V}_x \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{Y} \\ \nonumber & = & (\mathbf{V}_x \mathbf{D}_x^{\top} \mathbf{D}_x \mathbf{V}_x^{\top} + \lambda \mathbf{V}_x \mathbf{V}_x^{\top})^{-1} \mathbf{V}_x \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{Y} \\ \nonumber & = & \mathbf{V}_x (\mathbf{D}_x^{\top} \mathbf{D}_x + \lambda \mathbf{I}_{pp})^{-1} \mathbf{V}_x^{\top} \mathbf{V}_x \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{Y} \\ \label{form:formRidgeSpectral} & = & \mathbf{V}_x (\mathbf{D}_x^{\top} \mathbf{D}_x + \lambda \mathbf{I}_{pp})^{-1} \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{Y}. \end{eqnarray} [$]

Combining the two results and writing $(\mathbf{D}_x)_{jj} = d_{x,jj}$ for the $p$ nonzero singular values on the diagonal of the upper block of $\mathbf{D}_x$ we have: $d_{x,jj}^{-1} \geq d_{x,jj} (d_{x,jj}^2 + \lambda)^{-1}$ for all $\lambda \gt 0$. Thus, the ridge penalty shrinks the singular values.
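Both spectral formulations, and the shrinkage of the singular values, can be verified numerically. A minimal sketch with hypothetical data:

```python
import numpy as np

# Hypothetical low-dimensional data: n > p, X of full column rank.
rng = np.random.default_rng(4)
n, p = 10, 3
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
lam = 2.0

U, d, Vt = np.linalg.svd(X)      # full SVD: U is n x n, Vt is p x p
Dx = np.zeros((n, p))
Dx[:p, :p] = np.diag(d)          # D_x with the singular values on its diagonal

# ML estimator: normal equations versus its SVD form.
beta_ml = np.linalg.solve(X.T @ X, X.T @ y)
beta_ml_svd = Vt.T @ np.linalg.solve(Dx.T @ Dx, Dx.T) @ U.T @ y

# Ridge estimator: direct form versus its spectral formulation.
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
beta_ridge_svd = Vt.T @ np.linalg.solve(Dx.T @ Dx + lam * np.eye(p), Dx.T) @ U.T @ y

print(np.allclose(beta_ml, beta_ml_svd))         # True
print(np.allclose(beta_ridge, beta_ridge_svd))   # True

# The ridge penalty shrinks the singular values: d/(d^2 + lam) < 1/d for lam > 0.
print(np.all(d / (d**2 + lam) < 1 / d))          # True
```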

Return to the problem of the super-collinearity of $\mathbf{X}$ in the high-dimensional setting ($p \gt n$). The super-collinearity implies the singularity of $\mathbf{X}^{\top} \mathbf{X}$ and prevents the calculation of the OLS estimator of the regression coefficients. However, $\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp}$ is non-singular, with inverse: $(\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-1} = \sum_{j=1}^p (d_{x,jj}^2 + \lambda)^{-1} \mathbf{v}_j \mathbf{v}_j^{\top}$ where $d_{x,jj} = 0$ for $j \gt \mbox{rank}(\mathbf{X})$. The right-hand side is well-defined for $\lambda \gt 0$.
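The spectral expression for this inverse can be checked numerically in a hypothetical high-dimensional setting:

```python
import numpy as np

# Hypothetical high-dimensional data: p > n, so X^T X is singular.
rng = np.random.default_rng(5)
n, p = 4, 6
X = rng.standard_normal((n, p))
lam = 1.0

U, d, Vt = np.linalg.svd(X)                     # d holds the min(n, p) = n singular values
d_full = np.concatenate([d, np.zeros(p - n)])   # d_jj = 0 for j > rank(X)

inv_direct = np.linalg.inv(X.T @ X + lam * np.eye(p))
# Spectral form: sum over all p right singular vectors v_j.
inv_spectral = sum((d_full[j]**2 + lam)**-1 * np.outer(Vt[j], Vt[j])
                   for j in range(p))
print(np.allclose(inv_direct, inv_spectral))    # True
```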

From the ‘spectral formulation’ of the ridge regression estimator (\ref{form:formRidgeSpectral}) the $\lambda$-limits can be deduced. The lower $\lambda$-limit of the ridge regression estimator $\hat{\bbeta}(0_+) = \lim_{\lambda \downarrow 0} \hat{\bbeta}(\lambda)$ coincides with the minimum least squares estimator. This is immediate when $\mathbf{X}$ is of full rank. In the high-dimensional situation, when the dimension $p$ exceeds the sample size $n$, it follows from the limit:

[$] \begin{eqnarray*} \lim_{\lambda \downarrow 0} \frac{d_{x,jj}}{d_{x,jj}^2 + \lambda} & = & \left\{ \begin{array}{lcl} d_{x,jj}^{-1} & \mbox{if} & d_{x,jj} \not= 0 \\ 0 & \mbox{if} & d_{x,jj} = 0 \end{array} \right. \end{eqnarray*} [$]

Then, $\lim_{\lambda \downarrow 0} \hat{\bbeta}(\lambda) = \hat{\bbeta}_{\mbox{{\tiny MLS}}}$. Similarly, the upper $\lambda$-limit is evident from the fact that $\lim_{\lambda \rightarrow \infty} d_{x,jj} (d_{x,jj}^2 + \lambda)^{-1} = 0$, which implies $\lim_{\lambda \rightarrow \infty} \hat{\bbeta}(\lambda) = \mathbf{0}_p$. Hence, all regression coefficients are shrunken towards zero as the penalty parameter increases. This also holds for $\mathbf{X}$ with $p \gt n$. This behaviour is, however, not necessarily monotone in $\lambda$: $\lambda_{a} \gt \lambda_b$ does not imply $|\hat{\beta}_j (\lambda_a) | \lt |\hat{\beta}_j (\lambda_b) |$. Upon close inspection this can be observed in the ridge solution path of $\hat{\beta}_3$ in Figure.
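Both $\lambda$-limits can be illustrated numerically, again with hypothetical high-dimensional data; the minimum least squares estimator is computed here via the Moore-Penrose inverse (`np.linalg.pinv`):

```python
import numpy as np

# Hypothetical high-dimensional data: p > n, so X^T X is singular.
rng = np.random.default_rng(6)
n, p = 4, 6
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

def ridge(lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Lower lambda-limit: the minimum least squares estimator.
beta_mls = np.linalg.pinv(X) @ y
print(np.allclose(ridge(1e-8), beta_mls, atol=1e-4))   # True

# Upper lambda-limit: all coefficients are shrunken towards zero.
print(np.max(np.abs(ridge(1e10))))                     # practically zero
```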

Principal components regression

Principal components regression is a close relative of ridge regression that can also be applied in the high-dimensional context. It explains the response not by the covariates themselves but by linear combinations of the covariates as defined by the principal components of $\mathbf{X}$. Let $\mathbf{U}_x \mathbf{D}_x \mathbf{V}_x^{\top}$ be the singular value decomposition of $\mathbf{X}$. The $k_0$-th principal component of $\mathbf{X}$ is then $\mathbf{X} \mathbf{v}_{k_0}$, henceforth denoted $\mathbf{z}_{k_0}$. Let $\mathbf{Z}_k$ be the matrix of the first $k$ principal components, i.e. $\mathbf{Z}_k = \mathbf{X} \mathbf{V}_{x,k}$ where $\mathbf{V}_{x,k}$ contains the first $k$ right singular vectors as columns. Principal components regression then amounts to regressing the response $\mathbf{Y}$ on $\mathbf{Z}_{k}$, that is, it fits the model $\mathbf{Y} = \mathbf{Z}_k \ggamma + \vvarepsilon$. The least squares estimator of $\ggamma$ then is (with some abuse of notation):

[$] \begin{eqnarray*} \hat{\ggamma} & = & (\mathbf{Z}_k^{\top} \mathbf{Z}_k)^{-1} \mathbf{Z}_k^{\top} \mathbf{Y} \\ & = & (\mathbf{V}_{x,k}^{\top} \mathbf{X}^{\top} \mathbf{X} \mathbf{V}_{x,k})^{-1} \mathbf{V}_{x,k}^{\top} \mathbf{X}^{\top} \mathbf{Y} \\ & = & (\mathbf{V}_{x,k}^{\top} \mathbf{V}_x \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{U}_x \mathbf{D}_x \mathbf{V}_x^{\top} \mathbf{V}_{x,k})^{-1} \mathbf{V}_{x,k}^{\top} \mathbf{V}_x \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{Y} \\ & = & (\mathbf{I}_{kp} \mathbf{D}_x^{\top} \mathbf{D}_x \mathbf{I}_{pk})^{-1} \mathbf{I}_{kp} \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{Y} \\ & = & (\mathbf{D}_{x,k}^{\top} \mathbf{D}_{x,k})^{-1} \mathbf{D}_{x,k}^{\top} \mathbf{U}_x^{\top} \mathbf{Y}, \end{eqnarray*} [$]

where $\mathbf{D}_{x,k}$ is a submatrix of $\mathbf{D}_x$ formed from $\mathbf{D}_x$ by removal of the last $p-k$ columns. Similarly, $\mathbf{I}_{kp}$ and $\mathbf{I}_{pk}$ are obtained from $\mathbf{I}_{pp}$ by removal of the last $p-k$ rows and columns, respectively. The principal component regression estimator of $\bbeta$ then is $\hat{\bbeta}_{\mbox{{\tiny pcr}}} = \mathbf{V}_{x,k} (\mathbf{D}_{x,k}^{\top} \mathbf{D}_{x,k})^{-1} \mathbf{D}_{x,k}^{\top} \mathbf{U}_x^{\top} \mathbf{Y}$. When $k$ is set equal to the column rank of $\mathbf{X}$, and thus to the rank of $\mathbf{X}^{\top} \mathbf{X}$, the principal component regression estimator $\hat{\bbeta}_{\mbox{{\tiny pcr}}} = (\mathbf{X}^{\top} \mathbf{X})^- \mathbf{X}^{\top} \mathbf{Y}$, where $\mathbf{A}^-$ denotes the Moore-Penrose inverse of matrix $\mathbf{A}$.
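The reduction to the Moore-Penrose based estimator at $k = \mbox{rank}(\mathbf{X})$ can be checked numerically. A minimal sketch with hypothetical data:

```python
import numpy as np

# Hypothetical high-dimensional data: p > n.
rng = np.random.default_rng(7)
n, p = 4, 6
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

U, d, Vt = np.linalg.svd(X)
Dx = np.zeros((n, p))
Dx[:n, :n] = np.diag(d)               # the min(n, p) = n singular values

def pcr(k):
    """Principal components regression estimator of beta with k components."""
    Dk = Dx[:, :k]                    # D_x with the last p - k columns removed
    Vk = Vt.T[:, :k]                  # first k right singular vectors
    gamma = np.linalg.solve(Dk.T @ Dk, Dk.T @ (U.T @ y))
    return Vk @ gamma

# With k equal to rank(X), PCR reproduces the Moore-Penrose based estimator.
k = np.linalg.matrix_rank(X)          # equals n = 4 here (almost surely)
print(np.allclose(pcr(k), np.linalg.pinv(X) @ y))   # True
```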

The relation between ridge and principal component regression becomes clear when their corresponding estimators are written in terms of the singular value decomposition of $\mathbf{X}$:

[$] \begin{eqnarray*} \hat{\bbeta}_{\mbox{{\tiny pcr}}} & = & \mathbf{V}_{x,k} (\mathbf{I}_{kp} \mathbf{D}_x^{\top} \mathbf{D}_x \mathbf{I}_{pk})^{-1} \mathbf{I}_{kp} \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{Y}, \\ \hat{\bbeta} (\lambda) & = & \mathbf{V}_x (\mathbf{D}_x^{\top} \mathbf{D}_x + \lambda \mathbf{I}_{pp})^{-1} \mathbf{D}_x^{\top} \mathbf{U}_x^{\top} \mathbf{Y}. \end{eqnarray*} [$]

Both operate on the singular values of the design matrix. But where principal components regression thresholds the singular values of $\mathbf{X}$, ridge regression shrinks them (depending on their size). Hence, the one applies a discrete map to the singular values while the other applies a continuous one.
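This contrast can be made explicit through the factors each method effectively applies to the reciprocal singular values. A sketch with hypothetical singular values:

```python
import numpy as np

d = np.array([3.0, 2.0, 0.5, 0.1])    # hypothetical singular values of X
lam, k = 1.0, 2

# PCR applies a discrete (0/1) map: keep the first k singular values, drop the rest.
pcr_factor = np.where(np.arange(d.size) < k, 1.0, 0.0)

# Ridge applies a continuous map d^2 / (d^2 + lam),
# shrinking the small singular values relatively more.
ridge_factor = d**2 / (d**2 + lam)

print(pcr_factor)                     # [1. 1. 0. 0.]
print(np.round(ridge_factor, 3))      # approximately [0.9, 0.8, 0.2, 0.01]
```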

General References

van Wieringen, Wessel N. (2021). "Lecture notes on ridge regression". arXiv:1509.09169 [stat.ME].

References

1. Draper, N. R. and Smith, H. (1998). Applied Regression Analysis (3rd edition). John Wiley & Sons.
2. Pust, S., Klokk, T., Musa, N., Jenstad, M., Risberg, B., Erikstein, B., Tcatchoff, L., Liestøl, K., Danielsen, H., Van Deurs, B., and Sandvig, K. (2013). Flotillins as regulators of ErbB2 levels in breast cancer. Oncogene, 32(29), 3443--3451.
3. Harville, D. A. (2008). Matrix Algebra From a Statistician's Perspective. Springer, New York.
4. Ishwaran, H. and Rao, J. S. (2014). Geometry and properties of generalized ridge regression in high dimensions. Contemporary Mathematics, 622, 81--93.
5. Hoerl, A. E. and Kennard, R. W. (1970). Ridge regression: biased estimation for nonorthogonal problems. Technometrics, 12(1), 55--67.