# Uniqueness, analytic solutions, and sparsity

In this chapter we return to the linear regression model, which is still fitted in penalized fashion but this time with a so-called lasso penalty. Yet another penalty? Yes, but the resulting estimator will turn out to have interesting properties. The outline of this chapter loosely follows that of its counterpart on ridge regression (Chapter Ridge regression ). The chapter can -- at least partially -- be seen as an elaborated version of the original work on lasso regression, i.e. [1], with most topics covered and visualized more extensively and incorporating results and examples published since.

Recall that ridge regression finds an estimator of the parameter of the linear regression model through the minimization of:

[$] \begin{eqnarray} \label{form.penLeastSquares} \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + f_{\mbox{{\scriptsize pen}}}(\bbeta, \lambda), \end{eqnarray} [$]

with $f_{\mbox{{\scriptsize pen}}}(\bbeta, \lambda) = \lambda \| \bbeta \|_2^2$. The particular choice of the penalty function originated in a post-hoc motivation of the ad-hoc fix to the singularity of the matrix $\mathbf{X}^{\top} \mathbf{X}$, stemming from the design matrix $\mathbf{X}$ not being of full rank (i.e. $\mbox{rank}(\mathbf{X}) \lt p$). The ad-hoc nature of the fix suggests that the choice for the squared Euclidean norm of $\bbeta$ as a penalty is arbitrary and other choices may be considered, some of which were already encountered in Chapter Generalizing ridge regression .

One such choice is the so-called lasso penalty, giving rise to lasso regression, as introduced by [1]. Like ridge regression, lasso regression fits the linear regression model $\mathbf{Y} = \mathbf{X} \bbeta + \vvarepsilon$ with the standard assumption on the error $\vvarepsilon$. Like ridge regression, it does so by minimization of the sum of squares augmented with a penalty. Hence, lasso regression too minimizes loss function (\ref{form.penLeastSquares}). The difference with ridge regression is in the penalty function. Instead of the squared Euclidean norm, lasso regression uses the $\ell_1$-norm: $f_{\mbox{{\scriptsize pen}}}(\bbeta, \lambda_1) = \lambda_1 \| \bbeta \|_1$, the sum of the absolute values of the regression parameters multiplied by the lasso penalty parameter $\lambda_1$. To distinguish the ridge and lasso penalty parameters, they are henceforth denoted $\lambda_2$ and $\lambda_1$, respectively, with the subscript referring to the norm used in the penalty. The lasso regression loss function is thus:

[$] \begin{eqnarray} \label{form.lassoLossFunction} \mathcal{L}_{\mbox{{\scriptsize lasso}}}(\bbeta; \lambda) & = & \| \mathbf{Y} - \mathbf{X} \, \bbeta \|^2_2 + \lambda_1 \| \bbeta \|_1 \, \, \, = \, \, \, \sum\nolimits_{i=1}^n (Y_i - \mathbf{X}_{i\ast} \, \bbeta)^2 + \lambda_1 \sum\nolimits_{j=1}^p | \beta_j |. \end{eqnarray} [$]

The lasso regression estimator is then defined as the minimizer of this loss function. As with the ridge regression loss function, the maximum likelihood estimator (should it exist) of $\bbeta$ minimizes the first part, while the second part is minimized by setting $\bbeta$ equal to the $p$-dimensional zero vector. For $\lambda_1$ close to zero, the lasso estimate is close to the maximum likelihood estimate, whereas for large $\lambda_1$ the penalty term overshadows the sum-of-squares and the lasso estimate is small (in some sense). Intermediate choices of $\lambda_1$ mold a compromise between these two extremes, with the penalty parameter determining the contribution of each part to this compromise. The lasso regression estimator thus is not one but a whole sequence of estimators of $\bbeta$, one for every $\lambda_1 \in \mathbb{R}_{\gt0}$. This sequence is the lasso regularization path, defined as $\{ \hat{\bbeta}(\lambda_1) \, : \, \lambda_1 \in \mathbb{R}_{\gt0} \}$. To arrive at a final lasso estimator of $\bbeta$, like its ridge counterpart, the lasso penalty parameter $\lambda_1$ needs to be chosen (see Section \ref{sect:lassoPenaltyChoice}).

Contour plots of the sum-of-squares and the lasso regression loss (left and right panel, respectively). The dotted grey lines represent level sets. The red line and dot represent the location of the minimum in the respective panels.

The $\ell_1$-penalty of lasso regression is as arbitrary as the $\ell_2^2$-penalty of ridge regression. The latter ensured the existence of a well-defined estimator of the regression parameter $\bbeta$ in the presence of super-collinearity in the design matrix $\mathbf{X}$, in particular when the dimension $p$ exceeds the sample size $n$. The augmentation of the sum-of-squares with the lasso penalty achieves the same. This is illustrated in Figure. For the high-dimensional setting with $p=2$, $n=1$ and arbitrary data the level sets of the sum-of-squares $\| \mathbf{Y} - \mathbf{X} \, \bbeta \|^2_2$ and the lasso regression loss $\| \mathbf{Y} - \mathbf{X} \, \bbeta \|^2_2 + \lambda_1 \| \bbeta \|_1$ are plotted (left and right panel, respectively). In both panels the minimum is indicated in red. For the sum-of-squares the minimum is a line. As pointed out before in Section The ridge regression estimator of Chapter Ridge regression, this minimum is determined only up to an element of the null space of the design matrix $\mathbf{X}$, which in this case is non-trivial. In contrast, the lasso regression loss exhibits a unique, well-defined minimum. Hence, the augmentation of the sum-of-squares with the lasso penalty yields a well-defined estimator of the regression parameter. (This needs some attenuation: in general the minimum of the lasso regression loss need not be unique, confer Section Uniqueness.)

The mathematics involved in the derivations of this chapter tends to be more intricate than that for ridge regression. This is due to the non-differentiability of the lasso penalty at zero, which has consequences for all aspects of the lasso regression estimator, as is already obvious in the right-hand panel of Figure: confer the non-differentiable points of the lasso regression loss level sets.

## Uniqueness

The lasso regression loss function is the sum of the sum-of-squares criterion and a sum of absolute value functions. Both are convex in $\bbeta$: the former is not strictly convex due to the high-dimensionality, while the absolute value function, being piecewise linear, is convex but not strictly so. Thereby the lasso loss function too is convex but not strictly so. Consequently, its minimum need not be uniquely defined. But the set of solutions of a convex minimization problem is convex (Theorem 9.4.1, Fletcher, 1987). Hence, should there exist multiple minimizers of the lasso loss function, any convex combination of them is again a minimizer. Thus, if $\hat{\bbeta}_a (\lambda_1)$ and $\hat{\bbeta}_b (\lambda_1)$ are lasso estimators, then so is $(1-\theta) \hat{\bbeta}_a (\lambda_1) + \theta \hat{\bbeta}_b (\lambda_1)$ for any $\theta \in (0,1)$. This is illustrated in Example.

Example (Perfectly super-collinear covariates)

Consider the standard linear regression model $Y_i = \mathbf{X}_{i,\ast} \bbeta + \varepsilon_i$ for $i=1, \ldots, n$ and with the $\varepsilon_i$ i.i.d. normally distributed with zero mean and a common variance. The rows of the design matrix $\mathbf{X}$ are of length two, neither column represents the intercept, but $\mathbf{X}_{\ast, 1} = \mathbf{X}_{\ast, 2}$. Suppose an estimate of the regression parameter $\bbeta$ of this model is obtained through the minimization of the sum-of-squares augmented with a lasso penalty, $\| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_1 \| \bbeta \|_1$ with penalty parameter $\lambda_1 \gt 0$. To find the minimizer define $u = \beta_1 + \beta_2$ and $v = \beta_1 - \beta_2$ and rewrite the lasso loss criterion to:

[$] \begin{eqnarray*} \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_1 \| \bbeta \|_1 & = & \| \mathbf{Y} - \mathbf{X}_{\ast,1} u \|_2^2 + \tfrac{1}{2} \lambda_1 ( | u + v |+ | u - v |). \end{eqnarray*} [$]

The function $| u + v |+ | u - v |$ is minimized with respect to $v$ for any $v$ such that $|v| \lt |u|$ and the corresponding minimum equals $2| u|$. The estimator of $u$ thus minimizes:

[$] \begin{eqnarray*} \| \mathbf{Y} - \mathbf{X}_{\ast,1} u \|_2^2 + \lambda_1 | u|. \end{eqnarray*} [$]

For sufficiently small values of $\lambda_1$ the estimate of $u$ will be unequal to zero. Then, any $v$ such that $|v| \lt |u|$ will yield the same minimum of the lasso loss function. Consequently, $\hat{\bbeta}(\lambda_1)$ is not uniquely defined as $\hat{\beta}_1(\lambda_1) = \tfrac{1}{2} [\hat{u}(\lambda_1) + \hat{v}(\lambda_1)]$ need not equal $\hat{\beta}_2(\lambda_1) = \tfrac{1}{2} [\hat{u}(\lambda_1) - \hat{v}(\lambda_1)]$ for any $\hat{v}(\lambda_1)$ such that $0 \lt | \hat{v}(\lambda_1) | \lt |\hat{u}(\lambda_1)|$.
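The non-uniqueness in this example is readily verified numerically. The sketch below (written in Python with numpy, standing in for this chapter's R scripts) constructs a design matrix with two identical columns, obtains $\hat{u}(\lambda_1)$ by soft-thresholding the single-covariate problem, and confirms that every admissible choice of $\hat{v}(\lambda_1)$ attains the same lasso loss:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10)            # a single covariate
X = np.column_stack([x, x])        # two perfectly collinear columns
y = 2.0 * x + rng.normal(scale=0.1, size=10)
lam = 1.0                          # the lasso penalty parameter lambda_1

def lasso_loss(beta):
    return np.sum((y - X @ beta) ** 2) + lam * np.sum(np.abs(beta))

# minimizer of || y - x u ||_2^2 + lam |u| via soft-thresholding
c = x @ y
u_hat = np.sign(c) * max(abs(c) - lam / 2, 0.0) / (x @ x)

# any v with |v| <= |u_hat| splits u_hat into (beta_1, beta_2) with the same loss
losses = [lasso_loss(np.array([(u_hat + v) / 2, (u_hat - v) / 2]))
          for v in np.linspace(-abs(u_hat), abs(u_hat), 5)]
print(np.ptp(losses))  # spread of the attained losses: zero up to rounding
```

All candidate minimizers, from $(\hat{u}, 0)^{\top}$-type splits to the symmetric $(\tfrac{1}{2}\hat{u}, \tfrac{1}{2}\hat{u})^{\top}$, yield one and the same loss, as the example predicts.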

The lasso regression estimator $\hat{\bbeta}(\lambda_1)$ need not be unique, but its linear predictor $\mathbf{X} \hat{\bbeta}(\lambda_1)$ is. This can be proven by contradiction [2]. Suppose there exist two lasso regression estimators of $\bbeta$, denoted $\hat{\bbeta}_a (\lambda_1)$ and $\hat{\bbeta}_b (\lambda_1)$, such that $\mathbf{X} \hat{\bbeta}_a (\lambda_1) \not= \mathbf{X} \hat{\bbeta}_b (\lambda_1)$. Define $c$ to be the minimum of the lasso loss function. Then, by definition of the lasso estimators, $\hat{\bbeta}_a (\lambda_1)$ and $\hat{\bbeta}_b (\lambda_1)$ satisfy:

[$] \begin{eqnarray*} \| \mathbf{Y} - \mathbf{X} \hat{\bbeta}_a (\lambda_1) \|_2^2 + \lambda_1 \| \hat{\bbeta}_a (\lambda_1) \|_1 & = & c \, \, \, = \, \, \, \| \mathbf{Y} - \mathbf{X} \hat{\bbeta}_b (\lambda_1) \|_2^2 + \lambda_1 \| \hat{\bbeta}_b (\lambda_1) \|_1. \end{eqnarray*} [$]

For $\theta \in (0,1)$ we then have:

[$] \begin{eqnarray*} & & \hspace{-1cm} \| \mathbf{Y} - \mathbf{X} [(1-\theta) \hat{\bbeta}_a (\lambda_1) + \theta \hat{\bbeta}_b (\lambda_1)] \|_2^2 + \lambda_1 \| (1-\theta) \hat{\bbeta}_a (\lambda_1) + \theta \hat{\bbeta}_b (\lambda_1) \|_1 \\ & = & \| (1-\theta)[ \mathbf{Y} - \mathbf{X} \hat{\bbeta}_a (\lambda_1) ] + \theta [ \mathbf{Y} - \mathbf{X} \hat{\bbeta}_b (\lambda_1)] \|_2^2 + \lambda_1 \| (1-\theta) \hat{\bbeta}_a (\lambda_1) + \theta \hat{\bbeta}_b (\lambda_1) \|_1 \\ & \lt & (1-\theta) \| \mathbf{Y} - \mathbf{X} \hat{\bbeta}_a (\lambda_1) \|_2^2 + \theta \| \mathbf{Y} - \mathbf{X} \hat{\bbeta}_b (\lambda_1) \|_2^2 + (1-\theta) \lambda_1 \| \hat{\bbeta}_a (\lambda_1) \|_1 + \theta \lambda_1 \| \hat{\bbeta}_b (\lambda_1) \|_1 \\ & = & (1-\theta) c + \theta c \, \, \, = \, \, \, c, \end{eqnarray*} [$]

by the strict convexity of $\| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2$ in $\mathbf{X} \bbeta$ and the convexity of $\| \bbeta \|_1$, for $\theta \in (0,1)$. This implies that $(1-\theta) \hat{\bbeta}_a (\lambda_1) + \theta \hat{\bbeta}_b (\lambda_1)$ attains a value of the lasso loss function smaller than its minimum $c$, which contradicts the assumption that $\hat{\bbeta}_a (\lambda_1)$ and $\hat{\bbeta}_b (\lambda_1)$ are lasso regression estimators. Hence, the lasso linear predictor is unique.

Example (Perfectly super-collinear covariates, revisited)

Revisit the setting of Example, in which a linear regression model without intercept and with only two, perfectly correlated covariates is fitted to data. The example revealed that the lasso estimator need not be unique. The lasso predictor, however, is:

[$] \begin{eqnarray*} \widehat{\mathbf{Y}}(\lambda_1) \, \, \, = \, \, \,\mathbf{X} \hat{\bbeta}(\lambda_1) \, \, \, = \, \, \, \mathbf{X}_{\ast,1} \hat{\beta}_1(\lambda_1) + \mathbf{X}_{\ast,2} \hat{\beta}_2(\lambda_1) & = & \mathbf{X}_{\ast,1} [ \hat{\beta}_1(\lambda_1) + \hat{\beta}_2(\lambda_1)] \, \, \, = \, \, \, \mathbf{X}_{\ast,1} \hat{u}(\lambda_1), \end{eqnarray*} [$]

with $u$ defined and (uniquely) estimated as in Example and $v$ dropping from the predictor.

Example

The non-uniqueness of the lasso estimator and the uniqueness of its predictor, both raised above, are now illustrated in a numerical setting. Hereto data are generated in accordance with the linear regression model $\mathbf{Y} = \mathbf{X} \bbeta + \vvarepsilon$, where the $n=5$ rows of $\mathbf{X}$ are sampled from $\mathcal{N}[\mathbf{0}_p, (1-\rho) \mathbf{I}_{pp} + \rho \mathbf{1}_{pp}]$ with $p=10$, $\rho=0.99$, $\bbeta = (\mathbf{1}_3^{\top}, \mathbf{0}_{p-3}^{\top})^{\top}$ and $\vvarepsilon \sim \mathcal{N}(\mathbf{0}_n, \tfrac{1}{100} \mathbf{I}_{nn})$. With these data the lasso estimator of the regression parameter $\bbeta$ for $\lambda_1=1$ is evaluated using two different algorithms (see Section Estimation ). The employed implementations of these algorithms are those available through the R-packages penalized and glmnet. Both estimates, denoted $\hat{\bbeta}_{\texttt{p}} (\lambda_1)$ and $\hat{\bbeta}_{\texttt{g}} (\lambda_1)$ (the subscript refers to the first letter of the package), are given in Table.

| | $\beta_1$ | $\beta_2$ | $\beta_3$ | $\beta_4$ | $\beta_5$ | $\beta_6$ | $\beta_7$ | $\beta_8$ | $\beta_9$ | $\beta_{10}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| penalized, $\hat{\bbeta}_{\texttt{p}} (\lambda_1)$ | 0.267 | 0.000 | 1.649 | 0.093 | 0.000 | 0.000 | 0.000 | 0.571 | 0.000 | 0.269 |
| glmnet, $\hat{\bbeta}_{\texttt{g}} (\lambda_1)$ | 0.269 | 0.000 | 1.776 | 0.282 | 0.195 | 0.000 | 0.000 | 0.325 | 0.000 | 0.000 |

The table reveals that the estimates differ, in particular in their support (i.e. the set of nonzero values of the estimate of $\bbeta$). This is troublesome when it comes to communication of the optimal model. From a different perspective the realized loss $\| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_1 \| \bbeta \|_1$ for each estimate is approximately equal to $2.99$, with the difference possibly due to convergence criteria of the algorithms. On another note, their corresponding predictors, $\mathbf{X} \hat{\bbeta}_{\texttt{p}} (\lambda_1)$ and $\mathbf{X} \hat{\bbeta}_{\texttt{g}} (\lambda_1)$, correlate almost perfectly: $\mbox{cor}[\mathbf{X} \hat{\bbeta}_{\texttt{p}} (\lambda_1), \mathbf{X} \hat{\bbeta}_{\texttt{g}} (\lambda_1)] = 0.999$. These results thus corroborate the non-uniqueness of the estimator and the uniqueness of the predictor.

The following R script reproduces the analysis.

```r
# set the random seed
set.seed(4)

library(penalized)
library(glmnet)
library(mvtnorm)

# set the dimension
p <- 10

# create covariance matrix
Sigma <- matrix(0.99, p, p)
diag(Sigma) <- 1

# sample the design matrix
n <- 5
X <- rmvnorm(n, sigma=Sigma)

# create a sparse beta vector
betas <- c(rep(1, 3), rep(0, p-3))

# sample response
Y <- X %*% betas + rnorm(n, sd=0.1)

# evaluate lasso estimator with two methods
Bhat1 <- matrix(as.numeric(coef(penalized(Y, X, lambda1=1, unpenalized=~0),
                                "all")), ncol=1)
Bhat2 <- matrix(as.numeric(coef(glmnet(X, Y, lambda=1/(2*n), standardize=FALSE,
                                       intercept=FALSE)))[-1], ncol=1)

# compare estimates
cbind(Bhat1, Bhat2)

# compare the loss
sum((Y - X %*% Bhat1)^2) + sum(abs(Bhat1))
sum((Y - X %*% Bhat2)^2) + sum(abs(Bhat2))

# compare predictor
cor(X %*% Bhat1, X %*% Bhat2)
```

Note that in the code above the evaluation of the lasso estimator appears to employ a different lasso penalty parameter $\lambda_1$ for each package. This is due to the fact that internally (after removal of standardization of $\mathbf{X}$ and $\mathbf{Y}$) the loss functions optimized are $\| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_1 \| \bbeta \|_1$ vs. $(2n)^{-1} \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_1 \| \bbeta \|_1$. Rescaling of $\lambda_1$ resolves this issue.
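In formulas, writing the glmnet penalty parameter as $\lambda_{\mbox{{\scriptsize g}}}$, the two minimization problems share their minimizer when the penalty parameters are matched as follows (the factor $(2n)^{-1}$ scales the whole objective and may be absorbed into the penalty parameter):

[$] \begin{eqnarray*} \arg \min\nolimits_{\bbeta} \, \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_1 \| \bbeta \|_1 & = & \arg \min\nolimits_{\bbeta} \, (2n)^{-1} \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_{\mbox{{\scriptsize g}}} \| \bbeta \|_1, \qquad \mbox{with} \, \, \, \lambda_{\mbox{{\scriptsize g}}} \, = \, \lambda_1 / (2n). \end{eqnarray*} [$]

With $\lambda_1 = 1$ and $n = 5$ this gives the argument lambda=1/(2*n) passed to glmnet in the script above.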

## Analytic solutions

In general, no explicit expression for the lasso regression estimator exists. There are exceptions, as illustrated in the two examples below (an orthonormal design, and $p=2$ with equivariant covariates). Nonetheless, it is possible to derive properties of the lasso estimator, among others the smoothness of its regularization path (Theorem) and its limiting behaviour as $\lambda_1 \rightarrow \infty$ (see the end of this section).

(Theorem 2, [3])

The lasso regression loss function (\ref{form.lassoLossFunction}) yields a regularization path $\{ \hat{\bbeta}(\lambda_1) \, : \, \lambda_1 \in \mathbb{R}_{\gt0} \}$ that is piecewise linear in $\lambda_1$.

Confer [3]. ■

This piecewise linear nature of the lasso solution path is illustrated in the left-hand panel of Figure for an arbitrary data set. At each vertical dotted line a discontinuity may occur in the derivative with respect to $\lambda_1$ of the regularization path of a lasso estimate of an element of $\bbeta$. The plot also foreshadows the $\lambda_1 \rightarrow \infty$ limiting behaviour of the lasso regression estimator: the estimator tends to zero. This is no surprise knowing that the ridge regression estimator exhibits the same behaviour and that the lasso regression loss function is similar in form to that of ridge regression: a sum-of-squares plus a penalty term (which is linear in the penalty parameter).

The left panel shows the regularization path of the lasso regression estimator for simulated data. The vertical grey dotted lines indicate the values of $\lambda_1$ at which there is a discontinuity in the derivative (with respect to $\lambda_1$) of the lasso regularization path of one of the regression estimates. The right panel displays the soft (solid, red) and hard (grey, dotted) threshold functions.

For particular cases, an orthonormal design (Example) and $p=2$ (Example), an analytic expression for the lasso regression estimator exists. While the latter is of limited use, the former is exemplary and will be of use later in the numerical evaluation of the lasso regression estimator in the general case (see Section Estimation ).

Example (Orthonormal design matrix)

Consider an orthonormal design matrix $\mathbf{X}$, i.e. $\mathbf{X}^{\top} \mathbf{X} = \mathbf{I}_{pp} = (\mathbf{X}^{\top} \mathbf{X})^{-1}$. The lasso estimator then is:

[$] \begin{eqnarray*} \hat{\beta}_j (\lambda_1) & = & \mbox{sign} (\hat{\beta}_j ) (|\hat{\beta}_j| - \tfrac{1}{2} \lambda_1)_+, \end{eqnarray*} [$]

where $\hat{\bbeta} = (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} \mathbf{Y} = \mathbf{X}^{\top} \mathbf{Y}$ is the maximum likelihood estimator of $\bbeta$, $\hat{\beta}_j$ is its $j$-th element, and $(x)_+ = \max\{x, 0\}$. This expression for the lasso regression estimator can be obtained as follows. Rewrite the lasso regression loss criterion:

[$] \begin{eqnarray*} \min_{\bbeta} \| \mathbf{Y} - \mathbf{X} \, \bbeta \|^2_2 + \lambda_1 \| \bbeta \|_1 & = & \min_{\bbeta} \mathbf{Y}^{\top} \mathbf{Y} - \mathbf{Y}^{\top} \mathbf{X} \, \bbeta - \bbeta^{\top} \mathbf{X}^{\top} \mathbf{Y} + \bbeta^{\top} \mathbf{X}^{\top} \mathbf{X} \, \bbeta + \lambda_1 \sum\nolimits_{j=1}^p | \beta_j | \\ & \propto & \min_{\bbeta} - \hat{\bbeta}^{\top} \, \bbeta - \bbeta^{\top} \hat{\bbeta} + \bbeta^{\top} \, \bbeta + \lambda_1 \sum\nolimits_{j=1}^p | \beta_j | \\ & = & \min_{\beta_1, \ldots, \beta_p} \sum\nolimits_{j=1}^p \big( - 2 \hat{\beta}_j \, \beta_j + \beta_j^2 + \lambda_1 | \beta_j | \big) \\ & = & \sum\nolimits_{j=1}^p (\min_{\beta_j} - 2 \hat{\beta}_j \, \beta_j + \beta_j^2 + \lambda_1 | \beta_j | ). \end{eqnarray*} [$]

The minimization problem can thus be solved per regression coefficient. This gives:

[$] \begin{eqnarray*} \min_{\beta_j} \, - 2 \hat{\beta}_j \, \beta_j + \beta_j^2 + \lambda_1 | \beta_j | & = & \left\{ \begin{array}{lcr} \min_{\beta_j} \, - 2 \hat{\beta}_j \, \beta_j + \beta_j^2 + \lambda_1 \beta_j & \mbox{if} & \beta_j \gt 0, \\ \min_{\beta_j} - 2 \hat{\beta}_j \, \beta_j + \beta_j^2 - \lambda_1 \beta_j & \mbox{if} & \beta_j \lt 0. \end{array} \right. \end{eqnarray*} [$]

The minimization within the sum over the covariates is with respect to each element of the regression parameter separately. Optimization with respect to the $j$-th one gives:

[$] \begin{eqnarray*} \hat{\beta}_j (\lambda_1) & = & \left\{ \begin{array}{lcr} \hat{\beta}_j - \frac{1}{2} \lambda_1 & \mbox{if} \,\, \hat{\beta}_j \gt \tfrac{1}{2} \lambda_1, \\ \hat{\beta}_j + \frac{1}{2} \lambda_1 & \mbox{if}\,\, \hat{\beta}_j \lt -\tfrac{1}{2} \lambda_1, \\ 0 & \mbox{otherwise}. \end{array} \right. \end{eqnarray*} [$]

This case-wise solution can be compressed into the form of the lasso regression estimator above.

The analytic expression for the lasso regression estimator above provides insight into how it relates to the maximum likelihood estimator of $\bbeta$. The right-hand panel of Figure depicts this relationship. Effectively, the lasso regression estimator thresholds (after a translation) its maximum likelihood counterpart. The function is therefore also referred to as the soft-threshold function (for contrast the hard-threshold function is also plotted -- dotted line -- in Figure).
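The derivation above is easily checked numerically. The sketch below (in Python with numpy rather than this chapter's R) implements the soft-threshold function and compares it, for the per-coefficient problem $-2 \hat{\beta}_j \beta_j + \beta_j^2 + \lambda_1 |\beta_j|$, against minimization over a fine grid:

```python
import numpy as np

def soft_threshold(b, lam):
    """Lasso estimate under an orthonormal design: sign(b) * (|b| - lam/2)_+."""
    return np.sign(b) * np.maximum(np.abs(b) - lam / 2, 0.0)

lam = 1.0
grid = np.linspace(-5, 5, 200001)        # fine grid of candidate beta_j values
for b in [-2.3, -0.4, 0.0, 0.2, 1.7]:    # maximum likelihood estimates beta_hat_j
    objective = -2 * b * grid + grid ** 2 + lam * np.abs(grid)
    brute = grid[np.argmin(objective)]   # brute-force minimizer on the grid
    assert abs(soft_threshold(b, lam) - brute) < 1e-4
print("soft-threshold matches brute-force minimization")
```

Maximum likelihood estimates with absolute value below $\tfrac{1}{2} \lambda_1$ are set to zero; the remainder are translated towards zero by $\tfrac{1}{2} \lambda_1$.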

Example (Orthogonal design matrix)

The analytic solution of the lasso regression estimator for experiments with an orthonormal design matrix extends to those with an orthogonal design matrix. This is illustrated by a numerical example. Use the lasso estimator with $\lambda_1=10$ to fit the linear regression model to the response data and the design matrix:

[$] \begin{eqnarray*} \mathbf{Y}^{\top} & = & \left( \begin{array}{rrrrrr} -4.9 & -0.8 & -8.9 & 4.9 & 1.1 & -2.0 \end{array} \right), \\ \mathbf{X}^{\top} & = & \left( \begin{array}{rrrrrr} 1 & -1 & 3 & -3 & 1 & 1 \\ -3 & -3 & -1 & 0 & 3 & 0 \end{array} \right). \end{eqnarray*} [$]

Note that the design matrix is orthogonal, i.e. its columns are orthogonal (but not normalized to one). The orthogonality of $\mathbf{X}$ yields a diagonal $\mathbf{X}^{\top} \mathbf{X}$, as is its inverse $(\mathbf{X}^{\top} \mathbf{X})^{-1}$. Here $\mbox{diag}(\mathbf{X}^{\top} \mathbf{X}) = (22, 28)$. Rescale $\mathbf{X}$ to an orthonormal design matrix, denoted $\tilde{\mathbf{X}}$, and rewrite the lasso regression loss function to:

[$] \begin{eqnarray*} \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_1 \| \bbeta \|_1 & = & \Bigg\| \, \mathbf{Y} - \mathbf{X} \left( \begin{array}{rr} \sqrt{22} & 0 \\ 0 & \sqrt{28} \end{array} \right)^{-1} \left( \begin{array}{rr} \sqrt{22} & 0 \\ 0 & \sqrt{28} \end{array} \right) \bbeta \, \Bigg\|_2^2 + \lambda_1 \| \bbeta \|_1 \\ & = & \| \mathbf{Y} - \tilde{\mathbf{X}} \ggamma \|_2^2 + (\lambda_1 / \sqrt{22}) | \gamma_1 | + (\lambda_1 / \sqrt{28}) | \gamma_2 |, \end{eqnarray*} [$]

where $\ggamma = (\sqrt{22} \beta_1, \sqrt{28} \beta_2)^{\top}$. By the same argument this loss can be minimized with respect to each element of $\ggamma$ separately. In particular, the soft-threshold function provides an analytic expression for the estimates of $\ggamma$:

[$] \begin{eqnarray*} \hat{\gamma}_1 (\lambda_1 / \sqrt{22}) & = & \mbox{sign} (\hat{\gamma}_1 ) [|\hat{\gamma}_1| - \tfrac{1}{2} (\lambda_1 / \sqrt{22})]_+ \, \, \, = \, \, - [ 9.892513 - \tfrac{1}{2} (10 / \sqrt{22})]_+ \, \, \, = \, \, -8.826509, \\ \hat{\gamma}_2 (\lambda_1 / \sqrt{28}) & = & \mbox{sign} (\hat{\gamma}_2 ) [|\hat{\gamma}_2| - \tfrac{1}{2} (\lambda_1 / \sqrt{28})]_+ \, \, \, = \, \, \, \, \, \, [5.537180 - \tfrac{1}{2} (10 / \sqrt{28})]_+ \, \, \, = \, \, \, \, \, \, \, 4.592269, \end{eqnarray*} [$]

where $\hat{\gamma}_1$ and $\hat{\gamma}_2$ are the ordinary least squares estimates of $\gamma_1$ and $\gamma_2$ obtained from regressing $\mathbf{Y}$ on the corresponding column of $\tilde{\mathbf{X}}$. Rescale back and obtain the lasso regression estimate: $\hat{\bbeta}(10) = (-1.881818, 0.8678572)^{\top}$.
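The computation in this example can be retraced in a few lines; the sketch below uses Python/numpy in place of the chapter's R code:

```python
import numpy as np

y = np.array([-4.9, -0.8, -8.9, 4.9, 1.1, -2.0])
X = np.array([[ 1, -3], [-1, -3], [ 3, -1],
              [-3,  0], [ 1,  3], [ 1,  0]], dtype=float)
lam = 10.0

d = np.sum(X ** 2, axis=0)                # diag(X^T X) = (22, 28)
gamma_hat = (X.T @ y) / np.sqrt(d)        # OLS estimates for the rescaled columns
gamma_lasso = np.sign(gamma_hat) * np.maximum(
    np.abs(gamma_hat) - lam / (2 * np.sqrt(d)), 0.0)   # soft-threshold per column
beta_lasso = gamma_lasso / np.sqrt(d)     # rescale back to the beta parametrization
print(beta_lasso)                         # approx. [-1.8818182, 0.8678571]
```

The printed estimate coincides with $\hat{\bbeta}(10)$ obtained analytically above.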

Example ($p=2$ with equivariant covariates, [4])

Let $p=2$ and suppose the design matrix $\mathbf{X}$ has equivariant covariates. Without loss of generality they are assumed to have unit variance. We may thus write

[$] \begin{eqnarray*} \mathbf{X}^{\top} \mathbf{X} & = & \left( \begin{array}{rr} 1 & \rho \\ \rho & 1 \end{array} \right), \end{eqnarray*} [$]

for some $\rho \in (-1, 1)$. The lasso regression estimator is then of similar form as in the orthonormal case but the soft-threshold function now depends on $\lambda_1$, $\rho$ and the maximum likelihood estimate $\hat{\bbeta}$ (see Exercise).

Apart from the specific cases outlined in the two examples above no other explicit solution for the minimizer of the lasso regression loss function appears to be known. Locally though, for large enough values of $\lambda_1$, an analytic expression for the solution can be derived. Hereto we point out that (details to be included later) the lasso regression estimator satisfies the following estimating equation:

[$] \begin{eqnarray*} \mathbf{X}^{\top} \mathbf{X} \hat{\bbeta} (\lambda_1) & = & \mathbf{X}^{\top} \mathbf{Y} - \tfrac{1}{2} \lambda_1 \hat{\mathbf{z}} \end{eqnarray*} [$]

for some $\hat{\mathbf{z}} \in \mathbb{R}^p$ with $(\hat{\mathbf{z}})_j = \mbox{sign}\{[\hat{\bbeta}(\lambda_1)]_j\}$ whenever $[\hat{\bbeta}(\lambda_1)]_j \not= 0$ and $(\hat{\mathbf{z}})_j \in [-1,1]$ if $[\hat{\bbeta}(\lambda_1)]_j = 0$. Then:

[$] \begin{eqnarray*} 0 \, \, \, \leq \, \, \, [ \hat{\bbeta} (\lambda_1)]^{\top} \mathbf{X}^{\top} \mathbf{X} \hat{\bbeta} (\lambda_1) & = & [ \hat{\bbeta} (\lambda_1)]^{\top} (\mathbf{X}^{\top} \mathbf{Y} - \tfrac{1}{2} \lambda_1 \hat{\mathbf{z}} ) \, \, \, = \, \, \, \sum\nolimits_{j=1}^p [\hat{\bbeta} (\lambda_1)]_j (\mathbf{X}^{\top} \mathbf{Y} - \tfrac{1}{2} \lambda_1 \hat{\mathbf{z}} )_j. \end{eqnarray*} [$]

For $\lambda_1 \gt 2 \| \mathbf{X}^{\top} \mathbf{Y} \|_{\infty}$ the summands on the right-hand side satisfy:

[$] \begin{eqnarray*} \begin{array}{rclcl} \, [\hat{\bbeta} (\lambda_1)]_j (\mathbf{X}^{\top} \mathbf{Y} - \tfrac{1}{2} \lambda_1 \hat{\mathbf{z}} )_j & \lt & 0 & \mbox{if} & [\hat{\bbeta} (\lambda_1)]_j \gt 0, \\ \, [\hat{\bbeta} (\lambda_1)]_j (\mathbf{X}^{\top} \mathbf{Y} - \tfrac{1}{2} \lambda_1 \hat{\mathbf{z}} )_j & = & 0 & \mbox{if} & [\hat{\bbeta} (\lambda_1)]_j = 0, \\ \, [\hat{\bbeta} (\lambda_1)]_j (\mathbf{X}^{\top} \mathbf{Y} - \tfrac{1}{2} \lambda_1 \hat{\mathbf{z}} )_j & \lt & 0 & \mbox{if} & [\hat{\bbeta} (\lambda_1)]_j \lt 0. \end{array} \end{eqnarray*} [$]

This implies that $\hat{\bbeta} (\lambda_1) = \mathbf{0}_p$ if $\lambda_1 \gt 2 \| \mathbf{X}^{\top} \mathbf{Y} \|_{\infty}$, where $\| \mathbf{a} \|_{\infty}$ is the supremum norm of vector $\mathbf{a}$ defined as $\| \mathbf{a} \|_{\infty} = \max \{ |a_1|, |a_2|, \ldots, |a_p | \}$.
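This zero-threshold can be confirmed numerically. Below a minimal coordinate-descent lasso solver (a Python/numpy sketch, not one of the estimation algorithms of Section Estimation) is run on random data with $\lambda_1$ just above and well below $2 \| \mathbf{X}^{\top} \mathbf{Y} \|_{\infty}$:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Coordinate descent for || y - X beta ||_2^2 + lam * || beta ||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual, excluding j
            c = X[:, j] @ r_j
            beta[j] = np.sign(c) * max(abs(c) - lam / 2, 0.0) / (X[:, j] @ X[:, j])
    return beta

rng = np.random.default_rng(7)
X = rng.normal(size=(20, 5))
y = rng.normal(size=20)
threshold = 2 * np.max(np.abs(X.T @ y))   # 2 * sup-norm of X^T y

print(np.all(lasso_cd(X, y, 1.01 * threshold) == 0))   # True: fully shrunken to zero
print(np.any(lasso_cd(X, y, 0.50 * threshold) != 0))   # True: some coefficients survive
```

Above the threshold every coordinate update soft-thresholds to zero, so the all-zero vector is returned; below it the zero vector is no longer a stationary point and a non-zero estimate results.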

## Sparsity

The change from the $\ell_2^2$-norm to the $\ell_1$-norm in the penalty may seem only a detail. Indeed, both ridge and lasso regression fit the same linear regression model. But the attractiveness of the lasso lies not in what it fits, but in a consequence of how it fits the linear regression model. The lasso estimator of the vector of regression parameters may contain some or many zeros. In contrast, ridge regression yields an estimator of $\bbeta$ with elements (possibly) close to zero, but unlikely to be exactly equal to zero. Hence, lasso penalization results in $\hat{\beta}_j (\lambda_1) = 0$ for some $j$ (in particular for large values of $\lambda_1$, see Section Analytic solutions ), while ridge penalization yields an estimate of the $j$-th element of the regression parameter with $\hat{\beta}_j (\lambda_2) \not= 0$. A zero estimate of a regression coefficient means that the corresponding covariate has no effect on the response and can be excluded from the model. Effectively, this amounts to variable selection. Where traditionally the linear regression model is fitted by means of maximum likelihood followed by a testing step to weed out the covariates with effects indistinguishable from zero, lasso regression is a one-step procedure that simultaneously estimates and selects.

The in-built variable selection of the lasso regression estimator is a geometric accident. To understand how it comes about the lasso regression loss optimization problem (\ref{form.lassoLossFunction}) is reformulated as a constrained estimation problem (using the same argumentation as previously employed for ridge regression, see Section Constrained estimation ):

[$] \begin{eqnarray*} \hat{\bbeta} (\lambda_1) & = & \arg \min\nolimits_{\bbeta \in \mathbb{R}^p \, : \, \|\bbeta \|_1 \leq c(\lambda_1) } \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2. \end{eqnarray*} [$]

where $c(\lambda_1) = \| \hat{\bbeta}(\lambda_1)\|_1$. Again, this is the standard least squares problem, with the only difference that the sum of the absolute values of the regression parameters $\beta_1, \beta_2, \ldots, \beta_p$ is required to be smaller than $c(\lambda_1)$. The effect of this requirement is that the lasso estimates of the regression coefficients can no longer assume any value (from $-\infty$ to $\infty$, as is the case in standard linear regression), but are limited to a certain range of values. With the lasso and ridge regression estimators minimizing the same sum-of-squares, the key difference with the constrained estimation formulation of ridge regression is not in the explicit form of $c(\lambda_1)$ (which is set to some arbitrary convenient value in the remainder of this section) but in what is bounded by $c(\lambda_1)$ and the domain of acceptable values for $\bbeta$ that it implies. For the lasso regression estimator the domain is specified by a bound on the $\ell_1$-norm of the regression parameter, while for its ridge counterpart the bound is applied to the squared $\ell_2$-norm of $\bbeta$. The parameter constraints implied by the lasso and ridge penalties result in balls in different norms:

[$] \begin{eqnarray*} & & \{ \bbeta \in \mathbb{R}^p \, : \, | \beta_1 | + | \beta_2 | + \ldots + | \beta_p | \leq c_1(\lambda_1) \}, \\ & & \{ \bbeta \in \mathbb{R}^p \, : \, \, \beta_1^2 \, \, + \, \beta_2^2 \, \, + \ldots + \, \beta_p^2 \, \leq c_2(\lambda_2) \}, \end{eqnarray*} [$]

respectively, where $c(\cdot)$ is now equipped with a subscript referring to the norm, to stress that it differs between lasso and ridge. The left-hand panel of Figure visualizes these parameter constraints for $p=2$ and $c_1(\lambda_1) = 2 = c_2(\lambda_2)$. In the Euclidean plane the ridge constraint on $\bbeta$ is spherical, while that of the lasso is diamond-shaped. The lasso regression estimator is then that $\bbeta$ inside this diamond domain which yields the smallest sum-of-squares (as is visualized by the right-hand panel of Figure).

Left panel: the lasso parameter constraint ($|\beta_1| + |\beta_2| \leq 2$) and its ridge counterpart ($\beta_1^2 + \beta_2^2 \leq 2$). Right panel: the lasso regression estimator as a constrained least squares estimator.

The selection property of the lasso is due to the fact that the diamond-shaped parameter constraint has its corners falling on the axes. For a point to lie on an axis, one coordinate needs to equal zero. The lasso regression estimator coincides with the point inside the diamond closest to the maximum likelihood estimator. This point may correspond to a corner of the diamond, in which case one of the coordinates (regression parameters) equals zero and, consequently, the lasso regression estimator does not select the corresponding element of $\bbeta$. Figure illustrates the selection property for the case with $p=2$ and an orthonormal design matrix. An orthonormal design matrix yields level sets (orange dotted circles in Figure) of the sum-of-squares that are spherical and centered around the maximum likelihood estimate (red dot in Figure). For maximum likelihood estimates inside the grey areas the closest point in the diamond-shaped parameter domain lies on one of its corners. Hence, for these maximum likelihood estimates the corresponding lasso regression estimate will include only a single covariate in the model. The geometrical explanation of the selection property of the lasso regression estimator also applies to non-orthonormal design matrices and in dimensions larger than two. In particular, high-dimensionally, the level sets of the sum-of-squares may form a degenerate ellipsoid, which can and will still hit a corner of the diamond-shaped parameter domain. Finally, note that a zero value of the lasso regression estimate implies neither that the parameter is indeed zero nor that it would be statistically indistinguishable from zero.

Larger values of the lasso penalty parameter $\lambda_1$ induce tighter parameter constraints. Consequently, the number of zero elements in the lasso regression estimator of $\bbeta$ increases as $\lambda_1$ increases. However, where $\| \hat{\bbeta}(\lambda_1) \|_1$ decreases monotonically as $\lambda_1$ increases (see the left panel of Figure for an example and Exercise), the number of non-zero coefficients does not decrease monotonically. Locally, at some finite $\lambda_1$, the number of non-zero elements in $\hat{\bbeta}(\lambda_1)$ may temporarily increase with $\lambda_1$, only to go down again once $\lambda_1$ is sufficiently increased (as in the $\lambda_1 \rightarrow \infty$ limit the number of non-zero elements is zero, see the argumentation at the end of Section Analytic solutions). The right panel of Figure illustrates this behavior for an arbitrary data set.
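The monotone decrease of $\| \hat{\bbeta}(\lambda_1) \|_1$ may be verified numerically. The following R-script is a self-contained sketch: the solver is a naive cyclic coordinate descent written purely for illustration (the functions `softThreshold` and `lassoCD` are ad hoc, not part of any package discussed in this chapter), applied to simulated data over a grid of penalty parameters.

```r
# soft-thresholding operator: shrink z towards zero by gamma, truncating at zero
softThreshold <- function(z, gamma) sign(z) * pmax(abs(z) - gamma, 0)

# naive cyclic coordinate descent minimizing ||Y - X beta||_2^2 + lambda1 ||beta||_1
lassoCD <- function(X, Y, lambda1, nIter=500) {
  beta <- rep(0, ncol(X))
  for (iter in 1:nIter) {
    for (j in 1:ncol(X)) {
      # partial residual: response minus the fit of all other covariates
      rj      <- Y - X[, -j, drop=FALSE] %*% beta[-j]
      beta[j] <- softThreshold(sum(X[, j] * rj), lambda1/2) / sum(X[, j]^2)
    }
  }
  beta
}

# simulate data with a sparse regression parameter
set.seed(1)
n <- 50; p <- 10
X <- matrix(rnorm(n*p), n, p)
Y <- X %*% c(3, -2, rep(0, p-2)) + rnorm(n)

# evaluate the estimator over a grid of penalty parameters
lambdas <- seq(1, 200, length.out=25)
betas   <- sapply(lambdas, function(l) lassoCD(X, Y, l))
L1norms <- colSums(abs(betas))
nonZero <- colSums(abs(betas) > 0)

# the L1-norm of the estimate is non-increasing in lambda1 ...
all(diff(L1norms) <= 1e-6)
# ... while the number of non-zero elements (nonZero) need not be
```

The per-coordinate update follows from equating the subgradient of the loss with respect to $\beta_j$ to zero, which again produces the soft-thresholding operator, now applied to the inner product of the $j$-th covariate and the partial residual.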

Contour plots of the sum-of-squares and the lasso regression loss (left and right panel, respectively). The dotted grey lines represent level sets. The red line and dot represent the location of the minimum in both panels.

The attractiveness of the lasso regression estimator lies in its simultaneous estimation and selection of parameters. For large enough values of the penalty parameter $\lambda_1$ the estimated regression model comprises only a subset of the supplied covariates. In high-dimensional settings (demanding a large penalty parameter) the number of parameters selected by the lasso regression estimator is usually small relative to the total number of parameters, thus producing a so-called sparse model. If one adheres to the parsimony principle, such a sparse, and thus simpler, model is preferable to the full model. Simpler may be better, but too simple is worse. The phenomenon or system that is to be described by the model need not be sparse. For instance, in molecular biology the regulatory network of the cell is no longer believed to be sparse [5]. Similarly, when analyzing brain image data, the connectivity of the brain is not believed to be sparse.

### Maximum number of selected covariates

The number of parameters/covariates selected by the lasso regression estimator is bounded non-trivially. The cardinality (i.e. the number of included covariates) of every lasso estimated linear regression model is smaller than or equal to $\min\{n, p\}$ [6]. According to [6] this is obvious from the analysis of the LARS algorithm of [7] (which is discussed in Section \ref{sect:LARS}). For now we just provide an R-script that generates the regularization paths using the lars package for the diabetes data included in that package, with a random sample size $n$ not exceeding the number of covariates $p$.

# activate library
library(lars)

data(diabetes)
X <- diabetes$x
Y <- diabetes$y

# set sample size
n  <- sample(1:ncol(X), 1)
id <- sample(1:length(Y), n)

# plot regularization paths
plot(lars(X[id,], Y[id], intercept=FALSE))

Irrespective of the drawn sample size $n$, the plotted regularization paths all terminate before the $(n+1)$-th variate enters the model. This is, of course, circumstantial evidence at best, and could even be labelled a bug in the software.

But even without the LARS algorithm the non-trivial part of the inequality, namely that the number of selected variates does not exceed the sample size $n$, can be proven [8].

(Theorem 6, [8])

If $p \gt n$ and $\hat{\bbeta}(\lambda_1)$ is a minimizer of the lasso regression loss function (\ref{form.lassoLossFunction}), then $\hat{\bbeta}(\lambda_1)$ has at most $n$ non-zero entries.

Confer [8]. ■

In the high-dimensional setting, when $p$ is large compared to $n$, this implies a considerable dimension reduction. It is, however, somewhat unsatisfactory that the study design, i.e. the number of included samples, determines the upper bound on the model size.
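The bound of the theorem can also be checked empirically in simulation, along the lines of the diabetes script above (a sketch; the simulated data and the particular choice of $p \gt n$ are arbitrary):

```r
# activate library
library(lars)

# simulate a high-dimensional data set: more covariates than samples
set.seed(1)
n <- 20; p <- 100
X <- matrix(rnorm(n*p), n, p)
Y <- X %*% c(rep(1, 5), rep(0, p-5)) + rnorm(n)

# fit the full lasso regularization path
lassoFit <- lars(X, Y, type="lasso", intercept=FALSE)

# per step of the path, count the non-zero regression coefficients;
# in accordance with the theorem the maximum count does not exceed n
max(rowSums(lassoFit$beta != 0)) <= n
```

The matrix `lassoFit$beta` contains the coefficient estimates at each step of the path, so the final line verifies the bound simultaneously for all values of the penalty parameter visited by the algorithm.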

## General References

van Wieringen, Wessel N. (2021). "Lecture notes on ridge regression". arXiv:1509.09169 [stat.ME].

## References

1. Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B, 58(1), 267--288
2. Tibshirani, R. J. (2013). The lasso problem and uniqueness. Electronic Journal of Statistics, 7, 1456--1490
3. Rosset, S. and Zhu, J. (2007). Piecewise linear regularized solution paths. The Annals of Statistics, pages 1012--1030
4. Leng, C., Lin, Y., and Wahba, G. (2006). A note on the lasso and related procedures in model selection. Statistica Sinica, pages 1273--1284
5. Boyle, E. A., Li, Y. I., and Pritchard, J. K. (2017). An expanded view of complex traits: from polygenic to omnigenic. Cell, 169(7), 1177--1186
6. Bühlmann, P. and Van De Geer, S. (2011). Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer Science & Business Media
7. Efron, B., Hastie, T., Johnstone, I., and Tibshirani, R. (2004). Least angle regression. The Annals of Statistics, 32(2), 407--499
8. Osborne, M. R., Presnell, B., and Turlach, B. A. (2000). On the lasso and its dual. Journal of Computational and Graphical Statistics, 9(2), 319--337