# Estimation, moments, and the Bayesian connection

## Estimation

In the absence of an analytic expression for the optimum of the lasso loss function, much attention is devoted to numerical procedures to find it.

The original lasso paper [1] reformulates the lasso optimization problem as a quadratic program. A quadratic program optimizes a quadratic form subject to linear constraints. This is a well-studied optimization problem for which many readily available implementations exist (e.g., the quadprog-package in R). The quadratic program equivalent to the lasso regression problem, i.e. the minimization of the least squares criterion $\| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2$ subject to $\| \bbeta \|_1 \leq c(\lambda_1)$, is:

[$] \begin{eqnarray} \label{form:quadProgam} \min_{\mathbf{R} \bbeta \geq -c(\lambda_1) \mathbf{1}_q} \tfrac{1}{2} (\mathbf{Y} - \mathbf{X} \bbeta )^{\top} (\mathbf{Y} - \mathbf{X} \bbeta ), \end{eqnarray} [$]

where $\mathbf{R}$ is a $q \times p$ dimensional linear constraint matrix that specifies the linear constraints on the parameter $\bbeta$. For $p=2$ the domain implied by the lasso parameter constraint $\{ \bbeta \in \mathbb{R}^2 : \| \bbeta \|_1 \leq c(\lambda_1) \}$ equals:

[$] \begin{eqnarray*} & & \{ \bbeta \in \mathbb{R}^2 : \beta_1 + \beta_2 \leq c(\lambda_1) \} \cap \{ \bbeta \in \mathbb{R}^2 : \beta_1 - \beta_2 \geq -c(\lambda_1) \} \cap \{ \bbeta \in \mathbb{R}^2 : \beta_1 - \beta_2 \leq c(\lambda_1) \} \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \cap \, \{ \bbeta \in \mathbb{R}^2 : \beta_1 + \beta_2 \geq - c(\lambda_1) \}. \end{eqnarray*} [$]

This collection of linear parameter constraints can be aggregated, when using:

[$] \begin{eqnarray*} \mathbf{R} & = & \left( \begin{array}{rr} 1 & 1 \\ -1 & -1 \\ 1 & -1 \\ -1 & 1 \end{array} \right), \end{eqnarray*} [$]

into $\{ \bbeta \in \mathbb{R}^2 : \mathbf{R} \bbeta \geq -c(\lambda_1) \mathbf{1}_4 \}$.

To solve the quadratic program (\ref{form:quadProgam}) it is usually reformulated in terms of its dual. Hereto we introduce the Lagrangian:

[$] \begin{eqnarray} \label{form.lagrangian} L(\bbeta, \nnu) & = & \tfrac{1}{2} (\mathbf{Y} - \mathbf{X} \bbeta )^{\top} (\mathbf{Y} - \mathbf{X} \bbeta ) - \nnu^{\top} [ \mathbf{R} \bbeta + c(\lambda_1) \mathbf{1}_q ], \end{eqnarray} [$]

where $\nnu = (\nu_1, \ldots, \nu_{q})^{\top}$ is the vector of non-negative multipliers. The dual function is now defined as $\inf_{\bbeta} L(\bbeta, \nnu)$. This infimum is attained at:

[$] \begin{eqnarray} \label{form.solution.primal.problem} \tilde{\bbeta} & = & (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} \mathbf{Y} + (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{R}^{\top} \nnu, \end{eqnarray} [$]

which can be verified by equating the first order partial derivative with respect to $\bbeta$ of the Lagrangian to zero and solving for $\bbeta$. Substitution of $\bbeta = \tilde{\bbeta}$ into the dual function gives, after changing the sign:

[$] \begin{eqnarray*} \tfrac{1}{2} \nnu^{\top} \mathbf{R} (\mathbf{X}^{\top} \mathbf{X} )^{-1} \mathbf{R}^{\top} \nnu + \nnu^{\top} \mathbf{R} (\mathbf{X}^{\top} \mathbf{X} )^{-1} \mathbf{X}^{\top} \mathbf{Y} + c(\lambda_1) \nnu^{\top} \mathbf{1}_q + \tfrac{1}{2} \mathbf{Y}^{\top} \mathbf{X} (\mathbf{X}^{\top} \mathbf{X} )^{-1} \mathbf{X}^{\top} \mathbf{Y}. \end{eqnarray*} [$]

The dual problem minimizes this expression (from which the last term is dropped as it does not involve $\nnu$) with respect to $\nnu$, subject to $\nnu \geq \mathbf{0}$. Although also a quadratic programming problem, the dual problem a) has simpler constraints and b) is defined on a lower dimensional space (if the number of columns of $\mathbf{R}$ exceeds its number of rows) than the primal problem. If $\tilde{\nnu}$ is the solution of the dual problem, the solution of the primal problem is obtained from Equation (\ref{form.solution.primal.problem}). Note that in the first term on the right-hand side of Equation (\ref{form.solution.primal.problem}) we recognize the unconstrained least squares estimator of $\bbeta$. Refer to, e.g., [2] for more on quadratic programming.

Example (Orthogonal design matrix, continued)

The evaluation of the lasso regression estimator by means of quadratic programming is illustrated using the data from the numerical Example. The R-script below solves, using the implementation of the quadprog-package, the quadratic program associated with the lasso regression problem of the aforementioned example.

```r
# load library
library(quadprog)

# data
Y <- matrix(c(-4.9, -0.8, -8.9, 4.9, 1.1, -2.0), ncol=1)
X <- t(matrix(c(1, -1, 3, -3, 1, 1, -3, -3, -1, 0, 3, 0), nrow=2, byrow=TRUE))

# radius of the lasso parameter constraint
L1norm <- 1.881818 + 0.8678572

# solve the quadratic program: the constraint matrix passed to solve.QP is
# t(R), as solve.QP imposes t(Amat) %*% b >= bvec
solve.QP(t(X) %*% X, t(X) %*% Y,
         t(matrix(c(1, 1, -1, -1, 1, -1, -1, 1), ncol=2, byrow=TRUE)),
         L1norm*c(-1, -1, -1, -1))$solution
```


The resulting estimates coincide with those found earlier.

For relatively small $p$ quadratic programming is a viable option for finding the lasso regression estimator. For large $p$ it is practically infeasible. Above, the linear constraint matrix $\mathbf{R}$ is $4 \times 2$ dimensional for $p=2$. When $p =3$, a linear constraint matrix $\mathbf{R}$ with eight rows is required. In general, $2^p$ linear constraints are required to fully specify the lasso parameter constraint on the regression parameter. Already when $p=100$, merely specifying the linear constraint matrix $\mathbf{R}$ would take forever, let alone solving the corresponding quadratic program.

### Iterative ridge

Why develop something new, when one can also make do with existing tools? The loss function of the lasso regression estimator can be optimized by iterative application of ridge regression (as pointed out in [3]). This requires an approximation of the lasso penalty, i.e. of the absolute value function. Set $p=1$ and let $\beta_0$ be an initial parameter value for $\beta$ around which the absolute value function $| \beta |$ is to be approximated. Its quadratic approximation then is:

[$] \begin{eqnarray*} |\beta | & \approx & |\beta_0 | + \tfrac{1}{2} | \beta_0 |^{-1} (\beta^2 - \beta_0^2 ). \end{eqnarray*} [$]

An illustration of this approximation is provided in the left panel of Figure.
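The approximation is in fact a majorization: writing $q(\beta) = |\beta_0| + \tfrac{1}{2}|\beta_0|^{-1}(\beta^2 - \beta_0^2)$, one has $q(\beta) - |\beta| = (|\beta| - |\beta_0|)^2 / (2 |\beta_0|) \geq 0$, with equality at $\beta = \pm \beta_0$. A minimal numerical check of this property (pure Python, illustrative only):

```python
# Quadratic (ridge-type) approximation of the lasso penalty |beta|
# around beta0: q(beta) = |beta0| + (beta^2 - beta0^2) / (2 |beta0|).
def q(beta, beta0):
    return abs(beta0) + (beta**2 - beta0**2) / (2 * abs(beta0))

beta0 = 1.5
for beta in [-3.0, -1.5, 0.0, 0.4, 1.5, 2.7]:
    gap = q(beta, beta0) - abs(beta)
    # the gap equals (|beta| - |beta0|)^2 / (2 |beta0|) ...
    assert abs(gap - (abs(beta) - abs(beta0))**2 / (2 * abs(beta0))) < 1e-12
    # ... so the approximation touches |beta| at +/- beta0 and majorizes it elsewhere
    assert gap >= 0
```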

Left panel: quadratic approximation (i.e. the ridge penalty) to the absolute value function (i.e. the lasso penalty). Right panel: Illustration of the coordinate descent algorithm. The dashed grey lines are the level sets of the lasso regression loss function. The red arrows depict the parameter updates. These arrows are parallel to either the $\beta_1$ or the $\beta_2$ parameter axis, thus indicating that the regression parameter $\bbeta$ is updated coordinate-wise.

The lasso regression estimator is evaluated through iterative application of the ridge regression estimator. This iterative procedure needs initiation by some guess $\bbeta^{(0)}$ for $\bbeta$. For example, the ridge estimator itself may serve as such. Then, at the $(k+1)$-th iteration an update $\bbeta^{(k+1)}$ of the lasso regression estimator of $\bbeta$ is to be found. Application of the quadratic approximation to the absolute value functions of the elements of $\bbeta$ (around the $k$-th update $\bbeta^{(k)}$) in the lasso penalty yields an approximation to the lasso regression loss function:

[$] \begin{eqnarray*} \| \mathbf{Y} - \mathbf{X} \, \bbeta^{(k+1)} \|^2_2 + \lambda_1 \| \bbeta^{(k+1)} \|_1 & \approx & \| \mathbf{Y} - \mathbf{X} \, \bbeta^{(k+1)} \|^2_2 + \lambda_1 \| \bbeta^{(k)} \|_1 \\ & & + \tfrac{1}{2} \lambda_1 \sum\nolimits_{j=1}^p |\beta_j^{(k)}|^{-1} [ \beta_j^{(k+1)} ]^2 - \tfrac{1}{2} \lambda_1 \sum\nolimits_{j=1}^p |\beta_j^{(k)}|^{-1} [ \beta_j^{(k)} ]^2 \\ & \propto & \| \mathbf{Y} - \mathbf{X} \, \bbeta^{(k+1)} \|^2_2 + \tfrac{1}{2} \lambda_1 \sum\nolimits_{j=1}^p |\beta_j^{(k)}|^{-1} [ \beta_j^{(k+1)} ]^2. \end{eqnarray*} [$]

The loss function now contains a weighted ridge penalty. In this one recognizes a generalized ridge regression loss function (see Chapter Generalizing ridge regression ). As its minimizer is known, the approximated lasso regression loss function is optimized by:

[$] \begin{eqnarray*} \bbeta^{(k+1)}(\lambda_1) & = & \{ \mathbf{X}^{\top} \mathbf{X} + \tfrac{1}{2} \lambda_1 \PPsi[ \bbeta^{(k)}(\lambda_1) ] \}^{-1} \mathbf{X}^{\top} \mathbf{Y} \end{eqnarray*} [$]

where

[$] \begin{eqnarray*} \mbox{diag}\{ \PPsi[ \bbeta^{(k)}(\lambda_1) ] \} & = & (1/|\beta_1^{(k)}|, 1/|\beta_2^{(k)}|, \ldots, 1/|\beta_p^{(k)}|). \end{eqnarray*} [$]

The thus generated sequence of updates $\{ \bbeta^{(k)}(\lambda_1) \}_{k=0}^{\infty}$ converges (under ‘nice’ conditions) to the lasso regression estimator $\hat{\bbeta}(\lambda_1)$.
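For the orthogonal design of the running example this iteration can be sketched in a few lines. The pure-Python illustration below minimizes $\tfrac{1}{2}\| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_1 \| \bbeta \|_1$ (with this scaling of the least squares criterion the ridge update reads $\{ \mathbf{X}^{\top} \mathbf{X} + \lambda_1 \PPsi \}^{-1} \mathbf{X}^{\top} \mathbf{Y}$); since $\mathbf{X}^{\top} \mathbf{X}$ is diagonal here, the matrix inversion reduces to coordinate-wise division:

```python
# Iterative ridge for the lasso: minimize (1/2)||Y - X b||^2 + lam * ||b||_1
# by repeatedly solving a weighted ridge problem with Psi_jj = 1 / |b_j|.
# Data of the running example; X^T X = diag(22, 28), hence diagonal updates.
Y = [-4.9, -0.8, -8.9, 4.9, 1.1, -2.0]
X = [[1, -3], [-1, -3], [3, -1], [-3, 0], [1, 3], [1, 0]]

s = [sum(X[i][j] * Y[i] for i in range(6)) for j in range(2)]   # X^T Y
d = [sum(X[i][j] ** 2 for i in range(6)) for j in range(2)]     # diag(X^T X)

lam = 5.0
b = [s[j] / d[j] for j in range(2)]       # initialize at the OLS estimate
for _ in range(100):                      # ridge update with Psi_jj = 1/|b_j|
    b = [s[j] / (d[j] + lam / abs(b[j])) for j in range(2)]

print(b)  # approx [-1.8818, 0.8679], the soft-thresholded OLS estimates
```

With this choice of $\lambda_1$ the iterates converge to the two values whose absolute sum was used as `L1norm` in the quadratic-programming script above.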

A note of caution. The built-in variable selection property of the lasso regression estimator may -- for large enough choices of the penalty parameter $\lambda_1$ -- cause elements of $\bbeta^{(k)}(\lambda_1)$ to become arbitrarily close to zero (or, in R, to exceed machine precision and thereby be effectively zero) after enough updates. Consequently, the ridge penalty parameter for the $j$-th element of the regression parameter may approach infinity, as the $j$-th element of $\PPsi[ \bbeta^{(k)}(\lambda_1)]$ equals $|\beta_j^{(k)}|^{-1}$. To accommodate this, the iterative ridge regression algorithm for the evaluation of the lasso regression estimator requires a modification. Effectively, that amounts to the removal of the $j$-th covariate from the model altogether (for its estimated regression coefficient is indistinguishable from zero). After its removal, it does not return to the set of covariates. This may be problematic if two covariates are (close to) super-collinear.

### Gradient ascent

Another method of finding the lasso regression estimator, implemented in the penalized-package [4], makes use of gradient ascent. Gradient ascent/descent is a maximization/minimization method that finds the optimum of a smooth function by iteratively updating a first-order local approximation to this function. Gradient ascent runs through the following sequence of steps repetitively until convergence:

1. Choose a starting value.
2. Calculate the derivative of the function, and determine the direction in which the function increases most. This direction is the path of steepest ascent.
3. Proceed in this direction, until the function no longer increases.
4. Recalculate at this point the gradient to determine a new path of steepest ascent.
5. Repeat the above until the (region around the) optimum is found.
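The recipe above can be sketched for a smooth concave function. The pure-Python illustration below maximizes $f(\bbeta) = -\tfrac{1}{2} \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2$, whose maximizer is the OLS estimator, on the data of the running example; a fixed (sufficiently small) step size stands in for the exact line search of step 3:

```python
# Gradient ascent on f(b) = -(1/2) ||Y - X b||^2 (smooth and concave).
# Its gradient is X^T (Y - X b); the maximizer is the OLS estimator.
Y = [-4.9, -0.8, -8.9, 4.9, 1.1, -2.0]
X = [[1, -3], [-1, -3], [3, -1], [-3, 0], [1, 3], [1, 0]]

b = [0.0, 0.0]                      # step 1: starting value
step = 0.01                         # fixed step size in place of the line search
for _ in range(2000):
    resid = [Y[i] - sum(X[i][j] * b[j] for j in range(2)) for i in range(6)]
    grad = [sum(X[i][j] * resid[i] for i in range(6)) for j in range(2)]  # steps 2 & 4
    b = [b[j] + step * grad[j] for j in range(2)]                         # step 3

print(b)  # converges to the OLS estimate (-46.4/22, 29.3/28)
```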

The procedure above is illustrated in Figure. The top panel shows the choice of the initial value. From this point the path of steepest ascent is followed until the function no longer increases (right panel of Figure). There the path of steepest ascent is updated, along which the search for the optimum then proceeds (bottom panel of Figure).

The use of gradient ascent to find the lasso regression estimator is frustrated by the non-differentiability (with respect to any of the regression parameters) of the lasso penalty function at zero. In [4] this is overcome by the use of a generalized derivative. Define the directional or Gâteaux derivative of the function $f : \mathbb{R}^p \rightarrow \mathbb{R}$ at $\mathbf{x} \in \mathbb{R}^p$ in the direction of $\mathbf{v} \in \mathbb{R}^p$ as:

[$] \begin{eqnarray*} f'(\mathbf{x}) & = & \lim_{\tau \downarrow 0} \tau^{-1} \big[ f(\mathbf{x} + \tau \mathbf{v}) - f(\mathbf{x}) \big], \end{eqnarray*} [$]

assuming this limit exists. The Gâteaux derivative thus gives the infinitesimal change in $f$ at $\mathbf{x}$ in the direction of $\mathbf{v}$. As such $f'(\mathbf{x})$ is a scalar (as is immediate from the definition when noting that $f(\cdot) \in \mathbb{R}$) and should not be confused with the gradient (the vector of partial derivatives). Furthermore, at each point $\mathbf{x}$ there are infinitely many Gâteaux differentials (as there are infinitely many choices for $\mathbf{v} \in \mathbb{R}^p$). In the particular case when $\mathbf{v} = \mathbf{e}_j$, $\mathbf{e}_j$ the unit vector along the axis of the $j$-th coordinate, the directional derivative coincides with the partial derivative of $f$ in the direction of $x_j$. Relevant for the case at hand is the absolute value function $f(x) = | x|$ with $x \in \mathbb{R}$. Evaluation of the limits in its Gâteaux derivative yields:

[$] \begin{eqnarray*} f'(x) & = & \left\{ \begin{array}{lcl} \mbox{v} \frac{x}{ | x| } & \mbox{if} & x \not= 0, \\ | \mbox{v} | & \mbox{if} & x=0, \end{array} \right. \end{eqnarray*} [$]

for any $\mbox{v} \in \mathbb{R} \setminus \{ 0 \}$. Hence, the Gâteaux derivative of $| x|$ does exist at $x=0$. In general, the Gâteaux differential may be uniquely defined by restricting the directional vectors $\mathbf{v}$ to i) those with unit length (i.e. $\| \mathbf{v} \| = 1$) and ii) the direction of steepest ascent. Using the Gâteaux derivative a gradient of $f(\cdot)$ at $\mathbf{x} \in \mathbb{R}^p$ may then be defined as:

[$] \begin{eqnarray} \label{def:GateauxDerivative} \nabla f(\mathbf{x}) & = & \left\{ \begin{array}{lcl} f'(\mathbf{x}) \cdot \mathbf{v}_{\mbox{{\tiny opt}}} & \mbox{if} & f'(\mathbf{x}) \geq 0, \\ \mathbf{0}_{p} & \mbox{if} & f'(\mathbf{x}) \lt 0, \end{array} \right. \end{eqnarray} [$]

in which $\mathbf{v}_{\mbox{{\tiny opt}}} = \arg \max_{\{ \mathbf{v} \, : \, \| \mathbf{v} \| = 1 \} } f'(\mathbf{x})$. This is the direction of steepest ascent, $\mathbf{v}_{\mbox{{\tiny opt}}}$, scaled by the Gâteaux derivative, $f'(\mathbf{x})$, in the direction of $\mathbf{v}_{\mbox{{\tiny opt}}}$.

[4] applies the definition of the Gâteaux gradient to the lasso penalized likelihood using the direction of steepest ascent as $\mathbf{v}_{\mbox{{\tiny opt}}}$. The resulting partial Gâteaux derivative with respect to the $j$-th element of the regression parameter $\bbeta$ is:

[$] \begin{eqnarray*} \frac{\partial}{ \partial \beta_j } \mathcal{L}_{\mbox{{\tiny lasso}}}(\mathbf{Y}, \mathbf{X}; \bbeta) & = & \left\{ \begin{array}{ll} \frac{\partial}{ \partial \beta_j } \mathcal{L}(\mathbf{Y}, \mathbf{X}; \bbeta) - \lambda_1 \mbox{sign} (\beta_j) & \mbox{if} \,\, \beta_j \not= 0 \\ \frac{\partial}{ \partial \beta_j } \mathcal{L}(\mathbf{Y}, \mathbf{X}; \bbeta) - \lambda_1 \mbox{sign} \big[ \frac{\partial}{ \partial \beta_j } \mathcal{L}(\mathbf{Y}, \mathbf{X}; \bbeta) \big] & \mbox{if} \,\, \beta_j = 0 \mbox{ and } | \partial \mathcal{L}/ \partial \beta_j | \gt \lambda_1 \\ 0 & \mbox{otherwise}, \end{array} \right. \end{eqnarray*} [$]

where $\partial \mathcal{L}/ \partial \beta_j = (\mathbf{X}^\top \mathbf{Y})_{j} - \sum_{j'=1}^p (\mathbf{X}^\top \mathbf{X})_{j, j'} \beta_{j'}$ (taking $\sigma^2 = 1$). This can be understood through a case-by-case study. The partial derivative above is assumed to be clear for the $\beta_j \not= 0$ and the ‘otherwise’ cases. That leaves the clarification of the middle case. When $\beta_j = 0$, the direction of steepest ascent of the penalized loglikelihood points either into $\{ \bbeta \in \mathbb{R}^p \, : \, \beta_j \gt 0 \}$, or into $\{ \bbeta \in \mathbb{R}^p \, : \, \beta_j \lt 0 \}$, or stays in $\{ \bbeta \in \mathbb{R}^p \, : \, \beta_j = 0 \}$. When the direction of steepest ascent points into the positive or negative half-space, the contribution of $\lambda_1 | \beta_j|$ to the partial Gâteaux derivative is simply $\lambda_1$ or $-\lambda_1$, respectively. Then, only when the partial derivative of the loglikelihood in the same direction exceeds this contribution does the penalized loglikelihood improve, making this a direction of steepest ascent. Similarly, the direction of steepest ascent may be restricted to $\{ \bbeta \in \mathbb{R}^p \, : \, \beta_j = 0 \}$, in which case the contribution of $\lambda_1 | \beta_j|$ to the partial Gâteaux derivative vanishes. Then, only if the partial derivative of the loglikelihood in that direction is positive is it to be pursued for the improvement of the penalized loglikelihood.
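The three cases may be coded directly. The pure-Python sketch below evaluates this generalized gradient (with $\sigma^2 = 1$ and the $\tfrac{1}{2}$-scaled residual sum of squares, so $\partial \mathcal{L} / \partial \beta_j = (\mathbf{X}^{\top}\mathbf{Y})_j - (\mathbf{X}^{\top}\mathbf{X} \bbeta)_j$) on the data of the running example; at the lasso solution it vanishes in both coordinates, confirming that no direction of ascent remains, whereas at the origin it does not:

```python
# Generalized (Gateaux) gradient of the lasso-penalized loglikelihood,
# with dL/db_j = (X^T Y)_j - (X^T X b)_j (sigma^2 = 1).
def lasso_grad(b, s, XtX, lam):
    g = []
    for j in range(len(b)):
        dL = s[j] - sum(XtX[j][k] * b[k] for k in range(len(b)))
        if b[j] != 0:                       # case 1: off zero, penalty is smooth
            g.append(dL - lam * (1 if b[j] > 0 else -1))
        elif abs(dL) > lam:                 # case 2: the ascent beats the penalty
            g.append(dL - lam * (1 if dL > 0 else -1))
        else:                               # case 3: stay at zero
            g.append(0.0)
    return g

s = [-46.4, 29.3]                  # X^T Y of the running example
XtX = [[22, 0], [0, 28]]           # X^T X (orthogonal design)
lam = 5.0

# at the lasso solution (soft-thresholded OLS) the gradient is (numerically) zero
print(lasso_grad([-41.4 / 22, 24.3 / 28], s, XtX, lam))
# at the origin an ascent direction remains in both coordinates
print(lasso_grad([0.0, 0.0], s, XtX, lam))
```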

Convergence of gradient ascent can be slow close to the optimum. This is due to its linear approximation of the function: close to the optimum the linear term of the Taylor expansion vanishes and the expansion is dominated by the second-order quadratic term. To speed up convergence close to the optimum, the gradient ascent implementation offered by the penalized-package switches to a Newton-Raphson procedure.

### Coordinate descent

Coordinate descent is another optimization algorithm that may be used to evaluate the lasso regression estimator numerically, as is done by the implementation offered via the glmnet-package. Coordinate descent, instead of following the gradient of steepest descent (as in Section Gradient ascent ), minimizes the loss function along the coordinates one at a time. For the $j$-th regression parameter this amounts to finding:

[$] \begin{eqnarray*} \arg \min_{\beta_j} \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_1 \| \bbeta \|_1 & = & \arg \min_{\beta_j} \| \mathbf{Y} - \mathbf{X}_{\ast, \setminus j} \bbeta_{\setminus j} - \mathbf{X}_{\ast, j} \beta_{j} \|_2^2 + \lambda_1 | \beta_j | \\ & = & \arg \min_{\beta_j} \| \tilde{\mathbf{Y}} - \mathbf{X}_{\ast, j} \beta_{j} \|_2^2 + \lambda_1 | \beta_j |, \end{eqnarray*} [$]

where $\tilde{\mathbf{Y}} = \mathbf{Y} - \mathbf{X}_{\ast, \setminus j} \bbeta_{\setminus j}$. After a simple rescaling of both $\mathbf{X}_{\ast, j}$ and $\beta_j$, the minimization of the lasso regression loss function with respect to $\beta_j$ is equivalent to one with an orthonormal design matrix. From Example it is known that the minimizer is obtained by application of the soft-threshold function to the corresponding maximum likelihood estimator (now derived from $\tilde{\mathbf{Y}}$ and $\mathbf{X}_j$). The coordinate descent algorithm iteratively runs over the $p$ elements until convergence. The right panel of Figure provides an illustration of the coordinate descent algorithm.
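The coordinate-wise update thus amounts to soft-thresholding the covariate's fit to the partial residuals. Below a pure-Python sketch on the running example's data (using the $\tfrac{1}{2}\| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_1 \| \bbeta \|_1$ scaling of the loss, under which the threshold equals $\lambda_1$); for this orthogonal design a single sweep already suffices:

```python
# Coordinate descent for the lasso: minimize (1/2)||Y - X b||^2 + lam * ||b||_1.
# Each coordinate update soft-thresholds the covariate's fit to the partial residuals.
def soft(z, g):
    return (abs(z) > g) * (abs(z) - g) * (1 if z > 0 else -1)

Y = [-4.9, -0.8, -8.9, 4.9, 1.1, -2.0]
X = [[1, -3], [-1, -3], [3, -1], [-3, 0], [1, 3], [1, 0]]
lam, b = 5.0, [0.0, 0.0]

for _ in range(50):                           # sweep over the coordinates
    for j in range(2):
        # partial residuals: exclude covariate j's current contribution
        r = [Y[i] - sum(X[i][k] * b[k] for k in range(2) if k != j) for i in range(6)]
        z = sum(X[i][j] * r[i] for i in range(6))          # X_j^T r
        b[j] = soft(z, lam) / sum(X[i][j] ** 2 for i in range(6))

print(b)  # approx [-1.8818, 0.8679]
```

Note that the absolute values of the two estimates sum to the `L1norm` used in the quadratic-programming script above.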

Convergence of the coordinate descent algorithm to the minimum of the lasso regression loss function is guaranteed by the convexity of this function. At each minimization step the coordinate descent algorithm yields an update of the parameter estimate that corresponds to an equal or smaller value of the loss function. This, together with the compactness of the diamond-shaped parameter domain and the boundedness (from below) of the lasso regression loss function, implies that the coordinate descent algorithm converges to the minimum of the lasso regression loss function. Although convergence is assured, it may take many steps for it to be reached. In particular, when

1. two covariates are strongly collinear,
2. one of the two covariates contributes only slightly more to the response,
3. and the algorithm is initiated with the weaker explanatory covariate.

The coordinate descent algorithm will then take many iterations to replace the latter covariate by the preferred one. In such cases simultaneous updating, as is done by the gradient ascent algorithm (Section Gradient ascent ), may be preferable.

## Moments

In general the moments of the lasso regression estimator appear to be unknown. In certain cases an approximation can be given. This is pointed out here. Use the quadratic approximation to the absolute value function of Section Iterative ridge and approximate the lasso regression loss function around the lasso regression estimate:

[$] \begin{eqnarray*} \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_1 \| \bbeta \|_1 & \approx & \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \tfrac{1}{2} \lambda_1 \sum\nolimits_{j=1}^p |\hat{\beta}_j(\lambda_1)|^{-1} \, \beta_j^2. \end{eqnarray*} [$]

Optimization of the right-hand side of the preceding display with respect to $\bbeta$ gives a ‘ridge approximation’ to the lasso estimator:

[$] \begin{eqnarray*} \hat{\bbeta}(\lambda_1) & \approx & \{\mathbf{X}^\top \mathbf{X} + \tfrac{1}{2} \lambda_1 \mathbf{\Psi}[\hat{\bbeta}(\lambda_1)] \}^{-1} \mathbf{X}^\top \mathbf{Y}, \end{eqnarray*} [$]

with $(\mathbf{\Psi}[\hat{\bbeta}(\lambda_1)])_{jj} = | \hat{\beta}_j(\lambda_1) |^{-1}$ if $\hat{\beta}_j(\lambda_1) \not=0$. Now use this ‘ridge approximation’ to obtain the approximation to the moments of the lasso regression estimator:

[$] \begin{eqnarray*} \mathbb{E} [\hat{\bbeta}(\lambda_1)] & \approx & \mathbb{E} \big( \{\mathbf{X}^\top \mathbf{X} + \tfrac{1}{2} \lambda_1 \mathbf{\Psi}[\hat{\bbeta}(\lambda_1)] \}^{-1} \mathbf{X}^\top \mathbf{Y} \big) \, \, \, = \, \, \, \{\mathbf{X}^\top \mathbf{X} + \tfrac{1}{2} \lambda_1 \mathbf{\Psi}[\hat{\bbeta}(\lambda_1)] \}^{-1} \mathbf{X}^\top \mathbf{X} \bbeta \end{eqnarray*} [$]

and

[$] \begin{eqnarray*} \mbox{Var} [\hat{\bbeta}(\lambda_1)] & \approx & \mbox{Var}\big( \{\mathbf{X}^\top \mathbf{X} + \tfrac{1}{2} \lambda_1 \mathbf{\Psi}[\hat{\bbeta}(\lambda_1)] \}^{-1} \mathbf{X}^\top \mathbf{Y} \big) \\ & = & \sigma^2 \{\mathbf{X}^\top \mathbf{X} + \tfrac{1}{2} \lambda_1 \mathbf{\Psi}[\hat{\bbeta}(\lambda_1)] \}^{-1} \mathbf{X}^\top \mathbf{X} \{\mathbf{X}^\top \mathbf{X} + \tfrac{1}{2} \lambda_1 \mathbf{\Psi}[\hat{\bbeta}(\lambda_1)] \}^{-1}. \end{eqnarray*} [$]

These approximations can only be used when the lasso regression estimate is not sparse, which is at odds with its attractiveness. A better approximation of the variance of the lasso regression estimator can be found in [5], but even this becomes poor when many elements of $\bbeta$ are estimated as zero.
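Despite their crudeness, these approximate moments are easy to evaluate numerically. The pure-Python sketch below does so for the variance on the orthogonal running example ($\sigma^2 = 1$, with the $\tfrac{1}{2}$-scaled loss so that the sandwich involves $\lambda_1 \mathbf{\Psi}$ and the lasso estimate is the soft-thresholded OLS estimate), illustrating the decrease of the variance in $\lambda_1$:

```python
# Approximate variance of the lasso estimator via its 'ridge approximation':
# Var ~ sigma^2 (X^T X + lam * Psi)^{-1} X^T X (X^T X + lam * Psi)^{-1},
# with Psi_jj = 1 / |bhat_j|. Orthogonal design: all matrices are diagonal.
def soft(z, g):
    return (abs(z) > g) * (abs(z) - g) * (1 if z > 0 else -1)

s, d = [-46.4, 29.3], [22, 28]          # X^T Y and diag(X^T X) of the example

def approx_var(lam, sigma2=1.0):
    bhat = [soft(s[j], lam) / d[j] for j in range(2)]   # lasso estimate (nonzero here)
    psi = [1 / abs(bhat[j]) for j in range(2)]
    return [sigma2 * d[j] / (d[j] + lam * psi[j]) ** 2 for j in range(2)]

v1, v5 = approx_var(1.0), approx_var(5.0)
for j in range(2):
    # the approximate variance shrinks below the ML variance sigma^2 / d_j
    assert v5[j] < v1[j] < 1 / d[j]
```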

Although the above approximations are only crude, they indicate that the moments of the lasso regression estimator exhibit behaviour similar to those of its ridge counterpart. The (approximation of the) mean $\mathbb{E}[\hat{\bbeta}(\lambda_1)]$ tends to zero as $\lambda_1 \rightarrow \infty$. This was intuitively already expected from the form of the lasso regression loss function, in which the penalty term dominates for large $\lambda_1$ and is minimized for $\hat{\bbeta}(\lambda_1) = \mathbf{0}_p$. This may also be understood geometrically when appealing to the equivalent constrained estimation formulation of the lasso regression estimator. The parameter constraint shrinks to zero with increasing $\lambda_1$. Hence, so must the estimator. Similarly, the (approximation of the) variance of the lasso regression estimator vanishes as the penalty parameter $\lambda_1$ grows. Again, its loss function provides the intuition: for large $\lambda_1$ the penalty term, which does not depend on data, dominates. Or, from the perspective of the constrained estimation formulation, the parameter constraint shrinks to zero as $\lambda_1 \rightarrow \infty$. Hence, so must the variance of the estimator, as less and less room is left for it to fluctuate.

The behaviour of the mean squared error, bias squared plus variance, of the lasso regression estimator in terms of $\lambda_1$ is hard to characterize exactly without knowledge of the quality of the approximations. In particular, does a $\lambda_1$ exist such that the MSE of the lasso regression estimator outperforms that of its maximum likelihood counterpart? Nonetheless, a first observation may be obtained from reasoning in extremis. Suppose $\bbeta = \mathbf{0}_p$, which corresponds to an empty or maximally sparse model. A large value of $\lambda_1$ then yields a zero estimate of the regression parameter: $\hat{\bbeta}(\lambda_1) = \mathbf{0}_p$. The bias squared is thus minimized: $\| \hat{\bbeta}(\lambda_1) - \bbeta \|_2^2 = 0$. With the bias vanished and the (approximation of the) variance decreasing in $\lambda_1$, so must the MSE decrease for $\lambda_1$ larger than some value. So, for an empty model the lasso regression estimator with a sufficiently large penalty parameter yields a better MSE than the maximum likelihood estimator. For very sparse models this property may be expected to hold, but for non-sparse models the bias squared will contribute substantially to the MSE, and it is thus not obvious whether a $\lambda_1$ exists that yields a favourable MSE for the lasso regression estimator. This is investigated in silico in [6]. The simulations presented there indicate that the MSE of the lasso regression estimator is particularly sensitive to the actual $\bbeta$. Moreover, for a large part of the parameter space $\bbeta \in \mathbb{R}^p$ the MSE of $\hat{\bbeta}(\lambda_1)$ lags behind that of the maximum likelihood estimator.

## The Bayesian connection

The lasso regression estimator, being a penalized estimator, knows a Bayesian formulation, much like the (generalized) ridge regression estimator could be viewed as a Bayesian estimator when imposing a Gaussian prior (cf. Chapter Bayesian regression and Section The Bayesian connection ). Instead of a normal prior, the lasso regression estimator requires (as suggested by the form of the lasso penalty) a zero-centered Laplacian (or double exponential) prior for it to be viewed as a Bayesian estimator. A zero-centered Laplace distributed random variable $X$ has density $f_X(x) = \tfrac{1}{2} b^{-1} \exp(-|x| /b)$ with scale parameter $b \gt 0$. The top panel of Figure shows the Laplace prior and, for contrast, the normal prior of the ridge regression estimator. This figure reveals that the ‘lasso prior’ puts more mass close to zero and in the tails than the Gaussian ‘ridge prior’. This accords with the tendency of the lasso regression estimator to produce either zero or large (compared to ridge) estimates.

Top panel: the Laplace prior associated with the Bayesian counterpart of the lasso regression estimator. Bottom left panel: the posterior distribution of the regression parameter for various Laplace priors. Bottom right panel: posterior mode vs. the penalty parameter $\lambda_1$.

The lasso regression estimator corresponds to the maximum a posteriori (MAP) estimator of $\bbeta$, when the prior is a Laplace distribution. The posterior distribution is then proportional to:

[$] \begin{eqnarray*} \prod_{i=1}^n (2 \pi \sigma^2)^{-1/2} \exp [ - (2\sigma^2)^{-1} (Y_i - \mathbf{X}_{i,\ast} \bbeta)^2] \times \prod_{j=1}^p (2b)^{-1} \exp( -|\beta_j |/b). \end{eqnarray*} [$]

The posterior is not a well-known and characterized distribution. This is no obstacle, as interest here concentrates on its maximum. The location of the posterior mode coincides with the location of the maximum of the logarithm of the posterior. The log-posterior is proportional to: $- (2\sigma^2)^{-1} \| \mathbf{Y} - \mathbf{X}\bbeta \|_2^2 - b^{-1} \| \bbeta \|_1$, with its maximizer minimizing $\| \mathbf{Y} - \mathbf{X}\bbeta \|_2^2 + (2 \sigma^2 / b) \| \bbeta \|_1$. In this one recognizes the form of the lasso regression loss function. It is thus clear that the scale parameter of the Laplace distribution relates reciprocally to the lasso penalty parameter $\lambda_1$, similar to the relation between the ridge penalty parameter $\lambda_2$ and the variance of the Gaussian prior of the ridge regression estimator.
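The correspondence $\lambda_1 = 2 \sigma^2 / b$ can be verified numerically for $p=1$. The pure-Python sketch below (with data made up for this illustration) locates the posterior mode by a grid search and compares it with the soft-threshold solution of the matching lasso problem:

```python
# MAP of the normal-Laplace posterior (p = 1) vs the lasso solution with
# lambda_1 = 2 sigma^2 / b. Data below are made up for this illustration.
x = [1.0, 2.0, 3.0]
y = [1.0, 1.0, 4.0]
sigma2, b = 1.0, 0.2                    # noise variance and Laplace scale

def log_post(beta):                     # log-posterior up to an additive constant
    rss = sum((y[i] - x[i] * beta) ** 2 for i in range(3))
    return -rss / (2 * sigma2) - abs(beta) / b

grid = [i / 1000 for i in range(-2000, 2001)]
mode = max(grid, key=log_post)          # grid search for the posterior mode

# lasso solution of min ||y - x beta||^2 + (2 sigma^2 / b) |beta|:
# soft-threshold x^T y at sigma^2 / b, then divide by x^T x
sxy = sum(xi * yi for xi, yi in zip(x, y))      # = 15, exceeds sigma^2 / b = 5
sxx = sum(xi * xi for xi in x)                  # = 14
lasso = (sxy - sigma2 / b) / sxx

print(mode, lasso)  # both approx 0.714
```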

Though the posterior may not be a standard distribution, in the univariate case ($p=1$) it can be visualized. Specifically, the behaviour of the MAP estimator can then be illustrated, which -- as the MAP estimator corresponds to the lasso regression estimator -- should also exhibit the selection property. The bottom left panel of Figure shows the posterior distribution for various choices of the Laplace scale parameter (i.e. lasso penalty parameter). Clearly, the mode shifts towards zero as the scale parameter decreases / the lasso penalty parameter increases. In particular, the posterior obtained from the Laplace prior with the smallest scale parameter (i.e. largest penalty parameter $\lambda_1$), although skewed to the left, has its mode placed exactly at zero. The Laplace prior may thus produce MAP estimators that select. However, for smaller values of the lasso penalty parameter the Laplace prior is not concentrated enough around zero and the contribution of the likelihood to the posterior outweighs that of the prior. The mode is then not located at zero and the parameter is ‘selected’ by the MAP estimator. The bottom right panel of Figure plots the mode of the normal-Laplace posterior vs. the Laplace scale parameter. In line with Theorem it is piece-wise linear.

[7] go beyond the elementary correspondence of the frequentist lasso estimator and the Bayesian posterior mode and formulate the Bayesian lasso regression model. To this end they exploit the fact that the Laplace distribution can be written as a scale mixture of normal distributions with an exponential mixing density. This allows the construction of a Gibbs sampler for the Bayesian lasso estimator. Finally, they suggest imposing a gamma-type hyperprior on the (square of the) lasso penalty parameter. Such a full Bayesian formulation of the lasso problem enables the construction of credible sets (i.e. the Bayesian counterpart of confidence intervals) to express the uncertainty of the maximum a posteriori estimator. However, while the lasso regression estimator may be seen as a Bayesian estimator, in the sense that it coincides with the posterior mode, the ‘lasso’ posterior distribution cannot be blindly used for uncertainty quantification. In high-dimensional sparse settings the ‘lasso’ posterior distribution of $\bbeta$ need not concentrate around the true parameter, even though its mode is a good estimator of the regression parameter (cf. Section 3 and Theorem 7 of [8]).

## General References

van Wieringen, Wessel N. (2021). "Lecture notes on ridge regression". arXiv:1509.09169 [stat.ME].

## References

1. Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B, 58(1), 267--288
2. Bertsekas, D. P. (2014). Constrained Optimization and Lagrange Multiplier Methods. Academic Press
3. Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456), 1348--1360
4. Goeman, J. J. (2010). $L_1$ penalized estimation in the Cox proportional hazards model. Biometrical Journal, 52, 70--84
5. Osborne, M. R., Presnell, B., and Turlach, B. A. (2000). On the lasso and its dual. Journal of Computational and Graphical Statistics, 9(2), 319--337
6. Hansen, B. E. (2015). The risk of James--Stein and lasso shrinkage. Econometric Reviews, 35(8-10), 1456--1470
7. Park, T. and Casella, G. (2008). The Bayesian lasso. Journal of the American Statistical Association, 103(482), 681--686
8. Castillo, I., Schmidt-Hieber, J., and van der Vaart, A. W. (2015). Bayesian linear regression with sparse priors. The Annals of Statistics, 43(5), 1986--2018