Jun 24'23

Consider the linear regression model $\mathbf{Y} = \mathbf{X} \bbeta + \vvarepsilon$ with $\vvarepsilon \sim \mathcal{N} ( \mathbf{0}_n, \sigma^2 \mathbf{I}_{nn})$. This model is fitted to the data $\mathbf{X}_{1,\ast} = (4, -2)$ and $Y_1 = (10)$, using the ridge regression estimator $\hat{\bbeta}(\lambda) = (\mathbf{X}^{\top}_{1,\ast} \mathbf{X}_{1,\ast} + \lambda \mathbf{I}_{22})^{-1} \mathbf{X}_{1,\ast}^{\top} Y_1$.

• Evaluate the ridge regression estimator for $\lambda=5$.
• Suppose $\bbeta = (1,-1)^{\top}$. Evaluate the bias of the ridge regression estimator.
• Decompose the bias into a component due to the regularization and one attributable to the high-dimensionality of the study.
• Had $\bbeta$ equalled $(2,-1)^{\top}$, the component of the bias due to the high-dimensionality would have vanished. Explain why.
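The computations of parts a) and b) can be checked numerically. The course snippets are in R; the following Python/NumPy sketch is merely an illustrative aid, not part of the exercise:

```python
import numpy as np

# Single observation with p = 2 covariates (high-dimensional: p > n).
X = np.array([[4.0, -2.0]])
Y = np.array([10.0])
lam = 5.0

# Ridge estimator: (X'X + lambda I)^{-1} X'Y.
XtX = X.T @ X
beta_hat = np.linalg.solve(XtX + lam * np.eye(2), X.T @ Y)
print(beta_hat)  # (1.6, -0.8)

# Bias of the ridge estimator at beta = (1, -1)':
# E[beta_hat(lambda)] - beta = (X'X + lambda I)^{-1} X'X beta - beta.
beta = np.array([1.0, -1.0])
bias = np.linalg.solve(XtX + lam * np.eye(2), XtX @ beta) - beta
print(bias)  # (-0.04, 0.52)
```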

The linear regression model, $\mathbf{Y} =\mathbf{X} \bbeta + \vvarepsilon$ with $\vvarepsilon \sim \mathcal{N}(\mathbf{0}_n, \sigma^2 \mathbf{I}_{nn})$, is fitted to the data with the following response, design matrix, and relevant summary statistics:

[$] \begin{eqnarray*} \mathbf{X} = \left( \begin{array}{rr} 0.3 & -0.7 \end{array} \right), \, \mathbf{Y} = \left( \begin{array}{r} 0.2 \end{array} \right), \, \mathbf{X}^{\top} \mathbf{X} = \left( \begin{array}{rr} 0.09 & -0.21 \\ -0.21 & 0.49 \end{array} \right), \mbox{ and } \, \mathbf{X}^{\top} \mathbf{Y} = \left( \begin{array}{r} 0.06 \\ -0.14 \end{array} \right). \end{eqnarray*} [$]

Hence, $p=2$ and $n=1$. The fitting uses the ridge regression estimator.

• Section Expectation states that the regularization path of the ridge regression estimator, i.e. $\{ \hat{\bbeta}(\lambda) : \lambda \gt 0\}$, is confined to a line in $\mathbb{R}^2$. Give the details of this line and draw it in the $(\beta_1, \beta_2)$-plane.
• Verify numerically, for a set of penalty parameter values, whether the corresponding estimates $\hat{\bbeta}(\lambda)$ are indeed confined to the line found in part a). Do this by plotting the estimates in the $(\beta_1, \beta_2)$-plane, along with the line found in part a). In this, use the following set of $\lambda$'s:
lambdas <- exp(seq(log(10^(-15)), log(1), length.out=100))

• Part b) reveals that, for small values of $\lambda$, the estimates fall outside the line found in part a). Using the theory outlined in Section Expectation, the estimates can be decomposed into a part that falls on this line and a part that is orthogonal to it. The latter is given by $(\mathbf{I}_{22} - \mathbf{P}_x) \hat{\bbeta}(\lambda)$ where $\mathbf{P}_x$ is the projection matrix onto the space spanned by the columns of $\mathbf{X}$. Evaluate the projection matrix $\mathbf{P}_x$.
• Numerical inaccuracy, resulting from the ill-conditioning of $\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{22}$, causes $(\mathbf{I}_{22} - \mathbf{P}_x) \hat{\bbeta}(\lambda) \not= \mathbf{0}_2$. Verify that the observed non-null $(\mathbf{I}_{22} - \mathbf{P}_x) \hat{\bbeta}(\lambda)$ are indeed due to numerical inaccuracy. Hereto generate a log-log plot of the condition number of $\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{22}$ vs. $\| (\mathbf{I}_{22} - \mathbf{P}_x) \hat{\bbeta}(\lambda) \|_2$ for the provided set of $\lambda$'s.
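Parts b)-d) can be prototyped along the following lines. The exercise's snippet is in R; this Python/NumPy sketch (illustrative only) mirrors the same computations on the same $\lambda$ grid:

```python
import numpy as np

X = np.array([[0.3, -0.7]])
Y = np.array([0.2])
XtX = X.T @ X

# Projection onto the row space of X (the line of part a):
# P_x = X'(X X')^{-1} X.
P = X.T @ X / (X @ X.T).item()
print(P)

# The same lambda grid as the R snippet:
# lambdas <- exp(seq(log(10^(-15)), log(1), length.out=100))
lambdas = np.exp(np.linspace(np.log(1e-15), np.log(1.0), 100))
betas = np.array([np.linalg.solve(XtX + l * np.eye(2), X.T @ Y)
                  for l in lambdas])

# Component of beta_hat(lambda) orthogonal to the line; zero in exact
# arithmetic, non-zero numerically when lambda is tiny and
# X'X + lambda I is ill-conditioned.
ortho = np.linalg.norm((np.eye(2) - P) @ betas.T, axis=0)
conds = np.array([np.linalg.cond(XtX + l * np.eye(2)) for l in lambdas])
```

For the log-log plot of part d), plot `conds` against `ortho` (e.g. `plt.loglog(conds, ortho)` with matplotlib).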

Provide an alternative proof of the theorem that states the existence of a positive value of the penalty parameter for which the ridge regression estimator has a superior MSE compared to that of its maximum likelihood counterpart. Hereto show that the derivative of the MSE with respect to the penalty parameter is negative at zero. In this, use the following results from matrix calculus:

[$] \begin{eqnarray*} \frac{d}{d \lambda} \mbox{tr} [ \mathbf{A} (\lambda ) ] & = & \mbox{tr} \Big[ \frac{d}{d \lambda} \mathbf{A} (\lambda ) \Big], \qquad \frac{d}{d \lambda} (\mathbf{A} + \lambda \mathbf{B})^{-1} \, \, \, = \, \, \, - (\mathbf{A} + \lambda \mathbf{B})^{-1} \mathbf{B} (\mathbf{A} + \lambda \mathbf{B})^{-1}, \end{eqnarray*} [$]

and the product rule

[$] \begin{eqnarray*} \frac{d}{d \lambda} \mathbf{A} (\lambda ) \, \mathbf{B} (\lambda ) & = & \Big[ \frac{d}{d \lambda} \mathbf{A} (\lambda ) \Big] \, \mathbf{B} (\lambda ) + \mathbf{A} (\lambda ) \, \Big[ \frac{d}{d \lambda} \mathbf{B} (\lambda ) \Big], \end{eqnarray*} [$]

where $\mathbf{A} (\lambda)$ and $\mathbf{B} (\lambda)$ are square, symmetric matrices parameterized by the scalar $\lambda$.

Note: the proof in the lecture notes is a stronger one, as it provides an interval on the penalty parameter where the MSE of the ridge regression estimator is better than that of the maximum likelihood one.
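As a starting point (a hint, not part of the original statement): the MSE of the ridge regression estimator decomposes into its total variance plus squared bias,

[$] \begin{eqnarray*} \mbox{MSE}[\hat{\bbeta}(\lambda)] & = & \sigma^2 \mbox{tr} [ (\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-1} \mathbf{X}^{\top} \mathbf{X} \, (\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-1} ] + \lambda^2 \bbeta^{\top} (\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-2} \bbeta. \end{eqnarray*} [$]

Differentiating the squared-bias term leaves a factor $\lambda$, so its derivative vanishes at $\lambda = 0$; it then remains to show, with the matrix calculus rules above, that the derivative of the variance term is negative at $\lambda = 0$.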


Recall that there exists a $\lambda \gt 0$ such that $MSE(\hat{\bbeta}) \gt MSE[\hat{\bbeta}(\lambda)]$. Verify that this carries over to the linear predictor. That is, there exists a $\lambda \gt 0$ such that $MSE(\widehat{\mathbf{Y}}) = MSE(\mathbf{X} \hat{\bbeta}) \gt MSE[\mathbf{X}\hat{\bbeta}(\lambda)]$.
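A possible route (a hint, not part of the original statement): express the predictor's MSE as an $\mathbf{X}^{\top} \mathbf{X}$-weighted quadratic form in the estimation error,

[$] \begin{eqnarray*} MSE[\mathbf{X} \hat{\bbeta}(\lambda)] & = & \mathbb{E} [ \| \mathbf{X} \hat{\bbeta}(\lambda) - \mathbf{X} \bbeta \|_2^2 ] \, \, \, = \, \, \, \mathbb{E} \{ [\hat{\bbeta}(\lambda) - \bbeta]^{\top} \mathbf{X}^{\top} \mathbf{X} \, [\hat{\bbeta}(\lambda) - \bbeta] \}, \end{eqnarray*} [$]

and repeat the argument used for the estimator's MSE with this weighting.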


Consider the standard linear regression model $Y_i = \mathbf{X}_{i,\ast} \bbeta + \varepsilon_i$ for $i=1, \ldots, n$ and with the $\varepsilon_i$ i.i.d. normally distributed with zero mean and a common but unknown variance. Information on the response, design matrix and relevant summary statistics are:

[$] \begin{eqnarray*} \mathbf{X}^{\top} = \left( \begin{array}{rrr} 2 & 1 & -2 \end{array} \right), \, \mathbf{Y}^{\top} = \left( \begin{array}{rrr} -1 & -1 & 1 \end{array} \right), \, \mathbf{X}^{\top} \mathbf{X} = \left( \begin{array}{r} 9 \end{array} \right), \mbox{ and } \, \mathbf{X}^{\top} \mathbf{Y} = \left( \begin{array}{r} -5 \end{array} \right), \end{eqnarray*} [$]

from which the sample size and dimension of the covariate space are immediate.

• Evaluate the ridge regression estimator $\hat{\bbeta}(\lambda)$ with $\lambda=1$.
• Evaluate the variance of the ridge regression estimator, i.e. $\widehat{\mbox{Var}}[\hat{\bbeta}(\lambda)]$, for $\lambda = 1$. In this, the error variance $\sigma^2$ is estimated by $n^{-1} \| \mathbf{Y} - \mathbf{X} \hat{\bbeta}(\lambda) \|_2^2$.
• Recall that the ridge regression estimator $\hat{\bbeta}(\lambda)$ is normally distributed. Consider the interval
[$] \begin{eqnarray*} \mathcal{C} & = & \big(\hat{\bbeta}(\lambda) - 2 \{ \widehat{\mbox{Var}}[\hat{\bbeta}(\lambda)] \}^{1/2}, \, \hat{\bbeta}(\lambda) + 2 \{ \widehat{\mbox{Var}}[\hat{\bbeta}(\lambda)] \}^{1/2} \big). \end{eqnarray*} [$]
Is this a genuine (approximate) $95\%$ confidence interval for $\bbeta$? If so, motivate. If not, what is the interpretation of this interval?
• Suppose the design matrix is augmented with an extra column identical to the first one. Is the estimate of the error variance unaffected, or not? Motivate.
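The numbers in parts a) and b) are easily verified; a Python/NumPy sketch (illustrative only, the course snippets use R):

```python
import numpy as np

X = np.array([[2.0], [1.0], [-2.0]])   # n = 3, p = 1
Y = np.array([-1.0, -1.0, 1.0])
lam = 1.0

# Part a): ridge estimator (X'X + lambda)^{-1} X'Y = -5/10.
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(1), X.T @ Y)
print(beta_hat)  # (-0.5)

# Part b): plug-in error variance n^{-1} ||Y - X beta_hat||^2 and
# ridge variance sigma2 * (X'X + lambda)^{-1} X'X (X'X + lambda)^{-1}.
sigma2 = np.sum((Y - X @ beta_hat) ** 2) / len(Y)
Ainv = np.linalg.inv(X.T @ X + lam * np.eye(1))
var_hat = sigma2 * Ainv @ (X.T @ X) @ Ainv
print(sigma2, var_hat)  # 1/12 and 9/1200 = 0.0075
```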

Consider the standard linear regression model $Y_i = \mathbf{X}_{i,\ast} \bbeta + \varepsilon_i$ for $i=1, \ldots, n$ and with $\varepsilon_i \sim_{i.i.d.} \mathcal{N}(0, \sigma^2)$. The ridge regression estimator of $\bbeta$ is denoted by $\hat{\bbeta}(\lambda)$ for $\lambda \gt 0$.
• Show that

[$] \begin{eqnarray*} \mbox{tr}\{ \mbox{Var}[ \widehat{\mathbf{Y}} (\lambda)] \} \, \, \, = \, \, \, \sigma^2 \sum\nolimits_{j=1}^p (\mathbf{D}_x)_{jj}^4 [(\mathbf{D}_x)_{jj}^2 + \lambda ]^{-2}, \end{eqnarray*} [$]
where $\widehat{\mathbf{Y}} (\lambda) = \mathbf{X} \hat{\bbeta}(\lambda)$ and $\mathbf{D}_x$ is the diagonal matrix containing the singular values of $\mathbf{X}$ on its diagonal.
• Recall the coefficient of determination

[$] \begin{eqnarray*} R^2 & = & [\mbox{Var}(\mathbf{Y}) - \mbox{Var}(\widehat{\mathbf{Y}})] / [\mbox{Var}(\mathbf{Y}) ] \, \, \, = \, \, \, [ \mbox{Var}(\mathbf{Y} - \widehat{\mathbf{Y}}) ] / [ \mbox{Var}(\mathbf{Y}) ], \end{eqnarray*} [$]
where $\widehat{\mathbf{Y}} = \mathbf{X} \hat{\bbeta}$ with $\hat{\bbeta} = (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} \mathbf{Y}$. Show that the second equality does not hold when $\widehat{\mathbf{Y}}$ is now replaced by the ridge regression predictor defined as $\widehat{\mathbf{Y}}(\lambda) = \mathbf{H}(\lambda) \mathbf{Y}$ where $\mathbf{H}(\lambda) = \mathbf{X} (\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-1} \mathbf{X}^{\top}$. Hint: Use the fact that $\mathbf{H}(\lambda)$ is not a projection matrix, i.e. $\mathbf{H}(\lambda) \not= [\mathbf{H}(\lambda)]^2$.
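Both claims can be sanity-checked on simulated data. A Python/NumPy sketch, with an arbitrary seed and illustrative choices of $n$, $p$, $\sigma^2$ and $\lambda$ (none of these are prescribed by the exercise):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam, sigma2 = 5, 3, 0.7, 1.3
X = rng.standard_normal((n, p))

# Ridge hat matrix H(lambda) = X (X'X + lambda I)^{-1} X'.
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)

# tr Var[Y_hat(lambda)] = sigma^2 tr(H H'), versus the singular-value form.
trace_var = sigma2 * np.trace(H @ H.T)
d = np.linalg.svd(X, compute_uv=False)
print(np.isclose(trace_var, sigma2 * np.sum(d**4 / (d**2 + lam) ** 2)))  # True

# H(lambda) is not idempotent for lambda > 0, hence not a projection matrix.
print(np.allclose(H, H @ H))  # False
```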