Jun 25'23

Consider a pathway comprising three genes called $A$, $B$, and $C$. Let the random variables $Y_{i,a}$, $Y_{i,b}$, and $Y_{i,c}$ represent the expression levels of genes $A$, $B$, and $C$ in sample $i$. One hundred realizations, i.e. $i=1, \ldots, n$ with $n=100$, of $Y_{i,a}$, $Y_{i,b}$, and $Y_{i,c}$ are available from an observational study. In order to assess how the expression levels of gene $A$ are affected by those of genes $B$ and $C$, a medical researcher fits the model

[$] \begin{eqnarray*} Y_{i,a} &= & \beta_b Y_{i,b} + \beta_c Y_{i,c} + \varepsilon_{i}, \end{eqnarray*} [$]

with $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$. This model is fitted by means of ridge regression, but with a separate penalty parameter, $\lambda_{b}$ and $\lambda_{c}$, for the two regression coefficients, $\beta_b$ and $\beta_c$, respectively.

• Write down the ridge penalized loss function employed by the researcher.
• Does a different choice of penalty parameter for the second regression coefficient affect the estimation of the first regression coefficient? Motivate your answer.
• The researcher decides that the second covariate $Y_{i,c}$ is irrelevant. Instead of removing the covariate from the model, the researcher decides to set $\lambda_{c} = \infty$. Show that this results in the same ridge estimate for $\beta_b$ as when fitting (again by means of ridge regression) the model without the second covariate.
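The claim in the last part can be checked numerically. Below is a minimal sketch (Python/NumPy; the simulated data, the true coefficients, and the value of $\lambda_b$ are arbitrary choices, not part of the exercise), approximating $\lambda_c = \infty$ by a very large value:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
yb = rng.normal(size=n)                                   # expression of gene B
yc = rng.normal(size=n)                                   # expression of gene C
ya = 0.5 * yb - 0.3 * yc + rng.normal(scale=0.5, size=n)  # expression of gene A

def sep_ridge(lam_b, lam_c):
    """Ridge estimate with a separate penalty per regression coefficient."""
    X = np.column_stack([yb, yc])
    return np.linalg.solve(X.T @ X + np.diag([lam_b, lam_c]), X.T @ ya)

lam_b = 2.0
b_inf = sep_ridge(lam_b, 1e12)            # lambda_c effectively infinite
b_single = (yb @ ya) / (yb @ yb + lam_b)  # ridge fit of the model without Y_c
print(b_inf, b_single)
```

With $\lambda_c$ effectively infinite, the estimate of $\beta_c$ is numerically zero and the estimate of $\beta_b$ matches that of the single-covariate ridge fit.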

Consider the linear regression model $Y_i = \beta_1 X_{i,1} + \beta_2 X_{i,2} + \varepsilon_i$ for $i=1, \ldots, n$. Suppose estimates of the regression parameters $(\beta_1, \beta_2)$ of this model are obtained through the minimization of the sum-of-squares augmented with a ridge-type penalty:

[$] \begin{eqnarray*} \sum\nolimits_{i=1}^n (Y_i - \beta_1 X_{i,1} - \beta_2 X_{i,2})^2 + \lambda (\beta_1^2 + \beta_2^2 + 2 \nu \beta_1 \beta_2), \end{eqnarray*} [$]

with penalty parameters $\lambda \in \mathbb{R}_{\gt 0}$ and $\nu \in (-1, 1)$.

• Recall the equivalence between constrained and penalized estimation (cf. Section Constrained estimation). Sketch (for both $\nu=0$ and $\nu=0.9$) the shape of the parameter constraint induced by the penalty above and describe in words the qualitative difference between the two shapes.
• When $\nu = -1$ and $\lambda \rightarrow \infty$ the estimates of $\beta_1$ and $\beta_2$ (resulting from minimization of the penalized loss function above) converge towards each other: $\lim_{\lambda \rightarrow \infty} \hat{\beta}_1(\lambda, -1) = \lim_{\lambda \rightarrow \infty} \hat{\beta}_2(\lambda, -1)$. Motivated by this observation a data scientist incorporates the equality constraint $\beta_1 = \beta = \beta_2$ explicitly into the model, and s/he estimates the ‘joint regression parameter’ $\beta$ through the minimization (with respect to $\beta$) of:
[$] \begin{eqnarray*} \sum\nolimits_{i=1}^n (Y_i - \beta X_{i,1} - \beta X_{i,2})^2 + \delta \beta^2, \end{eqnarray*} [$]
with penalty parameter $\delta \in \mathbb{R}_{\gt 0}$. The data scientist is surprised to find that the resulting estimate $\hat{\beta}(\delta)$ does not have the same limiting (in the penalty parameter) behavior as $\hat{\beta}_1(\lambda, -1)$, i.e. $\lim_{\delta \rightarrow \infty} \hat{\beta} (\delta) \not= \lim_{\lambda \rightarrow \infty} \hat{\beta}_1(\lambda, -1)$. Explain the misconception of the data scientist.
• Assume that i) $n \gg 2$, ii) the unpenalized estimates $(\hat{\beta}_1(0, 0), \hat{\beta}_2(0, 0))^{\top}$ equal $(-2,2)$, and iii) that the two covariates $X_1$ and $X_2$ are zero-centered, have equal variance, and are strongly negatively correlated. Consider $(\hat{\beta}_1(\lambda, \nu), \hat{\beta}_2(\lambda, \nu))^{\top}$ for both $\nu=-0.9$ and $\nu=0.9$. For which value of $\nu$ do you expect the sum of the absolute value of the estimates to be largest? Hint: Distinguish between small and large values of $\lambda$ and think geometrically!
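The limiting behaviour described in part b) can be inspected numerically. A minimal sketch (Python/NumPy; the simulated data and the regression coefficients are arbitrary choices), writing the penalty as $\lambda \, \bbeta^{\top} \mathbf{R} \bbeta$ with $\mathbf{R} = \left( \begin{smallmatrix} 1 & \nu \\ \nu & 1 \end{smallmatrix} \right)$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
X = rng.normal(size=(n, 2))
y = X @ np.array([1.0, -0.5]) + rng.normal(size=n)

def corr_ridge(lam, nu):
    """Minimizer of ||y - X b||^2 + lam * b' R b with R = [[1, nu], [nu, 1]]."""
    R = np.array([[1.0, nu], [nu, 1.0]])
    return np.linalg.solve(X.T @ X + lam * R, X.T @ y)

# nu = -1: for large lambda the penalty only punishes the difference
# beta_1 - beta_2, so the two estimates converge towards each other
b = corr_ridge(1e8, -1.0)
print(b)
```

For $\nu = -1$ the penalty equals $\lambda (\beta_1 - \beta_2)^2$, which explains why a large $\lambda$ fuses the two estimates without shrinking their common value to zero.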

Show that the generalized ridge regression estimator, $\hat{\bbeta}(\mathbf{\Delta}) = (\mathbf{X}^{\top} \mathbf{X} + \mathbf{\Delta})^{-1} \mathbf{X}^{\top} \mathbf{Y}$, too (as in Question) can be obtained by ordinary least squares regression on an augmented data set. Hereto consider the Cholesky decomposition of the penalty matrix: $\mathbf{\Delta} = \mathbf{L}^{\top} \mathbf{L}$. Now augment the matrix $\mathbf{X}$ with $p$ additional rows comprising the matrix $\mathbf{L}$, and augment the response vector $\mathbf{Y}$ with $p$ zeros.
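A numerical sketch of the claim (Python/NumPy; the simulated data and the positive definite $\mathbf{\Delta}$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 30, 4
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# an arbitrary positive definite penalty matrix and its Cholesky factor
A = rng.normal(size=(p, p))
Delta = A.T @ A + np.eye(p)
L = np.linalg.cholesky(Delta).T   # upper-triangular, so that L' L = Delta

# generalized ridge estimator
b_ridge = np.linalg.solve(X.T @ X + Delta, X.T @ y)

# OLS on the augmented data: X stacked on L, y padded with p zeros
X_aug = np.vstack([X, L])
y_aug = np.concatenate([y, np.zeros(p)])
b_ols, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
print(np.allclose(b_ridge, b_ols))  # True
```

The equivalence follows since $\mathbf{X}_{\mbox{aug}}^{\top} \mathbf{X}_{\mbox{aug}} = \mathbf{X}^{\top} \mathbf{X} + \mathbf{L}^{\top} \mathbf{L}$ and $\mathbf{X}_{\mbox{aug}}^{\top} \mathbf{Y}_{\mbox{aug}} = \mathbf{X}^{\top} \mathbf{Y}$.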


Consider the linear regression model $\mathbf{Y} = \mathbf{X} \bbeta + \vvarepsilon$ with $\vvarepsilon \sim \mathcal{N}(\mathbf{0}_n, \sigma^2 \mathbf{I}_{nn})$. Assume $\bbeta \sim \mathcal{N}(\bbeta_0, \sigma^2 \mathbf{\Delta}^{-1})$ with $\bbeta_0 \in \mathbb{R}^p$ and $\mathbf{\Delta} \succ 0$, and an inverse gamma prior on the error variance. Verify (i.e., work out the details of the derivation) that the posterior mean coincides with the generalized ridge estimator defined as:

[$] \begin{eqnarray*} \hat{\bbeta} & = & (\mathbf{X}^{\top} \mathbf{X} + \mathbf{\Delta})^{-1} (\mathbf{X}^{\top} \mathbf{Y} + \mathbf{\Delta} \bbeta_0). \end{eqnarray*} [$]
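A sketch of the central completing-the-square step (conditional on $\sigma^2$; normalizing constants, and hence the inverse gamma factor, are suppressed):

```latex
% posterior of beta given sigma^2: likelihood times conditional prior
\pi(\bbeta \, | \, \sigma^2, \mathbf{X}, \mathbf{Y})
  & \propto & \exp\big[ - \tfrac{1}{2 \sigma^2} \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2
              - \tfrac{1}{2 \sigma^2} (\bbeta - \bbeta_0)^{\top} \mathbf{\Delta} (\bbeta - \bbeta_0) \big]
\\
% expand, keep only the terms involving beta, and complete the square
  & \propto & \exp\big\{ - \tfrac{1}{2 \sigma^2} \big[ \bbeta^{\top} (\mathbf{X}^{\top} \mathbf{X} + \mathbf{\Delta}) \bbeta
              - 2 \bbeta^{\top} (\mathbf{X}^{\top} \mathbf{Y} + \mathbf{\Delta} \bbeta_0) \big] \big\}
\\
  & \propto & \exp\big[ - \tfrac{1}{2 \sigma^2} (\bbeta - \hat{\bbeta})^{\top}
              (\mathbf{X}^{\top} \mathbf{X} + \mathbf{\Delta}) (\bbeta - \hat{\bbeta}) \big],
```

i.e., a multivariate normal density in $\bbeta$ with mean $\hat{\bbeta}$ as displayed above, so the posterior mean equals the generalized ridge estimator.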


Consider the Bayesian linear regression model $\mathbf{Y} = \mathbf{X} \bbeta + \vvarepsilon$ with $\vvarepsilon \sim \mathcal{N}(\mathbf{0}_n, \sigma^2 \mathbf{I}_{nn})$, a multivariate normal law as conditional prior distribution on the regression parameter: $\bbeta \, | \, \sigma^2 \sim \mathcal{N}(\bbeta_0, \sigma^2 \mathbf{\Delta}^{-1})$, and an inverse gamma prior on the error variance $\sigma^2 \sim \mathcal{IG}(\gamma, \delta)$. The consequences of various choices for the hyperparameters of the prior distribution on $\bbeta$ are studied.

• Consider the following conditional prior distributions on the regression parameters: $\bbeta \, | \, \sigma^2 \sim \mathcal{N}(\bbeta_0, \sigma^2 \mathbf{\Delta}_a^{-1})$ and $\bbeta \, | \, \sigma^2 \sim \mathcal{N}(\bbeta_0, \sigma^2 \mathbf{\Delta}_b^{-1})$ with precision matrices $\mathbf{\Delta}_a, \mathbf{\Delta}_b \in \mathcal{S}_{++}^p$ such that $\mathbf{\Delta}_a \succeq \mathbf{\Delta}_b$, i.e. $\mathbf{\Delta}_a = \mathbf{\Delta}_b + \mathbf{D}$ for some positive semi-definite symmetric matrix $\mathbf{D}$ of appropriate dimensions. Verify:
[$] \begin{eqnarray*} \mbox{Var}(\bbeta \, | \, \sigma^2, \mathbf{Y}, \mathbf{X}, \bbeta_0, \mathbf{\Delta}_a) & \preceq & \mbox{Var}(\bbeta \, | \, \sigma^2, \mathbf{Y}, \mathbf{X}, \bbeta_0, \mathbf{\Delta}_b), \end{eqnarray*} [$]
i.e. the smaller (in the positive definite ordering) the variance of the prior the smaller that of the posterior.
• In the remainder of this exercise assume $\mathbf{\Delta}_a = \mathbf{\Delta} = \mathbf{\Delta}_b$. Let $\bbeta_t$ be the ‘true’ or ‘ideal’ value of the regression parameter, i.e. the one used in the generation of the data, and show that a better initial guess yields a higher posterior density at $\bbeta_t$. That is, take two prior mean parameters $\bbeta_0 = \bbeta_0^{\mbox{{\tiny (a)}}}$ and $\bbeta_0 = \bbeta_0^{\mbox{{\tiny (b)}}}$ such that the former is closer to $\bbeta_t$ than the latter. Here closeness is defined in terms of the Mahalanobis distance, which for, e.g., $\bbeta_t$ and $\bbeta_0^{\mbox{{\tiny (a)}}}$ is defined as $d_M(\bbeta_t, \bbeta_0^{\mbox{{\tiny (a)}}}; \mathbf{\Sigma}) = [(\bbeta_t - \bbeta_0^{\mbox{{\tiny (a)}}})^{\top} \mathbf{\Sigma}^{-1} (\bbeta_t - \bbeta_0^{\mbox{{\tiny (a)}}})]^{1/2}$ with positive definite covariance matrix $\mathbf{\Sigma} = \sigma^2 \mathbf{\Delta}^{-1}$. Show that the posterior density $\pi_{\bbeta \, | \, \sigma^2} (\bbeta \, | \, \sigma^2, \mathbf{X}, \mathbf{Y}, \bbeta_0^{\mbox{{\tiny (a)}}}, \mathbf{\Delta})$ is larger at $\bbeta = \bbeta_t$ than the posterior density with the other prior mean parameter.
• Adopt the assumptions of part b) and show that a better initial guess yields a better posterior mean. That is, show
[$] \begin{eqnarray*} d_M[\bbeta_t, \mathbb{E}(\bbeta \, | \, \sigma^2, \mathbf{Y}, \mathbf{X}, \bbeta_0^{\mbox{{\tiny (a)}}}, \mathbf{\Delta}); \mathbf{\Sigma}] & \leq & d_M[\bbeta_t, \mathbb{E}(\bbeta \, | \, \sigma^2, \mathbf{Y}, \mathbf{X}, \bbeta_0^{\mbox{{\tiny (b)}}}, \mathbf{\Delta}); \mathbf{\Sigma}], \end{eqnarray*} [$]
now with $\mathbf{\Sigma} = \sigma^2 (\mathbf{X}^{\top} \mathbf{X} + \mathbf{\Delta})^{-1}$.

The ridge penalty may be interpreted as a multivariate normal prior on the regression coefficients: $\bbeta \sim \mathcal{N}(\mathbf{0}, \lambda^{-1} \mathbf{I}_{pp})$. Different priors may be considered. In case the covariates are spatially related in some sense (e.g. genomically), it may be of interest to assume a first-order autoregressive prior: $\bbeta \sim \mathcal{N}(\mathbf{0}, \lambda^{-1} \mathbf{\Sigma}_a)$, in which $\mathbf{\Sigma}_a$ is a $(p \times p)$-dimensional correlation matrix with $(\mathbf{\Sigma}_a)_{j_1, j_2} = \rho^{ | j_1 - j_2 | }$ for some correlation coefficient $\rho \in [0, 1)$. Hence,

[$] \begin{eqnarray*} \mathbf{\Sigma}_a \, \, \, = \, \, \, \left( \begin{array}{cccc} 1 & \rho & \ldots & \rho^{p-1} \\ \rho & 1 & \ldots & \rho^{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ \rho^{p-1} & \rho^{p-2} & \ldots & 1 \end{array} \right). \end{eqnarray*} [$]

• The penalized loss function associated with this AR(1) prior is:
[$] \begin{eqnarray*} \mathcal{L}(\bbeta; \lambda, \mathbf{\Sigma}_a) & = & \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda \bbeta^{\top} \mathbf{\Sigma}_a^{-1} \bbeta. \end{eqnarray*} [$]
Find the minimizer of this loss function.
• What is the effect of $\rho$ on the ridge estimates? Contrast this to the effect of $\lambda$. Illustrate this on (simulated) data.
• Instead of an AR(1) prior assume a prior with a uniform correlation between the elements of $\bbeta$. That is, replace $\mathbf{\Sigma}_a$ by $\mathbf{\Sigma}_u$, given by $\mathbf{\Sigma}_u = (1-\rho) \mathbf{I}_{pp} + \rho \mathbf{1}_{pp}$. Investigate (again on data) the effect of changing from the AR(1) to the uniform prior on the ridge regression estimates.
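The first two parts can be explored on simulated data. A minimal sketch (Python/NumPy; the dimensions, penalty value, and data are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 10
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

def ar1_ridge(lam, rho):
    """Minimizer of ||y - X b||^2 + lam * b' Sigma_a^{-1} b, AR(1) Sigma_a."""
    Sigma_a = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    return np.linalg.solve(X.T @ X + lam * np.linalg.inv(Sigma_a), X.T @ y)

# rho = 0 recovers the ordinary ridge estimator; larger rho pulls
# neighbouring coefficients towards each other
b_plain = ar1_ridge(5.0, 0.0)
b_smooth = ar1_ridge(5.0, 0.9)
print(b_plain, b_smooth)
```

Varying $\rho$ for fixed $\lambda$, and vice versa, on such simulated data illustrates the contrast asked for in part b).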

Consider the standard linear regression model $Y_i = \mathbf{X}_{i,\ast} \bbeta + \varepsilon_i$ for $i=1, \ldots, n$. Suppose estimates of the regression parameters $\bbeta$ of this model are obtained through the minimization of the sum-of-squares augmented with a ridge-type penalty:

[$] \begin{eqnarray*} \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda \big[ (1-\alpha) \| \bbeta - \bbeta_{t,a} \|_2^2 + \alpha \| \bbeta - \bbeta_{t,b} \|_2^2 \big], \end{eqnarray*} [$]

for known $\alpha \in [0,1]$, nonrandom $p$-dimensional target vectors $\bbeta_{t,a}$ and $\bbeta_{t,b}$ with $\bbeta_{t,a} \not= \bbeta_{t,b}$, and penalty parameter $\lambda \gt 0$. Here $\mathbf{Y} = (Y_1, \ldots, Y_n)^{\top}$ and $\mathbf{X}$ is the $n \times p$ matrix with the $n$ row-vectors $\mathbf{X}_{i,\ast}$ stacked.

• When $p \gt n$ the sum-of-squares part does not have a unique minimum. Does the above employed penalty warrant a unique minimum for the loss function above (i.e., sum-of-squares plus penalty)? Motivate your answer.
• Could it be that for intermediate values of $\alpha$, i.e. $0 \lt \alpha \lt 1$, the loss function assumes smaller values than for the boundary values $\alpha=0$ and $\alpha=1$? Motivate your answer.
• Draw the parameter constraint induced by this penalty for $\alpha = 0, 0.5$, and $1$ when $p = 2$.
• Derive the estimator of $\bbeta$, defined as the minimum of the loss function, explicitly.
• Discuss the behaviour of the estimator for $\alpha = 0, 0.5$, and $1$ as $\lambda \rightarrow \infty$.
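One can show that, up to a term not involving $\bbeta$, the penalty equals a single-target ridge penalty with the convex combination $(1-\alpha) \bbeta_{t,a} + \alpha \bbeta_{t,b}$ as target. A sketch (Python/NumPy; simulated data and arbitrary targets) relying on that reformulation, illustrating the $\lambda \rightarrow \infty$ behaviour:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 40, 3
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
bta = np.array([1.0, 0.0, -1.0])   # target a
btb = np.array([0.0, 2.0, 0.0])    # target b

def two_target_ridge(lam, alpha):
    """Minimizer of ||y - X b||^2 + lam[(1-a)||b - bta||^2 + a||b - btb||^2]."""
    target = (1 - alpha) * bta + alpha * btb   # effective single target
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y + lam * target)

# for lambda -> infinity the estimate tends to the effective target
b = two_target_ridge(1e10, 0.5)
print(b)  # approximately 0.5 * (bta + btb)
```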

Revisit Exercise. There the standard linear regression model $Y_i = \mathbf{X}_{i,\ast} \bbeta + \varepsilon_i$ for $i=1, \ldots, n$ and with $\varepsilon_i \sim_{i.i.d.} \mathcal{N}(0, \sigma^2)$ is considered. The model comprises a single covariate and an intercept. Response and covariate data are: $\{(y_i, x_{i,1})\}_{i=1}^4 = \{ (1.4, 0.0), (1.4, -2.0), (0.8, 0.0), (0.4, 2.0) \}$.

• Evaluate the generalized ridge regression estimator of $\bbeta$ with target $\bbeta_0 = \mathbf{0}_2$ and penalty matrix $\mathbf{\Delta}$ given by $(\mathbf{\Delta})_{11} = \lambda = (\mathbf{\Delta})_{22}$ and $(\mathbf{\Delta})_{12} = \tfrac{1}{2} \lambda = (\mathbf{\Delta})_{21}$ in which $\lambda = 8$.
• A data scientist wishes to leave the intercept unpenalized. Hereto s/he sets in part a) $(\mathbf{\Delta})_{11} = 0$. Why does the resulting estimate not coincide with the answer to Exercise? Motivate.
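Part a) can be evaluated numerically; a minimal sketch (Python/NumPy) with the data as given:

```python
import numpy as np

# design matrix: intercept column plus the single covariate
X = np.array([[1.0, 0.0], [1.0, -2.0], [1.0, 0.0], [1.0, 2.0]])
y = np.array([1.4, 1.4, 0.8, 0.4])

lam = 8.0
Delta = np.array([[lam, lam / 2], [lam / 2, lam]])

# generalized ridge estimator with target beta_0 = 0
b = np.linalg.solve(X.T @ X + Delta, X.T @ y)
print(b)  # [9/22, -5/22], approximately [0.409, -0.227]
```

Here $\mathbf{X}^{\top} \mathbf{X} = \mbox{diag}(4, 8)$ and $\mathbf{X}^{\top} \mathbf{Y} = (4, -2)^{\top}$, so the linear system solved above is $\left( \begin{smallmatrix} 12 & 4 \\ 4 & 16 \end{smallmatrix} \right) \hat{\bbeta} = (4, -2)^{\top}$.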

Consider the linear regression model: $\mathbf{Y} = \mathbf{X} \bbeta + \vvarepsilon$ with $\vvarepsilon \sim \mathcal{N} ( \mathbf{0}_{n}, \sigma^2 \mathbf{I}_{nn})$. Let $\hat{\bbeta}(\lambda) = (\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-1} \mathbf{X}^{\top} \mathbf{Y}$ be the ridge regression estimator with penalty parameter $\lambda$. The shrinkage of the ridge regression estimator propagates to the scale of the ‘ridge prediction’ $\mathbf{X} \hat{\bbeta}(\lambda)$. To correct (a bit) for the shrinkage, [1] propose the alternative ridge regression estimator: $\hat{\bbeta}(\alpha) = [ (1-\alpha) \mathbf{X}^{\top} \mathbf{X} + \alpha \mathbf{I}_{pp}]^{-1} \mathbf{X}^{\top} \mathbf{Y}$ with shrinkage parameter $\alpha \in [0,1]$.

• Let $\alpha = \lambda ( 1+ \lambda)^{-1}$. Show that $\hat{\bbeta}(\alpha) = (1+\lambda) \hat{\bbeta}(\lambda)$ with $\hat{\bbeta}(\lambda)$ as in the introduction above.
• Use part a) and the parametrization of $\alpha$ provided there to show that some of the shrinkage has been undone. That is, show: $\mbox{Var}[ \mathbf{X} \hat{\bbeta}(\lambda)] \lt \mbox{Var}[ \mathbf{X} \hat{\bbeta}(\alpha)]$ for any $\lambda \gt 0$.
• Use the singular value decomposition of $\mathbf{X}$ to show that $\lim_{\alpha \downarrow 0} \hat{\bbeta}(\alpha) = (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^{\top} \mathbf{Y}$ (provided the inverse exists) and $\lim_{\alpha \uparrow 1} \hat{\bbeta}(\alpha) = \mathbf{X}^{\top} \mathbf{Y}$.
• Derive the expectation, variance and mean squared error of $\hat{\bbeta}(\alpha)$.
• Temporarily assume that $p=1$ and let $\mathbf{X}^{\top} \mathbf{X} = c$ for some $c \gt 0$. Then, $\mbox{MSE}[\hat{\bbeta}(\alpha)] = (c -1)^2 \beta^2 + \sigma^2 c [ (1-\alpha) c + \alpha ]^{-2}$. Does there exist an $\alpha \in (0,1)$ such that the mean squared error of $\hat{\bbeta}(\alpha)$ is smaller than that of its maximum likelihood counterpart? Motivate. Hint: distinguish between different values of $c$.
• Now assume $p \gt 1$ and an orthonormal design matrix. Specify the regularization path of the alternative ridge regression estimator $\hat{\bbeta}(\alpha)$.
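The identity in part a) is easily checked numerically; a sketch (Python/NumPy; the simulated data and the value of $\lambda$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 30, 5
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

lam = 3.0
alpha = lam / (1 + lam)   # the parametrization from part a)

b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
b_alt = np.linalg.solve((1 - alpha) * X.T @ X + alpha * np.eye(p), X.T @ y)
print(np.allclose(b_alt, (1 + lam) * b_ridge))  # True
```

The identity follows since $(1-\alpha) \mathbf{X}^{\top} \mathbf{X} + \alpha \mathbf{I}_{pp} = (1+\lambda)^{-1} (\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp})$ for this choice of $\alpha$.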
1. de Vlaming, R. and Groenen, P. J. F. (2015). The current and future use of ridge regression for prediction in quantitative genetics. BioMed Research International, Article ID 143712.

Consider the linear regression model $\mathbf{Y} = \mathbf{X} \bbeta + \vvarepsilon$ with $\vvarepsilon \sim \mathcal{N}(\mathbf{0}_n, \sigma^2 \mathbf{I}_{nn})$. Goldstein & Smith (1974) proposed a novel generalized ridge estimator of its $p$-dimensional regression parameter:

[$] \begin{eqnarray*} \hat{\bbeta}_m(\lambda) & = & [ (\mathbf{X}^{\top} \mathbf{X})^m + \lambda \mathbf{I}_{pp} ]^{-1} (\mathbf{X}^{\top} \mathbf{X})^{m-1} \mathbf{X}^{\top} \mathbf{Y}, \end{eqnarray*} [$]

with penalty parameter $\lambda \gt 0$ and ‘shape’ or ‘rate’ parameter $m$.

• Assume, only for part a), that $n=p$ and the design matrix is orthonormal. Show that, irrespective of the choice of $m$, this generalized ridge regression estimator coincides with the ‘regular’ ridge regression estimator.
• Consider the generalized ridge loss function $\| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \bbeta^{\top} \mathbf{A} \bbeta$ with $\mathbf{A}$ a $p \times p$-dimensional symmetric matrix. For what $\mathbf{A}$, does $\hat{\bbeta}_m(\lambda)$ minimize this loss function?
• Let $d_j$ be the $j$-th singular value of $\mathbf{X}$. Show that in $\hat{\bbeta}_m(\lambda)$ the singular values are shrunken as $(d_j^{2m} + \lambda)^{-1} d_j^{2m-1}$. Hint: use the singular value decomposition of $\mathbf{X}$.
• Do, for positive singular values, larger $m$ lead to more shrinkage? Hint: Involve particulars of the singular value in your answer.
• Express $\mathbb{E}[\hat{\bbeta}_m(\lambda)]$ in terms of the design matrix, model and shrinkage parameters ($\lambda$ and $m$).
• Express $\mbox{Var}[\hat{\bbeta}_m(\lambda)]$ in terms of the design matrix, model and shrinkage parameters ($\lambda$ and $m$).
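The estimator and its SVD representation from part c) can be compared numerically; a sketch (Python/NumPy; the simulated data and the values of $\lambda$ and $m$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 20, 4
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

def gen_ridge(lam, m):
    """Goldstein & Smith estimator [(X'X)^m + lam I]^{-1} (X'X)^{m-1} X'y."""
    XtX = X.T @ X
    A = np.linalg.matrix_power(XtX, m)
    B = np.linalg.matrix_power(XtX, m - 1)
    return np.linalg.solve(A + lam * np.eye(p), B @ X.T @ y)

# the same estimate via the SVD shrinkage factors (d^{2m} + lam)^{-1} d^{2m-1}
U, d, Vt = np.linalg.svd(X, full_matrices=False)
lam, m = 2.0, 3
factors = d ** (2 * m - 1) / (d ** (2 * m) + lam)
b_svd = Vt.T @ (factors * (U.T @ y))
print(np.allclose(gen_ridge(lam, m), b_svd))  # True
```

Plotting the factors $(d_j^{2m} + \lambda)^{-1} d_j^{2m-1}$ against $d_j$ for several $m$ also helps with part d).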