We perform best subset, forward stepwise, and backward stepwise selection on a single data set and obtain $p+1$ models containing $0, 1, \ldots, p$ predictors.
Which of the three models with $k$ predictors has the smallest training RSS?
Which of the three models with $k$ predictors has the smallest test RSS?
True or False
The predictors in the $k$-variable model identified by forward stepwise are a subset of the predictors in the $(k+1)$-variable model identified by forward stepwise selection.
The predictors in the $k$-variable model identified by backward stepwise are a subset of the predictors in the $(k+1)$-variable model identified by backward stepwise selection.
The predictors in the $k$-variable model identified by backward stepwise are a subset of the predictors in the $(k + 1)$-variable model identified by forward stepwise selection.
The predictors in the $k$-variable model identified by forward stepwise are a subset of the predictors in the $(k+1)$-variable model identified by backward stepwise selection.
The predictors in the $k$-variable model identified by best subset are a subset of the predictors in the $(k + 1)$-variable model identified by best subset selection.
For parts (a) through (c), indicate the correct choice and explain the answer.
The lasso, relative to least squares, is
More flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
More flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
Less flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
Less flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
Repeat (a) for ridge regression relative to least squares.
Repeat (b) for non-linear methods relative to least squares.
Suppose we estimate regression coefficients in a linear model by minimizing \begin{align} \sum_{i=1}^{n}\bigg( y_{i} - \beta_{0} - \sum_{j=1}^{p}\beta_{j}x_{ij} \bigg)^{2} \text{ subject to } \sum_{j=1}^{p}\mid \beta_{j} \mid \leq s \end{align} for a particular value of $s$. For the following parts, indicate which of the options is correct and explain.
As we increase $s$ from 0, the training RSS will
Increase initially, and then eventually start decreasing in an inverted U shape.
Decrease initially, and then eventually start increasing in a U shape.
Steadily increase.
Steadily decrease.
Remain constant.
Repeat (a) for test RSS.
Repeat (a) for variance.
Repeat (a) for (squared) bias.
Repeat (a) for the irreducible error.
Suppose we estimate regression coefficients in a linear model by minimizing \begin{align} \sum_{i=1}^{n}\bigg( y_{i} - \beta_{0} - \sum_{j=1}^{p}\beta_{j}x_{ij} \bigg)^{2} + \lambda \sum_{j=1}^{p}\beta_{j}^{2} \end{align} for a particular value of $\lambda$. For the following parts, indicate which of the options is correct and explain.
As we increase $\lambda$ from 0, the training RSS will
Increase initially, and then eventually start decreasing in an inverted U shape.
Decrease initially, and then eventually start increasing in a U shape.
Steadily increase.
Steadily decrease.
Remain constant.
Repeat (a) for test RSS.
Repeat (a) for variance.
Repeat (a) for (squared) bias.
Repeat (a) for the irreducible error.
It is well-known that ridge regression tends to give similar coefficient values to correlated variables, whereas the lasso may give quite different coefficient values to correlated variables. We will now explore this property in a very simple setting.
Suppose that $n = 2$, $p = 2$, $x_{11} = x_{12}$, $x_{21} = x_{22}$. Furthermore, suppose that $y_{1} +y_{2} = 0$ and $x_{11} +x_{21} = 0$ and $x_{12} +x_{22} = 0$, so that the estimate for the intercept in a least squares, ridge regression, or lasso model is zero: $\hat{\beta_{0}} = 0$.
Write out the ridge regression optimization problem in this setting.
Argue that in this setting, the ridge coefficient estimates satisfy $\hat{\beta_{1}} = \hat{\beta_{2}}$.
Write out the lasso optimization problem in this setting.
Argue that in this setting, the lasso coefficients $\hat{\beta_{1}}$ and $\hat{\beta_{2}}$ are not unique; in other words, there are many possible solutions to the optimization problem in (c). Describe these solutions.
Using the Bayesian formulation given below, derive the ridge and lasso formulations \begin{align} p(\beta|X,Y) \propto f(Y|\beta,X)p(\beta|X) = f(Y|\beta,X)p(\beta) \end{align}
Suppose $y_{i} = \beta_{0} + x_{i}^{T}\beta + \epsilon_{i}$ where $\epsilon_{i}$ is normal with mean $0$ and variance $\sigma^{2}$. Calculate the likelihood of the data.
Assume the following prior for $\beta$: $\beta_{1},\ldots,\beta_{p}$ are independent and identically distributed with the distribution $p(\beta) = \frac{1}{2b}\exp(-\mid \beta \mid/b)$. Write out the posterior for $\beta$ in this setting.
Argue that the lasso estimate is the mode of the posterior on $\beta$.
Now assume that $\beta_{1},\ldots,\beta_{p}$ are independent and identically distributed according to a normal distribution with mean $0$ and variance $c$. Find the posterior for $\beta$ in this case.
Argue that the ridge regression estimate is both the mode and the mean of the posterior distribution for $\beta$.
The best subset model will have the lowest training RSS, since for each size $k$ it searches over all possible subsets of $k$ predictors, while forward and backward stepwise consider only a greedy sequence of models.
Which model has the smallest test RSS cannot be determined in general: because best subset searches so many more models, it is more likely to overfit the training data, so a forward or backward stepwise model may well have lower test RSS.
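As a sanity check on the answer to (a), here is a minimal sketch (Python with NumPy, on simulated data; all names and sizes are illustrative choices, not part of the exercise) that computes, for each model size $k$, the smallest training RSS attainable by best subset search and by forward stepwise selection.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

def rss(X_sub, y):
    """Training RSS of an OLS fit (with intercept) on the given columns."""
    Xd = np.column_stack([np.ones(len(y)), X_sub])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return np.sum((y - Xd @ beta) ** 2)

# Best subset: examine every subset of size k and keep the smallest RSS.
best_subset = {k: min(rss(X[:, list(S)], y)
                      for S in itertools.combinations(range(p), k))
               for k in range(1, p + 1)}

# Forward stepwise: greedily add the variable that most reduces RSS.
forward, current = {}, []
for k in range(1, p + 1):
    best_j = min((j for j in range(p) if j not in current),
                 key=lambda j: rss(X[:, current + [j]], y))
    current.append(best_j)
    forward[k] = rss(X[:, current], y)

for k in range(1, p + 1):
    print(k, round(best_subset[k], 2), round(forward[k], 2))
# For every k, the best subset training RSS is <= the forward stepwise RSS.
```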
True. Forward stepwise builds the $(k+1)$-variable model by adding one variable to the $k$-variable model, so the smaller model's predictors are always contained in the larger one.
True. Backward stepwise builds the $k$-variable model by removing one variable from the $(k+1)$-variable model, so the containment again holds.
False. Forward and backward stepwise follow different search paths, so there is no guarantee that the backward $k$-variable model is contained in the forward $(k+1)$-variable model.
False, for the same reason as in the previous part.
False. Best subset re-searches all possible subsets at each size, so the $k$-variable model need not be nested in the $(k+1)$-variable model; the sketch below illustrates how to check this.
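The last claim can be probed empirically. The sketch below (again Python/NumPy on simulated data, purely illustrative) records the best subset of each size and checks whether consecutive models are nested; the check often passes on a given data set, but unlike forward or backward stepwise, nothing in the algorithm forces it to.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 5
X = rng.normal(size=(n, p))
X[:, 3] = X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=n)  # a nearly redundant predictor
y = X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(size=n)

def rss(cols):
    """Training RSS of an OLS fit (with intercept) on the chosen columns."""
    Xd = np.column_stack([np.ones(n), X[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return np.sum((y - Xd @ beta) ** 2)

# Best subset of each size, and whether consecutive choices are nested.
best = {k: min(itertools.combinations(range(p), k), key=rss)
        for k in range(1, p + 1)}
for k in range(1, p):
    print(k, best[k], "nested in", best[k + 1], ":",
          set(best[k]) <= set(best[k + 1]))
```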
iii. The lasso is less flexible than least squares: it shrinks the coefficients and sets some of them exactly to zero, which decreases variance at the cost of increased bias. Prediction accuracy improves when the increase in bias is smaller than the decrease in variance.
iii. Ridge regression is also less flexible than least squares, shrinking the coefficients toward zero, so the same bias-variance argument applies.
ii. Non-linear methods make fewer assumptions and are more flexible than least squares; this reduces bias at the expense of increased variance, so they help when the increase in variance is smaller than the decrease in bias.
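To illustrate the regime these answers describe, here is a hedged sketch using scikit-learn (assumed available; the data, sample sizes, and penalty values are arbitrary illustrative choices) comparing test error for least squares, the lasso, and ridge when there are many noise predictors and few observations.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, p = 80, 50                      # few observations relative to predictors
beta = np.zeros(p)
beta[:5] = 2.0                     # only 5 predictors truly matter
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(scale=3.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for name, model in [("least squares", LinearRegression()),
                    ("lasso (alpha=1)", Lasso(alpha=1.0, max_iter=10_000)),
                    ("ridge (alpha=10)", Ridge(alpha=10.0))]:
    model.fit(X_tr, y_tr)
    print(name, round(mean_squared_error(y_te, model.predict(X_te)), 2))
# The shrunken fits trade a little bias for a large reduction in variance,
# which is exactly the regime in which option iii predicts a gain.
```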
iv. At $s = 0$ all the coefficients are forced to zero, giving the null model. As we increase $s$, we expand the region of allowed $\beta$ and move steadily closer to the least squares fit, which has the minimum training RSS, so the training RSS steadily decreases.
ii. The null model at $s = 0$ has high bias. As $s$ increases, the test RSS initially decreases and then eventually begins to increase as the model is relaxed further and starts to overfit, giving a U shape.
iii. The null model has essentially zero variance, and the variance steadily increases as we relax the constraint (increase $s$).
iv. As $s$ increases the model becomes more flexible and fits the training data more closely, so the squared bias steadily decreases.
v. The irreducible error does not depend on the fitted model, so it remains constant.
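A rough empirical illustration of (a) and (b): scikit-learn parameterizes the lasso by the penalty weight `alpha` (the Lagrangian form), and a larger budget $s$ corresponds to a smaller `alpha`, so sweeping `alpha` from large to small traces the same path. All settings below are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 100, 50
beta = np.concatenate([rng.normal(size=5), np.zeros(p - 5)])
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(scale=3.0, size=n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Large alpha ~ small budget s; decreasing alpha ~ increasing s.
for alpha in [10, 3, 1, 0.3, 0.1, 0.03, 0.01]:
    m = Lasso(alpha=alpha, max_iter=10_000).fit(X_tr, y_tr)
    train_rss = np.sum((y_tr - m.predict(X_tr)) ** 2)
    test_rss = np.sum((y_te - m.predict(X_te)) ** 2)
    print(f"alpha={alpha:<5} train RSS={train_rss:9.1f} test RSS={test_rss:9.1f}")
# Training RSS should fall monotonically as the budget grows, while test RSS
# typically traces out the U shape described in (b).
```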
iii. At $\lambda = 0$ we have the least squares fit, which minimizes the training RSS; as $\lambda$ increases, the coefficients are shrunk further away from the least squares estimates, so the training RSS steadily increases.
ii. At $\lambda = 0$ the model has minimum training RSS but may overfit, giving a high test RSS. As $\lambda$ increases, the test RSS initially decreases and then eventually increases again once the model becomes too constrained, giving a U shape.
iv. As $\lambda$ increases, the model becomes less and less flexible, approaching the null model, so the variance steadily decreases.
iii. By the bias-variance tradeoff, the squared bias steadily increases as $\lambda$ increases; it is minimized at the least squares fit ($\lambda = 0$).
v. The irreducible error is independent of the fitted model, so it remains constant.
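The variance and bias claims in (c) and (d) can be estimated by simulation: regenerate the training set many times from a fixed true model, refit ridge at each $\lambda$, and look at how the predictions at fixed test points spread around their average and around the truth. A minimal sketch, with arbitrary simulation settings:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
p, n, sigma = 10, 50, 2.0
beta_true = rng.normal(size=p)
X_test = rng.normal(size=(20, p))        # fixed test inputs
f_test = X_test @ beta_true              # true regression function values

for lam in [0.01, 1.0, 10.0, 100.0, 1000.0]:
    preds = []
    for _ in range(200):                 # many independent training sets
        X = rng.normal(size=(n, p))
        y = X @ beta_true + rng.normal(scale=sigma, size=n)
        m = Ridge(alpha=lam, fit_intercept=False).fit(X, y)
        preds.append(m.predict(X_test))
    preds = np.array(preds)              # shape (200, 20)
    variance = preds.var(axis=0).mean()
    sq_bias = ((preds.mean(axis=0) - f_test) ** 2).mean()
    print(f"lambda={lam:7.2f}  variance={variance:6.3f}  bias^2={sq_bias:6.3f}")
# Variance should fall and squared bias should rise as lambda increases,
# matching answers (c) and (d); the irreducible error sigma^2 never changes.
```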
After the substitutions, the two data points become $((x_{11}, x_{11}), y_{1})$ and $((-x_{11}, -x_{11}), -y_{1})$. The ridge optimization problem then becomes \begin{align} RSS &= \sum_{i=1}^{2}\bigg( y_{i} - \sum_{j=1}^{2} \beta_{j}x_{ij} \bigg)^{2} + \lambda \sum_{j=1}^{2}\beta_{j}^{2}\newline &= 2(y_{1}-x_{11}(\beta_{1}+\beta_{2}))^{2} + \lambda(\beta_{1}^{2} + \beta_{2}^{2}) \end{align}
Taking partial derivatives with respect to $\beta_{1}$ and $\beta_{2}$ and setting them to zero, \begin{align} 4(y_{1} - x_{11}(\beta_{1} + \beta_{2}))(-x_{11}) + 2\lambda \beta_{1} &= 0\newline 4(y_{1} - x_{11}(\beta_{1} + \beta_{2}))(-x_{11}) + 2\lambda \beta_{2} &= 0 \end{align} Subtracting the second equation from the first gives $2\lambda(\beta_{1} - \beta_{2}) = 0$, so for $\lambda > 0$ we must have $\hat{\beta_{1}} = \hat{\beta_{2}}$.
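A quick numerical check of this claim, minimizing the toy ridge objective with SciPy for arbitrary illustrative values of $y_{1}$, $x_{11}$, and $\lambda$:

```python
import numpy as np
from scipy.optimize import minimize

x11, y1, lam = 1.5, 2.0, 0.7           # arbitrary illustrative values

def ridge_obj(b):
    """Toy ridge objective: 2*(y1 - x11*(b1+b2))^2 + lam*(b1^2 + b2^2)."""
    b1, b2 = b
    return 2 * (y1 - x11 * (b1 + b2)) ** 2 + lam * (b1 ** 2 + b2 ** 2)

res = minimize(ridge_obj, x0=[0.0, 0.0])
print(res.x)   # the two entries should agree up to numerical tolerance
```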
In a similar fashion, the lasso optimization problem becomes \begin{align} RSS = 2(y_{1}-x_{11}(\beta_{1}+\beta_{2}))^{2} + \lambda(\mid \beta_{1} \mid + \mid \beta_{2} \mid) \end{align}
Taking partial derivatives with respect to $\beta_{1}$ and $\beta_{2}$ (for $\beta_{j} \neq 0$), \begin{align} 4(y_{1} - x_{11}(\beta_{1} + \beta_{2}))(-x_{11}) + \lambda \frac{\mid \beta_{1} \mid}{\beta_{1}} &= 0\newline 4(y_{1} - x_{11}(\beta_{1} + \beta_{2}))(-x_{11}) + \lambda \frac{\mid \beta_{2} \mid}{\beta_{2}} &= 0 \end{align} These conditions only require $\frac{\mid \beta_{1} \mid}{\beta_{1}} = \frac{\mid \beta_{2} \mid}{\beta_{2}}$, i.e. that the two coefficients share the same sign, and they determine only the sum $\beta_{1} + \beta_{2}$. Since the objective depends on the coefficients only through $\beta_{1} + \beta_{2}$ and $\mid \beta_{1} \mid + \mid \beta_{2} \mid$, any split of the optimal sum between $\beta_{1}$ and $\beta_{2}$ with both coefficients of the same sign achieves the same objective value, so the solution is not unique.
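The non-uniqueness is easy to see numerically: for a fixed sum $\beta_{1} + \beta_{2}$ and coefficients of the same sign, the toy lasso objective does not change as the sum is split differently between the two coefficients. A small sketch with arbitrary illustrative values:

```python
import numpy as np

x11, y1, lam = 1.5, 2.0, 0.7            # same illustrative values as above

def lasso_obj(b1, b2):
    """Toy lasso objective: 2*(y1 - x11*(b1+b2))^2 + lam*(|b1| + |b2|)."""
    return 2 * (y1 - x11 * (b1 + b2)) ** 2 + lam * (abs(b1) + abs(b2))

s = 0.8                                 # any fixed value of beta1 + beta2
for t in np.linspace(0.0, s, 5):        # split s between the two coefficients
    print(round(lasso_obj(t, s - t), 6))
# Every split with both coefficients >= 0 gives the same objective value,
# so the minimizer is not unique: only the sum beta1 + beta2 is determined.
```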
Given $X$ and $\beta$, each $y_{i}$ is normally distributed, since it is a constant, $\beta_{0} + x_{i}^{T}\beta$, plus normal noise with variance $\sigma^{2}$. The likelihood of the data is therefore \begin{align} f(Y|X,\beta) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi \sigma^{2}}}\exp\bigg(-\frac{(y_{i}-\beta_{0}-x_{i}^{T}\beta)^{2}}{2\sigma^{2}}\bigg) \end{align}
The prior for $\beta$ is the product of the individual Laplace densities of the $\beta_{j}$. The posterior therefore becomes \begin{align} p(\beta|X,Y) &\propto f(Y|X,\beta)\prod_{j=1}^{p}\frac{1}{2b}\exp\bigg(-\frac{\mid \beta_{j} \mid}{b}\bigg)\newline \ln p(\beta|X,Y) &= \text{const} + \sum_{i=1}^{n} -\frac{(y_{i}-\beta_{0}-x_{i}^{T}\beta)^{2}}{2\sigma^{2}} + \sum_{j=1}^{p} -\frac{\mid \beta_{j} \mid}{b} \end{align} Dropping the constant and multiplying by $-2\sigma^{2}$, maximizing the log-posterior is equivalent to minimizing \begin{align} \sum_{i=1}^{n} (y_{i}-\beta_{0}-x_{i}^{T}\beta)^{2} + \frac{2\sigma^{2}}{b}\sum_{j=1}^{p} \mid \beta_{j} \mid \end{align} which is the lasso optimization problem with $\lambda = 2\sigma^{2}/b$.
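A hedged numerical check of this equivalence (simulated data, $\beta_{0} = 0$ for simplicity, and a generic optimizer rather than a dedicated lasso solver): minimizing the negative log-posterior under the double-exponential prior should give essentially the same coefficients as minimizing the lasso objective with $\lambda = 2\sigma^{2}/b$.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p, sigma, b = 100, 5, 1.0, 0.5
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(scale=sigma, size=n)   # beta_0 = 0 for simplicity

def neg_log_posterior(beta):
    # Gaussian log-likelihood plus Laplace log-prior, constants dropped.
    return (np.sum((y - X @ beta) ** 2) / (2 * sigma ** 2)
            + np.sum(np.abs(beta)) / b)

def lasso_objective(beta):
    # Lasso objective; it equals 2*sigma^2 times the negative log-posterior.
    lam = 2 * sigma ** 2 / b
    return np.sum((y - X @ beta) ** 2) + lam * np.sum(np.abs(beta))

map_est = minimize(neg_log_posterior, np.zeros(p), method="Powell").x
lasso_est = minimize(lasso_objective, np.zeros(p), method="Powell").x
print(np.round(map_est, 3))
print(np.round(lasso_est, 3))   # the two solutions should essentially coincide
```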
The $\beta$ that maximizes the posterior $p(\beta|X,Y)$ is, by definition, the mode of that distribution, and by part (b) this maximization is equivalent to solving the lasso optimization problem. Hence the lasso estimate is the posterior mode.
The posterior has the same form as above, now with a normal prior: \begin{align} p(\beta|X,Y) &\propto f(Y|X,\beta)\prod_{j=1}^{p}\frac{1}{\sqrt{2\pi c}}\exp\bigg(-\frac{\beta_{j}^{2}}{2c}\bigg)\newline \ln p(\beta|X,Y) &= \text{const} + \sum_{i=1}^{n} -\frac{(y_{i}-\beta_{0}-x_{i}^{T}\beta)^{2}}{2\sigma^{2}} + \sum_{j=1}^{p} -\frac{\beta_{j}^{2}}{2c} \end{align} Maximizing this is equivalent to minimizing $\sum_{i=1}^{n} (y_{i}-\beta_{0}-x_{i}^{T}\beta)^{2} + \frac{\sigma^{2}}{c}\sum_{j=1}^{p}\beta_{j}^{2}$, which is the ridge optimization problem with $\lambda = \sigma^{2}/c$. Moreover, since the log-posterior is quadratic in $\beta$, the posterior is a normal distribution, and for a normal distribution the mode and the mean coincide; hence the ridge estimate is both the mode and the mean of the posterior.
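A similar check for the ridge case (simulated data, $\beta_{0} = 0$ for simplicity): the maximizer of this log-posterior should match the closed-form ridge solution $(X^{T}X + \lambda I)^{-1}X^{T}y$ with $\lambda = \sigma^{2}/c$.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p, sigma, c = 100, 5, 1.0, 0.5
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(scale=sigma, size=n)   # beta_0 = 0 for simplicity

def neg_log_posterior(beta):
    # Gaussian log-likelihood plus normal (mean 0, variance c) log-prior.
    return (np.sum((y - X @ beta) ** 2) / (2 * sigma ** 2)
            + np.sum(beta ** 2) / (2 * c))

lam = sigma ** 2 / c
ridge_closed_form = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
map_est = minimize(neg_log_posterior, np.zeros(p)).x
print(np.round(ridge_closed_form, 3))
print(np.round(map_est, 3))   # the MAP estimate matches the ridge solution
```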