Jun 11'23

Section Logistic Regression discussed logistic regression as an ML method that learns a linear hypothesis map by minimizing the logistic loss. The logistic loss has computationally pleasant properties: it is smooth and convex. However, in some applications we might ultimately be interested in the accuracy or (equivalently) the average 0/1 loss.

Can we upper bound the average $0/1$ loss using the average logistic loss incurred by a given hypothesis on a given training set?
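A small numerical sketch of the kind of bound one can aim for (the training set and weight vector below are synthetic, purely for illustration): since $1 + e^{-z} \geq 2$ whenever the margin $z = \truelabel h(\featurevec)$ is non-positive, the 0/1 loss is pointwise upper bounded by the logistic loss divided by $\ln 2$, and the same holds for the averages.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic training set and linear hypothesis (illustration only)
X = rng.normal(size=(100, 3))       # feature vectors
y = rng.choice([-1.0, 1.0], size=100)  # binary labels
w = rng.normal(size=3)              # weight vector of the hypothesis

z = y * (X @ w)                     # margins y * h(x)
avg_01 = np.mean(z <= 0)            # average 0/1 loss
avg_log = np.mean(np.log(1.0 + np.exp(-z)))  # average logistic loss

# pointwise: 1{z <= 0} <= log(1 + exp(-z)) / log(2), hence on average too
print(avg_01, avg_log / np.log(2))
```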

Jun 11'23

Consider a predictor map $h(\feature)$ that is piecewise linear and consists of $1000$ pieces. Assume we want to represent this map by an artificial neural network (ANN) with a single hidden layer of neurons using the rectified linear unit (ReLU) activation function. The output layer consists of a single neuron with a linear activation function.

How many neurons must the ANN contain at least?

Jun 11'23

Consider an ANN with $\featuredim=10$ input neurons followed by three hidden layers consisting of $4$, $9$ and $3$ neurons, respectively. The three hidden layers are followed by an output layer consisting of a single neuron. Assume that all neurons use a linear activation function and no bias term.

What is the effective dimension $\effdim{\hypospace}$ of the hypothesis space $\hypospace$ that consists of all hypothesis maps that can be obtained from this ANN?
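A quick sanity check of the structure of this hypothesis space (the weight matrices below are random, for illustration only): with linear activations and no bias terms, the ANN is a composition of linear maps and therefore collapses to a single effective weight vector $\weights_{\rm eff} \in \mathbb{R}^{10}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# layer widths 10 -> 4 -> 9 -> 3 -> 1, linear activations, no bias terms
W1 = rng.normal(size=(4, 10))
W2 = rng.normal(size=(9, 4))
W3 = rng.normal(size=(3, 9))
w4 = rng.normal(size=(1, 3))

# the composed map is x -> w_eff @ x for a single effective weight vector
w_eff = w4 @ W3 @ W2 @ W1           # shape (1, 10)

x = rng.normal(size=10)
h_layerwise = w4 @ (W3 @ (W2 @ (W1 @ x)))  # evaluate layer by layer
print(np.allclose(h_layerwise, w_eff @ x))
```

Every hypothesis realized by this ANN is thus a linear map $\featurevec \mapsto \weights_{\rm eff}^{T}\featurevec$, which is the key observation for determining the effective dimension.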

Jun 11'23

Consider data points characterized by feature vectors $\featurevec \in \mathbb{R}^{\featuredim}$ and binary labels $\truelabel \in\{-1,1\}$.

We are interested in finding a good linear classifier, i.e., a classifier for which the set of feature vectors resulting in $h(\featurevec) = 1$ is a half-space.

Which of the methods discussed in this chapter aim at learning a linear classifier?

Jun 11'23

Consider a ML application involving data points with features $\featurevec \in \mathbb{R}^{6}$ and a numeric label $\truelabel \in \mathbb{R}$. We learn a hypothesis by minimizing the average loss incurred on a training set $\dataset = \big\{\big(\featurevec^{(1)},\truelabel^{(1)}\big),\ldots,\big(\featurevec^{(\samplesize)},\truelabel^{(\samplesize)}\big)\big\}$.

Which of the following ML methods uses a hypothesis space that depends on the dataset $\dataset$?

Jun 11'23

Consider the ANN in Figure fig_ANN using the ReLU activation function (see Figure fig_activate_neuron).

Show that there is a particular choice for the weights $\weights =(\weight_{1},\ldots,\weight_{9})^{T}$ such that the resulting hypothesis map $h^{(\weights)}(\feature)$ is a triangle as depicted in the figure below.

Can you also find a choice of the weights $\weights =(\weight_{1},\ldots,\weight_{9})^{T}$ that produces the same triangle shape if we replace the ReLU activation function with the linear function $\actfun(z) =10 \cdot z$?
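Since Figure fig_ANN is not reproduced here, the exact weights $\weight_{1},\ldots,\weight_{9}$ cannot be given; the sketch below only illustrates the general principle that a triangle-shaped (hat) map can be built from a few ReLU neurons, whereas any composition of linear activations remains a linear map and can never be a triangle.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# a hat-shaped map from three ReLU units (breakpoints 0, 1, 2 are an
# illustrative choice, not the weights of fig_ANN)
def h(x):
    return relu(x) - 2.0 * relu(x - 1.0) + relu(x - 2.0)

# h rises linearly on [0, 1], falls on [1, 2], and is 0 elsewhere
print(h(-0.5), h(0.5), h(1.0), h(1.5), h(2.5))
```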

Jun 11'23

Try to approximate the hypothesis map depicted in the figure below by an element of $\hypospace_{\rm Gauss}$ (see equ_def_Gauss_hypospace) using $\sigma=1/10$, $\featuredim=10$ and $\mu_{\featureidx} = -1 + (2\featureidx/10)$.
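The target map lives in a figure not reproduced here; as a stand-in, the sketch below fits a step function on $[-1,1]$ (an assumption purely for illustration) by least squares over $\hypospace_{\rm Gauss}$ with the prescribed $\sigma=1/10$ and centers $\mu_{\featureidx} = -1 + 2\featureidx/10$.

```python
import numpy as np

sigma = 1.0 / 10.0
mus = np.array([-1.0 + 2.0 * j / 10.0 for j in range(1, 11)])  # mu_1,...,mu_10

def phi(x):
    # Gaussian feature map: one basis function per center mu_j
    return np.exp(-((x[:, None] - mus[None, :]) ** 2) / (2.0 * sigma ** 2))

# stand-in target (the actual map is in the missing figure)
xs = np.linspace(-1.0, 1.0, 200)
target = np.where(xs > 0.0, 1.0, 0.0)

# least-squares fit of the weights of the Gaussian basis expansion
w, *_ = np.linalg.lstsq(phi(xs), target, rcond=None)
fit = phi(xs) @ w
print("mean squared error:", np.mean((fit - target) ** 2))
```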

Jun 11'23

Consider a $k$-NN method for a binary classification problem. We use $k=1$ and a given training set whose data points characterize humans. Each human is characterized by a feature vector and a label that indicates sensitive information (e.g., some sickness).

Assume that you have access to the feature vectors of the data points in the training set but not to their labels.

Can you infer the label value of a data point in the training set from the prediction obtained for its feature vector?
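A small demonstration of the privacy issue at stake (the dataset below is synthetic, for illustration only): with $k=1$, querying the classifier with the exact feature vector of a training data point returns that point's own label.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 4))        # feature vectors of the training set
y = rng.choice([-1, 1], size=20)    # sensitive labels (hidden from us)

def knn1_predict(x):
    # 1-NN prediction: label of the closest training point
    i = np.argmin(np.linalg.norm(X - x, axis=1))
    return y[i]

# the prediction for a training feature vector reveals its sensitive label
leaked = [knn1_predict(X[i]) for i in range(20)]
print(all(leaked[i] == y[i] for i in range(20)))
```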

Consider a binary classification problem involving data points that are characterized by feature vectors $\featurevec \in \mathbb{R}^{\featuredim}$ and binary labels $\truelabel \in \{-1,1\}$. We have access to a labeled training set $\dataset$ of size $\samplesize$.
Show that the $k$-NN hypothesis is obtained from the Bayes estimator by approximating the conditional probability distribution $\prob{\featurevec|\truelabel}$ via the density estimator [1](Sec. 2.5.2.)
$$\hat{p} (\featurevec | \truelabel ) \defeq (k/\samplesize) \frac{1}{{\rm vol}(R_{k})}.$$
Here, ${\rm vol}(R)$ denotes the volume of a ball with radius $R$, and $R_{k}$ is the distance between $\featurevec$ and its $k$-th nearest neighbor among the feature vectors of the data points in $\dataset$.
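A numerical sketch of this density estimator (the sample below is synthetic, for illustration): compute the distance $R_{k}$ from a query point to its $k$-th nearest neighbor and plug it into the formula, using the standard volume of a $d$-dimensional ball, ${\rm vol}(R) = \pi^{d/2} R^{d} / \Gamma(d/2 + 1)$.

```python
import numpy as np
from math import pi, gamma

def ball_volume(r, d):
    # volume of a d-dimensional Euclidean ball of radius r
    return (pi ** (d / 2) / gamma(d / 2 + 1)) * r ** d

rng = np.random.default_rng(2)
d, m, k = 2, 50, 3
X = rng.normal(size=(m, d))          # feature vectors of one label class

x = np.zeros(d)                      # query point
dists = np.sort(np.linalg.norm(X - x, axis=1))
R_k = dists[k - 1]                   # distance to the k-th nearest neighbor

p_hat = (k / m) / ball_volume(R_k, d)  # the estimator hat{p}(x | y)
print(p_hat)
```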