Jun 11'23

Consider data points that are characterized by a single numeric feature $\feature\!\in\!\mathbb{R}$ and a numeric label $\truelabel\!\in\mathbb{R}$. We use an ML method to learn a hypothesis map $h: \mathbb{R} \rightarrow \mathbb{R}$ based on a training set consisting of three data points

[$] $$(\feature^{(1)}=1,\truelabel^{(1)} = 3),\, (\feature^{(2)}=4,\truelabel^{(2)}=-1),\, (\feature^{(3)}=1,\truelabel^{(3)}=5).$$ [$]

Is there any chance for the ML method to learn a hypothesis map that perfectly fits the data points, i.e., such that $h\big( \feature^{(\sampleidx)} \big) = \truelabel^{(\sampleidx)}$ for $\sampleidx=1,\ldots,3$?

Hint: Try to visualize the data points in a scatterplot and various hypothesis maps (see Figure fig_three_maps_example).
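The key observation can also be checked programmatically: the first and third data points share the feature value $\feature=1$ but carry different labels. A minimal Python sketch of this consistency check, using the data values stated in the exercise:

```python
# Training set from the exercise: note that x^(1) = x^(3) = 1,
# while y^(1) = 3 differs from y^(3) = 5.
X = [1, 4, 1]
y = [3, -1, 5]

# A hypothesis map assigns a single value h(1); it cannot equal
# both 3 and 5, so a perfect fit is impossible whenever two data
# points share a feature value but have different labels.
conflict = any(
    X[i] == X[j] and y[i] != y[j]
    for i in range(len(X)) for j in range(i + 1, len(X))
)
print(conflict)  # True
```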


Consider a dataset of daily air temperatures $\feature^{(1)},\ldots,\feature^{(\samplesize)}$ measured at the Finnish Meteorological Institute (FMI) observation station “Utsjoki Nuorgam” between 01.12.2019 and 29.02.2020. Thus, $\feature^{(1)}$ is the daily temperature measured on 01.12.2019, $\feature^{(2)}$ is the daily temperature measured on 02.12.2019, and $\feature^{(\samplesize)}$ is the daily temperature measured on 29.02.2020. This dataset can be downloaded from the FMI. ML methods often determine a few parameters to characterize large collections of data points.

Compute, for the above temperature measurement dataset, the following quantities:

• the minimum $A \defeq \min_{\sampleidx=1,\ldots,\samplesize} \feature^{(\sampleidx)}$
• the maximum $B \defeq \max_{\sampleidx=1,\ldots,\samplesize} \feature^{(\sampleidx)}$
• the average $C \defeq (1/\samplesize) \sum_{\sampleidx=1,\ldots,\samplesize} \feature^{(\sampleidx)}$
• the standard deviation $D \defeq \sqrt{(1/\samplesize)\sum_{\sampleidx=1,\ldots,\samplesize} \big( \feature^{(\sampleidx)}-C \big)^2}$
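These four quantities can be computed with a few lines of NumPy; since the FMI measurements themselves are not reproduced here, the sketch uses placeholder values:

```python
import numpy as np

# Placeholder temperatures; in practice, load the downloaded
# Utsjoki Nuorgam measurements instead.
x = np.array([-21.5, -18.0, -25.3, -30.1, -12.4])

A = x.min()                          # minimum
B = x.max()                          # maximum
C = x.mean()                         # average
D = np.sqrt(np.mean((x - C) ** 2))   # standard deviation

print(A, B, C, D)
```

Note that $D$ uses the $1/\samplesize$ normalization from the exercise, which matches NumPy's default `np.std` (i.e., `ddof=0`).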

Consider the tiny desktop computer “RaspberryPI” equipped with a total of $8$ Gigabytes of memory [1]. We want to implement an ML algorithm that learns a hypothesis map that is represented by a deep artificial neural network (ANN) involving $\featurelen=10^6$ numeric parameters. Each parameter is quantized using $8$ bits ($=1$ Byte).

How many different hypotheses can we store at most on a RaspberryPI computer? (You can assume that $1 {\rm Gigabyte} = 10^{9} {\rm Bytes}$.)
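A quick arithmetic sketch under the stated assumptions ($1$ Gigabyte $= 10^{9}$ Bytes, one Byte per parameter):

```python
memory_bytes = 8 * 10**9        # 8 Gigabytes of memory
params_per_hypothesis = 10**6   # ANN with 10^6 numeric parameters
bytes_per_param = 1             # 8-bit quantization = 1 Byte

bytes_per_hypothesis = params_per_hypothesis * bytes_per_param
max_hypotheses = memory_bytes // bytes_per_hypothesis
print(max_hypotheses)  # 8000
```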

1. O. Dürr, Y. Pauchard, D. Browarnik, R. Axthelm, and M. Loeser. Deep learning on a Raspberry Pi for real time face recognition. 2015.

For some applications it can be a good idea to not learn a single hypothesis but to learn a whole ensemble of hypothesis maps $h^{(1)},\ldots,h^{(\augparam)}$. These hypotheses might even belong to different hypothesis spaces, $h^{(1)} \in \hypospace^{(1)},\ldots,h^{(\augparam)} \in \hypospace^{(\augparam)}$.

These hypothesis spaces can be arbitrary except that they are defined for the same feature space and label space. Given such an ensemble we can construct a new (“meta”) hypothesis $\tilde{h}$ by combining (or aggregating) the individual predictions obtained from each hypothesis,

[$] $$\label{equ_def_ensemble} \tilde{h}(\featurevec) \defeq a\big( h^{(1)}(\featurevec), \ldots,h^{(\augparam)}(\featurevec) \big).$$ [$]

Here, $a(\cdot)$ denotes some given (fixed) combination or aggregation function. One example for such an aggregation function is the average $a\big( h^{(1)}(\featurevec), \ldots,h^{(\augparam)}(\featurevec) \big) \defeq (1/\augparam) \sum_{\augidx=1}^{\augparam} h^{(\augidx)}(\featurevec)$. We obtain a new “meta” hypothesis space $\widetilde{\hypospace}$, that consists of all hypotheses of the form \eqref{equ_def_ensemble} with $h^{(1)} \in \hypospace^{(1)},\ldots,h^{(\augparam)} \in \hypospace^{(\augparam)}$.

Which conditions on the aggregation function $a(\cdot)$ and the individual hypothesis spaces $\hypospace^{(1)},\ldots,\hypospace^{(\augparam)}$ ensure that $\widetilde{\hypospace}$ contains each individual hypothesis space, i.e., $\hypospace^{(1)},\ldots,\hypospace^{(\augparam)} \subseteq \widetilde{\hypospace}$?
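As a toy illustration of \eqref{equ_def_ensemble}, the following sketch aggregates two hand-picked hypotheses (one linear, one quadratic) via the averaging function mentioned above; the maps themselves are made up purely for illustration:

```python
# Two individual hypotheses from different hypothesis spaces.
def h1(x):
    return 2.0 * x      # a linear map

def h2(x):
    return x ** 2       # a quadratic map

def h_tilde(x):
    # Aggregation function a(.): the average of the predictions.
    return 0.5 * (h1(x) + h2(x))

print(h_tilde(3.0))  # 0.5 * (6.0 + 9.0) = 7.5
```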


Consider the ML problem underlying a music information retrieval smartphone app [1]. Such an app aims at identifying a song title based on a short audio recording of a song interpretation. Here, the feature vector $\featurevec$ represents the sampled audio signal and the label $\truelabel$ is a particular song title out of a huge music database.

What is the length $\featuredim$ of the feature vector $\featurevec \in \mathbb{R}^{\featuredim}$ if its entries are the signal amplitudes of a $20$-second long recording which is sampled at a rate of $44$ kHz?
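The length follows from multiplying the recording duration by the sampling rate; a one-line check, reading “$44$ kHz” as $44 \cdot 10^{3}$ samples per second:

```python
duration_s = 20            # recording length in seconds
sample_rate_hz = 44_000    # samples per second

n = duration_s * sample_rate_hz   # one feature per signal amplitude
print(n)  # 880000
```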

1. A. Wang. An industrial-strength audio search algorithm. In International Symposium on Music Information Retrieval, Baltimore, MD, 2003.

Consider data points that are characterized by a feature vector $\featurevec \in \mathbb{R}^{10}$ and a vector-valued label $\labelvec \in \mathbb{R}^{30}$. Such vector-valued labels arise in multi-label classification problems. We want to predict the label vector using a linear predictor map

[$] $$\label{equ_lin_predictor_multilabel} \vh(\featurevec) = \mathbf{W} \featurevec \mbox{ with some matrix } \mathbf{W} \in \mathbb{R}^{30 \times 10}.$$ [$]

How many different linear predictors \eqref{equ_lin_predictor_multilabel} are there? $10$, $30$, $40$, or infinitely many?
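A short NumPy sketch of one such predictor; every choice of the $30 \times 10 = 300$ matrix entries yields its own linear map, and the matrix below is just one random example:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((30, 10))   # one particular parameter matrix
x = rng.standard_normal(10)         # a feature vector

y_hat = W @ x                       # vector-valued prediction h(x) = W x
print(W.size, y_hat.shape)          # 300 (30,)
```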


Consider the hypothesis space constituted by all linear maps $h(\featurevec) = \weights^{T} \featurevec$ with some weight vector $\weights \in \mathbb{R}^{\featuredim}$. We try to find the best linear map by minimizing the average squared error loss (the empirical risk) incurred on labeled data points (training set) $(\featurevec^{(1)},\truelabel^{(1)}),(\featurevec^{(2)},\truelabel^{(2)}),\ldots,(\featurevec^{(\samplesize)},\truelabel^{(\samplesize)})$.

Is it possible to represent the resulting empirical risk as a convex quadratic function $f(\weights) = \weights^{T} \mathbf{C} \weights + \vb^{T} \weights + c$?

If this is possible, how are the matrix $\mathbf{C}$, the vector $\vb$, and the constant $c$ related to the features and labels of the data points in the training set?
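A numerical sketch of the identity in question: stacking the feature vectors as the rows of a matrix $\mathbf{X}$ and the labels into a vector $\mathbf{y}$, the average squared error equals $\weights^{T} \mathbf{C} \weights + \vb^{T}\weights + c$ with $\mathbf{C} = (1/\samplesize)\mathbf{X}^{T}\mathbf{X}$, $\vb = -(2/\samplesize)\mathbf{X}^{T}\mathbf{y}$, and $c = (1/\samplesize)\mathbf{y}^{T}\mathbf{y}$; the random data below serves only to verify this.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 4
X = rng.standard_normal((m, n))   # rows are feature vectors
y = rng.standard_normal(m)        # labels
w = rng.standard_normal(n)        # arbitrary weight vector

risk = np.mean((X @ w - y) ** 2)  # empirical risk (avg. squared error)

C = X.T @ X / m
b = -2.0 / m * (X.T @ y)
c = y @ y / m
quadratic = w @ C @ w + b @ w + c

print(np.isclose(risk, quadratic))  # True
```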


Consider the linear hypothesis space consisting of linear maps $h^{(\weights)}(\featurevec) = \weights^{T} \featurevec$ that are parametrized by a weight vector $\weights$. We learn an optimal weight vector by minimizing the average squared error loss $f(\weights) = \emperror \big( h^{(\weights)} | \dataset\big)$ incurred by $h^{(\weights)}(\featurevec)$ on the training set $\dataset = \big(\featurevec^{(1)},\truelabel^{(1)}\big),\ldots,\big(\featurevec^{(\samplesize)},\truelabel^{(\samplesize)}\big)$.

Is it possible to reconstruct the dataset $\dataset$ just from knowing the function $f(\weights)$?

Is the resulting labeled training data unique or are there different training sets that could have resulted in the same empirical risk function?

Hint: Write down the training error $f(\weights)$ in the form $f(\weights) = \weights^{T} \mathbf{Q} \weights + \vb^{T} \weights + c$ with some matrix $\mathbf{Q}$, vector $\vb$, and scalar $c$ that might depend on the features and labels of the training data points.
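One observation relevant to the uniqueness question: the quantities $\mathbf{Q}$, $\vb$, and $c$ are sums over the data points, so reordering the training set leaves $f(\weights)$ unchanged. A small numerical sketch with random data for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 20, 3
X = rng.standard_normal((m, n))   # rows are feature vectors
y = rng.standard_normal(m)        # labels

X2, y2 = X[::-1], y[::-1]         # the same data points, reversed order

# The coefficients of f(w) agree for both orderings.
same_Q = np.allclose(X.T @ X / m, X2.T @ X2 / m)
same_b = np.allclose(-2 / m * (X.T @ y), -2 / m * (X2.T @ y2))
print(same_Q and same_b)  # True
```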

Show that any hypothesis map of the form $h(\feature) = \weight_{1} \feature +\weight_{0}$ can be obtained from the concatenation of a feature map $\featuremap: \feature \mapsto \rawfeaturevec$ with the linear map $\tilde{h}(\rawfeaturevec) \defeq \widetilde{\weights}^{T} \rawfeaturevec$ using parameter vector $\widetilde{\weights} = \big( \weight_{1}, \weight_{0} \big)^{T} \in \mathbb{R}^{2}$.
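A minimal sketch of this construction, with the feature map chosen as $\featuremap: \feature \mapsto (\feature, 1)^{T}$ and example parameter values:

```python
import numpy as np

w1, w0 = 2.5, -1.0                 # parameters of h(x) = w1 * x + w0
w_tilde = np.array([w1, w0])       # stacked parameter vector (w1, w0)^T

def phi(x):
    # Feature map x -> z = (x, 1)^T; the appended constant 1
    # turns the intercept w0 into an ordinary linear weight.
    return np.array([x, 1.0])

x = 3.0
print(w_tilde @ phi(x), w1 * x + w0)  # both equal 6.5
```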

Consider an ML application generating data points characterized by a scalar feature $\feature \in \mathbb{R}$ and a numeric label $\truelabel \in \mathbb{R}$. We construct a non-linear map by first transforming the feature $\feature$ into a new feature vector $\rawfeaturevec=(\featuremap_{1}(\feature),\featuremap_{2}(\feature),\featuremap_{3}(\feature),\featuremap_{4}(\feature))^{T} \in \mathbb{R}^{4}$.
The components $\featuremap_{1}(\feature),\ldots,\featuremap_{4}(\feature)$ are indicator functions of the intervals $[-10,-5), [-5,0), [0,5), [5,10]$. In particular, $\featuremap_{1}(\feature) = 1$ for $\feature \in [-10,-5)$ and $\featuremap_{1}(\feature)=0$ otherwise.
We obtain a hypothesis space $\hypospace^{(1)}$ by collecting all maps from the feature $\feature$ to the predicted label $\hat{\truelabel}$ that can be written as a weighted linear combination $\weights^{T}\rawfeaturevec$ (with some parameter vector $\weights$) of the transformed features. Which of the following hypothesis maps belong to $\hypospace^{(1)}$?
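The feature map and one member of $\hypospace^{(1)}$ can be sketched as follows; any such map is piecewise constant on the four intervals, and the weights below are arbitrary example values:

```python
import numpy as np

def phi(x):
    # Indicator features of the intervals [-10,-5), [-5,0), [0,5), [5,10].
    return np.array([
        1.0 if -10 <= x < -5 else 0.0,
        1.0 if -5 <= x < 0 else 0.0,
        1.0 if 0 <= x < 5 else 0.0,
        1.0 if 5 <= x <= 10 else 0.0,
    ])

w = np.array([1.0, -2.0, 3.0, 0.5])   # example parameter vector

def h(x):
    return w @ phi(x)                 # a member of H^(1)

print(h(-7.0), h(2.0))  # 1.0 3.0
```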