This chapter discusses an important family of optimization methods for solving empirical risk minimization (ERM) with a parametrized hypothesis space (see Chapter Computational and Statistical Aspects of ERM ). The common theme of these methods is to construct local approximations of the objective function in ERM. These local approximations are obtained from the gradients of the objective function. Gradient-based methods have recently gained popularity as an efficient technique for tuning the parameters of deep artificial neural networks (ANNs), also known as deep nets, within deep learning methods [1].

Section The Basic Gradient Step discusses gradient descent (GD) as the most basic form of gradient-based methods. The idea of GD is to update the weights by locally optimizing a linear approximation of the objective function. This update is referred to as a GD step and provides the main algorithmic primitive of gradient-based methods. One key challenge for a good use of gradient-based methods is choosing the appropriate extent of the local approximations. This extent is controlled by a step size parameter that is used in the basic GD step. Section Choosing the Learning Rate discusses some approaches for choosing this step size. Section When To Stop? discusses a second main challenge in using gradient-based methods, which is to decide when to stop repeating the GD steps.

Section GD for Linear Regression and Section GD for Logistic Regression spell out GD for two instances of ERM arising from linear regression and logistic regression, respectively. The beneficial effect of data normalization on the convergence speed of gradient-based methods is briefly discussed in Section Data Normalization . As explained in Section Stochastic GD , the use of stochastic approximations enables gradient-based methods for applications involving massive amounts of data (“big data”). Section Advanced Gradient-Based Methods develops some intuition for advanced gradient-based methods that exploit the information gathered during previous iterations.

Let us rewrite empirical risk minimization (ERM) as the optimization problem

[$] $$\label{equ_obj_emp_risk_GD} \min_{\weights \in \mathbb{R}^{\featuredim}} f(\weights) \defeq(1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \loss{(\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)})}{h^{(\weights)}}.$$ [$]

From now on we tacitly assume that each individual loss

[$] $$\label{equ_def_componentn_loss_gd} f_{\sampleidx}(\weights) \defeq \loss{(\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)})}{h^{(\weights)}}$$ [$]

arising in \eqref{equ_obj_emp_risk_GD} represents a differentiable function of the parameter vector $\weights$. Trivially, differentiability of the components \eqref{equ_def_componentn_loss_gd} implies differentiability of the overall objective function $f(\weights)$ \eqref{equ_obj_emp_risk_GD}.

Two important examples of ERM involving such differentiable loss functions are linear regression and logistic regression. In contrast, the hinge loss used by the support vector machine (SVM) results in a non-differentiable objective function $f(\weights)$ \eqref{equ_obj_emp_risk_GD}. However, it is possible to (significantly) extend the scope of gradient-based methods to non-differentiable functions by replacing the concept of a gradient with that of a subgradient.

Gradient-based methods are iterative. They construct a sequence of parameter vectors $\weights^{(0)} \rightarrow \weights^{(1)} \rightarrow \dots$ that hopefully converges to a minimizer $\overline{\weights}$ of $f(\weights)$,

[$] $$\label{equ_def_opt_weight} f(\overline{\weights}) = \bar{f} \defeq \min_{\weights \in \mathbb{R}^{\featuredim}} f(\weights).$$ [$]

Note that there might be several different optimal parameter vectors $\overline{\weights}$ that satisfy the optimality condition \eqref{equ_def_opt_weight}. We want the sequence generated by a gradient-based method to converge towards any of them. With increasing iteration $\itercntr$, the vectors $\weights^{(\itercntr)}$ (hopefully) become increasingly accurate approximations of a minimizer $\overline{\weights}$ in \eqref{equ_def_opt_weight}.

Since the objective function $f(\weights)$ is differentiable, we can approximate it locally around the vector $\weights^{(\itercntr)}$ using a tangent hyperplane that passes through the point $\big(\weights^{(\itercntr)},f\big(\weights^{(\itercntr)}\big) \big) \in \mathbb{R}^{\featuredim+1}$. The normal vector of this hyperplane is given by $\mathbf{n} = (\nabla f\big(\weights^{(\itercntr)}\big) ,-1)$ (see Figure fig_smooth_function). The first component of the normal vector is the gradient $\nabla f(\weights)$ of the objective function $f(\weights)$ evaluated at the point $\weights^{(\itercntr)}$. Our main use of the gradient $\nabla f\big(\weights^{(\itercntr)}\big)$ will be to construct a linear approximation [2]

[$] $$\label{equ_linear_approx_diff} f(\weights) \approx f\big(\weights^{(\itercntr)}\big) + \big(\weights-\weights^{(\itercntr)} \big)^{T} \nabla f\big(\weights^{(\itercntr)}\big) \mbox{ for }\weights \mbox{ sufficiently close to } \weights^{(\itercntr)}.$$ [$]

Requiring the objective function $f(\weights)$ in \eqref{equ_linear_approx_diff} to be differentiable is the same as requiring the validity of the local linear approximation \eqref{equ_linear_approx_diff} at every possible vector $\weights^{(\itercntr)}$. It turns out that differentiability alone is not very helpful for the design and analysis of gradient based methods.

Gradient based methods are most useful for finding the minimum of differentiable functions $f(\weights)$ that are also smooth. Informally, a differentiable function $f(\weights)$ is smooth if the gradient $\nabla f(\weights)$ does not change too rapidly as a function of the argument $\weights$. A quantitative version of the smoothness concept refers to a function as $\beta$-smooth if its gradient is Lipschitz continuous with Lipschitz constant $\beta\gt 0$ [3](Sec. 3.2),

[$] $$\label{equ_def_beta_smooth} \| \nabla f(\weights) - \nabla f(\weights') \| \leq \beta \| \weights - \weights' \|.$$ [$]

Note that if a function $f(\weights)$ is $\beta$ smooth, it is also $\beta'$ smooth for any $\beta' \gt \beta$. The smallest $\beta$ such that \eqref{equ_def_beta_smooth} is satisfied depends on the features and labels of data points used in \eqref{equ_obj_emp_risk_GD} as well as on the choice for the loss function.

Consider a current guess or approximation $\weights^{(\itercntr)}$ for the optimal parameter vector $\overline{\weights}$ \eqref{equ_def_opt_weight}. We would like to find a new (better) parameter vector $\weights^{(\itercntr+1)}$ that has smaller objective value $f(\weights^{(\itercntr+1)}) \lt f\big(\weights^{(\itercntr)}\big)$ than the current guess $\weights^{(\itercntr)}$. The approximation \eqref{equ_linear_approx_diff} suggests to choose the next guess $\weights = \weights^{(\itercntr+1)}$ such that $\big(\weights^{(\itercntr+1)}-\weights^{(\itercntr)} \big)^{T} \nabla f\big(\weights^{(\itercntr)}\big)$ is negative. We can achieve this by the GD step

[$] $$\label{equ_def_GD_step} \weights^{(\itercntr\!+\!1)} = \weights^{(\itercntr)} - \lrate \nabla f(\weights^{(\itercntr)})$$ [$]

with a sufficiently small step size $\lrate\gt0$. Figure fig_basic_GD_step illustrates the GD step \eqref{equ_def_GD_step} which is the elementary computation of gradient based methods.
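To make the GD step \eqref{equ_def_GD_step} concrete, the following minimal Python sketch applies it to the toy objective $f(\weights) = (1/2) \| \weights \|^{2}$, whose gradient is $\nabla f(\weights) = \weights$; this toy function is a hypothetical stand-in for the ERM objective \eqref{equ_obj_emp_risk_GD}:

```python
import numpy as np

def gd_step(w, grad_f, lrate):
    """One GD step: w_new = w - lrate * grad_f(w)."""
    return w - lrate * grad_f(w)

# toy objective f(w) = (1/2) * ||w||^2 with gradient grad_f(w) = w;
# its unique minimizer is w = 0
grad_f = lambda w: w

w = np.array([2.0, -1.0])              # initial guess w^(0)
for _ in range(100):                   # repeat the GD step
    w = gd_step(w, grad_f, lrate=0.1)
# after 100 steps, w is very close to the minimizer 0
```

With the step size $0.1$, each GD step shrinks the iterate by the factor $0.9$, so the sequence converges geometrically to the minimizer.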

The step size $\lrate$ in \eqref{equ_def_GD_step} must be sufficiently small to ensure the validity of the linear approximation \eqref{equ_linear_approx_diff}. In the context of ML, the GD step size parameter $\lrate$ is also referred to as learning rate. Indeed, the step size $\lrate$ determines the amount of progress during a GD step towards learning the optimal parameter vector $\overline{\weights}$.

We need to emphasize that the interpretation of the step size $\lrate$ as a learning rate is only useful when the step size is sufficiently small. Indeed, when increasing the step size $\lrate$ in \eqref{equ_def_GD_step} beyond a critical value (that depends on the properties of the objective function $f(\weights)$), the iterates \eqref{equ_def_GD_step} move away from the optimal parameter vector $\overline{\weights}$. Nevertheless, from now on we will consistently use the term learning rate for $\lrate$.

The idea of gradient-based methods is to repeat the GD step \eqref{equ_def_GD_step} for a sufficient number of iterations (repetitions) to obtain a sufficiently accurate approximation of the optimal parameter vector $\overline{\weights}$ \eqref{equ_def_opt_weight}. It turns out that this is feasible for a sufficiently small learning rate and if the objective function is smooth and convex. Section Choosing the Learning Rate discusses precise conditions on the learning rate such that the iterates produced by the GD step converge to the optimum parameter vector, i.e., $\lim_{\itercntr \rightarrow \infty} f(\weights^{(\itercntr)}) = f\big(\overline{\weights}\big)$.

To implement the GD step \eqref{equ_def_GD_step} we need to choose a useful learning rate $\lrate$. Moreover, executing the GD step \eqref{equ_def_GD_step} requires computing the gradient $\nabla f(\weights^{(\itercntr)})$. Both tasks can be computationally challenging, as discussed in Sections Choosing the Learning Rate and Stochastic GD . For the objective function \eqref{equ_obj_emp_risk_GD} obtained in linear regression and logistic regression, we can obtain closed-form expressions for the gradient $\nabla f(\weights)$ (see Sections GD for Linear Regression and GD for Logistic Regression ).

In general, we do not have closed-form expressions for the gradient of the objective function \eqref{equ_obj_emp_risk_GD} arising from a non-linear hypothesis space. One example of such a hypothesis space is obtained from an ANN, which is used by deep learning methods (see Section Deep Learning ). The empirical success of deep learning methods might be partially attributed to the availability of an efficient algorithm for computing the gradient $\nabla f(\weights^{(\itercntr)})$. This algorithm is known as \index{back-propagation}back-propagation [1].

## Choosing the Learning Rate

The choice of the learning rate $\lrate$ in the GD step \eqref{equ_def_GD_step} has a significant impact on the performance of Algorithm alg:gd_linreg. If we choose the learning rate $\lrate$ too large, the GD steps \eqref{equ_def_GD_step} diverge (see Figure fig_small_large_lrate-(b)) and, in turn, Algorithm alg:gd_linreg fails to deliver a satisfactory approximation of the optimal weights $\overline{\weights}$.

If we choose the learning rate $\lrate$ too small (see Figure fig_small_large_lrate-(a)), the updates \eqref{equ_def_GD_step} make only very little progress towards approximating the optimal parameter vector $\overline{\weights}$. In applications that require real-time processing of data streams, it might be possible to repeat the GD steps only a moderate number of times. Thus, if the learning rate is chosen too small, Algorithm alg:gd_linreg will fail to deliver a good approximation within an acceptable number of iterations (runtime of Algorithm alg:gd_linreg).

Finding a (nearly) optimal choice for the learning rate $\lrate$ of GD can be a challenging task. Many sophisticated approaches for tuning the learning rate of gradient-based methods have been proposed [1](Chapter 8). A detailed discussion of these approaches is beyond the scope of this book. We will instead discuss two sufficient conditions on the learning rate which guarantee the convergence of the GD iterations to the optimum of a smooth and convex objective function \eqref{equ_obj_emp_risk_GD}.

The first condition applies to an objective function that is $\beta$-smooth (see \eqref{equ_def_beta_smooth}) with known constant $\beta$ (not necessarily the smallest constant such that \eqref{equ_def_beta_smooth} holds). Then, the iterates $\weights^{(\itercntr)}$ generated by the GD step \eqref{equ_def_GD_step} with a learning rate

[$] $$\label{equ_suff_cond_lrate_beta} \lrate \lt 2/\beta,$$ [$]

satisfy [4](Thm. 2.1.13)

[$] $$\label{equ_convergence_rate_inverse_k-GD} f \big( \weights^{(\itercntr)} \big) - \bar{f} \leq \frac{2(f \big( \weights^{(0)} \big) - \bar{f}) \sqeuclnorm{\weights^{(0)} -\overline{\weights}}}{2\sqeuclnorm{ \weights^{(0)} - \overline{\weights}}+\itercntr(f \big( \weights^{(0)} \big) -\bar{f}) \lrate(2-\beta\lrate)}.$$ [$]

The bound \eqref{equ_convergence_rate_inverse_k-GD} not only tells us that the GD iterates converge to an optimal parameter vector but also characterizes the convergence speed or rate. The sub-optimality $f \big( \weights^{(\itercntr)} \big) - \min_{\weights} f(\weights)$ in terms of the objective function value decreases inversely (like “$1/\itercntr$”) with the number $\itercntr$ of GD steps \eqref{equ_def_GD_step}. Convergence bounds like \eqref{equ_convergence_rate_inverse_k-GD} can be used to specify a stopping criterion, i.e., to determine the number of GD steps to be computed (see Section When To Stop? ).

The condition \eqref{equ_suff_cond_lrate_beta} and the bound \eqref{equ_convergence_rate_inverse_k-GD} are only useful if we can verify the $\beta$-smoothness assumption \eqref{equ_def_beta_smooth} for a reasonable constant $\beta$. Verifying \eqref{equ_def_beta_smooth} only for a very large $\beta$ results in the bound \eqref{equ_convergence_rate_inverse_k-GD} being too loose (pessimistic). When we use a loose bound \eqref{equ_convergence_rate_inverse_k-GD} to determine the number of GD steps, we might compute an unnecessarily large number of GD steps \eqref{equ_def_GD_step}.

One elegant approach to verify if a differentiable function $f(\weights)$ is $\beta$-smooth \eqref{equ_def_beta_smooth} is via its Hessian matrix $\nabla^{2} f(\weights) \in \mathbb{R}^{\featuredim \times \featuredim}$, if it exists. The entries of this Hessian matrix are the second-order partial derivatives $\frac{\partial^{2} f(\weights)}{\partial \weight_{\featureidx} \partial \weight_{\featureidx'}}$ of the function $f(\weights)$.

Consider an objective function $f(\weights)$ \eqref{equ_obj_emp_risk_GD} that is convex and twice-differentiable with positive semi-definite (psd) Hessian $\nabla^{2} f(\weights)$. If the maximum eigenvalue $\eigval{\rm max} \big( \nabla^{2} f(\weights) \big)$ of the Hessian is upper bounded uniformly (for all $\weights$) by the constant $\beta\gt0$, then $f(\weights)$ is $\beta$ smooth \eqref{equ_def_beta_smooth} [3]. This implies, in turn via \eqref{equ_suff_cond_lrate_beta}, the sufficient condition

[$] $$\label{equ_GD_conv_guarantee} \lrate \leq \frac{2}{\eigval{\rm max} \big( \nabla^{2} f(\weights) \big) }\mbox{ for all } \weights \in \mathbb{R}^{\featuredim}$$ [$]

for the GD learning rate such that the GD steps converge to the minimum of the objective function $f(\weights)$.

It is important to note that the condition \eqref{equ_GD_conv_guarantee} guarantees convergence of the GD steps for any possible initialization $\weights^{(0)}$. Note that the usefulness of the condition \eqref{equ_GD_conv_guarantee} depends on the difficulty of computing the Hessian matrix $\nabla^{2} f(\weights)$. Section GD for Linear Regression and Section GD for Logistic Regression will present closed-form expressions for the Hessian of the objective function \eqref{equ_obj_emp_risk_GD} obtained for linear regression and logistic regression. These closed-form expressions involve the feature vectors and labels of the data points in the training set $\dataset = \big\{ \big(\featurevec^{(1)},\truelabel^{(1)} \big),\ldots,\big(\featurevec^{(\samplesize)},\truelabel^{(\samplesize)} \big) \big\}$ used in \eqref{equ_obj_emp_risk_GD}.

While it might be computationally challenging to determine the maximum (in absolute value) eigenvalue $\eigval{\rm max} \big( \nabla^{2} f(\weights) \big)$ for arbitrary $\weights$, it might still be feasible to find an upper bound $U$ for it. If we know such an upper bound $U \geq \eigval{\rm max} \big( \nabla^{2} f(\weights) \big)$ (valid for all $\weights \in \mathbb{R}^{\featuredim}$), the learning rate $\lrate =1/U$ still ensures convergence of the GD steps \eqref{equ_def_GD_step}.

Up to now we have assumed a fixed (constant) learning rate $\lrate$ that is used for each repetition of the GD step \eqref{equ_def_GD_step}. However, it might be useful to vary or adjust the learning rate as the GD steps \eqref{equ_def_GD_step} proceed. Thus, we might use a different learning rate $\lrate_{\itercntr}$ for each iteration $\itercntr$ of \eqref{equ_def_GD_step}. Such a varying learning rate is useful for a variant of GD that uses stochastic approximation (see Section Stochastic GD ). However, we might also use a varying learning rate to avoid the burden of verifying $\beta$-smoothness \eqref{equ_def_beta_smooth} with a tight (small) $\beta$. The GD steps \eqref{equ_def_GD_step} with the learning rate $\lrate_{\itercntr} \defeq 1/\itercntr$ converge to the optimal parameter vector $\overline{\weights}$ as long as we can ensure a bounded gradient $\| \nabla f(\weights) \| \leq U$ on a sufficiently large neighbourhood of $\overline{\weights}$ [4].
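As a small illustration (on a hypothetical toy objective, not one of the ERM instances discussed here), the following sketch runs GD with the diminishing learning rate $\lrate_{\itercntr} = 1/\itercntr$ on $f(w) = w^{2}/4$, whose gradient $f'(w) = w/2$ is bounded on any bounded neighbourhood of the minimizer $w = 0$:

```python
# GD with diminishing learning rate lrate_k = 1/k, applied to the
# toy objective f(w) = w**2 / 4 with gradient f'(w) = w / 2
w = 5.0                             # initial guess w^(0)
for k in range(1, 100001):          # iterations k = 1, 2, ...
    w = w - (1.0 / k) * (w / 2)     # GD step with learning rate 1/k
# w approaches the minimizer 0, although only slowly
```

Note the price of the diminishing learning rate: convergence is much slower than with a well-tuned constant learning rate, but no smoothness constant $\beta$ had to be verified.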

## When To Stop?

One main challenge in the successful application of GD is to decide when to stop iterating (or repeating) the GD step \eqref{equ_def_GD_step}. Perhaps the simplest approach is to monitor the decrease in the objective function $f(\weights^{(\itercntr)})$ and to stop if the decrease $f(\weights^{(\itercntr-1)})-f(\weights^{(\itercntr)})$ falls below a threshold. However, the ultimate goal of a ML method is not to minimize the objective function $f(\weights)$ in \eqref{equ_obj_emp_risk_GD}, which merely represents the average loss of a hypothesis $h^{(\weights)}$ incurred on a training set. Rather, the goal is to learn a parameter vector $\weights$ such that the resulting hypothesis accurately predicts any data point, including those outside the training set.

We will see in Chapter Model Validation and Selection how to use validation techniques to probe a hypothesis outside the training set. These validation techniques provide a validation error $\tilde{f}(\weights)$ that estimates the average loss of a hypothesis with parameter vector $\weights$. Early stopping techniques monitor the validation error $\tilde{f}(\weights^{(\itercntr)})$ as the GD iterations $\itercntr$ proceed to decide when to stop iterating.
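A minimal sketch of such an early stopping rule in Python (the helpers `grad_train` and `val_error`, as well as the `patience` parameter, are hypothetical names introduced here for illustration):

```python
import numpy as np

def gd_with_early_stopping(grad_train, val_error, w0, lrate=0.1,
                           patience=5, max_iter=1000):
    """Run GD on the training loss; stop once the validation error
    has not improved for `patience` consecutive iterations."""
    w, best_w = w0, w0
    best_val = val_error(w0)
    bad_steps = 0
    for _ in range(max_iter):
        w = w - lrate * grad_train(w)      # GD step on the training loss
        v = val_error(w)                   # probe hypothesis outside the training set
        if v < best_val:
            best_val, best_w, bad_steps = v, w, 0
        else:
            bad_steps += 1
            if bad_steps >= patience:      # stop: no recent improvement
                break
    return best_w                          # parameter with smallest validation error
```

Returning the parameter vector with the smallest validation error (rather than the last iterate) makes the rule robust against a validation error that fluctuates from one iteration to the next.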

Another possible stopping criterion is to use a fixed number of iterations or GD steps. This fixed number of iterations can be chosen based on convergence bounds such as \eqref{equ_convergence_rate_inverse_k-GD} in order to guarantee a prescribed sub-optimality of the final iterate $\weights^{(\itercntr)}$. A slightly more convenient convergence bound can be obtained from \eqref{equ_convergence_rate_inverse_k-GD} when using the learning rate $\lrate = 1/\beta$ in the GD step \eqref{equ_def_GD_step} [3],

[$] $$f \big( \weights^{(\itercntr)} \big) -\bar{f} \leq \frac{2\beta \sqeuclnorm{\weights^{(0)} -\overline{\weights}}}{\itercntr} \mbox{ for } \itercntr=1,2,\ldots.$$ [$]
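Solving this bound for the number of iterations yields a concrete stopping criterion. For a prescribed sub-optimality $\varepsilon \gt 0$ (a tolerance we introduce here for illustration), any number of GD steps

[$] $$\itercntr \geq \frac{2\beta \sqeuclnorm{\weights^{(0)} -\overline{\weights}}}{\varepsilon}$$ [$]

with the learning rate $\lrate = 1/\beta$ guarantees $f \big( \weights^{(\itercntr)} \big) - \bar{f} \leq \varepsilon$.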

## GD for Linear Regression

We now present a gradient based method for learning the parameter vector for a linear hypothesis (see \eqref{equ_lin_hypospace})

[$] $$\label{equ_def_lin_pred_GD} h^{(\weights)}(\featurevec) = \weights^{T} \featurevec.$$ [$]

The ERM principle tells us to choose the parameter vector $\weights$ in \eqref{equ_def_lin_pred_GD} by minimizing the average squared error loss

[$] $$\label{equ_def_cost_linreg} \emperror(h^{(\weights)}| \dataset) \stackrel{\eqref{eq_def_ERM_weight}}{=} (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} (\truelabel^{(\sampleidx)} - \weights^{T} \featurevec^{(\sampleidx)})^{2}.$$ [$]

The average squared error loss \eqref{equ_def_cost_linreg} is computed by applying the predictor $h^{(\weights)}(\featurevec)$ to labeled data points in a training set $\dataset=\{ (\featurevec^{(\sampleidx)}, \truelabel^{(\sampleidx)}) \}_{\sampleidx=1}^{\samplesize}$. An optimal parameter vector $\overline{\weights}$ for \eqref{equ_def_lin_pred_GD} is obtained as

[$] $$\label{equ_smooth_problem_linreg} \overline{\weights} = \argmin_{\weights \in \mathbb{R}^{\featuredim}} f(\weights) \mbox{ with } f(\weights) = (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big(\truelabel^{(\sampleidx)} - \weights^{T} \featurevec^{(\sampleidx)}\big)^{2}.$$ [$]

The objective function $f(\weights)$ in \eqref{equ_smooth_problem_linreg} is convex and smooth. We can therefore use GD \eqref{equ_def_GD_step} to solve \eqref{equ_smooth_problem_linreg} iteratively, i.e., by constructing a sequence of parameter vectors that converge to an optimal parameter vector $\overline{\weights}$. To implement GD, we need to compute the gradient $\nabla f(\weights)$.

The gradient of the objective function in \eqref{equ_smooth_problem_linreg} is given by

[$] $$\label{equ_gradient_linear_regression} \nabla f(\weights) = -(2/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big(\truelabel^{(\sampleidx)} - \weights^{T} \featurevec^{(\sampleidx)} \big) \featurevec^{(\sampleidx)}.$$ [$]

By inserting \eqref{equ_gradient_linear_regression} into the basic GD iteration \eqref{equ_def_GD_step}, we obtain Algorithm alg:gd_linreg.

Linear regression via GD

Input: dataset $\dataset=\{ (\featurevec^{(\sampleidx)}, \truelabel^{(\sampleidx)}) \}_{\sampleidx=1}^{\samplesize}$ ; learning rate $\lrate \gt0$.

Initialize: set $\weights^{(0)}\!\defeq\!\mathbf{0}$; set iteration counter $\itercntr\!\defeq\!0$

• repeat
• $\itercntr \defeq \itercntr +1$ (increase iteration counter)
• $\weights^{(\itercntr)} \defeq \weights^{(\itercntr\!-\!1)} + \lrate (2/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big(\truelabel^{(\sampleidx)} - \big(\weights^{(\itercntr\!-\!1)}\big)^{T} \featurevec^{(\sampleidx)}\big) \featurevec^{(\sampleidx)}$ (do a GD step \eqref{equ_def_GD_step})
• until stopping criterion met

Output: $\weights^{(\itercntr)}$ (which approximates $\overline{\weights}$ in \eqref{equ_smooth_problem_linreg})
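Algorithm alg:gd_linreg can be sketched in Python as follows (a sketch assuming numpy, with the feature vectors stacked row-wise into a matrix `X` and the labels into a vector `y`; as a simple stopping criterion we use a fixed number of iterations):

```python
import numpy as np

def linreg_gd(X, y, lrate, num_iter=100):
    """Linear regression via GD (sketch of Algorithm alg:gd_linreg)."""
    m, n = X.shape
    w = np.zeros(n)                          # initialize w^(0) := 0
    for _ in range(num_iter):                # stopping criterion: fixed #iterations
        grad = -(2 / m) * X.T @ (y - X @ w)  # gradient of the average squared error
        w = w - lrate * grad                 # GD step
    return w                                 # approximates the optimal parameters
```

For example, on a dataset whose labels satisfy $y = \featuremtx \overline{\weights}$ exactly, the iterates approach $\overline{\weights}$ for a sufficiently small learning rate.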

Let us have a closer look at the update in step $3$ of Algorithm alg:gd_linreg, which is

[$] $$\label{equ_update_gd_linreg} \weights^{(\itercntr)} \defeq \weights^{(\itercntr\!-\!1)} + \lrate (2/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big(\truelabel^{(\sampleidx)} - \big(\weights^{(\itercntr\!-\!1)}\big)^{T} \featurevec^{(\sampleidx)} \big) \featurevec^{(\sampleidx)}.$$ [$]

The update \eqref{equ_update_gd_linreg} has an appealing form as it amounts to correcting the previous guess (or approximation) $\weights^{(\itercntr\!-\!1)}$ for the optimal parameter vector $\overline{\weights}$ by the correction term

[$] $$\label{equ_corr_term_linreg} (2\lrate/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \underbrace{\big(\truelabel^{(\sampleidx)} - \big(\weights^{(\itercntr\!-\!1)}\big)^{T} \featurevec^{(\sampleidx)}\big)}_{e^{(\sampleidx)}} \featurevec^{(\sampleidx)}.$$ [$]

The correction term \eqref{equ_corr_term_linreg} is a weighted average of the feature vectors $\featurevec^{(\sampleidx)}$ using weights $(2\lrate/\samplesize) \cdot e^{(\sampleidx)}$. These weights consist of the global factor $(2\lrate/\samplesize)$ (that applies equally to all feature vectors $\featurevec^{(\sampleidx)}$) and a sample-specific factor $e^{(\sampleidx)} = \truelabel^{(\sampleidx)} - \big(\weights^{(\itercntr\!-\!1)}\big)^{T} \featurevec^{(\sampleidx)}$, which is the prediction (approximation) error obtained by the linear predictor $h^{(\weights^{(\itercntr\!-\!1)})}\big(\featurevec^{(\sampleidx)}\big) = \big(\weights^{(\itercntr\!-\!1)}\big)^{T} \featurevec^{(\sampleidx)}$ when predicting the label $\truelabel^{(\sampleidx)}$ from the features $\featurevec^{(\sampleidx)}$.

We can interpret the GD step \eqref{equ_update_gd_linreg} as an instance of “learning by trial and error”. Indeed, the GD step amounts to first “trying out” (trial) the predictor $h\big(\featurevec^{(\sampleidx)}\big) = \big(\weights^{(\itercntr\!-\!1)}\big)^{T}\featurevec^{(\sampleidx)}$. The predicted values are then used to correct the weight vector $\weights^{(\itercntr\!-\!1)}$ according to the error $e^{(\sampleidx)} = \truelabel^{(\sampleidx)} - \big(\weights^{(\itercntr\!-\!1)}\big)^{T} \featurevec^{(\sampleidx)}$.

The choice of the learning rate $\lrate$ used for Algorithm alg:gd_linreg can be based on the condition \eqref{equ_GD_conv_guarantee} with the Hessian $\nabla^{2} f(\weights)$ of the objective function $f(\weights)$ underlying linear regression (see \eqref{equ_smooth_problem_linreg}). This Hessian is given explicitly as

[$] $$\label{equ_hessian_linreg} \nabla^{2} f(\weights) = (2/\samplesize) \featuremtx^{T} \featuremtx,$$ [$]

with the feature matrix $\featuremtx=\big(\featurevec^{(1)},\ldots,\featurevec^{(\samplesize)}\big)^{T} \in \mathbb{R}^{\samplesize \times \featuredim}$ (see feature matrix ). Note that the Hessian \eqref{equ_hessian_linreg} does not depend on the parameter vector $\weights$.

Comparing \eqref{equ_hessian_linreg} with \eqref{equ_GD_conv_guarantee}, one particular strategy for choosing the learning rate in Algorithm alg:gd_linreg is to (i) compute the matrix product $\featuremtx^{T} \featuremtx$, (ii) compute the maximum eigenvalue $\eigval{\rm max}\big( (2/\samplesize) \featuremtx^{T} \featuremtx \big)$ of the Hessian \eqref{equ_hessian_linreg} and (iii) set the learning rate to $\lrate =1/\eigval{\rm max} \big( (2/\samplesize) \featuremtx^{T} \featuremtx \big)$.

While it might be challenging to compute the maximum eigenvalue $\eigval{\rm max} \big( (2/\samplesize) \featuremtx^{T} \featuremtx \big)$, it might be easier to find an upper bound $U$ for it.

Given such an upper bound $U \geq \eigval{\rm max} \big( (2/\samplesize) \featuremtx^{T} \featuremtx \big)$, the learning rate $\lrate =1/U$ still ensures convergence of the GD steps. Consider a dataset $\{(\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)})\}_{\sampleidx=1}^{\samplesize}$ with normalized features, i.e., $\| \featurevec^{(\sampleidx)}\| = 1$ for all $\sampleidx =1,\ldots,\samplesize$. This implies, in turn, the upper bound $U= 2$, i.e., $2 \geq \eigval{\rm max} \big( (2/\samplesize) \featuremtx^{T} \featuremtx \big)$. We can then ensure convergence of the iterates $\weights^{(\itercntr)}$ (see \eqref{equ_update_gd_linreg}) by choosing the learning rate $\lrate =1/2$.
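The following sketch (assuming numpy and synthetic data generated here for illustration) implements this recipe. Since the gradient \eqref{equ_gradient_linear_regression} changes by $(2/\samplesize) \featuremtx^{T} \featuremtx \big( \weights - \weights' \big)$ between two parameter vectors, the constant $\beta = 2\, \eigval{\rm max}\big( (1/\samplesize) \featuremtx^{T} \featuremtx \big)$ satisfies \eqref{equ_def_beta_smooth}, and the learning rate $\lrate = 1/\beta$ is safely below the critical value $2/\beta$; with this choice the objective value never increases:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 5
X = rng.standard_normal((m, n))             # hypothetical feature matrix
y = rng.standard_normal(m)                  # hypothetical labels

# smoothness constant from the largest eigenvalue of (1/m) X^T X
beta = 2 * np.linalg.eigvalsh(X.T @ X / m).max()
lrate = 1.0 / beta                          # safely below the critical value 2/beta

f = lambda w: np.mean((y - X @ w) ** 2)     # average squared error objective
w = np.zeros(n)
vals = [f(w)]
for _ in range(200):
    w = w + lrate * (2 / m) * X.T @ (y - X @ w)   # GD step
    vals.append(f(w))
# with this learning rate, vals is non-increasing
```

Here `np.linalg.eigvalsh` exploits the symmetry of $\featuremtx^{T} \featuremtx$; for very large feature matrices, one would instead fall back on an upper bound $U$ as discussed above.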

Time-Data Tradeoffs. The number of GD steps required by Algorithm alg:gd_linreg to ensure a prescribed sub-optimality depends crucially on the condition number of $\featuremtx^{T} \featuremtx$. What can we say about this condition number? In general, we have no control over this quantity, since the matrix $\featuremtx$ consists of the feature vectors of arbitrary data points. However, it is often useful to model the feature vectors as realizations of independent and identically distributed (iid) random vectors. It is then possible to bound the probability of the feature matrix having a sufficiently small condition number. These bounds can in turn be used to choose the step size such that convergence is guaranteed with sufficiently large probability. The usefulness of these bounds typically depends on the ratio $\featuredim/\samplesize$. For increasing sample size, these bounds allow the use of larger step sizes and, in turn, result in faster convergence of the GD algorithm. Thus, we obtain a trade-off between the runtime of Algorithm alg:gd_linreg and the number of data points that we feed into it [5].

## GD for Logistic Regression

Logistic regression learns a linear hypothesis $h^{(\weights)}$ that is used to classify data points by predicting their binary label. The quality of such a linear classifier is measured by the logistic loss. The ERM principle suggests learning the parameter vector $\weights$ by minimizing the average logistic loss obtained for a training set $\dataset= \{ (\featurevec^{(\sampleidx)}, \truelabel^{(\sampleidx)}) \}_{\sampleidx=1}^{\samplesize}$. The training set consists of data points with features $\featurevec^{(\sampleidx)} \in \mathbb{R}^{\featuredim}$ and binary labels $\truelabel^{(\sampleidx)} \in \{-1,1\}$.

We can rewrite ERM for logistic regression as the optimization problem

[] \begin{align} \overline{\weights}& = \argmin_{\weights \in \mathbb{R}^{\featuredim}} f(\weights) \nonumber \\ \mbox{ with } f(\weights) & = \label{equ_smooth_problem_logeg}(1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \log\big( 1\!+\!\exp \big( - \truelabel^{(\sampleidx)} \weights^{T} \featurevec^{(\sampleidx)} \big)\big). \end{align} []

The objective function $f(\weights)$ is differentiable and therefore we can use GD \eqref{equ_def_GD_step} to solve \eqref{equ_smooth_problem_logeg}. We can write down the gradient of the objective function in \eqref{equ_smooth_problem_logeg} in closed-form as

[$] $$\label{equ_gradient_logistic_regression} \nabla f(\weights) = (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \frac{-\truelabel^{(\sampleidx)}}{1 + \exp ( \truelabel^{(\sampleidx)} \weights^{T} \featurevec^{(\sampleidx)})} \featurevec^{(\sampleidx)}.$$ [$]

Inserting \eqref{equ_gradient_logistic_regression} into the GD step \eqref{equ_def_GD_step} yields Algorithm alg:gd_logreg.

Logistic regression via GD

Input: labeled dataset $\dataset=\{ (\featurevec^{(\sampleidx)}, \truelabel^{(\sampleidx)}) \}_{\sampleidx=1}^{\samplesize}$ containing feature vectors $\featurevec^{(\sampleidx)} \in \mathbb{R}^{\featuredim}$ and labels $\truelabel^{(\sampleidx)} \in \{-1,1\}$; GD learning rate $\lrate \gt0$.

Initialize: set $\weights^{(0)}\!\defeq\!\mathbf{0}$; set iteration counter $\itercntr\!\defeq\!0$

• repeat
• $\itercntr\!\defeq\! \itercntr\!+\!1$ (increase iteration counter)
• $\weights^{(\itercntr)} \defeq \weights^{(\itercntr\!-\!1)}\!+\!\lrate (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \frac{\truelabel^{(\sampleidx)}}{1\!+\!\exp \big( \truelabel^{(\sampleidx)} \big(\weights^{(\itercntr\!-\!1)}\big)^{T} \featurevec^{(\sampleidx)}\big)} \featurevec^{(\sampleidx)}$ (do a GD step \eqref{equ_def_GD_step})
• until stopping criterion met

Output: $\weights^{(\itercntr)}$ (which approximates a solution $\overline{\weights}$ of \eqref{equ_smooth_problem_logeg})
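Algorithm alg:gd_logreg translates into the following Python sketch (assuming numpy, labels in $\{-1,1\}$, and a fixed number of iterations as the stopping criterion):

```python
import numpy as np

def logreg_gd(X, y, lrate=1.0, num_iter=100):
    """Logistic regression via GD (sketch of Algorithm alg:gd_logreg)."""
    m, n = X.shape
    w = np.zeros(n)                         # initialize w^(0) := 0
    for _ in range(num_iter):
        # gradient of the average logistic loss
        grad = (1 / m) * X.T @ (-y / (1 + np.exp(y * (X @ w))))
        w = w - lrate * grad                # GD step
    return w
```

On a small dataset, the returned parameter vector yields an average logistic loss well below the value $\log 2$ obtained for the initialization $\weights^{(0)} = \mathbf{0}$.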

Let us have a closer look at the update in step $3$ of Algorithm alg:gd_logreg. This step amounts to computing

[$] $$\label{equ_update_logreg_GD} \weights^{(\itercntr)} \defeq \weights^{(\itercntr\!-\!1)} + \lrate (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \frac{\truelabel^{(\sampleidx)}}{1 + \exp \big( \truelabel^{(\sampleidx)} \big( \weights^{(\itercntr\!-\!1)} \big)^{T} \featurevec^{(\sampleidx)}\big)} \featurevec^{(\sampleidx)}.$$ [$]

Similar to the GD step \eqref{equ_update_gd_linreg} for linear regression, also the GD step \eqref{equ_update_logreg_GD} for logistic regression can be interpreted as an implementation of the trial-and-error principle. Indeed, \eqref{equ_update_logreg_GD} corrects the previous guess (or approximation) $\weights^{(\itercntr\!-\!1)}$ for the optimal parameter vector $\overline{\weights}$ by the correction term

[$] $$\label{equ_correction_logreg_GD} (\lrate/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \underbrace{ \frac{\truelabel^{(\sampleidx)}}{1 + \exp \big( \truelabel^{(\sampleidx)} \big(\weights^{(\itercntr\!-\!1)}\big)^{T} \featurevec^{(\sampleidx)}\big)}}_{e^{(\sampleidx)}} \featurevec^{(\sampleidx)}.$$ [$]

The correction term \eqref{equ_correction_logreg_GD} is a weighted average of the feature vectors $\featurevec^{(\sampleidx)}$, each weighted by the factor $(\lrate/\samplesize) \cdot e^{(\sampleidx)}$. These weighting factors are the product of the global factor $(\lrate/\samplesize)$, which applies equally to all feature vectors $\featurevec^{(\sampleidx)}$, and the data point-specific factor $e^{(\sampleidx)} = \frac{\truelabel^{(\sampleidx)}}{1 + \exp \big( \truelabel^{(\sampleidx)} \big(\weights^{(\itercntr\!-\!1)}\big)^{T} \featurevec^{(\sampleidx)}\big)}$, which quantifies the error of the classifier $h^{(\weights^{(\itercntr\!-\!1)})}\big(\featurevec^{(\sampleidx)}\big) = \big(\weights^{(\itercntr\!-\!1)}\big)^{T} \featurevec^{(\sampleidx)}$ for a single data point with true label $\truelabel^{(\sampleidx)} \in \{-1,1\}$ and features $\featurevec^{(\sampleidx)} \in \mathbb{R}^{\featuredim}$.

We can use the sufficient condition \eqref{equ_GD_conv_guarantee} for the convergence of GD steps to guide the choice of the learning rate $\lrate$ in Algorithm alg:gd_logreg. To apply condition \eqref{equ_GD_conv_guarantee}, we need to determine the Hessian matrix $\nabla^{2} f(\weights)$ of the objective function $f(\weights)$ underlying logistic regression (see \eqref{equ_smooth_problem_logeg}). Some basic calculus reveals (see [6](Ch. 4.4.))

[$] $$\label{equ_hessian_logreg} \nabla^{2} f(\weights) = (1/\samplesize) \featuremtx^{T} \mD \featuremtx.$$ [$]

Here, we used the feature matrix $\featuremtx=\big(\featurevec^{(1)},\ldots,\featurevec^{(\samplesize)}\big)^{T} \in \mathbb{R}^{\samplesize \times \featuredim}$ (see feature matrix ) and the diagonal matrix $\mD = {\rm diag} \{d_{1},\ldots,d_{\samplesize}\} \in \mathbb{R}^{\samplesize \times \samplesize}$ with diagonal elements

[$] $$d_{\sampleidx} = \frac{1}{1+\exp(-\weights^{T} \featurevec^{(\sampleidx)})} \bigg(1- \frac{1}{1+\exp(-\weights^{T} \featurevec^{(\sampleidx)})} \bigg). \label{equ_diag_entries_log_reg}$$ [$]

We highlight that, in contrast to the Hessian \eqref{equ_hessian_linreg} of the objective function arising in linear regression, the Hessian \eqref{equ_hessian_logreg} of logistic regression varies with the parameter vector $\weights$. This makes the analysis of Algorithm alg:gd_logreg and the optimal choice for the learning rate $\lrate$ more difficult compared to Algorithm alg:gd_linreg. At least, we can ensure convergence of \eqref{equ_update_logreg_GD} (towards a solution of \eqref{equ_smooth_problem_logeg}) for the learning rate $\lrate=1$ if we normalize the feature vectors such that $\| \featurevec^{(\sampleidx)} \|=1$. This follows from the fact that the diagonal entries \eqref{equ_diag_entries_log_reg} take values in the interval $(0,1/4] \subseteq [0,1]$.
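This bound can be checked numerically. The following sketch (synthetic data and variable names are illustrative) evaluates the Hessian \eqref{equ_hessian_logreg} for unit-norm feature vectors; since each diagonal entry \eqref{equ_diag_entries_log_reg} is of the form $\sigma(1-\sigma) \leq 1/4$ and the rows of $\featuremtx$ have unit norm, the largest eigenvalue of $\nabla^{2} f(\weights)$ stays below $1$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 3
X = rng.normal(size=(m, n))
X = X / np.linalg.norm(X, axis=1, keepdims=True)  # normalize: ||x^(i)|| = 1
w = rng.normal(size=n)

sigma = 1.0 / (1.0 + np.exp(-(X @ w)))  # logistic function values, in (0, 1)
d = sigma * (1.0 - sigma)               # diagonal entries of D, in (0, 1/4]
H = (X.T * d) @ X / m                   # Hessian (1/m) X^T D X

lmax = np.linalg.eigvalsh(H).max()      # largest eigenvalue, at most 1/4 here
```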

## Data Normalization

The number of GD steps \eqref{equ_def_GD_step} required to reach the minimum (within a prescribed accuracy) of the objective function depends crucially on the condition number [3][7]

[$] $$\label{equ_def_condition_number} \kappa( \featuremtx^{T} \featuremtx) \defeq \eigval{\rm max}/\eigval{\rm min}.$$ [$]

Here, we use the largest and smallest eigenvalue of the matrix $\featuremtx^{T} \featuremtx$, denoted as $\eigval{\rm max}$ and $\eigval{\rm min}$, respectively. The condition number \eqref{equ_def_condition_number} is only well-defined if the columns of the feature matrix $\featuremtx$ (which are the feature vectors $\featurevec^{(\sampleidx)}$), are linearly independent. In this case the condition number is lower bounded as $1 \leq \kappa( \featuremtx^{T} \featuremtx)$.

It can be shown that the GD steps \eqref{equ_def_GD_step} converge faster for smaller condition number $\kappa( \featuremtx^{T} \featuremtx)$ [7]. Thus, GD will be faster for datasets with a feature matrix $\featuremtx$ such that $\kappa( \featuremtx^{T} \featuremtx) \approx 1$. It is therefore often beneficial to pre-process the feature vectors using a normalization (or standardization) procedure as detailed in Algorithm alg_reshaping.

“Data Normalization”

Input: labeled dataset $\dataset = \{(\featurevec^{(\sampleidx)},y^{(\sampleidx)})\}_{\sampleidx=1}^{\samplesize}$

• remove the sample mean $\widehat{\featurevec}=(1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \featurevec^{(\sampleidx)}$ from the features, i.e.,
[$] $$\nonumber \featurevec^{(\sampleidx)} \defeq \featurevec^{(\sampleidx)} - \widehat{\featurevec} \mbox{ for } \sampleidx=1,\ldots,\samplesize$$ [$]
• normalize the features to have unit variance,
[$] $$\nonumber \hat{\feature}^{(\sampleidx)}_{\featureidx} \defeq \feature^{(\sampleidx)}_{\featureidx}/ \hat{\sigma}_{\featureidx} \mbox{ for } \featureidx=1,\ldots,\featuredim \mbox{ and } \sampleidx=1,\ldots,\samplesize$$ [$]
with the empirical (sample) variance $\hat{\sigma}_{\featureidx}^{2} =(1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \big( \feature^{(\sampleidx)}_{\featureidx} \big)^{2}$

Output: normalized feature vectors $\{\hat{\featurevec}^{(\sampleidx)}\}_{\sampleidx=1}^{\samplesize}$

Algorithm alg_reshaping transforms the original feature vectors $\featurevec^{(\sampleidx)}$ into new feature vectors $\widehat{\featurevec}^{(\sampleidx)}$ such that the new feature matrix $\widehat{\featuremtx} = (\widehat{\featurevec}^{(1)},\ldots,\widehat{\featurevec}^{(\samplesize)})^{T}$ is better conditioned than the original feature matrix, i.e., $\kappa( \widehat{\featuremtx}^{T} \widehat{\featuremtx}) \lt \kappa( \featuremtx^{T} \featuremtx)$.
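A minimal NumPy sketch of Algorithm alg_reshaping; the badly scaled synthetic dataset and the helper names `normalize_features` and `condition_number` are illustrative assumptions.

```python
import numpy as np

def normalize_features(X):
    """Data normalization: center each feature, then scale it to unit variance."""
    Xc = X - X.mean(axis=0)                  # remove the sample means
    sigma = np.sqrt((Xc ** 2).mean(axis=0))  # per-feature sample standard deviation
    return Xc / sigma

def condition_number(X):
    """Condition number kappa(X^T X) = lambda_max / lambda_min."""
    eigvals = np.linalg.eigvalsh(X.T @ X)
    return eigvals.max() / eigvals.min()

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2)) * np.array([100.0, 0.1])  # badly scaled features
X_hat = normalize_features(X)
# condition_number(X_hat) is much smaller than condition_number(X)
```

Note that normalization brings $\kappa$ close to $1$ only when the features are nearly uncorrelated; strongly correlated features can keep the condition number large even after Algorithm alg_reshaping.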

## Stochastic GD

Consider the GD steps \eqref{equ_def_GD_step} for minimizing the empirical risk \eqref{equ_obj_emp_risk_GD}. The gradient $\nabla f(\weights)$ of the objective function \eqref{equ_obj_emp_risk_GD} has a particular structure. Indeed, this gradient is a sum

[$] $$\label{eq_gradient_sum} \nabla f(\weights) = (1/\samplesize) \sum_{\sampleidx=1}^{\samplesize} \nabla f_{\sampleidx}(\weights) \mbox{ with } f_{\sampleidx}(\weights) \defeq \loss{(\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)})}{h^{(\weights)}}.$$ [$]

Each component of the sum \eqref{eq_gradient_sum} corresponds to one particular data point $(\featurevec^{(\sampleidx)},\truelabel^{(\sampleidx)})$, for $\sampleidx=1,\ldots,\samplesize$. We need to compute a sum of the form \eqref{eq_gradient_sum} for each new GD step \eqref{equ_def_GD_step}.

Computing the sum in \eqref{eq_gradient_sum} can be computationally challenging for at least two reasons. First, computing the sum is challenging for very large datasets with $\samplesize$ on the order of billions. Second, for datasets which are stored in different data centres located all over the world, the summation would require a huge amount of network resources. Moreover, the finite transmission rate of communication networks limits the rate at which the GD steps \eqref{equ_def_GD_step} can be executed.

The idea of stochastic gradient descent (SGD) is to replace the exact gradient $\nabla f(\weights)$ \eqref{eq_gradient_sum} by an approximation that is easier to compute than a direct evaluation of \eqref{eq_gradient_sum}. The word “stochastic” in the name SGD already hints at the use of a stochastic approximation $\noisygrad(\weights) \approx \nabla f(\weights)$. It turns out that using a gradient approximation $\noisygrad(\weights)$ can result in significant savings in computational complexity while incurring only a graceful degradation in the overall optimization accuracy. The optimization accuracy (distance to the minimum of $f(\weights)$) depends crucially on the “gradient noise”

[$] $$\label{equ_def_gradient_noise_generic} \varepsilon \defeq \nabla f(\weights)- \noisygrad(\weights).$$ [$]

The elementary step of most SGD methods is obtained from the GD step \eqref{equ_def_GD_step} by replacing the exact gradient $\nabla f(\weights)$ with some stochastic approximation $\noisygrad(\weights)$,

[$] $$\label{equ_SGD_update} \weights^{(\itercntr\!+\!1)} = \weights^{(\itercntr)} - \lrate_{\itercntr} \noisygrad \big( \weights^{(\itercntr)} \big).$$ [$]

As the notation in \eqref{equ_SGD_update} indicates, SGD methods use a learning rate $\lrate_{\itercntr}$ that varies between different iterations.

To avoid accumulation of the gradient noise \eqref{equ_def_gradient_noise_generic} during the SGD updates \eqref{equ_SGD_update}, SGD methods typically decrease the learning rate $\lrate_{\itercntr}$ as the iterations proceed. The precise dependence of the learning rate $\lrate_{\itercntr}$ on the iteration index $\itercntr$ is referred to as a learning rate schedule [1](Chapter 8). One possible choice for the learning rate schedule is $\lrate_{\itercntr}\!\defeq\!1/\itercntr$ [8]. Exercise discusses conditions on the learning rate schedule that guarantee convergence of the SGD updates \eqref{equ_SGD_update} to the minimum of $f(\weights)$.

The approximate (“noisy”) gradient $\noisygrad(\weights)$ can be obtained by different randomization strategies. The most basic form of SGD constructs the gradient approximation $\noisygrad(\weights)$ by replacing the sum \eqref{eq_gradient_sum} with a randomly selected component,

[$] $$\noisygrad(\weights) \defeq \nabla f_{\hat{\sampleidx}}(\weights).$$ [$]

The index $\hat{\sampleidx}$ is chosen randomly from the set $\{1,\ldots,\samplesize\}$. The resulting SGD method then repeats the update

[$] $$\label{equ_SGD_update_basic_form} \weights^{(\itercntr\!+\!1)} = \weights^{(\itercntr)} - \lrate \nabla f_{\hat{\sampleidx}_{\itercntr}}(\weights^{(\itercntr)}),$$ [$]

sufficiently often. Every update \eqref{equ_SGD_update_basic_form} uses a “fresh” randomly chosen (drawn) index $\hat{\sampleidx}_{\itercntr}$. Formally, the indices $\hat{\sampleidx}_{\itercntr}$ are realizations of iid random variable (RV)s whose common probability distribution is the uniform distribution over the index set $\{1,\ldots,\samplesize\}$.
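A minimal sketch of the basic SGD update \eqref{equ_SGD_update_basic_form} in NumPy, using the learning rate schedule $\lrate_{\itercntr} = 1/\itercntr$. The least squares components $f_{\sampleidx}(\weights) = \frac{1}{2}\big(\weights^{T}\featurevec^{(\sampleidx)} - \truelabel^{(\sampleidx)}\big)^{2}$ and the toy data are illustrative assumptions.

```python
import numpy as np

def sgd(grad_i, m, n, lrate_schedule, num_iters, seed=0):
    """Basic SGD: each step uses the gradient of one uniformly drawn component f_i."""
    rng = np.random.default_rng(seed)
    w = np.zeros(n)
    for k in range(1, num_iters + 1):
        i = rng.integers(m)                       # fresh uniformly drawn index
        w = w - lrate_schedule(k) * grad_i(w, i)  # noisy gradient step
    return w

# toy least squares components: grad f_i(w) = (w^T x_i - y_i) x_i
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
grad = lambda w, i: (X[i] @ w - y[i]) * X[i]

w = sgd(grad, m=3, n=2, lrate_schedule=lambda k: 1.0 / k, num_iters=5000)
# w approaches the minimizer (1, 2) of f(w) = (1/3) * sum_i f_i(w)
```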

Note that \eqref{equ_SGD_update_basic_form} replaces the summation over the training set during the GD step \eqref{equ_def_GD_step} by randomly selecting a single component of this sum. The resulting savings in computational complexity can be significant when the training set consists of a large number of data points that might be stored in a distributed fashion (in the “cloud”). The saving in computational complexity of SGD comes at the cost of introducing a non-zero gradient noise

[] \begin{align} \varepsilon_{\itercntr} & \stackrel{\eqref{equ_def_gradient_noise_generic}}{=} \nabla f( \weights^{(\itercntr)} ) - \noisygrad \big( \weights^{(\itercntr)} \big) \nonumber \\ & = \label{equ_gradient_noise_simple_form}\nabla f( \weights^{(\itercntr)} ) - \nabla f_{\hat{\sampleidx}_{\itercntr}}\big(\weights^{(\itercntr)}\big). \end{align} []

Mini-Batch SGD. Let us now discuss a variant of SGD that tries to reduce the approximation error (gradient noise) \eqref{equ_gradient_noise_simple_form} arising in the SGD step \eqref{equ_SGD_update_basic_form}. The idea behind this variant, referred to as mini-batch SGD, is quite simple. Instead of using only a single randomly selected component $\nabla f_{\sampleidx}(\weights)$ (see \eqref{eq_gradient_sum}) for constructing a gradient approximation, mini-batch SGD uses several randomly chosen components.

We summarize mini-batch SGD in Algorithm alg:minibatch_gd which requires an integer batch size $\batchsize$ as input parameter. Algorithm alg:minibatch_gd repeats the SGD step \eqref{equ_SGD_update} using a gradient approximation that is constructed from a randomly selected subset $\batch = \{ \sampleidx_{1},\ldots,\sampleidx_{\batchsize}\}$ (a “batch”),

[$] $$\label{equ_def_gradient_approx_mini_batch} \noisygrad \big( \weights \big) = (1/\batchsize) \sum_{\sampleidx' \in \batch} \nabla f_{\sampleidx'}(\weights).$$ [$]

For each new iteration of Algorithm alg:minibatch_gd, a new batch $\batch$ is generated by a random generator.

Mini-Batch SGD

Input: components $f_{\sampleidx}(\weights)$, for $\sampleidx=1,\ldots,\samplesize$, of the objective function $f(\weights)=(1/\samplesize)\sum_{\sampleidx=1}^{\samplesize} f_{\sampleidx}(\weights)$; batch size $\batchsize$; learning rate schedule $\lrate_{\itercntr} \gt 0$.

set $\weights^{(0)}\!\defeq\!\mathbf{0}$; set iteration counter $\itercntr\!\defeq\!0$

• repeat
• randomly select a batch $\batch = \{\sampleidx_{1},\ldots,\sampleidx_{\batchsize}\} \subseteq \{1,\ldots,\samplesize\}$ of indices that select a subset of components $f_{\sampleidx}$
• compute an approximate gradient $\noisygrad \big( \weights^{(\itercntr)} \big)$ using \eqref{equ_def_gradient_approx_mini_batch}
• $\itercntr \defeq \itercntr +1$ (increase iteration counter)
• $\weights^{(\itercntr)} \defeq \weights^{(\itercntr\!-\!1)} - \lrate_{\itercntr} \noisygrad \big( \weights^{(\itercntr-1)} \big)$
• until stopping criterion met
Output: $\weights^{(\itercntr)}$ (which approximates $\argmin_{\weights \in \mathbb{R}^{\featuredim}} f(\weights)$)

Note that Algorithm alg:minibatch_gd includes the basic SGD variant \eqref{equ_SGD_update_basic_form} as the special case of batch size $\batchsize=1$. Another special case is $\batchsize= \samplesize$, for which the SGD step \eqref{equ_SGD_update} in Algorithm alg:minibatch_gd becomes an ordinary GD step \eqref{equ_def_GD_step}.
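A sketch of Algorithm alg:minibatch_gd in NumPy; the least squares components and the constant learning rate in the example call are illustrative assumptions. For batch size $\batchsize = \samplesize$, the randomly drawn batch is a permutation of all indices, so one iteration reproduces an exact GD step \eqref{equ_def_GD_step}.

```python
import numpy as np

def minibatch_sgd(grad_i, m, n, batch_size, lrate_schedule, num_iters, seed=0):
    """Mini-batch SGD: average the gradients of a randomly drawn batch of components."""
    rng = np.random.default_rng(seed)
    w = np.zeros(n)
    for k in range(1, num_iters + 1):
        batch = rng.choice(m, size=batch_size, replace=False)  # random batch B
        g_hat = sum(grad_i(w, i) for i in batch) / batch_size  # (1/B) sum_{i in B} grad f_i(w)
        w = w - lrate_schedule(k) * g_hat
    return w

# toy least squares components: grad f_i(w) = (w^T x_i - y_i) x_i
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
grad = lambda w, i: (X[i] @ w - y[i]) * X[i]

# batch_size = m: one iteration equals one exact GD step with learning rate 0.5
w1 = minibatch_sgd(grad, m=3, n=2, batch_size=3,
                   lrate_schedule=lambda k: 0.5, num_iters=1)
```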

Online Learning. A main motivation for the SGD step \eqref{equ_SGD_update_basic_form} is a training set that has already been collected but is so large that the sum in \eqref{eq_gradient_sum} is computationally intractable. Another variant of SGD is obtained for sequential (time-series) data. In particular, consider data points that are gathered sequentially, one new data point $\big( \featurevec^{(\timeidx)},\truelabel^{(\timeidx)} \big)$ at each time instant $\timeidx=1,2,\ldots$. With each new data point $\big( \featurevec^{(\timeidx)},\truelabel^{(\timeidx)} \big)$ we can access a new component $f_{\timeidx}(\weights) = \loss{(\featurevec^{(\timeidx)},\truelabel^{(\timeidx)})}{h^{(\weights)}}$ (see \eqref{equ_obj_emp_risk_GD}). For sequential data, we can use a slight modification of the SGD step \eqref{equ_SGD_update} to obtain an online learning method (see Section Online Learning ). This online variant of SGD computes, at each time instant $\timeidx$,

[$] $$\label{equ_SGD_update_timeiedx} \weights^{(\timeidx)} \defeq \weights^{(\timeidx\!-\!1)} - \lrate_{\timeidx} \nabla f_{\timeidx}(\weights^{(\timeidx\!-\!1)}).$$ [$]
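A sketch of the online update \eqref{equ_SGD_update_timeiedx} for the squared error component $f_{\timeidx}(\weights) = \frac{1}{2}\big(\weights^{T}\featurevec^{(\timeidx)} - \truelabel^{(\timeidx)}\big)^{2}$; the loss choice and the two-point data stream are illustrative assumptions. Each arriving data point triggers exactly one update and can then be discarded.

```python
import numpy as np

def online_sgd_step(w, x_t, y_t, lrate_t):
    """One online SGD step for the component f_t(w) = (1/2)(w^T x_t - y_t)^2."""
    return w - lrate_t * (x_t @ w - y_t) * x_t

# data points arriving sequentially, one per time instant t = 1, 2, ...
stream = [(np.array([1.0, 0.0]), 1.0),
          (np.array([0.0, 1.0]), 2.0)]

w = np.zeros(2)
for t, (x_t, y_t) in enumerate(stream, start=1):
    w = online_sgd_step(w, x_t, y_t, lrate_t=1.0 / t)
```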

## Advanced Gradient-Based Methods

The main idea underlying GD and SGD is to approximate the objective function \eqref{equ_obj_emp_risk_GD} locally around a current guess or approximation $\weights^{(\itercntr)}$ for the optimal weights. This local approximation is a tangent hyperplane whose normal vector is determined by the gradient $\nabla f \big(\weights^{(\itercntr)}\big)$. We then obtain an updated (improved) approximation by minimizing this local approximation, i.e., by doing a GD step \eqref{equ_def_GD_step}.

The idea of advanced gradient methods [1](Ch. 8) is to exploit the information provided by the gradients $\nabla f\big(\weights^{(\itercntr')}\big)$ at previous iterations $\itercntr'=1,\ldots,\itercntr$, to build an improved local approximation of $f(\weights)$ around a current iterate $\weights^{(\itercntr)}$. Figure fig_improved_local_approx indicates such an improved local approximation of $f(\weights)$ which is non-linear (e.g., quadratic). These improved local approximations can be used to adapt the learning rate during the GD steps \eqref{equ_def_GD_step}.

Advanced gradient-based methods use improved local approximations to modify the gradient $\nabla f\big(\weights^{(\itercntr')}\big)$ to obtain an improved update direction. Figure fig_improved_local_approx_better_directions depicts the contours of an objective function $f(\weights)$ for which the gradient $\nabla f\big(\weights^{(\itercntr)}\big)$ points only weakly towards the optimal parameter vector $\overline{\weights}$ (the minimizer of $f(\weights)$). The gradient history $\nabla f\big(\weights^{(\itercntr')}\big)$, for $\itercntr'=1,\ldots,\itercntr$, makes it possible to detect such an unfavourable geometry of the objective function. Moreover, the gradient history can be used to “correct” the update direction $\nabla f\big(\weights^{(\itercntr)}\big)$ to obtain an improved update direction towards the optimal parameter vector $\overline{\weights}$.

## Notes

1. The problem of computing a full eigenvalue decomposition of $\featuremtx^{T} \featuremtx$ has essentially the same complexity as ERM via directly solving equ_zero_gradient_lin_reg, which we want to avoid by using the “cheaper” GD Algorithm alg:gd_linreg.

## General References

Jung, Alexander (2022). Machine Learning: The Basics. Singapore: Springer. doi:10.1007/978-981-16-8193-6.

Jung, Alexander (2022). "Machine Learning: The Basics". arXiv:1805.05052.

## References

1. I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
2. W. Rudin. Principles of Mathematical Analysis. McGraw-Hill, New York, 3rd edition, 1976.
3. S. Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, volume 8. Now Publishers, 2015.
4. Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87 of Applied Optimization. Kluwer Academic Publishers, Boston, MA, 2004.
5. S. Oymak, B. Recht, and M. Soltanolkotabi. Sharp time--data tradeoffs for linear inverse problems. IEEE Transactions on Information Theory, 64(6):4129--4158, June 2018.
6. T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer Series in Statistics. Springer, New York, NY, USA, 2001.
7. A. Jung. A fixed-point of view on gradient methods for big data. Frontiers in Applied Mathematics and Statistics, 3, 2017.
8. N. Murata. A statistical study on on-line learning. In D. Saad, editor, On-line Learning in Neural Networks, pages 63--92. Cambridge University Press, New York, NY, USA, 1998.