
A regularization view on learning with stochastic gradient methods

Stochastic Gradient Methods (SGM) are hugely popular in machine learning because of their ease of use and good practical performance. Yet, their learning capabilities remain relatively poorly understood. Most previous work considers the learning properties of SGM with only one pass over the data, whereas in practice multiple passes are usually made. The effect of multiple passes has been studied extensively for the optimization of the empirical error, but their role for learning is less clear. In practice, early stopping of the iterations, for example by monitoring the error on a hold-out set, is a commonly employed heuristic. Moreover, the step size is typically tuned empirically to obtain the best results. In this talk, I will present recent results that are a step towards theoretically grounding these commonly used heuristics by viewing SGM through the lens of regularization.
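As a rough illustration of the heuristics mentioned above (a minimal sketch, not code from the talk, with illustrative function and parameter names), the following Python snippet runs multi-pass SGM on a least-squares problem, where the step size and an early-stopping rule based on hold-out error play the role of regularization parameters:

import numpy as np

def sgm_least_squares(X, y, X_val, y_val, step_size=0.01, max_passes=50, patience=3):
    """Run SGD on the squared loss; stop when the hold-out error stops improving."""
    n, d = X.shape
    w = np.zeros(d)
    best_w, best_err, stall = w.copy(), np.inf, 0
    for epoch in range(max_passes):          # multiple passes over the data
        for i in np.random.permutation(n):   # one stochastic gradient step per example
            grad = (X[i] @ w - y[i]) * X[i]
            w -= step_size * grad
        val_err = np.mean((X_val @ w - y_val) ** 2)   # hold-out error after each pass
        if val_err < best_err:
            best_err, best_w, stall = val_err, w.copy(), 0
        else:
            stall += 1
            if stall >= patience:            # early stopping: the number of passes
                break                        # acts as the regularization parameter
    return best_w

# Example on synthetic data (all sizes and values are illustrative assumptions):
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.1 * rng.standard_normal(200)
w_hat = sgm_least_squares(X[:150], y[:150], X[150:], y[150:])

In this view, running fewer passes (stopping earlier) corresponds to stronger regularization, while smaller step sizes slow down fitting in a similar way; the talk discusses how such choices can be analyzed theoretically.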

In collaboration with Raffaello Camoriano, Junhong Lin and Silvia Villa.
