
Getting the best of both worlds: SAG, a stochastic optimization algorithm with a linear convergence rate

We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in optimizing the training error and in quickly reducing the test error.
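To make the gradient-memory idea concrete, here is a minimal sketch of a SAG-style update in Python. It is an illustration under stated assumptions, not the paper's reference implementation: the function name `sag`, the toy ridge-regression problem, and the constant step size `alpha` are all illustrative choices (the paper's theoretical step-size conditions are not reproduced here). The key mechanism is that one stored gradient per component function is refreshed at each iteration, and their running sum serves as a cheap estimate of the full gradient.

    import numpy as np

    def sag(grad_i, x0, n, alpha, iters, seed=None):
        """Minimal SAG-style sketch: minimize (1/n) * sum_i f_i(x).

        grad_i(i, x) must return the gradient of f_i at x.
        A memory of the last gradient seen for each i is kept, and
        their running sum is used as a full-gradient estimate.
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float).copy()
        mem = np.zeros((n, x.size))   # last stored gradient for each f_i
        d = np.zeros_like(x)          # running sum of stored gradients

        for _ in range(iters):
            i = rng.integers(n)       # sample one component uniformly
            g = grad_i(i, x)          # fresh gradient of f_i at current x
            d += g - mem[i]           # replace the old entry in the sum
            mem[i] = g
            x -= (alpha / n) * d      # step along the averaged memory
        return x

    # Illustrative usage: ridge-regularized least squares (strongly convex).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 5))
    b = rng.standard_normal(100)
    lam = 0.1
    grad = lambda i, x: (A[i] @ x - b[i]) * A[i] + lam * x
    x_hat = sag(grad, np.zeros(5), n=100, alpha=0.05, iters=20000, seed=0)

Note that each iteration touches only one data point, as in plain stochastic gradient descent, yet the update direction aggregates information from all n stored gradients; this is the property that lets the method escape the sublinear rates of standard stochastic gradient while keeping the per-iteration cost low.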
