On the Steplength Selection in Stochastic Gradient Methods

G. Franchini; V. Ruggiero
2020

Abstract

This paper deals with the steplength selection in stochastic gradient methods for large-scale optimization problems arising in machine learning. We introduce an adaptive steplength selection derived by tailoring a limited memory steplength rule, recently developed in the deterministic context, to the stochastic gradient approach. The proposed steplength rule provides values within an interval whose bounds need to be prefixed by the user. A suitable choice of the interval bounds allows the method to perform similarly to the standard stochastic gradient method equipped with the best-tuned steplength. Since the setting of the bounds only slightly affects the performance, the new rule makes the tuning of the parameters less expensive than the choice of the optimal prefixed steplength in the standard stochastic gradient method. We evaluate the behaviour of the proposed steplength selection in training binary classifiers on well-known data sets and using different loss functions.
ISBN: 978-3-030-39080-8
Keywords: Stochastic gradient methods; Steplength selection rule; Ritz-like values; Machine learning
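
The abstract describes the recipe only at a high level: sweeps of steplengths are obtained from Ritz-like values computed out of a limited memory of recent stochastic gradients, and each value is clamped to a user-chosen interval. The sketch below is a minimal, illustrative reconstruction of that idea, not the authors' code. It assumes a Fletcher-style limited memory construction (the last m mini-batch gradients and the steplengths that produced them define a small matrix whose eigenvalues serve as Ritz-like values), a toy logistic-regression problem, and example values for the window size m and the bounds [a_min, a_max]; all names and parameters are hypothetical.

```python
import numpy as np

def ritz_steplengths(G, alphas, g_new, a_min, a_max):
    """Steplengths from Ritz-like values of the last m gradients (a sketch).

    Assumes a Fletcher-style construction: from g_{j+1} = g_j - alpha_j A g_j
    one gets A G = [G, g_new] J, so T ~ Q^T A Q is computable from the
    gradient window alone, without forming the Hessian A.
    """
    m = G.shape[1]
    Q, R = np.linalg.qr(G)                     # thin QR of the n x m window
    J = np.zeros((m + 1, m))                   # bidiagonal matrix of 1/alpha_j
    for j in range(m):
        J[j, j] = 1.0 / alphas[j]
        J[j + 1, j] = -1.0 / alphas[j]
    T = np.hstack([R, (Q.T @ g_new)[:, None]]) @ J @ np.linalg.inv(R)
    theta = np.linalg.eigvalsh((T + T.T) / 2)  # symmetrize: Ritz-like values
    theta = theta[::-1]                        # decreasing order
    # Non-positive values (gradient noise, nonconvexity) get clipped to a_max.
    return np.clip(1.0 / np.maximum(theta, 1e-12), a_min, a_max)

# Toy problem: binary classifier with a logistic loss on synthetic data.
rng = np.random.default_rng(0)
n_samples, n_feat = 1000, 20
X = rng.standard_normal((n_samples, n_feat))
y = np.sign(X @ rng.standard_normal(n_feat) + 0.1 * rng.standard_normal(n_samples))

def minibatch_grad(w, batch):
    Xb, yb = X[batch], y[batch]
    s = 1.0 / (1.0 + np.exp(yb * (Xb @ w)))    # sigmoid of -y * (x . w)
    return -(Xb * (yb * s)[:, None]).mean(axis=0)

m, a_min, a_max = 3, 1e-3, 1.0                 # window size and bounds (assumed)
w = np.zeros(n_feat)
pending = [1e-1] * m                           # steplengths for the first sweep (assumed)
G_hist, a_hist = [], []

for k in range(300):
    batch = rng.choice(n_samples, size=32, replace=False)
    g = minibatch_grad(w, batch)
    if len(G_hist) == m:                       # window full: compute the next sweep
        sweep = ritz_steplengths(np.column_stack(G_hist), a_hist, g, a_min, a_max)
        pending, G_hist, a_hist = list(sweep), [], []
    alpha = pending.pop(0)                     # one clamped steplength per iteration
    w -= alpha * g
    G_hist.append(g)
    a_hist.append(alpha)

loss = np.mean(np.log1p(np.exp(-y * (X @ w))))
print(f"final logistic loss: {loss:.4f}")
```

Note how the interval enters only through the final `np.clip`: this is the sense in which, per the abstract, the user tunes two bounds rather than a single optimal steplength, and a mild dependence on those bounds is what makes the tuning cheap.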
Files in this record:
  • main.pdf: Pre-print, Adobe PDF, 388.29 kB. Restricted access (archive managers only; copy available on request).
  • 978-3-030-39081-5 (2).pdf: Full text (publisher's version), Adobe PDF, 1.89 MB. Restricted access (archive managers only; copy available on request).

Documents in SFERA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11392/2413820
Citations
  • Scopus: 5
  • Web of Science: 1