Scholars Journal of Physics, Mathematics and Statistics | Volume-4 | Issue-04
Adaptive Learning Rate SGD Algorithm for SVM
Shuxia Lu, Zhao Jin
Published: Nov. 22, 2017
DOI: 10.21276/sjpms.2017.4.4.5
Pages: 178-184
Abstract
Stochastic gradient descent (SGD) is a simple and effective algorithm for
solving the optimization problem of the support vector machine, where each iteration
operates on a single training example. Because the run-time of SGD does not depend directly
on the size of the training set, the resulting algorithm is especially suited to learning
from large datasets. However, a difficulty with the stochastic gradient descent algorithm is
choosing a proper learning rate: a learning rate that is too small
leads to slow convergence, while one that is too large can hinder
convergence and cause the objective to fluctuate. In order to improve the efficiency and classification
ability of SVM based on the stochastic gradient descent algorithm, three
adaptive learning rate SGD algorithms, Adagrad, Adadelta, and Adam, are used to solve
the support vector machine. The experimental results show that the algorithms based
on Adagrad, Adadelta, and Adam for solving the linear support vector machine achieve
faster convergence and higher testing precision.
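To make the idea concrete, the following is a minimal sketch (not the paper's implementation) of single-example SGD for a linear SVM with an Adagrad-style per-coordinate learning rate. The hinge-loss subgradient, the regularization constant `lam`, the base step size `eta`, and the toy data are all illustrative assumptions.

```python
import numpy as np

def adagrad_svm(X, y, lam=0.01, eta=0.5, epochs=20, eps=1e-8, seed=0):
    """Train a linear SVM weight vector w on labels y in {-1, +1}
    via single-example SGD with Adagrad per-coordinate step sizes.
    Hyperparameters here are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    G = np.zeros(d)  # Adagrad: running sum of squared gradients per coordinate
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w)
            # Subgradient of (lam/2)*||w||^2 + max(0, 1 - y * w.x)
            g = lam * w - (y[i] * X[i] if margin < 1 else 0.0)
            G += g * g
            # Larger accumulated gradient -> smaller effective step size
            w -= eta * g / (np.sqrt(G) + eps)
    return w

# Toy linearly separable data for a quick sanity check.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
w = adagrad_svm(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

Adadelta and Adam follow the same loop, replacing the accumulator `G` with exponentially decaying averages of (squared) gradients, which avoids Adagrad's monotonically shrinking step size.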