Stochastic gradient descent
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data).[1] Especially in big-data applications, this reduces the computational burden, achieving faster iterations in exchange for a lower convergence rate.[2]
While the basic idea behind stochastic approximation can be traced back to the Robbins–Monro algorithm of the 1950s,[3] stochastic gradient descent has become an important optimization method in machine learning.[1]
Background
Both statistical estimation and machine learning consider the problem of minimizing an objective function that has the form of a sum:

Q(w) = \frac{1}{n} \sum_{i=1}^n Q_i(w),

where the parameter w that minimizes Q(w) is to be estimated. Each summand function Q_i is typically associated with the i-th observation in the data set (used for training).
In classical statistics, sum-minimization problems arise in least squares and in maximum-likelihood estimation (for independent observations). The general class of estimators that arise as minimizers of sums is called M-estimators. However, in statistics, it has long been recognized that requiring even local minimization is too restrictive for some problems of maximum-likelihood estimation.[4] Therefore, contemporary statistical theorists often consider stationary points of the likelihood function (or zeros of its derivative, the score function, and other estimating equations).
The sum-minimization problem also arises for empirical risk minimization. In this case, Q_i(w) is the value of the loss function at the i-th example, and Q(w) is the empirical risk.
When used to minimize the above function, a standard (or "batch") gradient descent method would perform the following iterations:

w := w - \eta \nabla Q(w) = w - \frac{\eta}{n} \sum_{i=1}^n \nabla Q_i(w),

where \eta is a step size (sometimes called the learning rate in machine learning).
In many cases, the summand functions have a simple form that enables inexpensive evaluations of the sum-function and the sum gradient. For example, in statistics, one-parameter exponential families allow economical function-evaluations and gradient-evaluations.
However, in other cases, evaluating the sum-gradient may require expensive evaluations of the gradients from all summand functions. When the training set is enormous and no simple formulas exist, evaluating the sums of gradients becomes very expensive, because evaluating the gradient requires evaluating all the summand functions' gradients. To economize on the computational cost at every iteration, stochastic gradient descent samples a subset of summand functions at every step. This is very effective in the case of large-scale machine learning problems.[5]
Iterative method
In stochastic (or "on-line") gradient descent, the true gradient of Q(w) is approximated by a gradient at a single example:

w := w - \eta \nabla Q_i(w).
As the algorithm sweeps through the training set, it performs the above update for each training example. Several passes can be made over the training set until the algorithm converges. If this is done, the data can be shuffled for each pass to prevent cycles. Typical implementations may use an adaptive learning rate so that the algorithm converges.
In pseudocode, stochastic gradient descent can be presented as follows:
- Choose an initial vector of parameters w and learning rate \eta.
- Repeat until an approximate minimum is obtained:
  - Randomly shuffle the examples in the training set.
  - For i = 1, 2, \ldots, n, do:
    - w := w - \eta \nabla Q_i(w).
A compromise between computing the true gradient and the gradient at a single example is to compute the gradient against more than one training example (called a "mini-batch") at each step. This can perform significantly better than "true" stochastic gradient descent as described above, because the code can make use of vectorization libraries rather than computing each step separately. It may also result in smoother convergence, as the gradient computed at each step is averaged over more training examples.
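For concreteness, the following is a minimal NumPy sketch of the procedure above, covering both the single-example case (batch_size=1) and mini-batches. The gradient callback, data arrays, and hyperparameter defaults are illustrative assumptions made for this sketch, not part of any standard implementation.

import numpy as np

def sgd(grad, w0, X, y, lr=0.01, n_epochs=10, batch_size=1, rng=None):
    """Minimize an empirical risk given a callback grad(w, X_batch, y_batch)."""
    rng = np.random.default_rng(rng)
    w = np.asarray(w0, dtype=float).copy()
    n = len(y)
    for _ in range(n_epochs):
        order = rng.permutation(n)                 # shuffle once per pass to prevent cycles
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            w -= lr * grad(w, X[idx], y[idx])      # step against the (mini-batch) gradient
    return w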
The convergence of stochastic gradient descent has been analyzed using the theories of convex minimization and of stochastic approximation. Briefly, when the learning rates decrease with an appropriate rate, and subject to relatively mild assumptions, stochastic gradient descent converges almost surely to a global minimum when the objective function is convex or pseudoconvex, and otherwise converges almost surely to a local minimum.[6][7] This is in fact a consequence of the Robbins-Siegmund theorem.[8]
Example
Let's suppose we want to fit a straight line \hat{y} = w_1 + w_2 x to a training set with observations (x_1, x_2, \ldots, x_n) and corresponding estimated responses (\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_n) using least squares. The objective function to be minimized is:

Q(w) = \sum_{i=1}^n Q_i(w) = \sum_{i=1}^n (\hat{y}_i - y_i)^2 = \sum_{i=1}^n (w_1 + w_2 x_i - y_i)^2.

The last line in the above pseudocode for this specific problem will become:

(w_1, w_2) := (w_1, w_2) - \eta \left( 2(w_1 + w_2 x_i - y_i),\; 2 x_i (w_1 + w_2 x_i - y_i) \right).

Note that in each iteration (also called an update), the gradient is evaluated only at a single point x_i instead of at the set of all samples.
The key difference compared to standard (Batch) Gradient Descent is that only one piece of data from the dataset is used to calculate the step, and the piece of data is picked randomly at each step.
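A self-contained sketch of this straight-line example follows; the synthetic data, random seed, step size, and number of passes are illustrative choices made only for demonstration.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = 1.5 + 3.0 * x + rng.normal(scale=0.1, size=200)   # noisy line: intercept 1.5, slope 3.0

w1, w2, eta = 0.0, 0.0, 0.05
for _ in range(50):                       # passes over the training set
    for i in rng.permutation(len(x)):     # shuffle each pass
        r = w1 + w2 * x[i] - y[i]         # residual at the sampled point
        w1 -= eta * 2 * r                 # gradient of (w1 + w2*x_i - y_i)^2 w.r.t. w1
        w2 -= eta * 2 * r * x[i]          # gradient w.r.t. w2

print(w1, w2)                             # should end up close to (1.5, 3.0)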
Notable applications
Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models.[9] When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.[10] Its use has also been reported in the geophysics community, specifically for applications of Full Waveform Inversion (FWI).[11]
Stochastic gradient descent competes with the L-BFGS algorithm,[citation needed] which is also widely used. Stochastic gradient descent has been used since at least 1960 for training linear regression models, originally under the name ADALINE.[12]
Another stochastic gradient descent algorithm is the least mean squares (LMS) adaptive filter.
Extensions and variants
Many improvements on the basic stochastic gradient descent algorithm have been proposed and used. In particular, in machine learning, the need to set a learning rate (step size) has been recognized as problematic. Setting this parameter too high can cause the algorithm to diverge;[citation needed] setting it too low makes it slow to converge.[citation needed] A conceptually simple extension of stochastic gradient descent makes the learning rate a decreasing function ηt of the iteration number t, giving a learning rate schedule, so that the first iterations cause large changes in the parameters, while the later ones do only fine-tuning. Such schedules have been known since the work of MacQueen on k-means clustering.[13] Some practical guidance on choosing the step size in several variants of SGD is given in Sects. 4.4, 6.6, and 7.5 of Spall (2003).[14]
Implicit updates (ISGD)
As mentioned earlier, classical stochastic gradient descent is generally sensitive to the learning rate η. Fast convergence requires large learning rates, but this may induce numerical instability. The problem can be largely solved[15] by considering implicit updates, whereby the stochastic gradient is evaluated at the next iterate rather than the current one:

w^{new} := w^{old} - \eta \nabla Q_i(w^{new}).

This equation is implicit since w^{new} appears on both sides. It is a stochastic form of the proximal gradient method since the update can also be written as:

w^{new} := \arg\min_w \left\{ Q_i(w) + \frac{1}{2\eta} \Vert w - w^{old} \Vert^2 \right\}.
As an example, consider least squares with features x_1, \ldots, x_n \in \mathbb{R}^p and observations y_1, \ldots, y_n \in \mathbb{R}. We wish to solve:

\min_w \sum_{j=1}^n (y_j - x_j' w)^2,

where x_j' w = x_{j1} w_1 + x_{j2} w_2 + \cdots + x_{jp} w_p indicates the inner product. Note that x_j could have "1" as its first element in order to include an intercept. Classical stochastic gradient descent proceeds as follows:

w^{new} = w^{old} + \eta (y_i - x_i' w^{old}) x_i,

where i is uniformly sampled between 1 and n. Although theoretical convergence of this procedure happens under relatively mild assumptions, in practice the procedure can be quite unstable. In particular, when η is misspecified so that I - \eta x_i x_i' has large absolute eigenvalues with high probability, the procedure may diverge numerically within a few iterations. In contrast, implicit stochastic gradient descent (shortened as ISGD) can be solved in closed form as:

w^{new} = w^{old} + \frac{\eta}{1 + \eta \Vert x_i \Vert^2} (y_i - x_i' w^{old}) x_i.

This procedure will remain numerically stable virtually for all η, as the learning rate is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and normalized least mean squares (NLMS) filters.
Even though a closed-form solution for ISGD is only possible in least squares, the procedure can be efficiently implemented in a wide range of models. Specifically, suppose that Q_i(w) depends on w only through a linear combination with features x_i, so that we can write \nabla_w Q_i(w) = -q(x_i' w) x_i, where q() \in \mathbb{R} may depend on x_i, y_i as well, but not on w except through x_i' w. Least squares obeys this rule, and so do logistic regression and most generalized linear models. For instance, in least squares, q(x_i' w) = y_i - x_i' w, and in logistic regression q(x_i' w) = y_i - S(x_i' w), where S(u) = e^u / (1 + e^u) is the logistic function. In Poisson regression, q(x_i' w) = y_i - e^{x_i' w}, and so on.

In such settings, ISGD is simply implemented as follows. Let f(\xi) = \eta\, q(x_i' w^{old} + \xi \Vert x_i \Vert^2), where \xi is scalar. Then, ISGD is equivalent to:

w^{new} = w^{old} + \xi^* x_i, where \xi^* = f(\xi^*).

The scaling factor \xi^* \in \mathbb{R} can be found through the bisection method since, in most regular models such as the aforementioned generalized linear models, the function q() is decreasing, and thus the search bounds for \xi^* are [\min(0, f(0)), \max(0, f(0))].
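To make the fixed-point search concrete, here is a hedged NumPy sketch of ISGD for logistic regression with labels in {0, 1}. The learning rate, epoch count, tolerance, and helper names are illustrative choices for this sketch, not a reference implementation.

import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

def isgd_logistic(X, y, lr=0.5, n_epochs=10, tol=1e-10, rng=None):
    """Implicit SGD for logistic regression: each step solves xi = f(xi) by bisection."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            xi_vec, yi = X[i], y[i]
            norm2 = xi_vec @ xi_vec
            f = lambda s: lr * (yi - logistic(xi_vec @ w + s * norm2))
            lo, hi = min(0.0, f(0.0)), max(0.0, f(0.0))   # bounds bracketing the fixed point
            while hi - lo > tol:                          # bisection on g(s) = s - f(s), increasing
                mid = 0.5 * (lo + hi)
                if mid - f(mid) < 0.0:
                    lo = mid
                else:
                    hi = mid
            w = w + 0.5 * (lo + hi) * xi_vec              # implicit update w_new = w_old + xi* x_i
    return w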
Momentum
Further proposals include the momentum method, which appeared in Rumelhart, Hinton and Williams' seminal paper on backpropagation learning.[16] Stochastic gradient descent with momentum remembers the update Δw at each iteration, and determines the next update as a linear combination of the gradient and the previous update:[17][18]

\Delta w := \alpha \Delta w - \eta \nabla Q_i(w)

w := w + \Delta w,

which leads to:

w := w - \eta \nabla Q_i(w) + \alpha \Delta w,

where the parameter w which minimizes Q(w) is to be estimated, \eta is a step size (sometimes called the learning rate in machine learning)[clarification needed], and \alpha is the momentum factor, a constant between 0 and 1 that weights the contribution of the previous update.
The name momentum stems from an analogy to momentum in physics: the weight vector w, thought of as a particle traveling through parameter space,[16] incurs acceleration from the gradient of the loss ("force"). Unlike in classical stochastic gradient descent, it tends to keep traveling in the same direction, preventing oscillations. Momentum has been used successfully by computer scientists in the training of artificial neural networks for several decades.[19]
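A minimal sketch of SGD with momentum follows, assuming a user-supplied stochastic gradient callback sgrad(w, i) for the i-th example; the default step size and momentum factor are illustrative.

import numpy as np

def sgd_momentum(sgrad, w0, n_samples, lr=0.01, alpha=0.9, n_steps=10_000, rng=None):
    """SGD with momentum: each update mixes the previous update with the new gradient."""
    rng = np.random.default_rng(rng)
    w = np.asarray(w0, dtype=float).copy()
    delta = np.zeros_like(w)
    for _ in range(n_steps):
        g = sgrad(w, rng.integers(n_samples))   # stochastic gradient at a random example
        delta = alpha * delta - lr * g          # Delta_w := alpha * Delta_w - eta * gradient
        w = w + delta
    return w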
Averaging
Averaged stochastic gradient descent, invented independently by Ruppert and Polyak in the late 1980s, is ordinary stochastic gradient descent that records an average of its parameter vector over time. That is, the update is the same as for ordinary stochastic gradient descent, but the algorithm also keeps track of[20]
\bar{w} = \frac{1}{t} \sum_{i=0}^{t-1} w_i.
When optimization is done, this averaged parameter vector takes the place of w.
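Assuming the same kind of stochastic gradient callback as above, averaging can be layered on top of plain SGD roughly as follows; the step count and learning rate are illustrative assumptions.

import numpy as np

def averaged_sgd(sgrad, w0, n_samples, lr=0.01, n_steps=10_000, rng=None):
    """Plain SGD that also maintains the running average of its iterates (Ruppert-Polyak averaging)."""
    rng = np.random.default_rng(rng)
    w = np.asarray(w0, dtype=float).copy()
    w_bar = w.copy()
    for t in range(1, n_steps + 1):
        i = rng.integers(n_samples)        # pick a random example
        w = w - lr * sgrad(w, i)           # ordinary SGD update
        w_bar += (w - w_bar) / t           # running average of all iterates
    return w_bar                           # the averaged vector replaces w when optimization is done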
AdaGrad
AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011.[21][22] Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative. Examples of such applications include natural language processing and image recognition.[21] It still has a base learning rate η, but this is multiplied with the elements of a vector {G_{j,j}}, which is the diagonal of the outer product matrix

G = \sum_{\tau=1}^t g_\tau g_\tau^\mathsf{T},

where g_\tau = \nabla Q_i(w) is the gradient at iteration τ. The diagonal is given by

G_{j,j} = \sum_{\tau=1}^t g_{\tau,j}^2.

This vector is updated after every iteration. The formula for an update is now

w := w - \eta\, \mathrm{diag}(G)^{-1/2} \odot g,

or, written as per-parameter updates,

w_j := w_j - \frac{\eta}{\sqrt{G_{j,j}}} g_j.

Each {G_{(i,i)}} gives rise to a scaling factor for the learning rate that applies to a single parameter w_i. Since the denominator in this factor, \sqrt{G_i} = \sqrt{\sum_{\tau=1}^t g_\tau^2}, is the ℓ2 norm of previous derivatives, extreme parameter updates get dampened, while parameters that get few or small updates receive higher learning rates.[19]
While designed for convex problems, AdaGrad has been successfully applied to non-convex optimization.[23]
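A hedged sketch of the per-parameter AdaGrad update is given below; the small constant eps added to the denominator is a common numerical safeguard and an assumption of this sketch, not part of the formulas above. As before, sgrad(w, i) stands for a user-supplied stochastic gradient.

import numpy as np

def adagrad(sgrad, w0, n_samples, lr=0.1, n_steps=10_000, eps=1e-8, rng=None):
    """AdaGrad: scale each coordinate's step by the root of its accumulated squared gradients."""
    rng = np.random.default_rng(rng)
    w = np.asarray(w0, dtype=float).copy()
    G = np.zeros_like(w)                       # running sum of squared gradients, per parameter
    for _ in range(n_steps):
        g = sgrad(w, rng.integers(n_samples))  # stochastic gradient at a random example
        G += g * g
        w -= lr * g / (np.sqrt(G) + eps)       # per-parameter learning rate
    return w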
RMSProp
RMSProp (for Root Mean Square Propagation) is also a method in which the learning rate is adapted for each of the parameters. The idea is to divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight.[24] So, first the running average is calculated in terms of the mean square,

v(w, t) := \gamma\, v(w, t-1) + (1 - \gamma) (\nabla Q_i(w))^2,

where γ is the forgetting factor. The parameters are then updated as

w := w - \frac{\eta}{\sqrt{v(w,t)}} \nabla Q_i(w).
RMSProp has shown excellent adaptation of learning rate in different applications. RMSProp can be seen as a generalization of Rprop and is capable of working with mini-batches as well, as opposed to only full batches.[25]
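A minimal sketch of the RMSProp update follows; the forgetting factor gamma = 0.9 and the stabilizing constant eps are illustrative choices, and sgrad(w, i) is again an assumed stochastic gradient callback.

import numpy as np

def rmsprop(sgrad, w0, n_samples, lr=0.001, gamma=0.9, n_steps=10_000, eps=1e-8, rng=None):
    """RMSProp: divide each coordinate's step by a running RMS of its recent gradients."""
    rng = np.random.default_rng(rng)
    w = np.asarray(w0, dtype=float).copy()
    v = np.zeros_like(w)                        # running average of squared gradients
    for _ in range(n_steps):
        g = sgrad(w, rng.integers(n_samples))
        v = gamma * v + (1.0 - gamma) * g * g   # forgetting-factor update of the mean square
        w -= lr * g / (np.sqrt(v) + eps)
    return w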
Adam
Adam[26] (short for Adaptive Moment Estimation) is an update to the RMSProp optimizer. In this optimization algorithm, running averages of both the gradients and the second moments of the gradients are used. Given parameters w^{(t)} and a loss function L^{(t)}, where t indexes the current training iteration (indexed at 0), Adam's parameter update is given by:

m_w^{(t+1)} := \beta_1 m_w^{(t)} + (1 - \beta_1) \nabla_w L^{(t)}

v_w^{(t+1)} := \beta_2 v_w^{(t)} + (1 - \beta_2) (\nabla_w L^{(t)})^2

\hat{m}_w = \frac{m_w^{(t+1)}}{1 - \beta_1^{t+1}}

\hat{v}_w = \frac{v_w^{(t+1)}}{1 - \beta_2^{t+1}}

w^{(t+1)} := w^{(t)} - \eta \frac{\hat{m}_w}{\sqrt{\hat{v}_w} + \epsilon},
where \epsilon is a small scalar used to prevent division by 0, and \beta_1 and \beta_2 are the forgetting factors for gradients and second moments of gradients, respectively. Squaring and square-rooting is done elementwise.
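A hedged NumPy sketch of these updates is shown below; the defaults beta1 = 0.9, beta2 = 0.999 and eps = 1e-8 follow common practice but are assumptions of this sketch, and sgrad(w, i) is again a user-supplied stochastic gradient.

import numpy as np

def adam(sgrad, w0, n_samples, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8,
         n_steps=10_000, rng=None):
    """Adam: bias-corrected running averages of the gradient and its elementwise square."""
    rng = np.random.default_rng(rng)
    w = np.asarray(w0, dtype=float).copy()
    m = np.zeros_like(w)                         # first-moment (gradient) average
    v = np.zeros_like(w)                         # second-moment (squared-gradient) average
    for t in range(1, n_steps + 1):
        g = sgrad(w, rng.integers(n_samples))
        m = beta1 * m + (1.0 - beta1) * g
        v = beta2 * v + (1.0 - beta2) * g * g    # squaring is elementwise
        m_hat = m / (1.0 - beta1 ** t)           # bias correction of the first moment
        v_hat = v / (1.0 - beta2 ** t)           # bias correction of the second moment
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w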
Natural Gradient Descent and kSGD

Kalman-based Stochastic Gradient Descent (kSGD) is an online and offline algorithm for learning parameters of statistical models from the quasi-likelihood family, which includes linear models, non-linear models, generalized linear models, and neural networks with squared-error loss as special cases (Patel 2016, SIAM Journal on Optimization 26(4): 2620–2648). For online learning problems, kSGD is a special case of the Kalman filter for linear regression problems, a special case of the extended Kalman filter for non-linear regression problems, and can be viewed as an incremental Gauss-Newton method. Moreover, because of kSGD's relationship to the Kalman filter and natural gradient descent's relationship to the Kalman filter (Cichocki, Chen & Amari 1997; Ollivier 2017), kSGD is a rigorous improvement over the popular natural gradient descent method.

The benefits of kSGD, in comparison to other methods, are (1) it is not sensitive to the condition number of the problem, (2) it has a robust choice of hyperparameters, and (3) it has a stopping condition. (For the linear regression problem, kSGD's objective-function discrepancy, i.e. the total of bias and variance, at iteration k is \frac{1+\epsilon}{k} p \sigma^2 with probability converging to 1 at a rate depending on \epsilon \in (0,1), where \sigma^2 is the variance of the residuals; moreover, for specific choices of \gamma_1, \gamma_2, kSGD's objective-function bias at iteration k can be shown to be \frac{(1+\epsilon)^2}{2k^2} \Vert w(0) - w_* \Vert_2^2, where w_* is the optimal parameter.) The drawbacks of kSGD are that it requires storing a dense covariance matrix between iterations and that it requires a matrix-vector product at each iteration.

To describe the algorithm, suppose Q_i(w), where w \in \mathbb{R}^p, is defined by an example (Y_i, X_i) \in \mathbb{R} \times \mathbb{R}^d such that

\nabla_w Q_i(w) = \frac{Y_i - \mu(X_i,w)}{V(\mu(X_i,w))} \nabla_w \mu(X_i,w),

where \mu(X_i,w) is the mean function (i.e. the expected value of Y_i given X_i) and V(\mu(X_i,w)) is the variance function (i.e. the variance of Y_i given X_i). Then the parameter update w(t+1) and covariance matrix update M(t+1) are given by the following:

p = \nabla_w \mu(X_{t+1}, w(t))

m = \mu(X_{t+1}, w(t))

v = M(t) p

s = \min\{\gamma_1, \max\{\gamma_2, V(m)\}\} + v^\mathsf{T} p

w(t+1) = w(t) + \frac{Y_{t+1} - m}{s} v

M(t+1) = M(t) - \frac{1}{s} v v^\mathsf{T},

where \gamma_1, \gamma_2 are hyperparameters. The M(t) update can result in the covariance matrix becoming indefinite, which can be avoided at the cost of a matrix-matrix multiplication. M(0) can be any positive-definite symmetric matrix, but is typically taken to be the identity. As noted by Patel (2016), for all problems besides linear regression, restarts are required to ensure convergence of the algorithm, but no theoretical or implementation details were given. In a closely related, off-line, mini-batch method for non-linear regression analyzed by Bertsekas (1996, SIAM Journal on Optimization 6(3): 807–822), a forgetting factor was used in the covariance matrix update to prove convergence.

Second-Order Methods

It is known that a stochastic analogue of the standard (deterministic) Newton–Raphson algorithm (a "second-order" method) provides an asymptotically optimal or near-optimal form of iterative optimization in the setting of stochastic approximation. A method that uses direct measurements of the Hessian matrices of the summands in the empirical risk function was developed by Byrd, Hansen, Nocedal, and Singer.[27] However, directly determining the required Hessian matrices for optimization may not be possible in practice. Practical and theoretically sound methods for second-order versions of SGD that do not require direct Hessian information are given by Spall and others.[28][29][30] (A less efficient method based on finite differences, instead of simultaneous perturbations, is given by Ruppert.[31]) These methods not requiring direct Hessian information are based on either values of the summands in the above empirical risk function or values of the gradients of the summands (i.e., the SGD inputs). In particular, second-order optimality is asymptotically achievable without direct calculation of the Hessian matrices of the summands in the empirical risk function.
Notes
- ^ \odot is the element-wise product.
See also
- Coordinate descent – changes one coordinate at a time, rather than one example
- Linear classifier
- Online machine learning
- Stochastic hill climbing
References
- ^ a b Taddy, Matt (2019). "Stochastic Gradient Descent". Business Data Science: Combining Machine Learning and Economics to Optimize, Automate, and Accelerate Business Decisions. New York: McGraw-Hill. pp. 303–307. ISBN 978-1-260-45277-8.
- ^ Bottou, Léon; Bousquet, Olivier (2012). "The Tradeoffs of Large Scale Learning". In Sra, Suvrit; Nowozin, Sebastian; Wright, Stephen J. (eds.). Optimization for Machine Learning. Cambridge: MIT Press. pp. 351–368. ISBN 978-0-262-01646-9.
- ^ Mei, Song (2018). "A mean field view of the landscape of two-layer neural networks". Proceedings of the National Academy of Sciences. 115 (33): E7665–E7671. arXiv:1804.06561. Bibcode:2018arXiv180406561M. doi:10.1073/pnas.1806579115. PMC 6099898. PMID 30054315.
- ^ Ferguson, Thomas S. (1982). "An inconsistent maximum likelihood estimate". Journal of the American Statistical Association. 77 (380): 831–834. doi:10.1080/01621459.1982.10477894. JSTOR 2287314.
- ^ Bottou, Léon; Bousquet, Olivier (2008). The Tradeoffs of Large Scale Learning. Advances in Neural Information Processing Systems. Vol. 20. pp. 161–168.
- ^ Bottou, Léon (1998). "Online Algorithms and Stochastic Approximations". Online Learning and Neural Networks. Cambridge University Press. ISBN 978-0-521-65263-6.
- ^ Kiwiel, Krzysztof C. (2001). "Convergence and efficiency of subgradient methods for quasiconvex minimization". Mathematical Programming, Series A. Vol. 90, no. 1. Berlin, Heidelberg: Springer. pp. 1–25. doi:10.1007/PL00011414. ISSN 0025-5610. MR 1819784.
- ^ Robbins, Herbert; Siegmund, David O. (1971). "A convergence theorem for non negative almost supermartingales and some applications". In Rustagi, Jagdish S. (ed.). Optimizing Methods in Statistics. Academic Press.
- ^ Jenny Rose Finkel, Alex Kleeman, Christopher D. Manning (2008). Efficient, Feature-based, Conditional Random Field Parsing. Proc. Annual Meeting of the ACL.
- ^ LeCun, Yann A., et al. "Efficient backprop." Neural networks: Tricks of the trade. Springer Berlin Heidelberg, 2012. 9-48
- ^ Díaz, Esteban and Guitton, Antoine. "Fast full waveform inversion with random shot decimation". SEG Technical Program Expanded Abstracts, 2011. 2804-2808
- ^ Avi Pfeffer. "CS181 Lecture 5 — Perceptrons" (PDF). Harvard University.[permanent dead link ]
- ^ Cited by Darken, Christian; Moody, John (1990). Fast adaptive k-means clustering: some empirical results. Int'l Joint Conf. on Neural Networks (IJCNN). IEEE. doi:10.1109/IJCNN.1990.137720.
- ^ Spall, J. C. (2003). Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. Hoboken, NJ: Wiley. ISBN 0-471-33052-3.
- ^ Toulis, Panos; Airoldi, Edoardo (2017). "Asymptotic and finite-sample properties of estimators based on stochastic gradients". Annals of Statistics. 45 (4): 1694–1727. arXiv:1408.2923. doi:10.1214/16-AOS1506.
- ^ a b Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (8 October 1986). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. Bibcode:1986Natur.323..533R. doi:10.1038/323533a0.
- ^ Sutskever, Ilya; Martens, James; Dahl, George; Hinton, Geoffrey E. (June 2013). Sanjoy Dasgupta and David Mcallester (ed.). On the importance of initialization and momentum in deep learning (PDF). In Proceedings of the 30th international conference on machine learning (ICML-13). Vol. 28. Atlanta, GA. pp. 1139–1147. Retrieved 14 January 2016.
- ^ Sutskever, Ilya (2013). Training recurrent neural networks (PDF) (Ph.D.). University of Toronto. p. 74.
- ^ a b Zeiler, Matthew D. (2012). "ADADELTA: An adaptive learning rate method". arXiv:1212.5701 [cs.LG].
- ^ Polyak, Boris T.; Juditsky, Anatoli B. (1992). "Acceleration of stochastic approximation by averaging" (PDF). SIAM J. Control Optim. 30 (4): 838–855. doi:10.1137/0330046.
- ^ a b Duchi, John; Hazan, Elad; Singer, Yoram (2011). "Adaptive subgradient methods for online learning and stochastic optimization" (PDF). JMLR. 12: 2121–2159.
- ^ Perla, Joseph (2014). "Notes on AdaGrad" (PDF). Archived from the original (PDF) on 2015-03-30.
- ^ Gupta, Maya R.; Bengio, Samy; Weston, Jason (2014). "Training highly multiclass classifiers" (PDF). JMLR. 15 (1): 1461–1492.
- ^ Tieleman, Tijmen and Hinton, Geoffrey (2012). Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning
- ^ Hinton, Geoffrey. "Overview of mini-batch gradient descent" (PDF). pp. 27–29. Retrieved 27 September 2016.
- ^ Diederik, Kingma; Ba, Jimmy (2014). "Adam: A method for stochastic optimization". arXiv:1412.6980 [cs.LG].
- ^ Byrd, R. H.; Hansen, S. L.; Nocedal, J.; Singer, Y. (2016). "A Stochastic Quasi-Newton method for Large-Scale Optimization". SIAM Journal on Optimization. 26 (2): 1008–1031. arXiv:1401.7020. doi:10.1137/140954362.
- ^ Spall, J. C. (2000). "Adaptive Stochastic Approximation by the Simultaneous Perturbation Method". IEEE Transactions on Automatic Control. 45 (10): 1839−1853. doi:10.1109/TAC.2000.880982.
- ^ Spall, J. C. (2009). "Feedback and Weighting Mechanisms for Improving Jacobian Estimates in the Adaptive Simultaneous Perturbation Algorithm". IEEE Transactions on Automatic Control. 54 (6): 1216–1229. doi:10.1109/TAC.2009.2019793.
- ^ Bhatnagar, S.; Prasad, H. L.; Prashanth, L. A. (2013). Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods. London: Springer. ISBN 978-1-4471-4284-3.
- ^ Ruppert, D. (1985). "A Newton-Raphson Version of the Multivariate Robbins-Monro Procedure". Annals of Statistics. 13 (1): 236–245. doi:10.1214/aos/1176346589.
Further reading
- Bertsekas, Dimitri P. (1999), Nonlinear Programming (2nd ed.), Cambridge, MA.: Athena Scientific, ISBN 978-1-886529-00-7.
- Bertsekas, Dimitri (2003), Convex Analysis and Optimization, Athena Scientific.
- Bottou, Léon (2004), "Stochastic Learning", Advanced Lectures on Machine Learning, LNAI, vol. 3176, Springer, pp. 146–168, ISBN 978-3-540-23122-6.
- Davidon, W.C. (1976), "New least-square algorithms", Journal of Optimization Theory and Applications, 18 (2): 187–197, doi:10.1007/BF00935703, MR 0418461.
- Duda, Richard O.; Hart, Peter E.; Stork, David G. (2000), Pattern Classification (2nd ed.), Wiley, ISBN 978-0-471-05669-0.
- Kiwiel, Krzysztof C. (2004), "Convergence of approximate and incremental subgradient methods for convex optimization", SIAM Journal on Optimization, 14 (3): 807–840, doi:10.1137/S1052623400376366, MR 2085944. (Extensive list of references)
- Snyman, Jan A.; Wilke, Daniel N. (2018), Practical Mathematical Optimization - Basic Optimization Theory and Gradient-Based Algorithms, Springer Optimization and Its Applications Vol. 133 (2 ed.), Springer, pp. xxvi+372, ISBN 978-3-319-77585-2. (Python module pmo.py)
- Spall, James C. (2003), Introduction to Stochastic Search and Optimization, Wiley, ISBN 978-0-471-33052-3.
External links
- Using stochastic gradient descent in C++, Boost, Ublas for linear regression
- Machine Learning Algorithms
- "Gradient Descent, How Neural Networks Learn". 3Blue1Brown. October 16, 2017 – via YouTube.