Backtracking line search
In (unconstrained) mathematical optimization, a backtracking line search is a line search method to determine the amount to move along a given search direction. Its use requires that the objective function is differentiable and that its gradient is known.
The method involves starting with a relatively large estimate of the step size for movement along the line search direction, and iteratively shrinking the step size (i.e., "backtracking") until a decrease of the objective function is observed that adequately corresponds to the amount of decrease that is expected, based on the step size and the local gradient of the objective function. A common stopping criterion is the Armijo–Goldstein condition.
Backtracking line search is typically used for gradient descent (GD), but it can also be used in other contexts. For example, it can be used with Newton's method if the Hessian matrix is positive definite.
Motivation
Given a starting position $\mathbf{x}$ and a search direction $\mathbf{p}$, the task of a line search is to determine a step size $\alpha > 0$ that adequately reduces the objective function $f : \mathbb{R}^n \to \mathbb{R}$ (assumed $C^1$, i.e. continuously differentiable), i.e., to find a value of $\alpha$ that reduces $f(\mathbf{x} + \alpha\,\mathbf{p})$ relative to $f(\mathbf{x})$. However, it is usually undesirable to devote substantial resources to finding a value of $\alpha$ to precisely minimize $f$. This is because the computing resources needed to find a more precise minimum along one particular direction could instead be employed to identify a better search direction. Once an improved starting point has been identified by the line search, another subsequent line search will ordinarily be performed in a new direction. The goal, then, is just to identify a value of $\alpha$ that provides a reasonable amount of improvement in the objective function, rather than to find the actual minimizing value of $\alpha$.
The backtracking line search starts with a large estimate of $\alpha$ and iteratively shrinks it. The shrinking continues until a value is found that is small enough to provide a decrease in the objective function that adequately matches the decrease that is expected to be achieved, based on the local function gradient $\nabla f(\mathbf{x})$.
Define the local slope of the function $\alpha \mapsto f(\mathbf{x} + \alpha\,\mathbf{p})$ along the search direction $\mathbf{p}$ as $m = \mathbf{p}^{\mathrm{T}}\,\nabla f(\mathbf{x})$ (where $\mathbf{p}^{\mathrm{T}}\,\nabla f(\mathbf{x})$ denotes the dot product of $\mathbf{p}$ and $\nabla f(\mathbf{x})$). It is assumed that $\mathbf{p}$ is a vector for which some local decrease is possible, i.e., it is assumed that $m < 0$.
Based on a selected control parameter $c \in (0,1)$, the Armijo–Goldstein condition tests whether a step-wise movement from a current position $\mathbf{x}$ to a modified position $\mathbf{x} + \alpha\,\mathbf{p}$ achieves an adequately corresponding decrease in the objective function. The condition is fulfilled, see Armijo (1966), if

$$f(\mathbf{x} + \alpha\,\mathbf{p}) \leq f(\mathbf{x}) + \alpha\,c\,m\,.$$
This condition, when used appropriately as part of a line search, can ensure that the step size is not excessively large. However, this condition is not sufficient on its own to ensure that the step size is nearly optimal, since any value of $\alpha$ that is sufficiently small will satisfy the condition.
Thus, the backtracking line search strategy starts with a relatively large step size, and repeatedly shrinks it by a factor $\tau \in (0,1)$ until the Armijo–Goldstein condition is fulfilled.
The search will terminate after a finite number of steps for any positive values of $c$ and $\tau$ that are less than 1. For example, Armijo used 1/2 for both $c$ and $\tau$ in Armijo (1966).
Algorithm
This condition is from Armijo (1966). Starting with a maximum candidate step size value $\alpha_0 > 0$, using search control parameters $\tau \in (0,1)$ and $c \in (0,1)$, the backtracking line search algorithm can be expressed as follows:
- Set $t = -c\,m$ and iteration counter $j = 0$.
- Until the condition is satisfied that $f(\mathbf{x}) - f(\mathbf{x} + \alpha_j\,\mathbf{p}) \geq \alpha_j\,t$, repeatedly increment $j$ and set $\alpha_j = \tau\,\alpha_{j-1}$.
- Return $\alpha_j$ as the solution.
In other words, reduce $\alpha$ by a factor of $\tau$ in each iteration until the Armijo–Goldstein condition is fulfilled.
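The following is a minimal Python sketch of the algorithm above (illustrative only; the function and parameter names are not from the cited sources):

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, tau=0.5, c=0.5):
    """Return a step size alpha satisfying the Armijo-Goldstein condition.

    f      : objective function, f(x) -> float
    grad_f : gradient of f, grad_f(x) -> ndarray
    x      : current position (ndarray)
    p      : search direction; must satisfy p . grad_f(x) < 0
    alpha0 : maximum candidate step size
    tau, c : search control parameters in (0, 1)
    """
    m = np.dot(p, grad_f(x))  # local slope along p
    if m >= 0:
        raise ValueError("p is not a descent direction")
    t = -c * m
    alpha = alpha0
    # Shrink alpha until f(x) - f(x + alpha p) >= alpha t (Armijo-Goldstein).
    while f(x) - f(x + alpha * p) < alpha * t:
        alpha *= tau
    return alpha
```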
Function minimization using backtracking line search in practice
In practice, the above algorithm is typically iterated to produce a sequence $\mathbf{x}_n$, $n = 1, 2, \ldots$, converging to a minimum, provided such a minimum exists and $\mathbf{p}_n$ is selected appropriately in each step. For gradient descent, $\mathbf{p}_n$ is selected as $-\nabla f(\mathbf{x}_n)$.
The value of $\alpha_j$ for the $j$ that fulfills the Armijo–Goldstein condition depends on $\mathbf{x}$ and $\mathbf{p}$, and is thus denoted below by $\alpha(\mathbf{x}, \mathbf{p})$. It also depends on $f$, $\alpha_0$, $\tau$ and $c$, of course, although these dependencies can be left implicit if they are assumed to be fixed with respect to the optimization problem.
The detailed steps are thus, see Armijo (1966), Bertsekas (2016):
- Choose an initial starting point $\mathbf{x}_0$ and set the iteration counter $n = 0$.
- Until some stopping condition is satisfied, choose a descent direction $\mathbf{p}_n$, update the position to $\mathbf{x}_{n+1} = \mathbf{x}_n + \alpha(\mathbf{x}_n, \mathbf{p}_n)\,\mathbf{p}_n$, and increment $n$.
- Return $\mathbf{x}_n$ as the minimizing position and $f(\mathbf{x}_n)$ as the function minimum.
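As a sketch of this outer loop for gradient descent (reusing the hypothetical backtracking_line_search from above; the stopping tolerance is an illustrative choice):

```python
def minimize_gd_backtracking(f, grad_f, x0, alpha0=1.0, tau=0.5, c=0.5,
                             tol=1e-8, max_iter=10000):
    """Gradient descent in which each step size comes from backtracking."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:  # stopping condition
            break
        p = -g  # descent direction for gradient descent
        alpha = backtracking_line_search(f, grad_f, x, p, alpha0, tau, c)
        x = x + alpha * p
    return x, f(x)

# Example: minimize f(x, y) = x^2 + 4 y^2 starting from (3, 2).
f = lambda v: v[0]**2 + 4 * v[1]**2
grad_f = lambda v: np.array([2 * v[0], 8 * v[1]])
x_min, f_min = minimize_gd_backtracking(f, grad_f, [3.0, 2.0])
```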
To assure good behavior, it is necessary that some conditions must be satisfied by $\mathbf{p}_n$. Roughly speaking, $\mathbf{p}_n$ should not be too far away from $-\nabla f(\mathbf{x}_n)$. A precise version is as follows (see e.g. Bertsekas (2016)). There are constants $C_1, C_2 > 0$ so that the following two conditions are satisfied:
- For all $n$, $\|\mathbf{p}_n\| \geq C_1\,\|\nabla f(\mathbf{x}_n)\|$. Here, $\|\mathbf{y}\|$ is the Euclidean norm of $\mathbf{y}$. (This assures that if $\mathbf{p}_n = 0$, then also $\nabla f(\mathbf{x}_n) = 0$. More generally, if $\lim_{n \to \infty} \mathbf{p}_n = 0$, then also $\lim_{n \to \infty} \nabla f(\mathbf{x}_n) = 0$.) A more strict version requires also the reverse inequality: $\|\mathbf{p}_n\| \leq C_3\,\|\nabla f(\mathbf{x}_n)\|$ for a positive constant $C_3$.
- For all $n$, $\langle \mathbf{p}_n, \nabla f(\mathbf{x}_n) \rangle \leq -C_2\,\|\mathbf{p}_n\|\,\|\nabla f(\mathbf{x}_n)\|$. (This condition ensures that the directions of $\mathbf{p}_n$ and $-\nabla f(\mathbf{x}_n)$ are similar.)
Lower bound for learning rates
This addresses the question whether there is a systematic way to find a positive number $\beta(\mathbf{x}, \mathbf{p})$, depending on the function $f$, the point $\mathbf{x}$ and the descent direction $\mathbf{p}$, so that all learning rates $\alpha \leq \beta(\mathbf{x}, \mathbf{p})$ satisfy Armijo's condition. When $\mathbf{p} = -\nabla f(\mathbf{x})$, we can choose $\beta(\mathbf{x}, \mathbf{p})$ in the order of $1/L(\mathbf{x})$, where $L(\mathbf{x})$ is a local Lipschitz constant for the gradient near the point $\mathbf{x}$ (see Lipschitz continuity). If the function is $C^2$, then $L(\mathbf{x})$ is close to the norm of the Hessian of the function at the point $\mathbf{x}$. See Armijo (1966) for more detail.
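To see why step sizes of this order work, one can use the standard descent lemma (a short derivation, not from the cited sources): if $\nabla f$ is Lipschitz with constant $L$ near $\mathbf{x}$, then for $\mathbf{p} = -\nabla f(\mathbf{x})$,

$$f(\mathbf{x} - \alpha\,\nabla f(\mathbf{x})) \leq f(\mathbf{x}) - \alpha\left(1 - \tfrac{L\alpha}{2}\right)\|\nabla f(\mathbf{x})\|^2\,,$$

while Armijo's condition here reads $f(\mathbf{x} - \alpha\,\nabla f(\mathbf{x})) \leq f(\mathbf{x}) - \alpha\,c\,\|\nabla f(\mathbf{x})\|^2$ (since $m = -\|\nabla f(\mathbf{x})\|^2$). The condition is therefore satisfied whenever $1 - L\alpha/2 \geq c$, i.e. for all $\alpha \leq 2(1 - c)/L$, which is of the order of $1/L$.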
Upper bound for learning rates
In the same situation where $\mathbf{p} = -\nabla f(\mathbf{x})$, an interesting question is how large the learning rates can be chosen in Armijo's condition (that is, when one has no limit on $\alpha_0$ as defined in the section "Function minimization using backtracking line search in practice"), since larger learning rates when $\mathbf{x}_n$ is closer to the limit point (if one exists) can make convergence faster. For example, in Wolfe conditions, there is no mention of $\alpha_0$, but another condition called the curvature condition is introduced.
An upper bound for learning rates is shown to exist if one wants the constructed sequence $\mathbf{x}_n$ to converge to a non-degenerate critical point, see Truong & Nguyen (2020): the learning rates must be bounded from above roughly by $\|H\| \times \|H^{-1}\|^2$. Here $H$ is the Hessian of the function at the limit point, $H^{-1}$ is its inverse, and $\|\cdot\|$ is the norm of a linear operator. Thus, this result applies, for example, when one uses backtracking line search for Morse functions. Note that in dimension 1, $H$ is a number and hence this upper bound is of the same size as the lower bound in the section "Lower bound for learning rates".
On the other hand, if the limit point is degenerate, then learning rates can be unbounded. For example, a modification of backtracking line search known as unbounded backtracking gradient descent (see Truong & Nguyen (2020)) allows the learning rate to be half the size $\|\nabla f(\mathbf{x}_n)\|^{-\gamma}$, where $1 > \gamma > 0$ is a constant. Experiments with simple functions such as $f(x, y) = x^4 + y^4$ show that unbounded backtracking gradient descent converges much faster than the basic version described in the section "Function minimization using backtracking line search in practice".
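A rough Python sketch of this idea (one plausible reading only; the precise scheme is in the cited paper, and the cap formula below is an assumption taken from the description above, reusing numpy as np from the earlier sketches):

```python
def unbounded_backtracking_step(f, grad_f, x, alpha0=1.0, tau=0.5,
                                c=0.5, gamma=0.5):
    """One gradient-descent step in which the candidate step size may
    grow like 0.5 * ||grad||**(-gamma) near a degenerate limit point
    (assumed reading of unbounded backtracking gradient descent)."""
    g = grad_f(x)
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return x  # already at a critical point
    # The starting candidate is no longer capped by alpha0 alone:
    alpha = max(alpha0, 0.5 * gnorm ** (-gamma))
    m = -gnorm ** 2
    t = -c * m
    while f(x) - f(x - alpha * g) < alpha * t:
        alpha *= tau
    return x - alpha * g
```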
Time efficiency
An argument against the use of backtracking line search, in particular in large-scale optimisation, is that satisfying Armijo's condition is expensive. There is a way around this (so-called two-way backtracking), which has good theoretical guarantees and has been tested with good results on deep neural networks, see Truong & Nguyen (2020). (There one can also find good/stable implementations of Armijo's condition and its combination with some popular algorithms such as Momentum and NAG, on datasets such as CIFAR10 and CIFAR100.) One observes that if the sequence $\mathbf{x}_n$ converges (as wished when one makes use of an iterative optimisation method), then the sequence of learning rates $\alpha_n$ should vary little when $n$ is large enough. Therefore, in the search for $\alpha_n$, if one always starts from $\alpha_0$, one would waste a lot of time if it turns out that the sequence $\alpha_n$ stays far away from $\alpha_0$. Instead, one should search for $\alpha_n$ by starting from $\alpha_{n-1}$. The second observation is that $\alpha_n$ could be larger than $\alpha_{n-1}$, and hence one should allow the learning rate to increase (and not just decrease, as in the section "Algorithm"). Here is the detailed algorithm for two-way backtracking, at step $n$:
- Set $\gamma_0 = \alpha_{n-1}$. Set $t = -c\,m$ and iteration counter $j = 0$.
- (Increase the learning rate if Armijo's condition is satisfied.) If $f(\mathbf{x}) - f(\mathbf{x} + \gamma_j\,\mathbf{p}) \geq \gamma_j\,t$, then while this condition and the condition $\gamma_j \leq \alpha_0$ are satisfied, repeatedly set $\gamma_{j+1} = \gamma_j / \tau$ and increase $j$.
- (Otherwise, reduce the learning rate if Armijo's condition is not satisfied.) If in contrast $f(\mathbf{x}) - f(\mathbf{x} + \gamma_0\,\mathbf{p}) < \gamma_0\,t$, then until the condition is satisfied that $f(\mathbf{x}) - f(\mathbf{x} + \gamma_j\,\mathbf{p}) \geq \gamma_j\,t$, repeatedly increment $j$ and set $\gamma_j = \tau\,\gamma_{j-1}$.
- Return $\gamma_j$ for the learning rate $\alpha_n$.
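A minimal Python sketch of two-way backtracking (illustrative; written with a look-ahead test so that the returned step always satisfies Armijo's condition, and reusing the conventions of the earlier sketches):

```python
def two_way_backtracking(f, grad_f, x, p, alpha_prev, alpha0=1.0,
                         tau=0.5, c=0.5):
    """Start from the previous step size and either grow or shrink it
    until Armijo's condition holds."""
    m = np.dot(p, grad_f(x))
    t = -c * m
    fx = f(x)
    gamma = alpha_prev
    if fx - f(x + gamma * p) >= gamma * t:
        # Armijo holds: try increasing the step, but not beyond alpha0.
        while (gamma / tau <= alpha0
               and fx - f(x + (gamma / tau) * p) >= (gamma / tau) * t):
            gamma /= tau
    else:
        # Armijo fails: shrink as in ordinary backtracking.
        while fx - f(x + gamma * p) < gamma * t:
            gamma *= tau
    return gamma
```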
(In Nocedal & Wright (2000) one can find a description of an algorithm with steps 1), 3) and 4) above, which was not tested in deep neural networks before the cited paper.)
One can save time further by a hybrid mixture between two-way backtracking and the basic standard gradient descent algorithm. This procedure also has good theoretical guarantees and good test performance. Roughly speaking, we run two-way backtracking a few times, then use the learning rate obtained from it unchanged, except if the function value increases. Here is precisely how it is done. One chooses in advance a number $N$ and a number $m \leq N$.
- Set iteration counter $j = 0$.
- At the steps $jN + 1, \ldots, jN + m$, use two-way backtracking.
- At each step $k$ in the set $\{jN + m + 1, \ldots, (j+1)N\}$: set $\alpha = \alpha_{k-1}$. If $f(\mathbf{x}_k - \alpha\,\nabla f(\mathbf{x}_k)) \leq f(\mathbf{x}_k)$, then choose $\alpha_k = \alpha$ and $\mathbf{x}_{k+1} = \mathbf{x}_k - \alpha_k\,\nabla f(\mathbf{x}_k)$. (So, in this case, use the learning rate unchanged.) Otherwise, if $f(\mathbf{x}_k - \alpha\,\nabla f(\mathbf{x}_k)) > f(\mathbf{x}_k)$, use two-way backtracking. Increase $k$ by 1 and repeat.
- Increase $j$ by 1.
Theoretical guarantee (for gradient descent)
Compared with Wolfe's conditions, which are more complicated, Armijo's condition has a better theoretical guarantee. Indeed, so far backtracking line search and its modifications are the methods with the strongest theoretical guarantees among all numerical optimization algorithms concerning convergence to critical points and avoidance of saddle points, see below.
Critical points are points where the gradient of the objective function is 0. Local minima are critical points, but there are critical points which are not local minima, for example saddle points. Saddle points are critical points at which there is at least one direction in which the function attains a (local) maximum. Therefore, these points are far from being local minima. For example, if a function has at least one saddle point, then it cannot be convex. The relevance of saddle points to optimisation algorithms is that in large-scale (i.e. high-dimensional) optimisation, one likely sees more saddle points than minima, see Bray & Dean (2007). Hence, a good optimisation algorithm should be able to avoid saddle points. In the setting of deep learning, saddle points are also prevalent, see Dauphin et al. (2014). Thus, to apply in deep learning, one needs results for non-convex functions.
For convergence to critical points: for example, if the cost function is a real analytic function, then it is shown in Absil, Mahony & Andrews (2005) that convergence is guaranteed. The main idea is to use the Łojasiewicz inequality, which is enjoyed by a real analytic function. For non-smooth functions satisfying the Łojasiewicz inequality, the above convergence guarantee is extended, see Attouch, Bolte & Svaiter (2011). In Bertsekas (2016), there is a proof that for every sequence constructed by backtracking line search, a cluster point (i.e. the limit of one subsequence, if the subsequence converges) is a critical point. For the case of a function with at most countably many critical points (such as a Morse function) and compact sublevels, as well as with Lipschitz continuous gradient where one uses standard GD with learning rate <1/L (see the section "A special case: (standard) stochastic gradient descent (SGD)"), convergence is guaranteed, see for example Chapter 12 in Lange (2013). Here the assumption about compact sublevels is to make sure that one deals with compact sets of the Euclidean space only. In the general case, where $f$ is only assumed to be $C^1$ and to have at most countably many critical points, convergence is guaranteed, see Truong & Nguyen (2020). In the same reference, convergence is similarly guaranteed for other modifications of backtracking line search (such as unbounded backtracking gradient descent, mentioned in the section "Upper bound for learning rates"), and even if the function has uncountably many critical points, one can still deduce some non-trivial facts about convergence behaviour. In the stochastic setting, under the same assumption that the gradient is Lipschitz continuous, if one uses a more restrictive version of the diminishing learning rate scheme (requiring in addition that the sum of the learning rates is infinite and the sum of squares of the learning rates is finite) and moreover the function is strictly convex, then convergence is established in the well-known result Robbins & Monro (1951); see Bertsekas & Tsitsiklis (2006) for generalisations to less restrictive versions of a diminishing learning rate scheme. None of these results (for non-convex functions) have been proven for any other optimization algorithm so far.[citation needed]
For avoidance of saddle points: for example, if the gradient of the cost function is Lipschitz continuous and one chooses standard GD with learning rate <1/L, then with a random choice of initial point $\mathbf{x}_0$ (more precisely, outside a set of Lebesgue measure zero), the sequence constructed will not converge to a non-degenerate saddle point (proven in Lee et al. (2016)), and more generally it is also true that the sequence constructed will not converge to a degenerate saddle point (proven in Panageas & Piliouras (2017)). Under the same assumption that the gradient is Lipschitz continuous and one uses a diminishing learning rate scheme (see the section "A special case: (standard) stochastic gradient descent (SGD)"), avoidance of saddle points is established in Panageas, Piliouras & Wang (2019).
A special case: (standard) stochastic gradient descent (SGD)
While it is trivial to mention, if the gradient of a cost function is Lipschitz continuous, with Lipschitz constant $L$, then with choosing the learning rate to be constant and of size $1/L$, one has a special case of backtracking line search (for gradient descent). This has been used at least in Armijo (1966). This scheme, however, requires a good estimate for $L$; otherwise, if the learning rate is too big (relative to $1/L$), then the scheme has no convergence guarantee. One can see what will go wrong if the cost function is a smoothing (near the point 0) of the function $f(t) = |t|$. Such a good estimate is, however, difficult and laborious in large dimensions. Also, if the gradient of the function is not globally Lipschitz continuous, then this scheme has no convergence guarantee. For example, this is similar to an exercise in Bertsekas (2016): for the cost function $f(t) = |t|^{1.5}$ and for whatever constant learning rate one chooses, with a random initial point the sequence constructed by this special scheme does not converge to the global minimum 0.
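This failure is easy to reproduce numerically (a small illustrative sketch, not taken from the cited references; the step sizes and starting point are arbitrary):

```python
import numpy as np

f = lambda t: abs(t) ** 1.5
grad_f = lambda t: 1.5 * np.sign(t) * abs(t) ** 0.5  # continuous, not Lipschitz at 0

# Constant-step GD: near 0 the step 1.5 * alpha * |t|**0.5 overshoots |t|
# whenever |t| < (1.5 * alpha)**2, so the iterates end up oscillating
# around 0 instead of converging to it.
t = 0.3
for _ in range(1000):
    t -= 0.1 * grad_f(t)
print(t)  # stays bounded away from 0 (a two-cycle of size about 0.0056)

# Backtracking (tau = c = 0.5) shrinks the step adaptively and converges.
t = 0.3
for _ in range(1000):
    g = grad_f(t)
    alpha, m = 1.0, -g * g
    while f(t) - f(t - alpha * g) < -0.5 * alpha * m:
        alpha *= 0.5
    t -= alpha * g
print(t)  # approaches 0
```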
If one does not care about the condition that the learning rate must be bounded by 1/L, then this special scheme is much older, used at least since 1847 by Cauchy, and it can be called standard GD (not to be confused with stochastic gradient descent, which is abbreviated herein as SGD). In the stochastic setting (such as in the mini-batch setting in deep learning), standard GD is called stochastic gradient descent, or SGD.
Even if the cost function has a globally continuous gradient, a good estimate of the Lipschitz constant for the cost functions in deep learning may not be feasible or desirable, given the very high dimensions of deep neural networks. Hence, there is a technique of fine-tuning of learning rates in applying standard GD or SGD. One way is to choose many learning rates from a grid search, with the hope that some of the learning rates can give good results. (However, if the loss function does not have a globally Lipschitz continuous gradient, then the example with $f(t) = |t|^{1.5}$ above shows that grid search cannot help.) Another way is the so-called adaptive standard GD or SGD; some representatives are Adam, Adadelta, RMSProp and so on, see the article on Stochastic gradient descent. In adaptive standard GD or SGD, learning rates are allowed to vary at each iterate step $n$, but in a different manner from backtracking line search for gradient descent. Apparently, it would be more expensive to use backtracking line search for gradient descent, since one needs to do a loop search until Armijo's condition is satisfied, while for adaptive standard GD or SGD no loop search is needed. Most of these adaptive standard GD or SGD methods do not have the descent property $f(\mathbf{x}_{n+1}) \leq f(\mathbf{x}_n)$ for all $n$, as backtracking line search for gradient descent does. Only a few have this property together with good theoretical guarantees, and they turn out to be special cases of backtracking line search or, more generally, of Armijo's condition, Armijo (1966). The first one is when one chooses the learning rate to be a constant <1/L, as mentioned above, if one can have a good estimate of $L$. The second is the so-called diminishing learning rate, used in the well-known paper by Robbins & Monro (1951), if again the function has a globally Lipschitz continuous gradient (but the Lipschitz constant may be unknown) and the learning rates converge to 0.
Summary
In summary, backtracking line search (and its modifications) is a method which is easy to implement, is applicable for very general functions, has very good theoretical guarantees (for both convergence to critical points and avoidance of saddle points) and works well in practice. Several other methods which have good theoretical guarantees, such as diminishing learning rates or standard GD with learning rate <1/L (both of which require the gradient of the objective function to be Lipschitz continuous), turn out to be special cases of backtracking line search or to satisfy Armijo's condition. Even though a priori one needs the cost function to be continuously differentiable to apply this method, in practice one can apply this method successfully also for functions which are continuously differentiable on a dense open subset, such as $f(t) = |t|$ or $f(t) = \operatorname{ReLU}(t) = \max\{t, 0\}$.
See also
References
- Absil, P. A.; Mahony, R.; Andrews, B. (2005). "Convergence of the iterates of Descent methods for analytic cost functions". SIAM J. Optim. 16 (2): 531–547. doi:10.1137/040605266.
- Armijo, Larry (1966). "Minimization of functions having Lipschitz continuous first partial derivatives". Pacific J. Math. 16 (1): 1–3. doi:10.2140/pjm.1966.16.1.
- Attouch, H.; Bolte, J.; Svaiter, B. F. (2011). "Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods". Mathematical Programming. 137 (1–2): 91–129. doi:10.1007/s10107-011-0484-9.
- Bertsekas, Dimitri P. (2016), Nonlinear Programming, Athena Scientific, ISBN 978-1886529052
- Bertsekas, D. P.; Tsitsiklis, J. N. (2006). "Gradient convergence in gradient methods with errors". SIAM J. Optim. 10 (3): 627–642. CiteSeerX 10.1.1.421.193. doi:10.1137/S1052623497331063.
- Bray, A. J.; Dean, D. S. (2007). "Statistics of critical points of gaussian fields on large-dimensional spaces". Physical Review Letters. 98 (15): 150201. arXiv:cond-mat/0611023. Bibcode:2007PhRvL..98o0201B. doi:10.1103/PhysRevLett.98.150201. PMID 17501322.
- Dauphin, Y. N.; Pascanu, R.; Gulcehre, C.; Cho, K.; Ganguli, S.; Bengio, Y. (2014). "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization". NeurIPS. 14: 2933–2941. arXiv:1406.2572.
- Lange, K. (2013). Optimization. New York: Springer-Verlag Publications. ISBN 978-1-4614-5838-8.
- Dennis, J. E.; Schnabel, R. B. (1996). Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Philadelphia: SIAM Publications. ISBN 978-0-898713-64-0.
- Lee, J. D.; Simchowitz, M.; Jordan, M. I.; Recht, B. (2016). "Gradient descent only converges to minimizers". Proceedings of Machine Learning Research. 49: 1246–1257.
- Nocedal, Jorge; Wright, Stephen J. (2000), Numerical Optimization, Springer-Verlag, ISBN 0-387-98793-2
- Panageas, I.; Piliouras, G. (2017). "Gradient descent only converges to minimizers: non-isolated critical points and invariant regions". 8th Innovations in Theoretical Computer Science Conference (ITCS 2017) (PDF). Leibniz International Proceedings in Informatics (LIPIcs). Vol. 67. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. pp. 2:1–2:12. doi:10.4230/LIPIcs.ITCS.2017.2. ISBN 9783959770293.
- Panageas, I.; Piliouras, G.; Wang, X. (2019). "First-order methods almost always avoid saddle points: The case of vanishing step-sizes" (PDF). NeurIPS. arXiv:1906.07772.
- Robbins, H.; Monro, S. (1951). "A stochastic approximation method". Annals of Mathematical Statistics. 22 (3): 400–407. doi:10.1214/aoms/1177729586.
- Truong, T. T.; Nguyen, H.-T. (6 September 2020). "Backtracking Gradient Descent Method and Some Applications in Large Scale Optimisation. Part 2: Algorithms and Experiments". Applied Mathematics & Optimization. 84 (3): 2557–2586. doi:10.1007/s00245-020-09718-8. hdl:10852/79322.