Iteratively reweighted least squares
The method of iteratively reweighted least squares (IRLS) is used to solve certain optimization problems with objective functions of the form of a p-norm:

<math display="block">\mathop{\operatorname{arg\,min}}_{\boldsymbol\beta} \sum_{i=1}^n \big| y_i - f_i (\boldsymbol\beta) \big|^p, </math>
by an iterative method in which each step involves solving a weighted least squares problem of the form:[1]

<math display="block">\boldsymbol\beta^{(t+1)} = \underset{\boldsymbol\beta} {\operatorname{arg\,min}} \sum_{i=1}^n w_i (\boldsymbol\beta^{(t)}) \big| y_i - f_i (\boldsymbol\beta) \big|^2. </math>
IRLS is used to find the maximum likelihood estimates of a generalized linear model, and in robust regression to find an M-estimator, as a way of mitigating the influence of outliers in an otherwise normally distributed data set, for example by minimizing the least absolute errors rather than the least squared errors.
One of the advantages of IRLS over linear programming and convex programming is that it can be used with Gauss–Newton and Levenberg–Marquardt numerical algorithms.
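As a concrete illustration of the scheme, the following is a minimal Python (NumPy) sketch of IRLS for fitting a logistic regression, a generalized linear model, by maximum likelihood. The function name, iteration cap, tolerance, and the small floor on the weights are illustrative choices, not part of any standard API.

```python
import numpy as np

def irls_logistic(X, y, n_iter=25, tol=1e-8):
    """Fit a logistic regression by IRLS (equivalently, Fisher scoring)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta                              # linear predictor
        mu = 1.0 / (1.0 + np.exp(-eta))             # fitted probabilities
        w = np.maximum(mu * (1.0 - mu), 1e-10)      # IRLS weights (variance function), floored for stability
        z = eta + (y - mu) / w                      # working response
        new_beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * z))
        if np.max(np.abs(new_beta - beta)) < tol:
            return new_beta
        beta = new_beta
    return beta
```

Each pass recomputes the weights and the working response from the current coefficients and then solves one weighted least squares problem, which for this model coincides with a Newton (Fisher scoring) step.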
Examples
L1 minimization for sparse recovery
IRLS can be used for ℓ1 minimization and smoothed ℓp minimization, p < 1, in compressed sensing problems. It has been proved that the algorithm has a linear rate of convergence for the ℓ1 norm and superlinear convergence for ℓt with t < 1, under the restricted isometry property, which is generally a sufficient condition for sparse solutions.[2][3]
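As a rough illustration of this use, the sketch below (Python with NumPy) applies IRLS to the noiseless problem of minimizing ||x||1 subject to Ax = y; each iteration solves a weighted minimum-norm problem in closed form. The function name, the fixed smoothing parameter eps, and the iteration count are illustrative; published variants, including those in the references, typically shrink eps as the iterations proceed.

```python
import numpy as np

def irls_l1(A, y, n_iter=50, eps=1e-3):
    """Approximate the minimum-l1-norm solution of the underdetermined system A x = y.

    Each iteration solves, in closed form, the weighted problem
    minimize sum_i w_i * x_i**2 subject to A x = y, with w_i = 1 / (|x_i| + eps)
    taken from the previous iterate.
    """
    x = np.linalg.lstsq(A, y, rcond=None)[0]        # minimum-l2-norm starting point
    for _ in range(n_iter):
        D = np.diag(np.abs(x) + eps)                # D = W^{-1}, the inverse weights
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, y)
    return x
```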
Lp norm linear regression
To find the parameters β = (β1, …, βk)T which minimize the Lp norm for the linear regression problem,

<math display="block">\underset{\boldsymbol \beta}{\operatorname{arg\,min}} \big\| \mathbf y - X \boldsymbol \beta \big\|_p = \underset{\boldsymbol \beta}{\operatorname{arg\,min}} \sum_{i=1}^n \left| y_i - X_i \boldsymbol\beta \right|^p,</math>
the IRLS algorithm at step t + 1 involves solving the weighted linear least squares problem:[4]

<math display="block">\boldsymbol\beta^{(t+1)} = \underset{\boldsymbol\beta}{\operatorname{arg\,min}} \sum_{i=1}^n w_i^{(t)} \left| y_i - X_i \boldsymbol\beta \right|^2 = (X^{\mathrm T} W^{(t)} X)^{-1} X^{\mathrm T} W^{(t)} \mathbf y,</math>
where W(t) is the diagonal matrix of weights, usually with all elements set initially to:

<math display="block">w_i^{(0)} = 1</math>
and updated after each iteration to:

<math display="block">w_i^{(t)} = \big|y_i - X_i \boldsymbol \beta ^{(t)} \big|^{p-2}.</math>
In the case p = 1, this corresponds to least absolute deviation regression (in this case, the problem would be better approached by use of linear programming methods,[5] so the result would be exact) and the formula is:

<math display="block">w_i^{(t)} = \frac{1}{\big|y_i - X_i \boldsymbol \beta ^{(t)} \big|}.</math>
To avoid dividing by zero, regularization must be done, so in practice the formula is:

<math display="block">w_i^{(t)} = \frac 1 {\max\left\{\delta, \left|y_i - X_i \boldsymbol \beta ^{(t)} \right|\right\} },</math>
where δ is some small value, like 0.0001.[5] Note that the use of δ in the weighting function is equivalent to the Huber loss function in robust estimation.[6]
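A minimal Python (NumPy) sketch of this procedure, assuming illustrative defaults for the iteration count and δ, is given below; it applies the δ floor to the residuals for every p, not only p = 1, so that the weights stay finite when p < 2.

```python
import numpy as np

def irls_lp_regression(X, y, p=1.0, n_iter=50, delta=1e-4):
    """Minimize ||y - X @ beta||_p by iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # ordinary least squares start (all weights 1)
    for _ in range(n_iter):
        r = np.abs(y - X @ beta)                    # absolute residuals
        w = np.maximum(r, delta) ** (p - 2)         # w_i = |residual_i|^(p-2), floored at delta
        XtW = X.T * w                               # X^T W without forming the diagonal matrix
        beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta
```

Calling irls_lp_regression(X, y, p=1) gives an approximate least absolute deviations fit, while p = 2 reduces to ordinary least squares.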
See also
- Feasible generalized least squares
- Weiszfeld's algorithm (for approximating the geometric median), which can be viewed as a special case of IRLS
Notes
[edit]- ^ C. Sidney Burrus, Iterative Reweighted Least Squares
- ^ Chartrand, R.; Yin, W. (March 31 – April 4, 2008). "Iteratively reweighted algorithms for compressive sensing". IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2008. pp. 3869–3872. doi:10.1109/ICASSP.2008.4518498.
- ^ Daubechies, I.; Devore, R.; Fornasier, M.; Güntürk, C. S. N. (2010). "Iteratively reweighted least squares minimization for sparse recovery". Communications on Pure and Applied Mathematics. 63: 1–38. arXiv:0807.0575. doi:10.1002/cpa.20303.
- ^ Gentle, James (2007). "6.8.1 Solutions that Minimize Other Norms of the Residuals". Matrix algebra. Springer Texts in Statistics. New York: Springer. doi:10.1007/978-0-387-70873-7. ISBN 978-0-387-70872-0.
- ^ a b William A. Pfeil, Statistical Teaching Aids, Bachelor of Science thesis, Worcester Polytechnic Institute, 2006
- ^ Fox, J.; Weisberg, S. (2013),Robust Regression, Course Notes, University of Minnesota
References
- Numerical Methods for Least Squares Problems by Åke Björck (https://web.archive.org/web/20070810222123/http://www.mai.liu.se/~akbjo/LSPbook.html) (Chapter 4: Generalized Least Squares Problems.)
- Practical Least-Squares for Computer Graphics, SIGGRAPH Course 11 (http://graphics.stanford.edu/~jplewis/lscourse/SLIDES.pdf)

External links

- Solve under-determined linear systems iteratively (https://stemblab.github.io/irls/)