Hankel matrix
{{Short description|A matrix in which each ascending skew-diagonal from left to right is constant}}
In [[linear algebra]], a '''Hankel matrix''' (or '''[[catalecticant]] matrix'''), named after [[Hermann Hankel]], is an <math>n \times m</math> matrix in which each ascending skew-diagonal from left to right is constant. For example,
<math display=block>\qquad\begin{bmatrix}
a & b & c & d & e \\
b & c & d & e & f \\
c & d & e & f & g \\
d & e & f & g & h \\
e & f & g & h & i
\end{bmatrix}.</math>
More generally, a '''Hankel matrix''' is any <math>n \times n</math> [[matrix (mathematics)|matrix]] <math>A</math> of the form
<math display=block>A = \begin{bmatrix}
a_0 & a_1 & a_2 & \ldots & a_{n-1} \\
a_1 & a_2 & & & \vdots \\
a_2 & & & & a_{2n-4} \\
\vdots & & & a_{2n-4} & a_{2n-3} \\
a_{n-1} & \ldots & a_{2n-4} & a_{2n-3} & a_{2n-2}
\end{bmatrix}.</math>
In terms of the components, if the <math>i,j</math> element of <math>A</math> is denoted by <math>A_{i,j}</math>, and assuming <math>i \le j</math>, then we have <math>A_{i,j} = A_{i+k,j-k}</math> for all <math>k = 0, \ldots, j-i</math>.
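The defining pattern can be checked numerically. The following sketch builds a Hankel matrix with NumPy; the helper `hankel` and the sample values are illustrative, not standard library functions:

```python
import numpy as np

def hankel(a, n):
    """n x n Hankel matrix with entries A[i, j] = a[i + j] (0-indexed)."""
    return np.array([[a[i + j] for j in range(n)] for i in range(n)])

a = list(range(9))        # a_0, ..., a_{2n-2} with n = 5
A = hankel(a, 5)
assert (A == A.T).all()   # a square Hankel matrix is symmetric
# each ascending skew-diagonal is constant: A[i, j] == A[i+k, j-k]
for k in range(5):
    assert A[0, 4] == A[k, 4 - k]
```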
==Properties==
* Any Hankel matrix is [[symmetric matrix|symmetric]].
* Let <math>J_n</math> be the <math>n \times n</math> [[exchange matrix]]. If <math>H</math> is an <math>m \times n</math> Hankel matrix, then <math>H = T J_n</math> where <math>T</math> is an <math>m \times n</math> [[Toeplitz matrix]].
** If <math>T</math> is [[real number|real]] symmetric, then <math>H = T J_n</math> will have the same [[eigenvalue]]s as <math>T</math> up to sign.<ref name="simax1">{{cite journal | last = Yasuda | first = M. | title = A Spectral Characterization of Hermitian Centrosymmetric and Hermitian Skew-Centrosymmetric K-Matrices | journal = SIAM J. Matrix Anal. Appl. | volume = 25 | issue = 3 | pages = 601–605 | year = 2003 | doi = 10.1137/S0895479802418835}}</ref>
* The [[Hilbert matrix]] is an example of a Hankel matrix.
* The [[determinant]] of a Hankel matrix is called a [[catalecticant]].
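The Toeplitz relation above can be sketched numerically; in this small check the generator values are arbitrary:

```python
import numpy as np

n = 4
t = np.arange(-(n - 1), n, dtype=float)   # Toeplitz generators t_{-(n-1)}, ..., t_{n-1}
T = np.array([[t[i - j + n - 1] for j in range(n)] for i in range(n)])  # T[i, j] = t_{i-j}
J = np.fliplr(np.eye(n))                  # exchange matrix J_n (reversed identity)
H = T @ J                                 # reversing the columns of T
# H is Hankel: H[i, j] = t[i + j] depends only on i + j
for s in range(2 * n - 1):
    diag = [H[i, s - i] for i in range(n) if 0 <= s - i < n]
    assert len(set(diag)) == 1
```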
==Hankel operator==
The Hankel matrix is closely related to the [[Toeplitz matrix]] (a Hankel matrix is an upside-down Toeplitz matrix). For a special case of this matrix see [[Hilbert matrix]].

Given a [[formal Laurent series]]
<math display="block">
f(z) = \sum_{n=-\infty}^N a_n z^n,
</math>
the corresponding '''Hankel operator''' is defined as<ref>{{harvnb|Fuhrmann|2012|loc=§8.3}}</ref>
<math display="block">
H_f : \mathbf C[z] \to z^{-1} \mathbf C[[z^{-1}]].
</math>
This takes a [[polynomial]] <math>g \in \mathbf C[z]</math> and sends it to the product <math>fg</math>, but discards all powers of <math>z</math> with a non-negative exponent, so as to give an element in <math>z^{-1} \mathbf C[[z^{-1}]]</math>, the [[formal power series]] with strictly negative exponents. The map <math>H_f</math> is in a natural way <math>\mathbf C[z]</math>-linear, and its matrix with respect to the elements <math>1, z, z^2, \dots \in \mathbf C[z]</math> and <math>z^{-1}, z^{-2}, \dots \in z^{-1}\mathbf C[[z^{-1}]]</math> is the Hankel matrix
<math display=block>\begin{bmatrix}
a_1 & a_2 & \ldots \\
a_2 & a_3 & \ldots \\
a_3 & a_4 & \ldots \\
\vdots & \vdots & \ddots
\end{bmatrix}.</math>
Any Hankel matrix arises in this way. A [[theorem]] due to [[Kronecker]] says that the [[rank (linear algebra)|rank]] of this matrix is finite precisely if <math>f</math> is a [[rational function]], that is, a fraction of two polynomials
<math display="block">
f(z) = \frac{p(z)}{q(z)}.
</math>
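Kronecker's theorem can be illustrated numerically for a rational <math>f</math> with a single simple pole; the pole location and truncation sizes below are arbitrary choices for the sketch:

```python
import numpy as np

# f(z) = 1/(z - c) = sum_{n >= 1} c^{n-1} z^{-n}, so a_n = c^{n-1}
c = 0.5
a = [c ** (n - 1) for n in range(1, 13)]   # a_1, ..., a_12
N = 6
H = np.array([[a[i + j] for j in range(N)] for i in range(N)])  # H[i, j] = a_{i+j+1}
# f is rational with one pole, so the (truncated) Hankel matrix has rank 1
assert np.linalg.matrix_rank(H) == 1
```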
==Approximations==
A Hankel [[operator (mathematics)|operator]] on a [[Hilbert space]] is one whose matrix with respect to an [[orthonormal basis]] is a (possibly infinite) Hankel matrix <math>(A_{i,j})_{i,j \ge 1}</math>, where <math>A_{i,j}</math> depends only on <math>i+j</math>.

We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests [[singular value decomposition]] as a possible technique to approximate the action of the operator.

Note that the matrix <math>A</math> does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. We also require that the approximation is a Hankel matrix, which can be shown with [[AAK theory]].
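A finite-dimensional sketch of this idea (the matrix size and data are arbitrary): truncating the SVD gives the best approximation in the spectral norm, although the truncation is in general no longer Hankel, which is exactly the gap AAK theory addresses:

```python
import numpy as np

a = np.random.default_rng(0).standard_normal(19)
n = 10
A = np.array([[a[i + j] for j in range(n)] for i in range(n)])  # finite Hankel matrix

U, s, Vt = np.linalg.svd(A)
k = 3
A_k = U[:, :k] * s[:k] @ Vt[:k]   # best rank-k approximation in the 2-norm
err = np.linalg.norm(A - A_k, 2)  # spectral-norm (operator 2-norm) error
# Eckart-Young: the error equals the (k+1)-th singular value
assert np.isclose(err, s[k])
```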
==Hankel matrix transform==
{{Distinguish|Hankel transform}}
The '''Hankel matrix transform''', or simply '''Hankel transform''', of a [[sequence]] <math>b_k</math> is the sequence of the determinants of the Hankel matrices formed from <math>b_k</math>. Given an integer <math>n > 0</math>, define the corresponding <math>(n \times n)</math>-dimensional Hankel matrix <math>B_n</math> as having the matrix elements <math>[B_n]_{i,j} = b_{i+j}.</math> Then the sequence <math>h_n</math> given by
<math display="block">
h_n = \det B_n
</math>
is the Hankel transform of the sequence <math>b_k.</math> The Hankel transform is invariant under the [[binomial transform]] of a sequence. That is, if one writes
<math display="block">
c_n = \sum_{k=0}^n {n \choose k} b_k
</math>
as the binomial transform of the sequence <math>b_n</math>, then one has <math>\det B_n = \det C_n.</math>
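Both facts can be checked on a classical example: the Hankel transform of the Catalan numbers is the all-ones sequence, and it is unchanged by the binomial transform (the truncation to small <math>n</math> is for illustration only):

```python
import math
import numpy as np

def hankel_det(b, n):
    """det of the n x n Hankel matrix with entries [B_n]_{i,j} = b[i+j] (0-indexed)."""
    B = np.array([[b[i + j] for j in range(n)] for i in range(n)], dtype=float)
    return round(np.linalg.det(B))

# Catalan numbers 1, 1, 2, 5, 14, ...: their Hankel transform is all ones
catalan = [math.comb(2 * k, k) // (k + 1) for k in range(8)]
assert [hankel_det(catalan, n) for n in range(1, 5)] == [1, 1, 1, 1]

# invariance under the binomial transform: det B_n == det C_n
binom = [sum(math.comb(m, k) * catalan[k] for k in range(m + 1)) for m in range(8)]
assert [hankel_det(binom, n) for n in range(1, 5)] == [1, 1, 1, 1]
```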
== Applications of Hankel matrices ==
Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or [[hidden Markov model]] is desired.<ref>{{cite book |first=Masanao |last=Aoki |author-link=Masanao Aoki |chapter=Prediction of Time Series |title=Notes on Economic Time Series Analysis : System Theoretic Perspectives |location=New York |publisher=Springer |year=1983 |isbn=0-387-12696-1 |pages=38–47 |chapter-url=https://books.google.com/books?id=l_LsCAAAQBAJ&pg=PA38 }}</ref> The singular value decomposition of the Hankel matrix provides a means of computing the ''A'', ''B'', and ''C'' matrices which define the state-space realization.<ref>{{cite book |first=Masanao |last=Aoki |chapter=Rank determination of Hankel matrices |title=Notes on Economic Time Series Analysis : System Theoretic Perspectives |location=New York |publisher=Springer |year=1983 |isbn=0-387-12696-1 |pages=67–68 |chapter-url=https://books.google.com/books?id=l_LsCAAAQBAJ&pg=PA67 }}</ref> The Hankel matrix formed from the signal has been found useful for decomposition of non-stationary signals and time-frequency representation.
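A minimal Ho-Kalman-style sketch of the SVD route to a realization (the example system, its order, and the block sizes are illustrative assumptions, not taken from the cited reference):

```python
import numpy as np

# Recover (A, B, C) of an assumed SISO system of known order r = 2 from its
# impulse-response samples h_k = C A^k B.
r = 2
A_true = np.array([[0.5, 0.2], [0.0, -0.3]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, -1.0]])
h = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item() for k in range(10)]

m = 4
H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])      # Hankel of h
H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])  # shifted Hankel

U, s, Vt = np.linalg.svd(H0)
sq = np.sqrt(s[:r])
O = U[:, :r] * sq           # observability factor, H0 = O @ Ctrb
Ctrb = (Vt[:r].T * sq).T    # controllability factor
A_hat = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(Ctrb)
B_hat = Ctrb[:, :1]         # first column of the controllability factor
C_hat = O[:1, :]            # first row of the observability factor

# the realization reproduces the impulse response (up to a change of basis)
h_hat = [(C_hat @ np.linalg.matrix_power(A_hat, k) @ B_hat).item() for k in range(10)]
assert np.allclose(h, h_hat)
```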
=== Method of moments for polynomial distributions ===
The [[Method of moments (statistics)|method of moments]] applied to polynomial distributions results in a Hankel matrix that needs to be [[inverse matrix|inverted]] in order to obtain the weight parameters of the polynomial distribution approximation.<ref name="PolyD2">J. Munkhammar, L. Mattsson, J. Rydén (2017) "Polynomial probability distribution estimation using the method of moments". PLoS ONE 12(4): e0174573. https://doi.org/10.1371/journal.pone.0174573</ref>
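A schematic sketch of the idea, assuming a monomial-basis density on [0, 1] (this setup is illustrative and not the exact scheme of the cited paper): matching moments of <math>p(x) = \sum_k w_k x^k</math> yields a Hankel system whose coefficient matrix is the Hilbert matrix.

```python
import numpy as np

# Matching integral x^i p(x) dx = m_i gives H w = m with H[i, k] = 1/(i + k + 1),
# i.e. the Hilbert matrix, a Hankel matrix to be inverted for the weights w.
deg = 4
H = np.array([[1.0 / (i + k + 1) for k in range(deg)] for i in range(deg)])
m = np.array([1.0 / (i + 1) for i in range(deg)])  # moments of the uniform density
w = np.linalg.solve(H, m)
# the uniform density is p(x) = 1, so the recovered weights are (1, 0, 0, 0)
assert np.allclose(w, [1, 0, 0, 0])
```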
=== Positive Hankel matrices and the Hamburger moment problems ===
{{Further|Hamburger moment problem}}
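The connection can be sketched numerically: a real sequence is a Hamburger moment sequence exactly when all its Hankel matrices are positive semidefinite. Checking this for the moments of the standard Gaussian (an illustrative example):

```python
import math
import numpy as np

# Gaussian moments: m_k = (k - 1)!! for even k, and 0 for odd k
m = [0 if k % 2 else math.prod(range(1, k, 2)) for k in range(9)]
# m = [1, 0, 1, 0, 3, 0, 15, 0, 105]
H = np.array([[m[i + j] for j in range(5)] for i in range(5)], dtype=float)
eig = np.linalg.eigvalsh(H)
assert eig.min() > -1e-9   # the Hankel moment matrix is positive semidefinite
```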
==See also==
* [[Cauchy matrix]]
* [[Jacobi operator]]
* [[Toeplitz matrix]], an "upside down" (that is, row-reversed) Hankel matrix
* [[Vandermonde matrix]]
== Notes ==
{{Reflist}}
== References ==
*[[Richard P. Brent|Brent R.P.]] (1999), "Stability of fast algorithms for structured linear systems", ''Fast Reliable Algorithms for Matrices with Structure'' (editors—T. Kailath, A.H. Sayed), ch.4 ([[Society for Industrial and Applied Mathematics|SIAM]]).
*{{cite book | last = Fuhrmann | first = Paul A. | title = A polynomial approach to linear algebra | edition = 2 | series = Universitext | year = 2012 | publisher = Springer | location = New York, NY | isbn = 978-1-4614-0337-1 | doi = 10.1007/978-1-4614-0338-8 | zbl = 1239.15001 }}
* {{cite book | title=Structured matrices and polynomials: unified superfast algorithms | author=Victor Y. Pan | author-link=Victor Pan | publisher=[[Birkhäuser]] | year=2001 | isbn=0817642404 }}
* {{cite book | title=An introduction to Hankel operators | author=J.R. Partington | author-link=Jonathan Partington | series=LMS Student Texts | volume=13 | publisher=[[Cambridge University Press]] | year=1988 | isbn=0-521-36791-3 }}
{{Matrix classes}}
{{Authority control}}

[[Category:Matrices]]
Latest revision as of 15:26, 29 November 2024