Hankel matrix: Difference between revisions
In [[linear algebra]], a '''Hankel matrix''' (or '''catalecticant matrix'''), named after [[Hermann Hankel]], is a square matrix in which each ascending skew-diagonal from left to right is constant, e.g.:

<math display=block>\begin{bmatrix}
a & b & c & d & e \\
b & c & d & e & f \\
c & d & e & f & g \\
d & e & f & g & h \\
e & f & g & h & i
\end{bmatrix}.</math>
More generally, a '''Hankel matrix''' is any <math>n \times n</math> [[matrix (mathematics)|matrix]] <math>A</math> of the form

<math display=block>A = \begin{bmatrix}
a_0 & a_1 & a_2 & \ldots & \ldots & a_{n-1} \\
a_1 & a_2 & & & & \vdots \\
a_2 & & & & & \vdots \\
\vdots & & & & & a_{2n-4} \\
\vdots & & & & a_{2n-4} & a_{2n-3} \\
a_{n-1} & \ldots & \ldots & a_{2n-4} & a_{2n-3} & a_{2n-2}
\end{bmatrix}.</math>
In terms of the components, if the <math>i,j</math> element of <math>A</math> is denoted with <math>A_{ij}</math>, and assuming <math>i \le j</math>, then we have <math>A_{i,j} = A_{i+k,j-k}</math> for all <math>k = 0, \dots, j-i.</math>
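For illustration (an informal NumPy sketch, not drawn from the article's sources), such a matrix can be built directly from a sequence <math>a_0, \dots, a_{2n-2}</math>, and the component identity above checked numerically:

```python
import numpy as np

def hankel_from_sequence(a, n):
    """n-by-n Hankel matrix with (i, j) entry a[i + j] (needs a_0 ... a_{2n-2})."""
    a = np.asarray(a)
    assert len(a) >= 2 * n - 1
    return np.array([[a[i + j] for j in range(n)] for i in range(n)])

A = hankel_from_sequence(np.arange(7), 4)
# Every entry depends only on i + j, so A[i, j] == A[i+k, j-k]:
assert A[0, 3] == A[1, 2] == A[2, 1] == A[3, 0] == 3
```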
==Properties==

* Any Hankel matrix is [[symmetric matrix|symmetric]].
* Let <math>J_n</math> be the <math>n \times n</math> [[exchange matrix]]. If <math>H</math> is an <math>m \times n</math> Hankel matrix, then <math>H = T J_n</math> where <math>T</math> is an <math>m \times n</math> [[Toeplitz matrix]].
** If <math>T</math> is [[real number|real]] symmetric, then <math>H = T J_n</math> will have the same [[eigenvalue]]s as <math>T</math> up to sign.<ref name="simax1">{{cite journal | last = Yasuda | first = M. | title = A Spectral Characterization of Hermitian Centrosymmetric and Hermitian Skew-Centrosymmetric K-Matrices | journal = SIAM J. Matrix Anal. Appl. | volume = 25 | issue = 3 | pages = 601–605 | year = 2003 | doi = 10.1137/S0895479802418835}}</ref>
* The [[Hilbert matrix]] is an example of a Hankel matrix.
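The first two properties can be checked numerically (a NumPy sketch, not part of the article): multiplying by the exchange matrix reverses the columns, which turns constant antidiagonals into constant diagonals.

```python
import numpy as np

n = 4
a = np.arange(2 * n - 1)                    # a_0 ... a_{2n-2}
H = np.array([[a[i + j] for j in range(n)] for i in range(n)])

# Symmetry: H[i, j] = a[i + j] = H[j, i].
assert np.array_equal(H, H.T)

# Since the exchange matrix satisfies J_n^2 = I, the matrix T = H J_n
# (H with its columns reversed) has constant diagonals, i.e. T is Toeplitz,
# and H = T J_n.
J = np.fliplr(np.eye(n, dtype=int))         # exchange matrix J_n
T = H @ J
assert all(T[i, j] == T[i + 1, j + 1] for i in range(n - 1) for j in range(n - 1))
assert np.array_equal(T @ J, H)
```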
==Relation to formal Laurent series==
Hankel matrices are closely related to [[formal Laurent series]].<ref>{{harvnb|Fuhrmann|2012|loc=§8.3}}</ref> In fact, such a series <math>f(z) = \sum_{n=-\infty}^N a_n z^n</math> gives rise to a [[linear map]], referred to as a ''Hankel operator''
:<math>H_f : \mathbf C[z] \to z^{-1} \mathbf C[[z^{-1}]],</math>
which takes a [[polynomial]] <math>g \in \mathbf C[z]</math> and sends it to the product <math>fg</math>, but discards all powers of <math>z</math> with a non-negative exponent, so as to give an element in <math>z^{-1} \mathbf C[[z^{-1}]]</math>, the [[formal power series]] with strictly negative exponents. The map <math>H_f</math> is in a natural way <math>\mathbf C[z]</math>-linear, and its matrix with respect to the elements <math>1, z, z^2, \dots \in \mathbf C[z]</math> and <math>z^{-1}, z^{-2}, \dots \in z^{-1}\mathbf C[[z^{-1}]]</math> is the Hankel matrix
<math display=block>\begin{bmatrix}
a_1 & a_2 & \ldots \\
a_2 & a_3 & \ldots \\
a_3 & a_4 & \ldots \\
\vdots
\end{bmatrix}.</math>
Any Hankel matrix arises in this way. A [[theorem]] due to [[Kronecker]] says that the [[rank (linear algebra)|rank]] of this matrix is finite precisely if <math>f</math> is a [[rational function]], i.e., a fraction of two polynomials <math>f(z) = \frac{p(z)}{q(z)}.</math>
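Kronecker's theorem can be illustrated numerically (a NumPy sketch; the particular function below is chosen for the example, not taken from the article): the coefficients <math>a_k = 2^k</math> come from the rational function <math>\sum_{k \ge 0} 2^k z^{-k-1} = 1/(z-2)</math>, so the associated Hankel matrices stay at rank 1 no matter how large they are made.

```python
import numpy as np

# a_k = 2**k are the coefficients of 1/(z - 2) expanded in powers of
# z^{-1}; each Hankel entry factors as a_{i+j} = 2^i * 2^j, so the
# matrix is a rank-1 outer product, as Kronecker's theorem predicts.
n = 6
a = 2.0 ** np.arange(2 * n - 1)
H = np.array([[a[i + j] for j in range(n)] for i in range(n)])
assert np.linalg.matrix_rank(H) == 1
```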
==Hankel operator==
A Hankel operator on a [[Hilbert space]] is one whose matrix is a (possibly [[infinite matrix|infinite]]) Hankel matrix with respect to an [[orthonormal basis]]. As indicated above, a Hankel matrix is a matrix with constant values along its antidiagonals, which means that a Hankel matrix <math>(A_{i,j})_{i,j \ge 1}</math> must satisfy <math>A_{i,j} = A_{i+1,j-1}</math> for all rows <math>i</math> and columns <math>j > 1</math>. Note that every entry <math>A_{i,j}</math> depends only on <math>i+j</math>.
Given a Hankel matrix <math>A</math>, the corresponding '''Hankel operator''' <math>H_\alpha</math> is defined by <math>H_\alpha(u) = Au</math>.
We are often interested in Hankel operators <math>H_\alpha: \ell^2\left(\mathbb{Z}^+ \cup\{0\}\right) \to \ell^2\left(\mathbb{Z}^+ \cup\{0\}\right)</math> over the Hilbert space <math>\ell^2(\mathbb Z)</math>, the space of square-summable bilateral [[complex number|complex]] [[sequence]]s. For any <math>u \in \ell^2(\mathbb Z)</math>, we have
:<math>\|u\|_{\ell^2(\mathbb Z)}^2 = \sum_{n=-\infty}^{\infty} \left| u_n \right|^2.</math>
We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests [[singular value decomposition]] as a possible technique to approximate the action of the operator.
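A small numerical sketch of this idea (using NumPy, not part of the article's sources): by the Eckart–Young theorem, the truncated SVD gives the best rank-<math>k</math> approximation in spectral norm, with error equal to the first discarded singular value.

```python
import numpy as np

n = 8
a = 1.0 / (np.arange(2 * n - 1) + 1.0)      # entries 1/(i+j+1): the Hilbert matrix
H = np.array([[a[i + j] for j in range(n)] for i in range(n)])

U, s, Vt = np.linalg.svd(H)
k = 3
H_k = (U[:, :k] * s[:k]) @ Vt[:k, :]        # rank-k truncation
err = np.linalg.norm(H - H_k, 2)            # spectral (operator 2-) norm
assert np.isclose(err, s[k])
# Note: H_k is generally NOT itself Hankel; Hankel-structured
# approximation is the subject of AAK theory.
```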
Note that the matrix <math>A</math> does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. We also require that the approximation is a Hankel matrix, which can be shown with AAK theory.

The determinant of a Hankel matrix is called a [[catalecticant]].

==Hankel matrix transform==
{{Distinguish|Hankel transform}}
The '''Hankel matrix transform''', or simply '''Hankel transform''', produces the sequence of the determinants of the Hankel matrices formed from the given sequence. Namely, the sequence <math>\{h_n\}_{n \ge 0}</math> is the Hankel transform of the sequence <math>\{b_n\}_{n\ge 0}</math> when
:<math>h_n = \det (b_{i+j-2})_{1 \le i,j \le n+1}.</math>
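As a concrete illustration (a NumPy sketch, not from the article's sources), the Catalan numbers 1, 1, 2, 5, 14, … are a classic example: their Hankel transform is the all-ones sequence.

```python
import numpy as np
from math import comb

def hankel_transform(b, N):
    """h_n = det (b_{i+j-2})_{1 <= i,j <= n+1}, i.e. det (b_{i+j})_{0 <= i,j <= n}."""
    return [round(np.linalg.det(np.array(
        [[b[i + j] for j in range(n + 1)] for i in range(n + 1)], dtype=float)))
        for n in range(N)]

catalan = [comb(2 * k, k) // (k + 1) for k in range(9)]
assert hankel_transform(catalan, 4) == [1, 1, 1, 1]
```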
The Hankel transform is invariant under the [[binomial transform]] of a sequence. That is, if one writes
:<math>c_n = \sum_{k=0}^{n} \binom{n}{k} b_k</math>
as the binomial transform of the sequence <math>\{b_n\}</math>, then one has
:<math>\det (b_{i+j-2})_{1 \le i,j \le n+1} = \det (c_{i+j-2})_{1 \le i,j \le n+1}.</math>
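This invariance can be checked numerically on an arbitrary sequence (a NumPy sketch, not part of the article):

```python
import numpy as np
from math import comb

def hankel_det(b, n):
    """h_n = det (b_{i+j})_{0 <= i,j <= n}."""
    M = np.array([[b[i + j] for j in range(n + 1)] for i in range(n + 1)], dtype=float)
    return round(np.linalg.det(M))

b = [1, 3, 2, 7, 1, 4, 9]                         # arbitrary integer sequence
c = [sum(comb(m, k) * b[k] for k in range(m + 1)) for m in range(len(b))]

# Hankel determinants of b and of its binomial transform c agree.
assert all(hankel_det(b, n) == hankel_det(c, n) for n in range(4))
```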
== Applications of Hankel matrices ==
Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or [[hidden Markov model]] is desired.<ref>{{cite book |first=Masanao |last=Aoki |author-link=Masanao Aoki |chapter=Prediction of Time Series |title=Notes on Economic Time Series Analysis : System Theoretic Perspectives |location=New York |publisher=Springer |year=1983 |isbn=0-387-12696-1 |pages=38–47 |chapter-url=https://books.google.com/books?id=l_LsCAAAQBAJ&pg=PA38 }}</ref> The singular value decomposition of the Hankel matrix provides a means of computing the ''A'', ''B'', and ''C'' matrices which define the state-space realization.<ref>{{cite book |first=Masanao |last=Aoki |chapter=Rank determination of Hankel matrices |title=Notes on Economic Time Series Analysis : System Theoretic Perspectives |location=New York |publisher=Springer |year=1983 |isbn=0-387-12696-1 |pages=67–68 |chapter-url=https://books.google.com/books?id=l_LsCAAAQBAJ&pg=PA67 }}</ref> The Hankel matrix formed from the signal has been found useful for decomposition of non-stationary signals and time-frequency representation.
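A minimal sketch of the idea behind such realizations (a hypothetical two-state system, invented for illustration with NumPy; not the procedure from the cited sources): the Hankel matrix of the impulse response has rank equal to the state dimension, which is what the SVD-based realization step exploits.

```python
import numpy as np

# Hypothetical two-state system; its Markov parameters are C A^k B.
A = np.array([[0.9, 0.1], [0.0, 0.5]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
markov = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(9)]

# Hankel matrix of the output data; its rank reveals the state dimension.
H = np.array([[markov[i + j] for j in range(5)] for i in range(5)])
assert np.linalg.matrix_rank(H, tol=1e-10) == 2
```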
=== Method of moments for polynomial distributions ===

The method of moments applied to polynomial distributions results in a Hankel matrix that needs to be inverted in order to obtain the weight parameters of the polynomial distribution approximation.<ref>J. Munkhammar, L. Mattsson, J. Rydén (2017) "Polynomial probability distribution estimation using the method of moments". PLoS ONE 12(4): e0174573. https://doi.org/10.1371/journal.pone.0174573</ref>

==Positive Hankel matrices and the Hamburger moment problems==
==See also==
* [[Toeplitz matrix]], an "upside down" (i.e., row-reversed) Hankel matrix
* [[Cauchy matrix]]
* [[Vandermonde matrix]]
==Notes==
{{reflist}}
==References==
* Brent, R. P. (1999), "Stability of fast algorithms for structured linear systems", ''Fast Reliable Algorithms for Matrices with Structure'' (eds. T. Kailath, A. H. Sayed), ch. 4 (SIAM).
* Fuhrmann, Paul A. (2012). ''A polynomial approach to linear algebra''. Universitext (2nd ed.). New York, NY: Springer. doi:10.1007/978-1-4614-0338-8. ISBN 978-1-4614-0337-1. Zbl 1239.15001.
* Pan, Victor Y. (2001). ''Structured matrices and polynomials: unified superfast algorithms''. Birkhäuser. ISBN 0817642404.
* Partington, J. R. (1988). ''An introduction to Hankel operators''. LMS Student Texts. Vol. 13. Cambridge University Press. ISBN 0-521-36791-3.