
Hankel matrix

From Wikipedia, the free encyclopedia
Square matrix in which each ascending skew-diagonal from left to right is constant

In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a rectangular matrix in which each ascending skew-diagonal from left to right is constant. For example,

\begin{bmatrix} a & b & c & d & e \\ b & c & d & e & f \\ c & d & e & f & g \\ d & e & f & g & h \\ e & f & g & h & i \end{bmatrix}.

More generally, a Hankel matrix is any n × n matrix A of the form

A = \begin{bmatrix} a_0 & a_1 & a_2 & \ldots & a_{n-1} \\ a_1 & a_2 & & & \vdots \\ a_2 & & & & a_{2n-4} \\ \vdots & & & a_{2n-4} & a_{2n-3} \\ a_{n-1} & \ldots & a_{2n-4} & a_{2n-3} & a_{2n-2} \end{bmatrix}.

In terms of the components, if the (i, j) element of A is denoted A_{ij}, and assuming i ≤ j, then we have A_{i,j} = A_{i+k,j-k} for all k = 0, ..., j − i.
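The constant-skew-diagonal property is easy to verify numerically. The sketch below uses NumPy and SciPy (an implementation choice for this illustration, not something the article prescribes) to build a 5 × 5 Hankel matrix and check A_{i,j} = A_{i+k,j-k} for every valid k:

```python
import numpy as np
from scipy.linalg import hankel

# hankel(c, r) places c down the first column and r along the last row,
# so A[i, j] depends only on i + j.
A = hankel([1, 2, 3, 4, 5], [5, 6, 7, 8, 9])

# Each ascending skew-diagonal (constant i + j) holds a single value:
n = A.shape[0]
for i in range(n):
    for j in range(i, n):
        for k in range(j - i + 1):
            assert A[i, j] == A[i + k, j - k]
print(A[0])  # first row: [1 2 3 4 5]
```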

Properties

  • Any square Hankel matrix is symmetric.
  • Let J_n be the n × n exchange matrix. If H is an m × n Hankel matrix, then H = T J_n, where T is an m × n Toeplitz matrix.
    • If T is real symmetric, then H = T J_n will have the same eigenvalues as T up to sign.[1]
  • The Hilbert matrix is an example of a Hankel matrix.
  • The determinant of a Hankel matrix is called a catalecticant.
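The first two properties can be checked directly. The following sketch (NumPy/SciPy assumed, values invented for illustration) verifies that a square Hankel matrix is symmetric and exhibits the factorization H = T J_n, where J_n is the exchange matrix:

```python
import numpy as np
from scipy.linalg import hankel, toeplitz

n = 4
H = hankel([0, 1, 2, 3], [3, 4, 5, 6])   # a square Hankel matrix
assert np.array_equal(H, H.T)            # any square Hankel matrix is symmetric

# Exchange matrix J_n: ones on the anti-diagonal.
J = np.fliplr(np.eye(n))

# Multiplying by J reverses columns, so T = H J is Toeplitz
# (constant descending diagonals), and since J J = I we get H = T J.
T = H @ J
assert np.array_equal(H, T @ J)
# Confirm T really is Toeplitz: T[i, j] depends only on i - j.
assert np.array_equal(T, toeplitz(T[:, 0], T[0, :]))
```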

Hankel operator


Given a formal Laurent series

f(z) = \sum_{n=-\infty}^{N} a_n z^n,

the corresponding Hankel operator is defined as[2]

H_f : \mathbf{C}[z] \to z^{-1}\mathbf{C}[[z^{-1}]].

This takes a polynomial g \in \mathbf{C}[z] and sends it to the product fg, but discards all powers of z with a non-negative exponent, so as to give an element in z^{-1}\mathbf{C}[[z^{-1}]], the formal power series with strictly negative exponents. The map H_f is in a natural way \mathbf{C}[z]-linear, and its matrix with respect to the elements 1, z, z^2, \dots \in \mathbf{C}[z] and z^{-1}, z^{-2}, \dots \in z^{-1}\mathbf{C}[[z^{-1}]] is the Hankel matrix

\begin{bmatrix} a_1 & a_2 & \ldots \\ a_2 & a_3 & \ldots \\ a_3 & a_4 & \ldots \\ \vdots & \vdots & \ddots \end{bmatrix}.

Any Hankel matrix arises in this way. A theorem due to Kronecker says that the rank of this matrix is finite precisely if f is a rational function, that is, a fraction of two polynomials f(z) = p(z)/q(z).

Approximations


We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests singular value decomposition as a possible technique to approximate the action of the operator.

Note that the matrix A does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. We also require that the approximation itself be a Hankel matrix, which can be achieved using AAK theory.
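For a finite Hankel matrix, truncating the SVD gives the best approximation of a given rank in the spectral norm (the Eckart-Young theorem), with error equal to the first discarded singular value. The sketch below (NumPy/SciPy assumed, data randomly generated for illustration) demonstrates this, and also shows why AAK theory is needed: the truncation is generally no longer Hankel.

```python
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(0)
seq = rng.standard_normal(15)
H = hankel(seq[:8], seq[7:])             # an 8x8 Hankel matrix

# Best rank-r approximation in the spectral norm: keep the top r
# singular triples; the error is the (r+1)-th singular value.
U, s, Vt = np.linalg.svd(H)
r = 3
H_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
err = np.linalg.norm(H - H_r, 2)
assert np.isclose(err, s[r])

# Caveat: H_r is generally NOT a Hankel matrix; finding the best
# approximation within the Hankel class is the problem AAK theory solves.
```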

Hankel matrix transform

Not to be confused with Hankel transform.

The Hankel matrix transform, or simply Hankel transform, of a sequence b_k is the sequence of the determinants of the Hankel matrices formed from b_k. Given an integer n > 0, define the corresponding (n × n)-dimensional Hankel matrix B_n as having the matrix elements

[B_n]_{i,j} = b_{i+j}.

Then the sequence h_n given by

h_n = \det B_n

is the Hankel transform of the sequence b_k. The Hankel transform is invariant under the binomial transform of a sequence. That is, if one writes

c_n = \sum_{k=0}^{n} \binom{n}{k} b_k

as the binomial transform of the sequence b_n, then one has \det B_n = \det C_n.
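Both the definition and the binomial-transform invariance can be demonstrated on a classic example: the Hankel transform of the Catalan numbers is the all-ones sequence. The sketch below (plain Python; the exact integer determinant uses the Bareiss fraction-free algorithm, chosen here to avoid floating-point rounding, not something the article specifies):

```python
from math import comb

def int_det(a):
    """Exact determinant of an integer matrix (Bareiss fraction-free algorithm)."""
    a = [row[:] for row in a]
    n = len(a)
    sign, prev = 1, 1
    for k in range(n - 1):
        if a[k][k] == 0:                 # swap in a nonzero pivot if needed
            for r in range(k + 1, n):
                if a[r][k] != 0:
                    a[k], a[r] = a[r], a[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Division by the previous pivot is exact by Sylvester's identity.
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
        prev = a[k][k]
    return sign * a[n - 1][n - 1]

def hankel_transform(b, n_max):
    """h_n = det of the n x n matrix [B_n]_{i,j} = b_{i+j} (0-indexed)."""
    return [int_det([[b[i + j] for j in range(n)] for i in range(n)])
            for n in range(1, n_max + 1)]

# Catalan numbers C_0..C_9; their Hankel transform is famously all ones.
catalan = [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
print(hankel_transform(catalan, 5))   # [1, 1, 1, 1, 1]

# Invariance under the binomial transform: c_n = sum_k C(n, k) b_k.
c = [sum(comb(n, k) * catalan[k] for k in range(n + 1))
     for n in range(len(catalan))]
assert hankel_transform(c, 5) == hankel_transform(catalan, 5)
```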

Applications of Hankel matrices


Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or hidden Markov model is desired.[3] The singular value decomposition of the Hankel matrix provides a means of computing the A, B, and C matrices which define the state-space realization.[4] The Hankel matrix formed from the signal has been found useful for decomposition of non-stationary signals and time-frequency representation.
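A minimal sketch of this idea is a Ho-Kalman-style realization: form the Hankel matrix of the output data (Markov parameters), factor it by SVD, and read off a state-space model. The system below is invented purely for illustration, and NumPy is an assumption of the sketch, not of the article:

```python
import numpy as np

# A hypothetical 2-state system (A, B, C), used only to generate data.
A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 1.0]])

# Markov parameters m_k = C A^k B (the model's impulse response).
m = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(12)]

# Hankel matrix of the output data: H[i, j] = m_{i+j}.
H = np.array([[m[i + j] for j in range(6)] for i in range(6)])

# The SVD reveals the state dimension and factors H = O @ Ctr.
U, s, Vt = np.linalg.svd(H)
r = int(np.sum(s > 1e-10))               # numerical rank = state order (2)
O = U[:, :r] * np.sqrt(s[:r])            # observability-like factor
Ctr = np.sqrt(s[:r])[:, None] * Vt[:r]   # controllability-like factor

# Recover a realization (up to similarity) from the shift structure of O.
A_hat = np.linalg.pinv(O[:-1]) @ O[1:]
B_hat = Ctr[:, :1]
C_hat = O[:1, :]

# The recovered model reproduces the original Markov parameters.
m_hat = [(C_hat @ np.linalg.matrix_power(A_hat, k) @ B_hat).item()
         for k in range(12)]
assert np.allclose(m, m_hat)
```

The shift trick works because the rows of the observability factor are C, CA, CA^2, ..., so shifting down by one row corresponds to right-multiplication by A.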

Method of moments for polynomial distributions


The method of moments applied to polynomial distributions results in a Hankel matrix that needs to be inverted in order to obtain the weight parameters of the polynomial distribution approximation.[5]
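As a hedged sketch of how the Hankel matrix arises (the concrete density and weights are invented for illustration, and NumPy/SciPy is an implementation choice): model a density on [0, 1] as p(x) = w_0 + w_1 x + w_2 x^2. Its raw moments are m_j = ∫_0^1 x^j p(x) dx = Σ_i w_i/(i + j + 1), so the moments relate to the weights through the matrix H[j, i] = 1/(i + j + 1), which depends only on i + j, i.e. a Hankel matrix (in fact the Hilbert matrix). Inverting it recovers the weights:

```python
import numpy as np
from scipy.linalg import hilbert

# Illustrative weights: p(x) = 0.5 + 2x - 1.5x^2 is nonnegative on [0, 1]
# and integrates to 0.5 + 2/2 - 1.5/3 = 1, so it is a valid density.
w_true = np.array([0.5, 2.0, -1.5])

# Moments m_j = sum_i w_i / (i + j + 1): a Hankel (here Hilbert) system.
H = hilbert(3)          # H[j, i] = 1 / (i + j + 1)
m = H @ w_true          # raw moments m_0, m_1, m_2

# Method of moments: invert the Hankel matrix to recover the weights.
w_hat = np.linalg.solve(H, m)
assert np.allclose(w_hat, w_true)
```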

Positive Hankel matrices and the Hamburger moment problems

Further information: Hamburger moment problem


Notes

  1. ^ Yasuda, M. (2003). "A Spectral Characterization of Hermitian Centrosymmetric and Hermitian Skew-Centrosymmetric K-Matrices". SIAM J. Matrix Anal. Appl. 25 (3): 601–605. doi:10.1137/S0895479802418835.
  2. ^ Fuhrmann 2012, §8.3
  3. ^ Aoki, Masanao (1983). "Prediction of Time Series". Notes on Economic Time Series Analysis: System Theoretic Perspectives. New York: Springer. pp. 38–47. ISBN 0-387-12696-1.
  4. ^ Aoki, Masanao (1983). "Rank determination of Hankel matrices". Notes on Economic Time Series Analysis: System Theoretic Perspectives. New York: Springer. pp. 67–68. ISBN 0-387-12696-1.
  5. ^ Munkhammar, J.; Mattsson, L.; Rydén, J. (2017). "Polynomial probability distribution estimation using the method of moments". PLoS ONE 12(4): e0174573. doi:10.1371/journal.pone.0174573.

