The Matrix Reference Manual by Mike Brookes.
The Fundamental Theorem of Linear Algebra (1993) by Gil Strang (1934-).
Toeplitz Matrices by Dario Bini.
Toeplitz Determinants and Spin Correlations by Tai Tsun Wu (1966).
MathWorld (Eric Weisstein): Hankel Matrix
MathPages (Kevin Brown): The Resultant and Bezout's Theorem
A Library of Hadamard Matrices by N.J.A. Sloane.
Complex Hadamard Matrices by Karol Zyczkowski and Wojciech Tadej.
The Hadamard maximal determinant problem.
Wikipedia: Determinant | List of Matrices | Levinson-Durbin algorithm
The Rise and Fall of Matrices (42:37) by Walter Ledermann (LMS, 1986).
Gaussian elimination [ Part 1 | Part 2 ] by Norman J. Wildberger (2010).
What's a [symmetric] determinant? (2:50) by James Schloss (LeiosOS, 2016-07-23).
The determinant, essence of linear algebra (10:02) by Grant Sanderson (2016-08-10).
Those square matrices whose elements vanish except for one unit element per row and per column are known as permutation matrices. Each corresponds to a permutation of the columns of the identity matrix (I) and they form collectively a multiplicative group isomorphic to the symmetric group on a set of n elements.
One particular permutation matrix is the so-called exchange matrix (J) which is the "mirror image" of the identity matrix itself. For example:
In spite of this superficial finite-dimensional resemblance with the identity matrix, there's no nice generalization of the exchange matrix to infinitely many dimensions.
Of particular interest is Strang's matrix S11 because it has 121 elements...
The scalar at row i and column j of matrix A is often denoted aij
Similar ad hoc notations are adequate when juggling with just a few distinct matrices, in spite of the fact that the name of the doubly-subscripted symbol is not directly related to the name of the matrix itself (A)...
That flaw is avoided with the nice self-contained notation (due to P.A.M. Dirac) shown on the right-hand side of the following equation:
aij = < i | A | j>
This is sometimes called the element of A between i and j. Abstract meanings are given to separate parts of that notation: | j > is called a ket (or column vector) in the canonical orthonormal basis and < i | is the dual of such a thing (it's called a bra or row vector).
Originally, the words bra and ket were just the two halves of the word bracket, a mnemonic invented by Dirac as a reminder that the row vector comes before the column vector in the bracket notation normally used to denote the inner product, which forms a scalar. (With the opposite order, you'd form a square matrix instead.)
These interpretations are important and do repay study, but they're not a prerequisite to use the notation as a whole. For example, the following equation defines the sum A+B of two matrices A and B by giving every element of it, in terms of a simple addition of scalars :
< i | A+B | j> = < i | A | j> + < i | B | j>
More interestingly, here's how to define the product A B of two matrices:
< i | A B | j> = Σk < i | A | k> < k | B | j>
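A minimal Python sketch of that summation rule (the function name matmul and the sample matrices are illustrative, not part of the original text):

```python
def matmul(A, B):
    """Matrix product of two matrices given as lists of rows:
    the (i, j) entry of A B is the sum over k of A[i][k] * B[k][j]."""
    rows, inner, cols = len(A), len(B), len(B[0])
    assert len(A[0]) == inner, "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

# Example:
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```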
That general rule for matrix multiplication was made explicit in 1812
(using more traditional index notations) by the French mathematician
Jacques Binet
(1786-1856; X1804) who is also remembered for two utterly unrelated things,
both known as Binet's formulas :
1) Explicit expression for the n-th term of a sequence obeying a linear recurrence relation (e.g., the Fibonacci sequence, as stated explicitly below).
2) In celestial mechanics, Binet's formulas allow an easy proof that the orbit of two gravitating bodies must be a conic section.
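For the Fibonacci example (F0 = 0, F1 = 1), the first of those formulas reads, in standard notation:

$$ F_n \;=\; \frac{\varphi^n - \psi^n}{\sqrt{5}}\,, \qquad \varphi = \tfrac{1+\sqrt{5}}{2}\,, \quad \psi = \tfrac{1-\sqrt{5}}{2} $$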
If we denote by z* the conjugate a-ib of the complex number z = a+ib (so that z = z* whenever z is real), then we may define as follows the conjugate transpose A* of the complex matrix A:
" i , " j , < i | A* | j> = < j | A | i>*
This operation is far more important than transposition without conjugation (which is virtually useless in the complex realm). With respect to A, the matrix A* is variously called conjugate transpose, Hermitian conjugate, adjoint, Hermitian adjoint or dual. (It's called transpose for real matrices.)
Here's how to prove for matrices the relation which gives the adjoint of a product: (A B)* = B* A* (this holds for whatever is called adjoint or dual among other multiplicative objects for which the term is defined).
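In the bra-ket notation introduced above, the verification is a short computation; here is a sketch of that argument:

$$ \langle i \mid (AB)^{*} \mid j \rangle
 \;=\; \langle j \mid AB \mid i \rangle^{*}
 \;=\; \sum_{k} \langle j \mid A \mid k \rangle^{*}\, \langle k \mid B \mid i \rangle^{*}
 \;=\; \sum_{k} \langle i \mid B^{*} \mid k \rangle\, \langle k \mid A^{*} \mid j \rangle
 \;=\; \langle i \mid B^{*} A^{*} \mid j \rangle $$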
The concepts we introduced in the previous section are natural in the sense that they do not depend on the structure of the set used to index matrix coefficients. In particular, any order between distinct indices was utterly irrelevant in those definitions.
Incidentally, this is true because we only consider matrices of finite dimension over a ring of coefficients where addition is required to be commutative. If either condition fails, the summation symbol Σ (subscripted by the dummy name of the summation index) is no longer well-defined. Even with a commutative addition, an infinite sum could depend on the ordering of indices.
The identity matrix I has a natural definition in this sense, while other permutation matrices do not (including the mirror image of I featured above).
[ Come back later, we're still working on this one... ]
Transposition is one obvious anti-isomorphism between upper-triangular matrices and lower-triangular matrices. A straight isomorphism can be defined using any involution σ among the set of indices. In dimension n+1, if we use the integers from 0 to n as our set of indices, σ(i) = n-i will do. The induced involution over the set of matrices is defined by:
< i | σ(A) | j> = < σ(j) | A | σ(i)>
So defined, σ is a ring isomorphism between upper-triangular matrices and lower-triangular matrices.
In a vector space of dimension n, a completely antisymmetrical form is a scalar-valued function of n vectors, which is linear with respect to every argument and has the property of being unchanged by any even permutation of the arguments and negated by any odd one.
Even and odd permutations have signature +1 and -1, respectively. Permutations are bijective functions of a finite set onto itself. The signature of a function σ which is not bijective is zero ( sgn(σ) = 0 ).
In dimension 3 for example, such a function f would be trilinear and verify:
f ( v1 , v2 , v3 ) = f ( v2 , v3 , v1 ) = - f ( v2 , v1 , v3 )
A key observation is that all such functions are proportional to each other.
[ Come back later, we're still working on this one... ]
Leibniz's formula entails a sum of n! terms, each a product of n factors:
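In standard notation, summing over all n! permutations σ of the indices, that expansion reads:

$$ \det(A) \;=\; \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\; \prod_{i=1}^{n} a_{i\,\sigma(i)} $$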
The number of deleted rows and columns can be zero (the whole determinant is a minor of itself). An n-th minor is obtained by deleting n rows and n columns. Of particular interest are the first minors of a determinant, which are obtained by deleting one row (the i-th one) and one column (the j-th one).
[ Come back later, we're still working on this one... ]
Wikipedia : Minor | Cofactor | Laplace expansion
The adjugate adj(A) of a matrix A may no longer be called its "adjoint", which is now a synonym of transpose conjugate (denoted A* or A†).
A matrix A is invertible iff it has nonzero determinant, in which case:
A-1 = adj(A) / det(A)
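A small Python (sympy) sketch checking that relation exactly on a sample matrix; the matrix chosen here is purely illustrative:

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0], [1, 3, 1], [0, 1, 2]])
adjA = A.adjugate()                       # transpose of the matrix of cofactors
assert A * adjA == A.det() * sp.eye(3)    # fundamental identity: A adj(A) = det(A) I
A_inv = adjA / A.det()                    # legitimate here, since det(A) = 8 is nonzero
assert A_inv == A.inv()
```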
As the determinant of the matrix A is an homogeneous polynomial of its elements, it can be written uniquely in the following way, where the polynomials Aij and Bij do not depend on the element aij.
det(A) = aij Aij + Bij
By definition, Aij is the cofactor of aij
[ Come back later, we're still working on this one... ]
Eigenvalue Problems (in Sixty Symbols) by Seamus Garvey.
For any complex matrix A (Hermitian or not) the Rayleigh quotient (or Rayleigh-Ritz ratio) at a nonzero point x is the following ratio:
( x* A x ) / ( x* x )
If A is a quantum observable, that quantity is its expected value in the quantum state described by ket x.
The range of all Rayleigh quotients is called the numerical range of A. It clearly contains the spectrum of A (i.e., all eigenvalues of A).
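For a Hermitian A, the numerical range is simply the real interval between the extreme eigenvalues. A quick Python check of that bound (the sample matrix is illustrative):

```python
import numpy as np

def rayleigh(A, x):
    """Rayleigh quotient (x* A x) / (x* x) at a nonzero vector x."""
    return (x.conj() @ A @ x) / (x.conj() @ x)

A = np.array([[2.0, 1.0], [1.0, 3.0]])      # a real symmetric (Hermitian) example
lo, hi = np.linalg.eigvalsh(A)[[0, -1]]     # smallest and largest eigenvalues
for _ in range(5):
    x = np.random.randn(2)
    assert lo - 1e-12 <= rayleigh(A, x) <= hi + 1e-12
```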
[ Come back later, we're still working on this one... ]
Rayleigh quotient | Numerical range | Product numerical range | Spectrum | Spectral theory
Sturm-Liouville theory (1834) | Blaschke products (1915) | Wilhelm Blaschke (1885-1962)
Poncelet's closure theorem | Jean-Victor Poncelet (1788-1867; X1807)
Finding Ellipses by Ulrich Daepp, Pamela Gorkin, Andrew Shaffer & Karl Voss (MAA Press, 2018-10-30)
[ Come back later, we're still working on this one... ]
The trace of a matrix is the sum of the elements in its diagonal:
It's also the coefficient of (-x)n-1 in the characteristic polynomial det(A - xI) of the operator. As such, the trace is an intrinsic value associated to the operator itself, irrespective of the choice of a coordinate basis. So, we may talk freely about the trace of an operator (or the trace of a tensor).
The trace function is a linear function. Therefore, its kernel is a vector space, consisting of the traceless matrices, which play an important rôle in quantum theory: the Lie algebras of the groups SU(2) and SU(3) consist of traceless matrices, as do their standard generators, namely the three Pauli matrices and the eight Gell-Mann matrices, respectively.
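A brief Python check of the two preceding remarks (invariance of the trace under a change of basis, and trace = sum of the eigenvalues); the matrices are illustrative:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
P = np.array([[2.0, 1.0], [1.0, 1.0]])                  # an invertible change of basis
B = P @ A @ np.linalg.inv(P)                            # a matrix similar to A
print(np.trace(A), np.trace(B))                         # both equal 5.0 (up to rounding)
print(np.trace(A), np.sum(np.linalg.eigvals(A)).real)   # trace = sum of the eigenvalues
```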
Defining Q as the least monic polynomial having the same roots as P, we obtain it as P/R, where R is the greatest (monic) common divisor of P and its derivative P'.
It's not even necessary to know the roots of P to perform that calculation.
When the characteristic polynomial P is irreducible, it's equal to the minimal polynomial Q.
That general theorem was first stated in 1858 by Arthur Cayley who only checked it for 3-dimensional matrices. A special case (3-dimensional rotations) had been stated by Hamilton three years earlier. The first general proof was published in 1878 by Georg Frobenius (1849-1917).
As the coefficients of the characteristic polynomial of a matrix A are polynomial functions of its elements, the assertion states that a particular set of n2 polynomial functions (of degree n) in n2 variables vanishes.
To prove that the coefficients of all the relevant polynomials vanish, it suffices to establish that the associated polynomial functions vanish on a sufficiently large set of points. In particular, a dense set of points will do (the zeroes of any continuous function form a closed set).
For an n-dimensional vector space over complex numbers, the set of matrices with distinct eigenvalues is dense among all square matrices.
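A quick exact check of the Cayley-Hamilton theorem in Python (sympy), evaluating the characteristic polynomial at the matrix by Horner's rule; the sample matrix is arbitrary:

```python
import sympy as sp

A = sp.Matrix([[1, 2], [3, 4]])
x = sp.symbols('x')
coeffs = A.charpoly(x).all_coeffs()       # characteristic polynomial x**2 - 5*x - 2
R = sp.zeros(2, 2)
for c in coeffs:                          # Horner evaluation of the polynomial at A
    R = R * A + c * sp.eye(2)
assert R == sp.zeros(2, 2)                # Cayley-Hamilton: the result is the zero matrix
```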
[ Come back later, we're still working on this one... ]
Algebraic Number Fields as Matrix Algebras by Kapil Hari Paranjape (2002-10-20).
Wikipedia: Cayley-Hamilton theorem
Two matrices A and B are said to be similar when they are related by an inner isomorphism, namely there's an invertible matrix P such that:
B = P A P-1
A matrix is said to be diagonalizable if it's similar to a diagonal matrix.
[ Come back later, we're still working on this one... ]
Jordan matrix | Jordan normal form | Camille Jordan (1838-1922; X1855)
Although Camille Jordan published an early form of it in 1870, that result took a full century to make its way into a graduate textbook (Dunford & Schwartz, 1971). It's now common knowledge among graduate students and it has entered the standard undergraduate curriculum for linear algebra.
What is still lost on most students and some faculty is the remarkable fact that the decomposition can be effectively performed without any knowledge of the eigenvalues of the matrix to be decomposed. The ground field need not even be algebraically closed (like the complex numbers). A perfect field is enough (e.g., the rationals or any finite field).
That crucial observation is due to Claude Chevalley (1951), which fully justifies the name of Jordan-Chevalley decomposition, now in common use. Here's the chronology of the four key publications:
Since the Dunford & Schwartz textbook (1971) was so instrumental in its popularization, the concept was known for a while as Dunford's decomposition, a term which is now being deprecated. (Dunford's earliest research on the topic appeared 3 years after Chevalley's final results.) As a clear example of reverse-chauvinism, the French have held on to the attribution to Dunford longer than anyone else... ;)
In a context where the matrix could be put in Jordan's normal form, the existence and uniqueness of the Jordan-Chevalley decomposition are fairly trivial. However, as advertised, we aim for a stronger result and won't assume that the characteristic polynomial or the minimal polynomial can be fully factored.
We merely assume that the ground field is a perfect field. This is all the following algorithm needs to uniquely determine the diagonalizable part D of the matrix M. The nilpotent part N is then obtained as M-D. The ground field need not be algebraically closed (as Chevalley's algorithm doesn't require that). The last step will be to remark that N and D commute, because they're both obtained as polynomial functions of M. QED
Let M be the matrix to decompose. Let Q be its minimal polynomial. The diagonalizable part D of M is a zero of Q.
In elementary numerical analysis, the Newton-Raphson method can be used to find a numerical zero of a polynomial Q (or any other differentiable function) as a limit of a sequence of approximations, defined by recurrence:
xn+1 = xn - Q ( xn ) / Q' ( xn )
If the starting point is close enough to the root, the sequence so defined converges quadratically to that root (i.e., the number of correct digits roughly doubles with each iteration).
Chevalley's idea was to apply the same strategy and the same iteration formula to matrices instead of numbers. The nice surprise is that the convergence is even much better, as the limit is reached in finitely many steps (no more than the binary logarithm of the dimension of the matrix). Let's show that:
[ Come back later, we're still working on this one... ]
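The iteration itself is easy to transcribe. Here is a Python (sympy) sketch of that strategy under the stated assumptions (exact rational arithmetic); the helper names poly_at and diagonalizable_part, and the sample matrix, are ours:

```python
import math
import sympy as sp

def poly_at(coeffs, M):
    """Evaluate a polynomial (coefficients listed from the highest degree) at a square matrix."""
    R = sp.zeros(*M.shape)
    for c in coeffs:
        R = R * M + c * sp.eye(M.shape[0])
    return R

def diagonalizable_part(M):
    """Diagonalizable part D of M (Jordan-Chevalley: M = D + N, N nilpotent, DN = ND)."""
    x = sp.symbols('x')
    P = sp.Poly(M.charpoly(x).as_expr(), x)
    Q = sp.div(P, sp.gcd(P, P.diff(x)))[0]     # square-free part of the characteristic polynomial
    Qp = Q.diff(x)
    D = M
    for _ in range(max(1, math.ceil(math.log2(M.shape[0])))):   # finitely many Newton steps
        D = D - poly_at(Q.all_coeffs(), D) * poly_at(Qp.all_coeffs(), D).inv()
    return D

M = sp.Matrix([[3, 1], [0, 3]])                # a Jordan block: D should be 3I
D = diagonalizable_part(M)
print(D, M - D)                                # Matrix([[3, 0], [0, 3]]) and a nilpotent remainder
```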
Nilpotent matrix | Diagonalizable matrix | Jordan-Chevalley (Dunford) decomposition
Decomposition of Matrices in Semisimple and Nilpotent Parts by Miguel (MathOverflow, 2012-09-29).
The Unapologetic Mathematician: The Jordan-Chevalley decomposition & Proof (2012-08-28).
The Jordan-Chevalley decomposition by Joo Heon Yoo (2014-09-08).
Décomposition de Dunford (5:25, in French) by Matthieu Romagny (Les 5 minutes Lebesgue, 2016-01-19).
Décomposition effective de Jordan-Chevalley et ses retombées en enseignement by Danielle Couty, Jean Esterlé & Rachid Zarouf (arXiv, 2011-03-25 / 2013-01-15).
The spectral radius of a square matrix is the largest modulus of its eigenvalues. A simple eigenvalue is a simple root of the characteristic polynomial. It's associated with an eigenspace of dimension 1.
[ Come back later, we're still working on this one... ]
Cardinal voting | Perron-Frobenius theorem (Perron 1907, Frobenius 1912)
Oskar Perron (1880-1975) | Georg Frobenius (1849-1917)
The Vandermonde matrix associated with a sequence (xn ) is the square matrix of the successive powers of the elements in that sequence. The element on row i and column j is (xj ) i. [Some authors consider the transpose of this.]
The determinant of the above Vandermonde matrix has a nice expression:
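Explicitly, with indices running from 0 to n (so that the entry on row i and column j is xj raised to the power i), the standard closed form is:

$$ \det V \;=\; \prod_{0 \,\le\, i \,<\, j \,\le\, n} ( x_j - x_i ) $$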
Proof : Both sides of the equation are polynomials of degree n(n+1)/2 in n+1 variables, which vanish when two of the variables are equal. Since such polynomials are necessarily proportional, they must be equal if some pair of matching coefficients are equal. That's easily proven for the last pair, by induction on n. (HINT: The coefficient of the last variable raised to the n-th power is the Vandermonde determinant for the n previous variables.) QED
The above Vandermonde determinant and Vandermonde matrix are named after the French mathematician Alexandre-Théophile Vandermonde (1735-1796) who is the rightful founder of the modern theory of determinants...
Earlier authors had stumbled upon determinants when solving several simultaneous linear equations in as many unknowns. Those include Cardano (in the 2 by 2 case only), Leibniz, Cramer (1750) and Bézout (1764).
However, Vandermonde published the first investigation (1771) of determinants in their own right, as functions of the coefficients of a matrix. The importance of the idea was immediately recognized by Laplace (1772) and Lagrange (1773). Curiously, the determinant concept was made clear before the concept of a matrix itself. The modern name "determinant" was coined by Gauss in 1801. When it does not vanish, this quantity "determines" that a system of equations has a unique solution (in which case it's called a "Cramer system", in high-school parlance).
Strangely enough, Vandermonde himself never mentioned the above type of matrices, now named after him. For lack of a better explanation, it has been speculated (by Lebesgue and others) that whoever attributed this enduring "credit" mistook one of Vandermonde's superscripts for an exponent!
What's perhaps the greatest contribution of Vandermonde is often overlooked: Vandermonde came up with the idea of studying functions which are invariant under any permutation of a given polynomial's roots. He did so in the first of only four mathematical papers he ever published: Mémoire sur la résolution des équations (1770). Arguably, this publication marks the very beginning of modern algebra. This idea of Vandermonde's would ultimately lead to the celebrated Abel-Ruffini theorem (which states the impossibility of a general solution by radicals for algebraic equations of degree 5 or more). The subsequent work of Lagrange, Ruffini, Abel and Galois begat Group Theory and Field Theory.
Such a matrix is known as a generalized Vandermonde matrix.
It's a classical example of a totally positive matrix (i.e., a real matrix whose minors are all positive) which is to say that we always obtain a square matrix of positive determinant by deleting from it an equal number (possibly zero) of rows and columns. (Note that, if a matrix is a generalized Vandermonde matrix, so are all its submatrices.)
The fact that all generalized Vandermonde determinants are positive happens to be a consequence of the fact that they are all nonzero...
Indeed, there's a continuous path (depending on a parameter u that goes continuously from 0 to 1) which goes continuously from the above determinant to an ordinary Vandermonde determinant (which is clearly positive by virtue of the above formula which expresses it as a product of positive factors). The determinant at point u along that path is obtained by replacing the exponent λi with the positive quantity:
λ'i = (1-u) λi + u (i-1)
For any u in [0,1], the above inequalities between exponents are preserved. A continuous function of u over [0,1] cannot go from a negative to a positive value without vanishing at some point. So, to show that all generalized Vandermonde determinants are positive, we only need to establish that none of them is zero. Let's prove that:
We proceed by induction on the size n of the matrix (n = 1 being trivial).
Let's assume (by induction hypothesis) that there are no vanishing generalized Vandermonde determinants of size less than n. Then, if we had a vanishing generalized Vandermonde determinant of size n, there would be a linear combination of different powers of x vanishing for at least n positive values of x (namely, a1 ...an). That is to say that we'd have coefficients ci (not all vanishing) for which:
0 = c1 xλ1 + c2 xλ2 + ... + cn xλn   for at least n positive values of x.
So,   0 = c1 + c2 xλ2-λ1 + ... + cn xλn-λ1   for those same values.
By Rolle's theorem, the derivative of the right-hand side would vanish at least once in each of the n-1 intervals separating the n known distinct roots. So would that derivative multiplied into the positive quantity x. Therefore:
0 = c2 (λ2-λ1) xλ2-λ1 + ... + cn (λn-λ1) xλn-λ1   for at least n-1 values of x > 0
This would contradict the induction hypothesis. QED
Wikipedia: Totally Positive Matrix (Fekete's criterion)
Generalized Vandermonde Determinants by Hans Peter Schlickewei & Carlo Viola.
An Explicit Factorization of Totally Positive Generalized Vandermonde Matrices Avoiding Schur Functions by Young-Ming Chen, Hsuan-Chu Li & Eng-Tjioe Tan. Applied Mathematics E-Notes, 8 (2008), pp. 138-147.
Sylvester's law of inertia (1852) | James Joseph Sylvester (1814-1897)
Computing the signature of an n×n symmetric matrix by Igor Rivin (MathOverflow, 2012-09-19).
Theorem of the Day #142, by Robin Whitty.
By definition, a lower-triangular matrix is a matrix with only zero elements above the diagonal.
This decomposition was introduced by the French engineer André-Louis Cholesky in the special case where the matrix A is a positive-definite symmetrical real matrix. In that case, the matrix L has real coefficients and nonzero diagonal elements.
The matrix L isn't unique, as -L, (L*)t and -(L*)t are satisfactory too.
A related construction which applies to any definite Hermitian matrix (not necessarily positive) involves a diagonal real matrix D and a lower-triangular matrix L whose diagonal elements are all equal to 1 (unity).
L D L* = A
Conversely, any matrix of this form is clearly Hermitian (A* = A).
In the special case where A is real, L and D are real too.
The requirement of definiteness (no zero eigenvalue) can be dropped. It's only useful to make the actual construction of L more straightforward.
Because the decomposition can be performed quite efficiently, it can be used to solve a symmetric system of linear equations in real numbers much faster than with general-purpose Gaussian elimination (such symmetrical systems arise in the linear least square method, in particular):
L L* x = b
The triangular systems L y = b and L* x = y are sequentially solved.
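A compact Python sketch of the whole procedure (the textbook Cholesky factorization followed by the two triangular solves); the function name and the sample system are illustrative:

```python
import numpy as np

def cholesky(A):
    """Lower-triangular L with L L^T = A, for a real symmetric positive-definite A."""
    n = len(A)
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = np.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

A = np.array([[4.0, 2.0], [2.0, 3.0]])
b = np.array([6.0, 5.0])
L = cholesky(A)
y = np.linalg.solve(L, b)        # forward substitution:  L y = b
x = np.linalg.solve(L.T, y)      # back substitution:     L^T x = y
print(x, A @ x)                  # A x reproduces b
```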
Cholesky's method was published posthumously in 1924. It received little or no attention until World War II, when Jack Todd taught it in his analysis course at King's College, London. In 1948, Alan Turing proved the numerical stability of the procedure (after empirical results published the same year by J.D. Fox, H.D. Huskey & J.H. Wilkinson).
Cholesky decomposition | HP Prime implementation | André-Louis Cholesky (1875-1918; X1895)
How accurate is Gaussian elimination? by Nicholas J. Higham (July 1989)
In other words, the value of the element on the ith row and jth column of a Toeplitz matrix depends only on the difference (j-i).
Such matrices have been named after Otto Toeplitz (1881-1940).
Wikipedia: Toeplitz matrix | List of matrices
On Calculating the Determinants of Toeplitz Matrices by Hsuan-Chu Li (2011)
The value of the element on the ith row and jth column of an n by n circulant matrix depends only on the difference (j-i) modulo n.
Each row is thus equal to the previous row shifted one place to the right.
The determinant and eigenvalues (λk) of the above circulant matrix are:
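With first row (x0, x1, ..., xn-1), the standard expressions are:

$$ \lambda_k \;=\; \sum_{j=0}^{n-1} x_j\, \omega^{\,jk} \quad (k = 0, 1, \ldots, n-1),
 \qquad \det \;=\; \prod_{k=0}^{n-1} \lambda_k $$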
In this, ω = exp(2iπ/n) is a primitive nth root of unity. For example:
The determinant of a circulant matrix is commonly called the circulant determinant or, for short, the circulant of the n coefficients involved.
The order of those coefficients matters, except when n = 3.
Let's consider a sequence u obeying a linear recurrence of order k-1 :
un+k-1 = a1 un+k-2 + a2 un+k-3 + ... + ak-2 un+1 + ak-1 un
Well, if u is periodic of period k
[ Come back later, we're still working on this one... ]
There are only four recurrences of order 3 yielding sequences of period 4:
Dirichlet characters | Sárközy's condition (1978) | András Sárközy (1941-)
In his 1894 dissertation about Fermat's Last Theorem, Ernst Adolf Wendt (1872-1946) investigated the resultant Wn of Xn-1 and (X+1)n-1.
Emma Lehmer proved that Wn vanishes if and only if n is a multiple of 6.
The integer Wn boils down to the circulant of n binomial coefficients (more precisely, a line of Pascal's triangle, without the rightmost "1").
The element at line i and column j is best described as equal to C(n,|i-j|). With the above notations, xj = C(n,j) [where j goes from 0 to n-1 ]. So:
If p divides q , then Wp divides Wq . The nth Mersenne number (2n-1) divides Wn and the quotient |Wn| / (2n-1) is a perfect square...
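A short exact computation in Python (sympy) illustrating those facts for small n; the function name wendt is ours:

```python
from math import comb
import sympy as sp

def wendt(n):
    """Wendt's determinant W_n: the circulant of the binomial coefficients C(n,0), ..., C(n,n-1)."""
    return sp.Matrix(n, n, lambda i, j: comb(n, (j - i) % n)).det()

print([n for n in range(1, 13) if wendt(n) == 0])       # [6, 12]: W_n = 0 exactly when 6 | n
print(wendt(5) % (2**5 - 1) == 0)                        # True: 2^n - 1 divides W_n
print(sp.sqrt(abs(wendt(5)) // (2**5 - 1)).is_integer)   # True: the quotient is a perfect square
```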
On Wendt's Determinant and Sophie Germain's Theorem by David Ford and Vijay Jha (1993)
On Wendt's Determinant by Charles Helou (1997)
Upper Bounds for the Prime Divisors of Wendt's Determinant by Anastasios Simalarides (2000)
In other words, the value of the element on the ith row and jth column of a persymmetric matrix depends only on the sum of the indices (i+j).
With those notations, the Hankel transform of the sequence xn is defined to be the sequence yn = det ( Mn+1 ) which starts with y0 = x0.
Unfortunately, the above is not the only thing called Hankel transform...
Thus, the elements of a phase matrix are complex numbers of the form:
e iq = cos q + i sin q [ where q is real ]
If such a matrix is real, it's called a sign matrix (its elements are +1 or -1).
In the realm of real numbers, Hadamard matrices are square matrices whose elements are either -1 or +1 and whose rows are pairwise orthogonal.
Such matrices are often represented graphically as square mosaics of square tiles that are either light or dark... Here are special Hadamard matrices, known as Walsh matrices, whose orders are powers of 2 (1, 2, 4, 8, 16).
[ Walsh (Hadamard) matrices of order 1, 2, 4, 8 and 16 ]
In the so-called normalized cases, one row and one column have a uniform color which is traditionally light (so that there are ½ n(n+1) light squares and ½ n(n-1) dark ones). The exact color-code can't be fixed (as the opposite of an Hadamard matrix is also Hadamard) but we usually assume that the dominant (light) color stands for the value +1. When its first column and its first row have a uniform value of +1, the matrix is said to be dephased (a jargon term discussed below, originally applied to complex Hadamard matrices c. 1970).
The above serves to illustrate a remark of Sylvester (1867) that an Hadamard matrix of order 2n may be obtained from a known Hadamard matrix H of order n, simply by juxtaposing four copies of H and negating one of those:
More generally, an Hadamard matrix of order mn may be constructed from an Hadamard matrix K of order m and an Hadamard matrix H of order n by substituting in K all occurrences of +1 by H and all occurrences of -1 by -H (the above is the special case m = 2). In other words, the Kronecker product of two Hadamard matrices is also an Hadamard matrix...
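Both constructions are one-liners in Python with numpy (a sketch; the helper name sylvester_hadamard is ours):

```python
import numpy as np

def sylvester_hadamard(k):
    """Walsh matrix of order 2**k, by k repeated Kronecker products (Sylvester's construction)."""
    H2 = np.array([[1, 1], [1, -1]])
    H = np.array([[1]])
    for _ in range(k):
        H = np.kron(H2, H)
    return H

H = sylvester_hadamard(3)                          # an Hadamard matrix of order 8
assert np.array_equal(H @ H.T, 8 * np.eye(8))      # rows are pairwise orthogonal: H H^T = n I
```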
Hadamard matrices of order n may be defined as square matrices of order n whose elements have unit length ( modulus ) and which verify the relation:
M M* = n In
In this, M* is the transpose of a real matrix M or, more generally, the transpose conjugate of a complex one. Matrices with complex elements of unit moduli which satisfy the above relation are called complex Hadamard matrices. They were introduced in 1970 by Richard J. Turyn (1930-?) as interesting generalizations of the more traditional (real) Hadamard matrices, which can be considered to be merely of a restricted subtype.
A (complex) Hadamard matrix is trivially obtained from another by using one or several of the following types of transformations:
Matrices so obtained from each other are said to be Hadamard-equivalent. A complex Hadamard matrix where all first-row and first-column elements are equal to 1 is said to be dephased. Any complex Hadamard matrix is equivalent to a dephased one (HINT: Use the first type of transformations.)
It's not difficult to prove that all complex Hadamard matrices of order 2 are Hadamard-equivalent to the aforementioned Walsh matrix of order 2.
Similarly, there's essentially only one complex Hadamard matrix of order 3. It involves a primitive cube root of unity, denoted ω:
That's an example of a so-called Butson-Hadamard matrix, namely a complex Hadamard matrix of order n whose elements are p-th roots of unity. Those were first discussed in 1961 by Alton T. Butson (1926-1997) before Turyn's general approach to complex Hadamard matrices (1970).
For any order n, one example of a Butson-Hadamard matrix is given by the Vandermonde matrix of all the n-th roots of unity (which is Hadamard-equivalent to their circulant matrix). Another example is provided, for any composite order, by extending to complex Hadamard matrices what we've already remarked about real ones, namely that the Kronecker product of two (complex) Hadamard matrices is a (complex) Hadamard matrix. In particular, if H is a complex Hadamard matrix of order m, a complex Hadamard matrix of order 3m is obtained from the following pattern:
This can be applied repeatedly to obtain the following ternary equivalent of the aforementioned (binary) Walsh matrices, using a three-color code:
[ Complex Hadamard matrices of order 1, 3, 9 and 27 ]
For any odd prime p, a complex Hadamard matrix whose elements are pth roots of unity must have an order which is a multiple of p. Conversely, it is conjectured that such matrices exist for any order which is a multiple of p.
The existence of ordinary (real) Hadamard matrices (as discussed next) can be construed to be the case p=2, for which a slightly different rule holds...
Back to ordinary Hadamard matrices (whose elements are either -1 or +1).
The empty matrix (of order 0) is vacuously an Hadamard matrix. The order of any Hadamard matrix must be 1, 2, or a multiple of 4 (Hadamard, 1893). Conversely, the Hadamard-Paley conjecture (better known as the Hadamard Conjecture ) states that there are Hadamard matrices of all such orders...
In 1933, Raymond Paley used finite fields to construct Hadamard matrices of all orders of the form q+1 (resp. 2q+2) where q is the power of a prime congruent to 3 (resp. 1) modulo 4. Paley's Theorem states that Hadamard matrices can be constructed (using Paley's construction and the aforementioned construction of Sylvester) for all positive orders divisible by 4 except those in the following sequence (i.e., multiples of 4 not equal to a power of 2 multiplied by q+1, for some power q of an odd prime):
92, 116, 156, 172, 184, 188, 232, 236, 260, 268, 292, 324, 356, 372, 376, 404, 412, 428, 436, 452, 472, 476, 508, 520, 532, 536, 584, 596, 604, 612, 652, 668, 712, 716, 732, 756, 764, 772, 808, 836, 852, 856, 872, 876, 892, 904, 932, 940, 944, 952, 956, 964, 980, 988, 996, 1004, 1012, 1016, 1028, 1036, 1068, 1072, 1076, 1100, 1108, 1132, 1148, 1168, 1180, 1192, 1196, 1208, 1212, 1220, 1244, 1268, 1276, 1300, 1316, 1336, 1340, 1364, 1372, 1380, 1388, 1396, 1412, 1432, 1436, 1444, 1464, 1476, 1492, 1508, 1528, 1556, 1564, 1588, 1604, 1612, 1616, 1636, 1652, 1672, 1676, 1692, 1704, 1712, 1732, 1740, 1744, 1752, 1772, 1780, 1796, 1804, 1808, 1820, 1828, 1836, 1844, 1852, 1864, 1888, 1892, 1900, 1904, 1912, 1916, 1928, 1940, 1948, 1960, 1964, 1972, 1976, 1992, 2008, 2024, 2032, 2036, 2052, 2056, 2060, 2072, 2076, 2092, 2108, 2116, 2136, 2148, 2152, 2156, 2164, 2172, 2200, 2212, 2216, 2228, 2264, 2276, 2284, 2292, 2296, 2300 ... (A046116)
For many of those remaining orders, Hadamard matrices have been discovered by other methods. Here's a brief summary...
In 1962, Baumert, Golomb, and Hall found an Hadamard matrix of order 92, using the general approach introduced in 1944, by John Williamson (who so constructed an Hadamard matrix of order 172). A Williamson matrix of order 4m is an Hadamard matrix obtained as follows from 4 matrices of order m (with unit elements) A, B, C and D, provided 4 matching relations are satisfied:
For example, we may build an Hadamard matrix of order 12 with 4 matrices of order 3, by letting one of them be the matrix U whose 9 elements are equal to +1 (the square of U is 3U) while the 3 others are equal to U-2I.
With A = U, the resulting Hadamard matrix of order 12 (which isn't normalized in the above sense) is shown graphically below, next to the Hadamard matrices of order 24 and 48 obtained from it with the aforementioned Sylvester construct:
[ Hadamard matrices of order 12, 24 and 48 ]
It turns out that all (real) Hadamard matrices of order 12 are equivalent. In particular, the above order-12 Hadamard matrix is Hadamard-equivalent to the following, which is normalized (dephased) and symmetrical:
[ Symmetrical Hadamard matrix of order 12 ]
In 1985, K. Sawade found an Hadamard matrix of order 268.
In June 2004, Hadi Kharaghani and Behruz Tayfeh-Rezaie built the Hadamard matrix of order 428 shown below (1 pixel per matrix element).
If we denote by M′ the mirror image of any matrix M with respect to its antidiagonal, the above matrix is based on 4 matrices A, B, C, D of order m = 107 put together in the following pattern, which gives an Hadamard matrix of order 4m if and only if all the 10 conditions listed are met. (Note that (M N)′ = N′ M′.)
Since June 2004, the order 668 has been the smallest multiple of 4 for which no Hadamard matrix is yet known (as of July 2014).
Generalized Hadamard Matrices by A. T. Butson (1961).
Recent advances in the construction of Hadamard matrices by Jennifer Seberry (1973).
Walsh-Hadamard transform | Hadamard Matrix: Wikipedia | MathWorld
Usage note : Both spellings "a Hadamard matrix" and "an Hadamard matrix" can be construed as correct: The former one applies to an anglicized pronunciation of "Hadamard" ("h" and final "d" both pronounced) whereas the latter must be used with the original French pronunciation (initial "h" and final "d" both silent).
Consider two polynomials A and B of respective degrees m and n.
Their Sylvester matrix is a square matrix (of dimension m+n) whose first n rows feature the m+1 coefficients of A and whose last m rows feature the n+1 coefficients of B. It is defined as follows:
The determinant of that matrix equals the so-called resultant of A and B which may be expressed as follows, in terms of all the complex roots (αi, βj) of those polynomials and their leading coefficients am and bn:
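In standard notation, if A has degree m, leading coefficient am and roots αi, while B has degree n, leading coefficient bn and roots βj, then:

$$ \det\big(\mathrm{Syl}(A,B)\big) \;=\; \operatorname{Res}(A,B) \;=\; a_m^{\,n}\; b_n^{\,m} \prod_{i=1}^{m}\prod_{j=1}^{n} (\alpha_i - \beta_j) $$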
See also: Bézout matrix (Bézoutian)
I fondly remember learning this orally from Claude Delagarde (1974).
The word "discriminant" was coined by James Joseph Sylvester (1814-1897) who found the correct expression for cubic polynomials (in 1851) and generalized it to any polynomial (including the well-known quadratic case).
[ Come back later, we're still working on this one... ]
Discriminant (Russian Wikipedia)
The Pfaffian is zero for odd orders. For an antisymmetrical matrix of even order 2n, the Pfaffian is an homogeneous polynomial of degree n in the coefficients of the matrix.
[ Come back later, we're still working on this one... ]
Pfaffian | Johann Friedrich Pfaff (1765-1825) | Arthur Cayley (1821-1895)
[ Come back later, we're still working on this one... ]
Matrices over noncommutative rings (Mathematics Stack Exchange, 2016-10-20).
Artin-Wedderburn classification theorem | Emil Artin (1899-1962) | Joseph Wedderburn (1882-1948)
Morita equivalence (category theory) | Kiiti Morita (1915-1995)
[ Come back later, we're still working on this one... ]
Det ( exp(t A) ) = exp( t Tr(A) )
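A quick numerical check of that identity with Python (scipy), on an arbitrary sample matrix:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.5, 3.0]])
t = 0.7
print(np.linalg.det(expm(t * A)))       # det(exp(tA))
print(np.exp(t * np.trace(A)))          # exp(t Tr(A)); the two values agree up to rounding
```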
Matrix exponential
How to raise e to the power of a matrix... and why (27:06) by Grant Sanderson (2021-04-01).
[ Come back later, we're still working on this one... ]
Gamma function | Lanczos approximation
Computation of matrix gamma function by João R. Cardoso & Amir Sadeghi (arXiv, 2018-06-27).
Generalized Extended Matrix Variate Beta and... by Nagar, Noguera & Gupta.
[ Come back later, we're still working on this one... ]
Mittag-Leffler function | Gösta Mittag-Leffler (1846-1927)
Computing the matrix Mittag-Leffler function with applications to fractional calculus by Roberto Garrappa and Marina Popolizio (arXiv, 2018-04-13).
One intriguing example, due to Euler, is the factorial of a particular matrix:
Where γ is the Euler-Mascheroni constant.
[ Come back later, we're still working on this one... ]
Computation of matrix gamma function by João R. Cardoso & Amir Sadeghi (arXiv, 2018-06-27).
Divergent Series Redux: Euler's Secret (5:20) by Peyam R. Tabrizian (Dr Peyam, 2021-05-12).