Alternant matrix
In linear algebra, an alternant matrix is a matrix formed by applying a finite list of functions pointwise to a fixed column of inputs. An alternant determinant is the determinant of a square alternant matrix.
Generally, if $f_1, f_2, \dots, f_n$ are functions from a set $X$ to a field $F$, and $\alpha_1, \alpha_2, \ldots, \alpha_m \in X$, then the alternant matrix has size $m \times n$ and is defined by
$$M = \begin{bmatrix} f_1(\alpha_1) & f_2(\alpha_1) & \cdots & f_n(\alpha_1) \\ f_1(\alpha_2) & f_2(\alpha_2) & \cdots & f_n(\alpha_2) \\ f_1(\alpha_3) & f_2(\alpha_3) & \cdots & f_n(\alpha_3) \\ \vdots & \vdots & \ddots & \vdots \\ f_1(\alpha_m) & f_2(\alpha_m) & \cdots & f_n(\alpha_m) \end{bmatrix}$$
or, more compactly, $M_{ij} = f_j(\alpha_i)$. (Some authors use the transpose of the above matrix.) Examples of alternant matrices include Vandermonde matrices, for which $f_j(\alpha) = \alpha^{j-1}$, and Moore matrices, for which $f_j(\alpha) = \alpha^{q^{j-1}}$.
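The definition $M_{ij} = f_j(\alpha_i)$ translates directly into code. The following is a minimal sketch (the helper name `alternant` is ours, not standard), shown here recovering a Vandermonde matrix as the special case $f_j(x) = x^{j-1}$:

```python
import numpy as np

def alternant(funcs, points):
    """Build the m-by-n alternant matrix M[i, j] = f_j(alpha_i)."""
    return np.array([[f(a) for f in funcs] for a in points])

# Vandermonde as a special case: f_j(x) = x**(j-1)
funcs = [lambda x, k=k: x**k for k in range(3)]
M = alternant(funcs, [2.0, 3.0, 5.0])
print(M)  # each row is [1, alpha, alpha**2] for alpha = 2, 3, 5
```

The `k=k` default argument pins down each exponent at lambda-creation time; without it, every lambda would close over the final value of `k`.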
Properties
- The alternant can be used to check the linear independence of the functions $f_1, f_2, \dots, f_n$ in function space. For example, let $f_1(x) = \sin(x)$, $f_2(x) = \cos(x)$ and choose $\alpha_1 = 0$, $\alpha_2 = \pi/2$. Then the alternant is the matrix $\left[\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right]$ and the alternant determinant is $-1 \neq 0$. Therefore $M$ is invertible and the vectors $\{\sin(x), \cos(x)\}$ form a basis for their span: in particular, $\sin(x)$ and $\cos(x)$ are linearly independent.
- Linear dependence of the columns of an alternant does not imply that the functions are linearly dependent in function space. For example, let $f_1(x) = \sin(x)$, $f_2(x) = \cos(x)$ and choose $\alpha_1 = 0$, $\alpha_2 = \pi$. Then the alternant is $\left[\begin{smallmatrix} 0 & 1 \\ 0 & -1 \end{smallmatrix}\right]$ and the alternant determinant is 0, but we have already seen that $\sin(x)$ and $\cos(x)$ are linearly independent.
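Both sine–cosine examples above can be checked numerically; the sketch below (our own helper, not from the source) computes the alternant determinant of $\sin$ and $\cos$ at a given pair of sample points:

```python
import numpy as np

def alternant_det(points):
    """Determinant of the alternant of f1 = sin, f2 = cos at the given points."""
    M = np.array([[np.sin(a), np.cos(a)] for a in points])
    return np.linalg.det(M)

print(alternant_det([0.0, np.pi / 2]))  # -1.0 (up to rounding): invertible
print(alternant_det([0.0, np.pi]))      # ~0: singular, though sin and cos are independent
```

The second determinant vanishes (up to floating-point rounding) even though the functions are linearly independent, illustrating that a singular alternant at one choice of sample points proves nothing by itself.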
- Despite this, the alternant can be used to find a linear dependence if it is already known that one exists. For example, we know from the theory of partial fractions that there are real numbers $A$ and $B$ for which $\frac{A}{x+1} + \frac{B}{x+2} = \frac{1}{(x+1)(x+2)}$. Choosing $f_1(x) = \frac{1}{x+1}$, $f_2(x) = \frac{1}{x+2}$, $f_3(x) = \frac{1}{(x+1)(x+2)}$ and $(\alpha_1, \alpha_2, \alpha_3) = (1, 2, 3)$, we obtain the alternant $$\begin{bmatrix} 1/2 & 1/3 & 1/6 \\ 1/3 & 1/4 & 1/12 \\ 1/4 & 1/5 & 1/20 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}.$$ Therefore $(1, -1, -1)$ is in the nullspace of the matrix: that is, $f_1 - f_2 - f_3 = 0$. Moving $f_3$ to the other side of the equation gives the partial fraction decomposition $A = 1$, $B = -1$.
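The nullspace computation above can be carried out numerically. One way (a sketch, using the SVD rather than the row reduction shown in the text) is to take the right-singular vector for the smallest singular value, which spans the nullspace of a rank-deficient matrix:

```python
import numpy as np

# Recover the dependence f1 - f2 - f3 = 0 from the alternant at the sample
# points (1, 2, 3), as in the partial-fractions example.
f1 = lambda x: 1 / (x + 1)
f2 = lambda x: 1 / (x + 2)
f3 = lambda x: 1 / ((x + 1) * (x + 2))
M = np.array([[f(a) for f in (f1, f2, f3)] for a in (1.0, 2.0, 3.0)])

_, s, Vt = np.linalg.svd(M)
null_vec = Vt[-1]                  # right-singular vector for the smallest singular value
null_vec = null_vec / null_vec[0]  # normalize so the first coefficient is 1
print(null_vec)                    # approximately [1, -1, -1]
```

The recovered vector $(1, -1, -1)$ matches the coefficients of the dependence $f_1 - f_2 - f_3 = 0$.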
- If $n = m$ and $\alpha_i = \alpha_j$ for some $i \neq j$, then the alternant determinant is zero (as a row is repeated).
- If $n = m$ and the functions $f_1, \dots, f_n$ are all polynomials, then $(\alpha_j - \alpha_i)$ divides the alternant determinant for all $1 \leq i < j \leq n$. In particular, if $V$ is a Vandermonde matrix, then $\prod_{i<j} (\alpha_j - \alpha_i) = \det V$ divides such polynomial alternant determinants. The ratio $\frac{\det M}{\det V}$ is therefore a polynomial in $\alpha_1, \ldots, \alpha_m$, called the bialternant. The Schur polynomial $s_{(\lambda_1, \dots, \lambda_n)}$ is classically defined as the bialternant of the polynomials $f_j(x) = x^{\lambda_j + n - j}$.
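The bialternant definition of the Schur polynomial can be checked numerically at a sample point. For instance (our worked example, not from the source), take $\lambda = (2, 1)$ in $n = 2$ variables, so the exponents $\lambda_j + n - j$ are $(3, 1)$; the ratio $\det M / \det V$ should then agree with $s_{(2,1)}(x, y) = x^2 y + x y^2$:

```python
# Check det M / det V against s_(2,1)(x, y) = x**2 * y + x * y**2 at (x, y) = (2, 3).
x, y = 2.0, 3.0
det_M = x**3 * y - y**3 * x  # det [[x**3, x], [y**3, y]], exponents (3, 1)
det_V = x - y                # det [[x, 1], [y, 1]], the Vandermonde factor
bialternant = det_M / det_V
print(bialternant, x**2 * y + x * y**2)  # both equal 30.0
```

Both sides evaluate to 30 at $(x, y) = (2, 3)$, consistent with the identity $\det M / \det V = s_{(2,1)}$.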
Applications
- Alternant matrices are used in coding theory in the construction of alternant codes.