
Alternant matrix

Not to be confused with alternating sign matrix.

In linear algebra, an alternant matrix is a matrix formed by applying a finite list of functions pointwise to a fixed column of inputs. An alternant determinant is the determinant of a square alternant matrix.

Generally, if $f_1, f_2, \dots, f_n$ are functions from a set $X$ to a field $F$, and $\alpha_1, \alpha_2, \ldots, \alpha_m \in X$, then the alternant matrix has size $m \times n$ and is defined by

$$
M = \begin{bmatrix}
f_1(\alpha_1) & f_2(\alpha_1) & \cdots & f_n(\alpha_1) \\
f_1(\alpha_2) & f_2(\alpha_2) & \cdots & f_n(\alpha_2) \\
f_1(\alpha_3) & f_2(\alpha_3) & \cdots & f_n(\alpha_3) \\
\vdots & \vdots & \ddots & \vdots \\
f_1(\alpha_m) & f_2(\alpha_m) & \cdots & f_n(\alpha_m)
\end{bmatrix}
$$

or, more compactly, $M_{ij} = f_j(\alpha_i)$. (Some authors use the transpose of the above matrix.) Examples of alternant matrices include Vandermonde matrices, for which $f_j(\alpha) = \alpha^{j-1}$, and Moore matrices, for which $f_j(\alpha) = \alpha^{q^{j-1}}$.
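The defining rule $M_{ij} = f_j(\alpha_i)$ translates directly into code. Below is a minimal sketch in Python with NumPy; the helper name `alternant` is our own, and the Moore example uses ordinary integer arithmetic rather than a finite field, purely for illustration.

```python
# A minimal sketch of the definition M[i, j] = f_j(alpha_i) in Python with
# NumPy. The helper name `alternant` is our own.
import numpy as np

def alternant(funcs, points):
    """m x n alternant matrix: row i samples every function at points[i]."""
    return np.array([[f(a) for f in funcs] for a in points], dtype=float)

n = 4
# Vandermonde matrix: f_j(alpha) = alpha**(j - 1), j = 1, ..., n.
vander_funcs = [lambda a, j=j: a ** (j - 1) for j in range(1, n + 1)]
print(alternant(vander_funcs, [2.0, 3.0, 5.0, 7.0]))

# Moore matrix: f_j(alpha) = alpha**(q**(j - 1)). Shown here with q = 2 over
# ordinary integers purely for illustration; a true Moore matrix lives over
# a finite field of characteristic q.
moore_funcs = [lambda a, j=j: a ** (2 ** (j - 1)) for j in range(1, n + 1)]
print(alternant(moore_funcs, [1.0, 2.0, 3.0, 4.0]))
```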

Properties

  • The alternant can be used to check the linear independence of the functions $f_1, f_2, \dots, f_n$ in function space. For example, let $f_1(x) = \sin(x)$, $f_2(x) = \cos(x)$ and choose $\alpha_1 = 0,\ \alpha_2 = \pi/2$. Then the alternant is the matrix $\left[\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right]$ and the alternant determinant is $-1 \neq 0$. Therefore $M$ is invertible and the vectors $\{\sin(x), \cos(x)\}$ form a basis for their spanning set: in particular, $\sin(x)$ and $\cos(x)$ are linearly independent (a short computational sketch of this check appears after this list).
  • Linear dependence of the columns of an alternant does not imply that the functions are linearly dependent in function space. For example, let $f_1(x) = \sin(x)$, $f_2(x) = \cos(x)$ and choose $\alpha_1 = 0,\ \alpha_2 = \pi$. Then the alternant is $\left[\begin{smallmatrix}0&1\\0&-1\end{smallmatrix}\right]$ and the alternant determinant is 0, but we have already seen that $\sin(x)$ and $\cos(x)$ are linearly independent.
  • Despite this, the alternant can be used to find a linear dependence if it is already known that one exists. For example, we know from the theory of partial fractions that there are real numbers $A$ and $B$ for which $\frac{A}{x+1} + \frac{B}{x+2} = \frac{1}{(x+1)(x+2)}$. Choosing $f_1(x) = \frac{1}{x+1}$, $f_2(x) = \frac{1}{x+2}$, $f_3(x) = \frac{1}{(x+1)(x+2)}$ and $(\alpha_1, \alpha_2, \alpha_3) = (1, 2, 3)$, we obtain the alternant $\begin{bmatrix}1/2&1/3&1/6\\1/3&1/4&1/12\\1/4&1/5&1/20\end{bmatrix} \sim \begin{bmatrix}1&0&1\\0&1&-1\\0&0&0\end{bmatrix}$. Therefore, $(1, -1, -1)$ is in the nullspace of the matrix: that is, $f_1 - f_2 - f_3 = 0$. Moving $f_3$ to the other side of the equation gives the partial fraction decomposition $A = 1,\ B = -1$ (see the nullspace sketch after this list).
  • If $n = m$ and $\alpha_i = \alpha_j$ for any $i \neq j$, then the alternant determinant is zero (as a row is repeated).
  • If $n = m$ and the functions $f_j(x)$ are all polynomials, then $(\alpha_j - \alpha_i)$ divides the alternant determinant for all $1 \leq i < j \leq n$. In particular, if $V$ is a Vandermonde matrix, then $\prod_{i<j}(\alpha_j - \alpha_i) = \det V$ divides such polynomial alternant determinants. The ratio $\frac{\det M}{\det V}$ is therefore a polynomial in $\alpha_1, \ldots, \alpha_m$ called the bialternant. The Schur polynomial $s_{(\lambda_1, \dots, \lambda_n)}$ is classically defined as the bialternant of the polynomials $f_j(x) = x^{\lambda_j}$ (see the last sketch after this list).
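The determinant test in the first bullet is easy to reproduce exactly. A minimal sketch using SymPy (variable names are our own):

```python
# Testing linear independence of sin and cos via the alternant determinant,
# exactly as in the first bullet. SymPy is assumed; names are our own.
import sympy as sp

funcs = [sp.sin, sp.cos]
points = [0, sp.pi / 2]

M = sp.Matrix([[f(a) for f in funcs] for a in points])
print(M)        # Matrix([[0, 1], [1, 0]])
print(M.det())  # -1 != 0, so sin and cos are linearly independent
```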
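The partial-fraction example in the third bullet can likewise be checked by computing the nullspace symbolically, under the same assumptions (SymPy; names are illustrative):

```python
# Finding the known linear dependence among f1, f2, f3 from the alternant's
# nullspace, as in the third bullet above. SymPy is assumed; names are ours.
import sympy as sp

x = sp.symbols('x')
fs = [1 / (x + 1), 1 / (x + 2), 1 / ((x + 1) * (x + 2))]
points = [1, 2, 3]

M = sp.Matrix([[f.subs(x, a) for f in fs] for a in points])
print(M)       # Matrix([[1/2, 1/3, 1/6], [1/3, 1/4, 1/12], [1/4, 1/5, 1/20]])
null = M.nullspace()[0]
print(null.T)  # Matrix([[-1, 1, 1]]): -f1 + f2 + f3 = 0, i.e.
               # f1 - f2 - f3 = 0, giving A = 1, B = -1
```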
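Finally, the bialternant construction in the last bullet can be verified symbolically: the Vandermonde determinant divides the polynomial alternant determinant, and the quotient is a Schur polynomial. A sketch with SymPy, using the classical strictly decreasing exponents $\lambda = \mu + (n-1, \dots, 1, 0)$ for the partition $\mu = (2, 1, 0)$ (this exponent choice is our own illustration):

```python
# Computing a bialternant: det(alternant) / det(Vandermonde) is a polynomial,
# and for exponents lambda = mu + (n-1, ..., 1, 0) it is the Schur polynomial
# s_mu. SymPy is assumed; the helper name alternant_det is our own.
import sympy as sp

xs = sp.symbols('x1 x2 x3')
n = len(xs)

def alternant_det(exponents):
    """Determinant of the alternant matrix of the monomials x**e."""
    return sp.Matrix([[xi ** e for e in exponents] for xi in xs]).det()

mu = (2, 1, 0)                                   # partition, padded to length n
lam = [m + n - 1 - j for j, m in enumerate(mu)]  # exponents (4, 2, 0)
delta = [n - 1 - j for j in range(n)]            # Vandermonde exponents (2, 1, 0)

ratio = sp.cancel(alternant_det(lam) / alternant_det(delta))
print(sp.expand(ratio))  # s_(2,1)(x1, x2, x3); expected:
# x1**2*x2 + x1**2*x3 + x1*x2**2 + 2*x1*x2*x3 + x1*x3**2 + x2**2*x3 + x2*x3**2
```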
