Hypergeometric function of a matrix argument
In mathematics, the hypergeometric function of a matrix argument is a generalization of the classical hypergeometric series. It is a function defined by an infinite summation which can be used to evaluate certain multivariate integrals.
Hypergeometric functions of a matrix argument have applications in random matrix theory. For example, the distributions of the extreme eigenvalues of random matrices are often expressed in terms of the hypergeometric function of a matrix argument.
Definition
Let $p\geq 0$ and $q\geq 0$ be integers, and let $X$ be an $m\times m$ complex symmetric matrix. Then the hypergeometric function of a matrix argument $X$ and parameter $\alpha >0$ is defined as
- ${}_{p}F_{q}^{(\alpha )}(a_{1},\ldots ,a_{p};b_{1},\ldots ,b_{q};X)=\sum _{k=0}^{\infty }\sum _{\kappa \vdash k}{\frac {1}{k!}}\cdot {\frac {(a_{1})_{\kappa }^{(\alpha )}\cdots (a_{p})_{\kappa }^{(\alpha )}}{(b_{1})_{\kappa }^{(\alpha )}\cdots (b_{q})_{\kappa }^{(\alpha )}}}\cdot C_{\kappa }^{(\alpha )}(X),$
where $\kappa \vdash k$ means that $\kappa$ is a partition of $k$, $(a_{i})_{\kappa }^{(\alpha )}$ is the generalized Pochhammer symbol, and $C_{\kappa }^{(\alpha )}(X)$ is the "C" normalization of the Jack function.
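As a concrete illustration, the following Python sketch evaluates the truncated summation in the scalar case $m=1$. For a $1\times 1$ matrix only one-part partitions $\kappa =(k)$ contribute, $C_{(k)}^{(\alpha )}(x)=x^{k}$, and the generalized Pochhammer symbol reduces to the ordinary one, so the series collapses to the classical ${}_{p}F_{q}$. The function names are illustrative, and this is not the efficient evaluation algorithm of Koev and Edelman cited below.

```python
# A minimal sketch of the defining summation in the scalar case m = 1,
# where it reduces to the classical pFq series.
from math import exp, isclose

def pochhammer(a, k):
    """Ordinary (rising) Pochhammer symbol (a)_k."""
    result = 1.0
    for j in range(k):
        result *= a + j
    return result

def hyp_pfq_scalar(a_params, b_params, x, terms=60):
    """Truncated classical pFq series, the m = 1 case of the definition."""
    total = 0.0
    for k in range(terms):
        num = 1.0
        for a in a_params:
            num *= pochhammer(a, k)
        den = 1.0
        for b in b_params:
            den *= pochhammer(b, k)
        # k! accumulated as pochhammer(1, k)
        total += num / den * x**k / pochhammer(1, k)
    return total

# Sanity checks against known scalar special cases:
#   0F0(x) = e^x   and   1F0(a; x) = (1 - x)^(-a) for |x| < 1.
assert isclose(hyp_pfq_scalar([], [], 0.3), exp(0.3), rel_tol=1e-10)
assert isclose(hyp_pfq_scalar([2.5], [], 0.3), (1 - 0.3) ** (-2.5), rel_tol=1e-10)
```

For matrices of size $m>1$ the summation requires the Jack functions $C_{\kappa }^{(\alpha )}$ themselves; an efficient truncation scheme is given in the Koev–Edelman reference below.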
Two matrix arguments
If $X$ and $Y$ are two $m\times m$ complex symmetric matrices, then the hypergeometric function of two matrix arguments is defined as
- ${}_{p}F_{q}^{(\alpha )}(a_{1},\ldots ,a_{p};b_{1},\ldots ,b_{q};X,Y)=\sum _{k=0}^{\infty }\sum _{\kappa \vdash k}{\frac {1}{k!}}\cdot {\frac {(a_{1})_{\kappa }^{(\alpha )}\cdots (a_{p})_{\kappa }^{(\alpha )}}{(b_{1})_{\kappa }^{(\alpha )}\cdots (b_{q})_{\kappa }^{(\alpha )}}}\cdot {\frac {C_{\kappa }^{(\alpha )}(X)\,C_{\kappa }^{(\alpha )}(Y)}{C_{\kappa }^{(\alpha )}(I)}},$
where $I$ is the identity matrix of size $m$. Setting $Y=I$ recovers the function of a single matrix argument, since the factor $C_{\kappa }^{(\alpha )}(I)$ then cancels.
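A similarly minimal check of the two-argument form, again in the scalar case $m=1$: with $X=(x)$, $Y=(y)$, and $C_{(k)}^{(\alpha )}(1)=1$, the summand becomes $(xy)^{k}/k!$, so ${}_{0}F_{0}(x,y)=e^{xy}$. The helper name below is hypothetical.

```python
# Scalar (m = 1) sanity check for the two-argument form: the summand
# reduces to (x*y)^k / k!, so 0F0(x, y) = exp(x*y).
from math import exp, isclose

def hyp_0f0_two_scalars(x, y, terms=60):
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= x * y / (k + 1)   # build (x*y)^k / k! incrementally
    return total

assert isclose(hyp_0f0_two_scalars(0.4, 0.7), exp(0.4 * 0.7), rel_tol=1e-10)
```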
Not a typical function of a matrix argument
Unlike other functions of a matrix argument, such as the matrix exponential, which are matrix-valued, the hypergeometric function of (one or two) matrix arguments is scalar-valued.
The parameter α
In many publications the parameter $\alpha$ is omitted, and different publications implicitly assume different values of $\alpha$. For example, in the theory of real random matrices (see, e.g., Muirhead, 1984) one has $\alpha =2$, whereas in other settings (e.g., in the complex case; see Gross and Richards, 1989) one has $\alpha =1$. Further complicating matters, researchers in random matrix theory tend to use a parameter $\beta$ in place of the $\alpha$ used in combinatorics.
The relation to remember is
- $\alpha ={\frac {2}{\beta }}.$
Care should be taken to determine whether a particular text uses the parameter $\alpha$ or $\beta$, and which value of that parameter is assumed.
Typically, in settings involving real random matrices, $\alpha =2$ and thus $\beta =1$. In settings involving complex random matrices, one has $\alpha =1$ and $\beta =2$.
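The following tiny Python helper (names are illustrative) simply encodes the conversion $\alpha =2/\beta$ and the typical settings described above.

```python
# Convert between the combinatorics parameter alpha and the random matrix
# parameter beta, related by alpha = 2 / beta.

def alpha_from_beta(beta):
    return 2.0 / beta

def beta_from_alpha(alpha):
    return 2.0 / alpha

# Typical settings mentioned above:
assert alpha_from_beta(1) == 2.0   # real random matrices: beta = 1, alpha = 2
assert alpha_from_beta(2) == 1.0   # complex random matrices: beta = 2, alpha = 1
```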
References
- K. I. Gross and D. St. P. Richards, "Total positivity, spherical series, and hypergeometric functions of matrix argument", J. Approx. Theory, 59, no. 2, 224–246, 1989.
- J. Kaneko, "Selberg integrals and hypergeometric functions associated with Jack polynomials", SIAM Journal on Mathematical Analysis, 24, no. 4, 1086–1110, 1993.
- Plamen Koev and Alan Edelman, "The efficient evaluation of the hypergeometric function of a matrix argument", Mathematics of Computation, 75, no. 254, 833–846, 2006.
- Robb Muirhead, Aspects of Multivariate Statistical Theory, John Wiley & Sons, Inc., New York, 1984.