Matrix variate beta distribution
| Matrix variate beta distribution | |
|---|---|
| Notation | {\displaystyle {\rm {B}}_{p}(a,b)} |
| Parameters | {\displaystyle a,b>(p-1)/2} |
| Support | {\displaystyle p\times p} matrices {\displaystyle U} with both {\displaystyle U} and {\displaystyle I_{p}-U} positive definite |
| PDF | {\displaystyle \left\{\beta _{p}\left(a,b\right)\right\}^{-1}\det \left(U\right)^{a-(p+1)/2}\det \left(I_{p}-U\right)^{b-(p+1)/2}} |
| CF | {\displaystyle {}_{1}F_{1}\left(a;a+b;iZ\right)} |
In statistics, the matrix variate beta distribution is a generalization of the beta distribution. It is also called the MANOVA ensemble and the Jacobi ensemble.
If {\displaystyle U} is a {\displaystyle p\times p} positive definite matrix with a matrix variate beta distribution, and {\displaystyle a,b>(p-1)/2} are real parameters, we write {\displaystyle U\sim B_{p}\left(a,b\right)} (sometimes {\displaystyle B_{p}^{I}\left(a,b\right)}). The probability density function for {\displaystyle U} is
- {\displaystyle \left\{\beta _{p}\left(a,b\right)\right\}^{-1}\det \left(U\right)^{a-(p+1)/2}\det \left(I_{p}-U\right)^{b-(p+1)/2}.}
Here {\displaystyle \beta _{p}\left(a,b\right)} is the multivariate beta function:
- {\displaystyle \beta _{p}\left(a,b\right)={\frac {\Gamma _{p}\left(a\right)\Gamma _{p}\left(b\right)}{\Gamma _{p}\left(a+b\right)}}}
where {\displaystyle \Gamma _{p}\left(a\right)} is the multivariate gamma function given by
- {\displaystyle \Gamma _{p}\left(a\right)=\pi ^{p(p-1)/4}\prod _{i=1}^{p}\Gamma \left(a-(i-1)/2\right).}
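As a concrete illustration, here is a minimal NumPy/SciPy sketch that evaluates this log-density; the helper name `matrix_beta_logpdf` is ours (not a standard library routine), and `scipy.special.multigammaln` supplies the log of the multivariate gamma function.

```python
# Minimal sketch (not from the article): evaluate the log-density of B_p(a, b)
# at a p x p matrix U with both U and I_p - U positive definite.
import numpy as np
from scipy.special import multigammaln  # log of the multivariate gamma function

def matrix_beta_logpdf(U, a, b):
    """Log-density of the matrix variate beta distribution B_p(a, b) at U.

    Requires a, b > (p - 1) / 2 and U, I_p - U positive definite.
    """
    p = U.shape[0]
    # log of the multivariate beta function beta_p(a, b)
    log_beta_p = multigammaln(a, p) + multigammaln(b, p) - multigammaln(a + b, p)
    sign_u, logdet_u = np.linalg.slogdet(U)
    sign_v, logdet_v = np.linalg.slogdet(np.eye(p) - U)
    if sign_u <= 0 or sign_v <= 0:
        return -np.inf  # basic guard; a full check would test positive definiteness
    return (a - (p + 1) / 2) * logdet_u + (b - (p + 1) / 2) * logdet_v - log_beta_p
```

For {\displaystyle p=1} this collapses to the ordinary (scalar) beta log-density.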
Theorems
Distribution of matrix inverse
If {\displaystyle U\sim B_{p}(a,b)} then the density of {\displaystyle X=U^{-1}} is given by
- {\displaystyle {\frac {1}{\beta _{p}\left(a,b\right)}}\det(X)^{-(a+b)}\det \left(X-I_{p}\right)^{b-(p+1)/2}}
provided that {\displaystyle X>I_{p}} and {\displaystyle a,b>(p-1)/2}.
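For {\displaystyle p=1} this formula is the density of the reciprocal of a scalar beta variable, which gives a quick numerical cross-check. The sketch below is illustrative only, with arbitrary parameter values; it compares the scalar case of the formula against SciPy's beta prime density, since {\displaystyle X-1} then follows a beta prime law.

```python
# Sketch: check the p = 1 case of the inverse-density formula numerically.
# If U ~ Beta(a, b), then X = 1/U satisfies X - 1 ~ BetaPrime(b, a), so the scalar
# formula (1/B(a,b)) * x^{-(a+b)} * (x-1)^{b-1}, x > 1, should match scipy's pdf.
import numpy as np
from scipy.special import betaln
from scipy.stats import betaprime

a, b = 3.0, 2.5                    # arbitrary illustrative values
x = np.linspace(1.01, 10.0, 200)   # grid inside the support x > 1
formula = np.exp(-betaln(a, b) - (a + b) * np.log(x) + (b - 1) * np.log(x - 1))
assert np.allclose(formula, betaprime.pdf(x - 1, b, a))
```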
Orthogonal transform
If {\displaystyle U\sim B_{p}(a,b)} and {\displaystyle H} is a constant {\displaystyle p\times p} orthogonal matrix, then {\displaystyle HUH^{T}\sim B_{p}(a,b).}
Also, if {\displaystyle H} is a random orthogonal {\displaystyle p\times p} matrix which is independent of {\displaystyle U}, then {\displaystyle HUH^{T}\sim B_{p}(a,b)}, distributed independently of {\displaystyle H}.
If {\displaystyle A} is any constant {\displaystyle q\times p}, {\displaystyle q\leq p} matrix of rank {\displaystyle q}, then {\displaystyle AUA^{T}} has a generalized matrix variate beta distribution, specifically {\displaystyle AUA^{T}\sim GB_{q}\left(a,b;AA^{T},0\right)}.
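The first claim rests on the density being unchanged by an orthogonal congruence (the transformation also has unit Jacobian on symmetric matrices). The following small sketch, illustrative only and using SciPy's `ortho_group` for a random orthogonal matrix, checks the determinant invariance that underlies it.

```python
# Sketch: det(H U H^T) = det(U) and det(I_p - H U H^T) = det(I_p - U) for orthogonal H,
# which is why the density of B_p(a, b) is invariant under U -> H U H^T.
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(0)
p = 4

# A symmetric U with eigenvalues in (0, 1), i.e. a point in the support of B_p(a, b).
Q = ortho_group.rvs(p, random_state=rng)
U = Q @ np.diag(rng.uniform(0.05, 0.95, size=p)) @ Q.T

H = ortho_group.rvs(p, random_state=rng)
V = H @ U @ H.T
assert np.isclose(np.linalg.det(V), np.linalg.det(U))
assert np.isclose(np.linalg.det(np.eye(p) - V), np.linalg.det(np.eye(p) - U))
```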
Partitioned matrix results
If {\displaystyle U\sim B_{p}\left(a,b\right)} and we partition {\displaystyle U} as
- {\displaystyle U={\begin{bmatrix}U_{11}&U_{12}\\U_{21}&U_{22}\end{bmatrix}}}
where {\displaystyle U_{11}} is {\displaystyle p_{1}\times p_{1}} and {\displaystyle U_{22}} is {\displaystyle p_{2}\times p_{2}}, then defining the Schur complement {\displaystyle U_{22\cdot 1}} as {\displaystyle U_{22}-U_{21}{U_{11}}^{-1}U_{12}} gives the following results:
- {\displaystyle U_{11}} is independent of {\displaystyle U_{22\cdot 1}}
- {\displaystyle U_{11}\sim B_{p_{1}}\left(a,b\right)}
- {\displaystyle U_{22\cdot 1}\sim B_{p_{2}}\left(a-p_{1}/2,b\right)}
- {\displaystyle U_{21}\mid U_{11},U_{22\cdot 1}} has an inverted matrix variate t distribution, specifically {\displaystyle U_{21}\mid U_{11},U_{22\cdot 1}\sim IT_{p_{2},p_{1}}\left(2b-p+1,0,I_{p_{2}}-U_{22\cdot 1},U_{11}(I_{p_{1}}-U_{11})\right).}
Wishart results
Mitra (1970) proves the following theorem which illustrates a useful property of the matrix variate beta distribution. Suppose {\displaystyle S_{1},S_{2}} are independent Wishart {\displaystyle p\times p} matrices {\displaystyle S_{1}\sim W_{p}(n_{1},\Sigma ),S_{2}\sim W_{p}(n_{2},\Sigma )}. Assume that {\displaystyle \Sigma } is positive definite and that {\displaystyle n_{1}+n_{2}\geq p}. If
- {\displaystyle U=S^{-1/2}S_{1}\left(S^{-1/2}\right)^{T},}
where {\displaystyle S=S_{1}+S_{2}}, then {\displaystyle U} has a matrix variate beta distribution {\displaystyle B_{p}(n_{1}/2,n_{2}/2)}. In particular, the distribution of {\displaystyle U} does not depend on {\displaystyle \Sigma }.
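A minimal sampling sketch of this construction (NumPy/SciPy; the dimension, degrees of freedom, and scale matrix are arbitrary illustrative choices) checks that the resulting {\displaystyle U} lands in the support of {\displaystyle B_{p}(n_{1}/2,n_{2}/2)}.

```python
# Sketch: build a matrix variate beta sample from two independent Wishart matrices.
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(1)
p, n1, n2 = 3, 8, 10
Sigma = np.eye(p)  # any positive definite scale works; the law of U does not depend on it

S1 = wishart(df=n1, scale=Sigma).rvs(random_state=rng)
S2 = wishart(df=n2, scale=Sigma).rvs(random_state=rng)

# Symmetric inverse square root of S = S1 + S2 via its eigendecomposition.
w, V = np.linalg.eigh(S1 + S2)
S_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T

U = S_inv_sqrt @ S1 @ S_inv_sqrt.T   # U ~ B_p(n1/2, n2/2) by the theorem above

eigvals = np.linalg.eigvalsh((U + U.T) / 2)   # symmetrize against round-off
assert np.all((eigvals > 0) & (eigvals < 1))  # U and I_p - U positive definite
```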
Spectral density
The spectral density can be expressed in terms of Jacobi polynomials.[1]
Extreme value distribution
The distribution of the largest eigenvalue is well approximated by a transform of the Tracy–Widom distribution.[2]
References
- ^ (Potters & Bouchaud 2020)
- ^ Johnstone, Iain M. (2008). "Multivariate analysis and Jacobi ensembles: Largest eigenvalue, Tracy–Widom limits and rates of convergence". The Annals of Statistics. 36 (6). arXiv:0803.3408. doi:10.1214/08-AOS605. ISSN 0090-5364.
- Potters, Marc; Bouchaud, Jean-Philippe (2020). "7. The Jacobi Ensemble". A First Course in Random Matrix Theory: for Physicists, Engineers and Data Scientists. Cambridge University Press. doi:10.1017/9781108768900. ISBN 978-1-108-76890-0.
- Forrester, Peter (2010). "3. Laguerre and Jacobi ensembles". Log-gases and random matrices. London Mathematical Society monographs. Princeton: Princeton University Press. ISBN 978-0-691-12829-0.
- Anderson, G.W.; Guionnet, A.; Zeitouni, O. (2010). "4. Some generalities". An introduction to random matrices. Cambridge: Cambridge University Press. ISBN 978-0-521-19452-5.
- Mehta, M.L. (2004). "19. Matrix ensembles and classical orthogonal polynomials". Random Matrices. Amsterdam: Elsevier/Academic Press. ISBN 0-12-088409-7.
- Gupta, A. K.; Nagar, D. K. (1999). Matrix Variate Distributions. Chapman and Hall. ISBN 1-58488-046-5.
- Khatri, C. G. (1992). "Matrix Beta Distribution with Applications to Linear Models, Testing, Skewness and Kurtosis". In Venugopal, N. (ed.). Contributions to Stochastics. John Wiley & Sons. pp. 26–34. ISBN 0-470-22050-3.
- Mitra, S. K. (1970). "A density-free approach to matrix variate beta distribution". Sankhyā: The Indian Journal of Statistics, Series A. 32 (1): 81–88. JSTOR 25049638.