
Matrix variate beta distribution

Notation: $B_p(a,b)$
Parameters: $a, b > (p-1)/2$
Support: $p \times p$ matrices $U$ with both $U$ and $I_p - U$ positive definite
PDF: $\{\beta_p(a,b)\}^{-1}\det(U)^{a-(p+1)/2}\det(I_p-U)^{b-(p+1)/2}$
CF: ${}_1F_1(a;\,a+b;\,iZ)$

In statistics, the matrix variate beta distribution is a generalization of the beta distribution. It is also called the MANOVA ensemble and the Jacobi ensemble.

If $U$ is a $p \times p$ positive definite matrix with a matrix variate beta distribution, and $a, b > (p-1)/2$ are real parameters, we write $U \sim B_p(a,b)$ (sometimes $B_p^I(a,b)$). The probability density function for $U$ is

$$\{\beta_p(a,b)\}^{-1}\det(U)^{a-(p+1)/2}\det(I_p-U)^{b-(p+1)/2}.$$

Here $\beta_p(a,b)$ is the multivariate beta function:

$$\beta_p(a,b) = \frac{\Gamma_p(a)\,\Gamma_p(b)}{\Gamma_p(a+b)},$$

where $\Gamma_p(a)$ is the multivariate gamma function given by

$$\Gamma_p(a) = \pi^{p(p-1)/4}\prod_{i=1}^{p}\Gamma\bigl(a-(i-1)/2\bigr).$$
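A minimal numerical sketch (not part of the article) of evaluating this log-density: it relies on SciPy's `multigammaln` for $\log\Gamma_p(\cdot)$, and the function name `matrix_beta_logpdf` is our own.

```python
import numpy as np
from scipy.special import multigammaln  # log of the multivariate gamma function

def matrix_beta_logpdf(U, a, b):
    """Log-density of B_p(a, b) at a symmetric p x p matrix U,
    valid when U and I_p - U are both positive definite."""
    p = U.shape[0]
    # log beta_p(a, b) = log Gamma_p(a) + log Gamma_p(b) - log Gamma_p(a + b)
    log_beta = multigammaln(a, p) + multigammaln(b, p) - multigammaln(a + b, p)
    sign_u, logdet_u = np.linalg.slogdet(U)
    sign_iu, logdet_iu = np.linalg.slogdet(np.eye(p) - U)
    if sign_u <= 0 or sign_iu <= 0:
        return -np.inf  # outside the support
    return -log_beta + (a - (p + 1) / 2) * logdet_u + (b - (p + 1) / 2) * logdet_iu
```

For $p = 1$ this reduces to the ordinary beta density, so `matrix_beta_logpdf(np.array([[u]]), a, b)` should agree with `scipy.stats.beta(a, b).logpdf(u)`.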

Theorems


Distribution of matrix inverse


If $U \sim B_p(a,b)$, then the density of $X = U^{-1}$ is given by

$$\frac{1}{\beta_p(a,b)}\det(X)^{-(a+b)}\det(X-I_p)^{b-(p+1)/2},$$

provided that $X > I_p$ and $a, b > (p-1)/2$.
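As a quick check of the exponents (a sketch using the standard Jacobian $(\mathrm{d}U) = \det(X)^{-(p+1)}\,(\mathrm{d}X)$ for the inversion $U = X^{-1}$ of a symmetric matrix): since

$$\det(I_p - X^{-1}) = \det(X)^{-1}\det(X - I_p),$$

substituting $U = X^{-1}$ into the density of $U$ and multiplying by the Jacobian collects the powers of $\det(X)$ as

$$-(a-(p+1)/2) - (b-(p+1)/2) - (p+1) = -(a+b),$$

which recovers the stated density.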

Orthogonal transform


If $U \sim B_p(a,b)$ and $H$ is a constant $p \times p$ orthogonal matrix, then $HUH^T \sim B_p(a,b)$. This holds because the density depends on $U$ only through $\det(U)$ and $\det(I_p - U)$, both of which are invariant under $U \mapsto HUH^T$, a transformation with unit Jacobian.

Also, if $H$ is a random orthogonal $p \times p$ matrix which is independent of $U$, then $HUH^T \sim B_p(a,b)$, distributed independently of $H$.

If $A$ is any constant $q \times p$, $q \leq p$, matrix of rank $q$, then $AUA^T$ has a generalized matrix variate beta distribution; specifically, $AUA^T \sim GB_q(a, b; AA^T, 0)$.
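The invariance under conjugation is easy to verify numerically. The following illustrative sketch (our own, using SciPy's `ortho_group`) conjugates a point in the support by a random orthogonal matrix and checks that $\det(U)$ and $\det(I_p - U)$, the only ingredients of the density, are unchanged.

```python
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(0)
p = 3
# Any point in the support: symmetric with eigenvalues in (0, 1),
# so both U and I_p - U are positive definite.
Q = ortho_group.rvs(p, random_state=rng)
U = Q @ np.diag(rng.uniform(0.1, 0.9, size=p)) @ Q.T

H = ortho_group.rvs(p, random_state=rng)  # a random orthogonal matrix
V = H @ U @ H.T
# Conjugation by H preserves both determinants, hence the density value.
print(np.isclose(np.linalg.det(V), np.linalg.det(U)))                          # True
print(np.isclose(np.linalg.det(np.eye(p) - V), np.linalg.det(np.eye(p) - U)))  # True
```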

Partitioned matrix results


If $U \sim B_p(a,b)$ and we partition $U$ as

$$U = \begin{bmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{bmatrix},$$

where $U_{11}$ is $p_1 \times p_1$ and $U_{22}$ is $p_2 \times p_2$, then defining the Schur complement $U_{22\cdot 1}$ as $U_{22} - U_{21}U_{11}^{-1}U_{12}$ gives the following results (a small helper for forming these blocks is sketched after the list):

  • $U_{11}$ is independent of $U_{22\cdot 1}$
  • $U_{11} \sim B_{p_1}(a,b)$
  • $U_{22\cdot 1} \sim B_{p_2}(a - p_1/2,\, b)$
  • $U_{21} \mid U_{11}, U_{22\cdot 1}$ has an inverted matrix variate t distribution; specifically, $U_{21} \mid U_{11}, U_{22\cdot 1} \sim IT_{p_2,p_1}\bigl(2b - p + 1,\, 0,\, I_{p_2} - U_{22\cdot 1},\, U_{11}(I_{p_1} - U_{11})\bigr).$
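A small helper (our own sketch, not from the article) for forming these blocks: given $U$ and the split point $p_1$, it returns $U_{11}$ and $U_{22\cdot 1}$, which by the theorem are independent $B_{p_1}(a,b)$ and $B_{p_2}(a - p_1/2, b)$ variates.

```python
import numpy as np

def beta_partition(U, p1):
    """Split U into U11 and the Schur complement U22.1 = U22 - U21 U11^{-1} U12."""
    U11 = U[:p1, :p1]
    U12 = U[:p1, p1:]
    U21 = U[p1:, :p1]
    U22 = U[p1:, p1:]
    # Solve a linear system rather than forming the inverse of U11 explicitly.
    U22_1 = U22 - U21 @ np.linalg.solve(U11, U12)
    return U11, U22_1
```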

Wishart results


Mitra proves the following theorem, which illustrates a useful property of the matrix variate beta distribution. Suppose $S_1, S_2$ are independent Wishart $p \times p$ matrices, $S_1 \sim W_p(n_1, \Sigma)$ and $S_2 \sim W_p(n_2, \Sigma)$. Assume that $\Sigma$ is positive definite and that $n_1 + n_2 \geq p$. If

$$U = S^{-1/2}S_1\bigl(S^{-1/2}\bigr)^T,$$

where $S = S_1 + S_2$, then $U$ has a matrix variate beta distribution $B_p(n_1/2, n_2/2)$. In particular, the distribution of $U$ does not depend on $\Sigma$.
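The theorem suggests a direct sampler. The sketch below is our own: the symmetric inverse square root is one admissible choice of $S^{-1/2}$, and we take $n_1, n_2 \geq p$ so that SciPy can draw nonsingular Wishart matrices.

```python
import numpy as np
from scipy.stats import wishart

def sample_matrix_beta(p, n1, n2, rng):
    """Draw U ~ B_p(n1/2, n2/2) via the Wishart construction above."""
    Sigma = np.eye(p)  # the distribution of U does not depend on Sigma
    S1 = wishart.rvs(df=n1, scale=Sigma, random_state=rng)
    S2 = wishart.rvs(df=n2, scale=Sigma, random_state=rng)
    S = S1 + S2
    # Symmetric inverse square root of S via its eigendecomposition.
    w, V = np.linalg.eigh(S)
    S_inv_half = V @ np.diag(w ** -0.5) @ V.T
    return S_inv_half @ S1 @ S_inv_half.T  # S_inv_half is symmetric

rng = np.random.default_rng(0)
U = sample_matrix_beta(p=3, n1=6, n2=8, rng=rng)
print(np.linalg.eigvalsh(U))  # all eigenvalues lie in (0, 1)
```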

Spectral density


The spectral density can be expressed in terms of Jacobi polynomials.[1]

Extreme value distribution


The distribution of the largest eigenvalue is well approximated by a transform of the Tracy–Widom distribution.[2]

References

  • Potters, Marc; Bouchaud, Jean-Philippe (2020). "7. The Jacobi Ensemble". A First Course in Random Matrix Theory: for Physicists, Engineers and Data Scientists. Cambridge University Press. doi:10.1017/9781108768900. ISBN 978-1-108-76890-0.
  • Forrester, Peter (2010). "3. Laguerre and Jacobi ensembles". Log-gases and random matrices. London Mathematical Society monographs. Princeton: Princeton University Press. ISBN 978-0-691-12829-0.
  • Anderson, G.W.; Guionnet, A.; Zeitouni, O. (2010). "4. Some generalities". An introduction to random matrices. Cambridge: Cambridge University Press. ISBN 978-0-521-19452-5.
  • Mehta, M.L. (2004). "19. Matrix ensembles and classical orthogonal polynomials". Random Matrices. Amsterdam: Elsevier/Academic Press. ISBN 0-12-088409-7.
  • Gupta, A. K.; Nagar, D. K. (1999). Matrix Variate Distributions. Chapman and Hall. ISBN 1-58488-046-5.
  • Khatri, C. G. (1992). "Matrix Beta Distribution with Applications to Linear Models, Testing, Skewness and Kurtosis". In Venugopal, N. (ed.). Contributions to Stochastics. John Wiley & Sons. pp. 26–34. ISBN 0-470-22050-3.
  • Mitra, S. K. (1970). "A density-free approach to matrix variate beta distribution". Sankhyā: The Indian Journal of Statistics, Series A. 32 (1): 81–88. JSTOR 25049638.