
Quadratic form (statistics)


In multivariate statistics, if $\varepsilon$ is a vector of $n$ random variables, and $\Lambda$ is a symmetric $n \times n$ matrix, then the scalar quantity $\varepsilon^{T}\Lambda\varepsilon$ is known as a quadratic form in $\varepsilon$.

Expectation

It can be shown that[1]

$\operatorname{E}\left[\varepsilon^{T}\Lambda\varepsilon\right] = \operatorname{tr}\left[\Lambda\Sigma\right] + \mu^{T}\Lambda\mu$

where $\mu$ and $\Sigma$ are the expected value and variance–covariance matrix of $\varepsilon$, respectively, and $\operatorname{tr}$ denotes the trace of a matrix. This result depends only on the existence of $\mu$ and $\Sigma$; in particular, normality of $\varepsilon$ is not required.
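A quick numerical check of this identity is straightforward. The sketch below uses NumPy with arbitrary illustrative choices of $\mu$, $\Sigma$, and $\Lambda$ (none of these particular values come from the article); the Monte Carlo average should match the closed form up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative mean, covariance, and symmetric Lambda.
mu = np.array([1.0, -2.0, 0.5])
A = rng.standard_normal((3, 3))
Sigma = A @ A.T + np.eye(3)          # symmetric positive definite
L = rng.standard_normal((3, 3))
Lam = (L + L.T) / 2                  # symmetrized

# Monte Carlo estimate of E[eps^T Lam eps].  Gaussian draws are used
# for convenience; the identity itself does not require normality.
eps = rng.multivariate_normal(mu, Sigma, size=1_000_000)
mc = np.mean(np.einsum("ni,ij,nj->n", eps, Lam, eps))

exact = np.trace(Lam @ Sigma) + mu @ Lam @ mu
print(mc, exact)  # the two values should agree to a few decimal places
```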

A book-length treatment of quadratic forms in random variables is given by Mathai and Provost.[2]

Proof

Since the quadratic form is a scalar quantity, $\varepsilon^{T}\Lambda\varepsilon = \operatorname{tr}(\varepsilon^{T}\Lambda\varepsilon)$.

Next, by the cyclic property of the trace operator,

$\operatorname{E}[\operatorname{tr}(\varepsilon^{T}\Lambda\varepsilon)] = \operatorname{E}[\operatorname{tr}(\Lambda\varepsilon\varepsilon^{T})].$

Since the trace operator is a linear combination of the entries of the matrix, it follows from the linearity of the expectation operator that

$\operatorname{E}[\operatorname{tr}(\Lambda\varepsilon\varepsilon^{T})] = \operatorname{tr}(\Lambda\operatorname{E}(\varepsilon\varepsilon^{T})).$

A standard property of variances, $\operatorname{E}(\varepsilon\varepsilon^{T}) = \Sigma + \mu\mu^{T}$, then tells us that this is

$\operatorname{tr}(\Lambda(\Sigma + \mu\mu^{T})).$

Applying the cyclic property of the trace operator again, we get

$\operatorname{tr}(\Lambda\Sigma) + \operatorname{tr}(\Lambda\mu\mu^{T}) = \operatorname{tr}(\Lambda\Sigma) + \operatorname{tr}(\mu^{T}\Lambda\mu) = \operatorname{tr}(\Lambda\Sigma) + \mu^{T}\Lambda\mu.$

Variance in the Gaussian case

In general, the variance of a quadratic form depends greatly on the distribution of $\varepsilon$. However, if $\varepsilon$ does follow a multivariate normal distribution, the variance of the quadratic form becomes particularly tractable. Assume for the moment that $\Lambda$ is a symmetric matrix. Then,

$\operatorname{var}\left[\varepsilon^{T}\Lambda\varepsilon\right] = 2\operatorname{tr}\left[\Lambda\Sigma\Lambda\Sigma\right] + 4\mu^{T}\Lambda\Sigma\Lambda\mu$.[3]

In fact, this can be generalized to find the covariance between two quadratic forms on the same $\varepsilon$ (once again, $\Lambda_{1}$ and $\Lambda_{2}$ must both be symmetric):

$\operatorname{cov}\left[\varepsilon^{T}\Lambda_{1}\varepsilon, \varepsilon^{T}\Lambda_{2}\varepsilon\right] = 2\operatorname{tr}\left[\Lambda_{1}\Sigma\Lambda_{2}\Sigma\right] + 4\mu^{T}\Lambda_{1}\Sigma\Lambda_{2}\mu$.[4]

In addition, a quadratic form such as this follows a generalized chi-squared distribution.
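Both moment formulas are easy to check by simulation. The sketch below is a minimal NumPy check assuming Gaussian draws; the matrices $\Lambda_{1}$ and $\Lambda_{2}$ and the mean and covariance are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([0.5, -1.0, 2.0])
A = rng.standard_normal((3, 3))
Sigma = A @ A.T + np.eye(3)              # symmetric positive definite
L1 = rng.standard_normal((3, 3))
L2 = rng.standard_normal((3, 3))
Lam1 = (L1 + L1.T) / 2                   # both forms must be symmetric
Lam2 = (L2 + L2.T) / 2

eps = rng.multivariate_normal(mu, Sigma, size=2_000_000)
q1 = np.einsum("ni,ij,nj->n", eps, Lam1, eps)
q2 = np.einsum("ni,ij,nj->n", eps, Lam2, eps)

var_exact = 2 * np.trace(Lam1 @ Sigma @ Lam1 @ Sigma) + 4 * mu @ Lam1 @ Sigma @ Lam1 @ mu
cov_exact = 2 * np.trace(Lam1 @ Sigma @ Lam2 @ Sigma) + 4 * mu @ Lam1 @ Sigma @ Lam2 @ mu

print(np.var(q1), var_exact)            # variance: simulated vs. closed form
print(np.cov(q1, q2)[0, 1], cov_exact)  # covariance: simulated vs. closed form
```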

Computing the variance in the non-symmetric case

The case for general $\Lambda$ can be derived by noting that

$\varepsilon^{T}\Lambda^{T}\varepsilon = \varepsilon^{T}\Lambda\varepsilon$

so

$\varepsilon^{T}{\tilde{\Lambda}}\varepsilon = \varepsilon^{T}\left(\Lambda + \Lambda^{T}\right)\varepsilon / 2$

is a quadratic form in the symmetric matrix ${\tilde{\Lambda}} = \left(\Lambda + \Lambda^{T}\right)/2$, so the mean and variance expressions are the same, provided $\Lambda$ is replaced by ${\tilde{\Lambda}}$ therein.
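This symmetrization step is easy to verify directly: the antisymmetric part of $\Lambda$ contributes nothing to the quadratic form. A minimal sketch (the matrix and vector below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

Lam = rng.standard_normal((4, 4))    # deliberately non-symmetric
Lam_tilde = (Lam + Lam.T) / 2        # its symmetric part

eps = rng.standard_normal(4)
# The two quadratic forms agree for every eps, since
# eps^T Lam^T eps = eps^T Lam eps.
print(eps @ Lam @ eps, eps @ Lam_tilde @ eps)
```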

Examples of quadratic forms

In the setting where one has a set of observations $y$ and an operator matrix $H$, the residual sum of squares can be written as a quadratic form in $y$:

$\textrm{RSS} = y^{T}(I-H)^{T}(I-H)y.$

For procedures where the matrix $H$ is symmetric and idempotent, and the errors are Gaussian with covariance matrix $\sigma^{2}I$, $\textrm{RSS}/\sigma^{2}$ has a noncentral chi-squared distribution with $k$ degrees of freedom and noncentrality parameter $\lambda$, where

$k = \operatorname{tr}\left[(I-H)^{T}(I-H)\right]$
$\lambda = \mu^{T}(I-H)^{T}(I-H)\mu/2$

may be found by matching the first two central moments of a noncentral chi-squared random variable to the expressions given in the first two sections. If $Hy$ estimates $\mu$ with no bias, then the noncentrality $\lambda$ is zero and $\textrm{RSS}/\sigma^{2}$ follows a central chi-squared distribution.
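As a concrete illustration, consider ordinary least squares, where the hat matrix $H = X(X^{T}X)^{-1}X^{T}$ is symmetric and idempotent. The sketch below (arbitrary design matrix and noise level, not taken from the article) computes RSS as a quadratic form and checks that the degrees of freedom come out to $n - p$:

```python
import numpy as np

rng = np.random.default_rng(3)

n, p, sigma = 200, 4, 1.5
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + sigma * rng.standard_normal(n)  # Gaussian errors, covariance sigma^2 I

# Hat matrix of ordinary least squares: symmetric and idempotent.
H = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - H                              # I - H, also symmetric idempotent

rss = y @ M.T @ M @ y                          # RSS as a quadratic form in y
k = np.trace(M.T @ M)                          # degrees of freedom; equals n - p
print(k, n - p)

# Since Hy is unbiased for mu = X beta, RSS/sigma^2 is central chi-squared
# with k degrees of freedom, so its value should be roughly k.
print(rss / sigma**2)
```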

References

  1. ^ Bates, Douglas. "Quadratic Forms of Random Variables" (PDF). STAT 849 lectures. Retrieved August 21, 2011.
  2. ^ Mathai, A. M.; Provost, Serge B. (1992). Quadratic Forms in Random Variables. CRC Press. p. 424. ISBN 978-0824786915.
  3. ^ Rencher, Alvin C.; Schaalje, G. Bruce (2008). Linear Models in Statistics (2nd ed.). Hoboken, NJ: Wiley-Interscience. ISBN 9780471754985. OCLC 212120778.
  4. ^ Graybill, Franklin A. Matrices with Applications in Statistics (2nd ed.). Belmont, CA: Wadsworth. p. 367. ISBN 0534980384.
