
Autocovariance

In probability theory and statistics, given a stochastic process, the autocovariance is a function that gives the covariance of the process with itself at pairs of time points. Autocovariance is closely related to the autocorrelation of the process in question.

Auto-covariance of stochastic processes

Definition

With the usual notation $\operatorname{E}$ for the expectation operator, if the stochastic process $\{X_t\}$ has the mean function $\mu_t = \operatorname{E}[X_t]$, then the autocovariance is given by[1]: p. 162

$\operatorname{K}_{XX}(t_1, t_2) = \operatorname{cov}\left[X_{t_1}, X_{t_2}\right] = \operatorname{E}\left[(X_{t_1} - \mu_{t_1})(X_{t_2} - \mu_{t_2})\right] = \operatorname{E}[X_{t_1} X_{t_2}] - \mu_{t_1}\mu_{t_2}$    (Eq.1)

where $t_1$ and $t_2$ are two instants in time.
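
The expectation in Eq.1 is an ensemble average, so it can be estimated by averaging over many independent realizations of the process. A minimal sketch in Python (the AR(1) model, seed, and function names here are illustrative assumptions, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(n_steps, phi=0.8, n_paths=10_000):
    """Simulate n_paths independent realizations of X_t = phi * X_{t-1} + eps_t."""
    x = np.zeros((n_paths, n_steps))
    for t in range(1, n_steps):
        x[:, t] = phi * x[:, t - 1] + rng.standard_normal(n_paths)
    return x

def sample_autocov(x, t1, t2):
    """Estimate K_XX(t1, t2) = E[(X_{t1} - mu_{t1})(X_{t2} - mu_{t2})]
    by averaging across realizations (rows of x)."""
    d1 = x[:, t1] - x[:, t1].mean()
    d2 = x[:, t2] - x[:, t2].mean()
    return np.mean(d1 * d2)

x = simulate_ar1(100)
# For this AR(1), K_XX(50, 55) should be near phi**5 / (1 - phi**2) ~ 0.91,
# since by t = 50 the effect of the zero initial condition is negligible.
print(sample_autocov(x, t1=50, t2=55))
```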

Definition for weakly stationary process

If $\{X_t\}$ is a weakly stationary (WSS) process, then the following are true:[1]: p. 163

$\mu_{t_1} = \mu_{t_2} \triangleq \mu$ for all $t_1, t_2$

and

$\operatorname{E}[|X_t|^2] < \infty$ for all $t$

and

$\operatorname{K}_{XX}(t_1, t_2) = \operatorname{K}_{XX}(t_2 - t_1, 0) \triangleq \operatorname{K}_{XX}(t_2 - t_1) = \operatorname{K}_{XX}(\tau),$

where $\tau = t_2 - t_1$ is the lag time, or the amount of time by which the signal has been shifted.

The autocovariance function of a WSS process is therefore given by:[2]: p. 517

$\operatorname{K}_{XX}(\tau) = \operatorname{E}\left[(X_t - \mu_t)(X_{t-\tau} - \mu_{t-\tau})\right] = \operatorname{E}[X_t X_{t-\tau}] - \mu_t \mu_{t-\tau}$    (Eq.2)

which is equivalent to

$\operatorname{K}_{XX}(\tau) = \operatorname{E}\left[(X_{t+\tau} - \mu_{t+\tau})(X_t - \mu_t)\right] = \operatorname{E}[X_{t+\tau} X_t] - \mu^2.$
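
For a single realization of a WSS (and ergodic) series, the ensemble expectation in Eq.2 is replaced in practice by a time average over the record. A minimal sketch of the standard biased (divide-by-$n$) sample estimator (the function name is an assumption for illustration):

```python
import numpy as np

def sample_autocovariance(x, max_lag):
    """Biased sample estimate K_hat(tau) of a WSS series, tau = 0..max_lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()  # subtract the (constant) sample mean
    return np.array([np.sum(xc[: n - tau] * xc[tau:]) / n
                     for tau in range(max_lag + 1)])
```

The divide-by-$n$ (rather than divide-by-$(n - \tau)$) convention trades a small bias for an estimate that is guaranteed to be a valid (positive semi-definite) autocovariance sequence.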

Normalization

It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "autocorrelation" and "autocovariance" are used interchangeably.

The definition of the normalized auto-correlation of a stochastic process is

$\rho_{XX}(t_1, t_2) = \frac{\operatorname{K}_{XX}(t_1, t_2)}{\sigma_{t_1}\sigma_{t_2}} = \frac{\operatorname{E}\left[(X_{t_1} - \mu_{t_1})(X_{t_2} - \mu_{t_2})\right]}{\sigma_{t_1}\sigma_{t_2}}.$

If the function $\rho_{XX}$ is well-defined, its value must lie in the range $[-1, 1]$, with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.

For a WSS process, the definition is

$\rho_{XX}(\tau) = \frac{\operatorname{K}_{XX}(\tau)}{\sigma^2} = \frac{\operatorname{E}\left[(X_t - \mu)(X_{t+\tau} - \mu)\right]}{\sigma^2},$

where

$\operatorname{K}_{XX}(0) = \sigma^2$.
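
A sketch of the corresponding sample computation, reusing sample_autocovariance from the sketch above; dividing by $\operatorname{K}_{XX}(0) = \sigma^2$ maps the result into $[-1, 1]$:

```python
def sample_autocorrelation(x, max_lag):
    """Normalized autocorrelation rho_hat(tau) = K_hat(tau) / K_hat(0)."""
    k = sample_autocovariance(x, max_lag)
    return k / k[0]  # rho_hat(0) = 1 by construction
```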

Properties

Symmetry property

$\operatorname{K}_{XX}(t_1, t_2) = \overline{\operatorname{K}_{XX}(t_2, t_1)}$[3]: p. 169

and, correspondingly, for a WSS process:

$\operatorname{K}_{XX}(\tau) = \overline{\operatorname{K}_{XX}(-\tau)}$[3]: p. 173
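
For a real-valued series the conjugation is a no-op, so the WSS symmetry reduces to $\operatorname{K}_{XX}(\tau) = \operatorname{K}_{XX}(-\tau)$. A sketch of a signed-lag estimator that makes this explicit (the names are illustrative assumptions):

```python
import numpy as np

def sample_autocov_signed(x, tau):
    """Sample K_hat(tau) for a positive or negative lag; on real data the two
    windows simply swap roles, so K_hat(-tau) equals K_hat(tau) exactly."""
    xc = np.asarray(x, dtype=float) - np.mean(x)
    n = len(xc)
    m = abs(tau)
    a, b = (xc[: n - m], xc[m:]) if tau >= 0 else (xc[m:], xc[: n - m])
    return np.sum(a * b) / n
```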

Linear filtering

The autocovariance of a linearly filtered process $\{Y_t\}$

$Y_t = \sum_{k=-\infty}^{\infty} a_k X_{t+k}$

is

$K_{YY}(\tau) = \sum_{k,l=-\infty}^{\infty} a_k a_l K_{XX}(\tau + k - l).$
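
The identity can be checked numerically for a short FIR filter applied to white noise, for which $K_{XX}(\tau)$ reduces to $\sigma^2\delta(\tau)$, so only the terms with $l = \tau + k$ survive. A sketch (the filter taps and record length are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.array([0.5, 1.0, 0.5])              # filter taps a_0, a_1, a_2
x = rng.standard_normal(200_000)           # white noise: K_XX(tau) = delta(tau)
y = np.convolve(x, a[::-1], mode="valid")  # implements Y_t = sum_k a_k X_{t+k}

yc = y - y.mean()
n = len(yc)
for tau in range(4):
    theory = sum(a[k] * a[l]
                 for k in range(len(a)) for l in range(len(a))
                 if tau + k - l == 0)       # K_YY(tau) with K_XX = delta
    estimate = np.sum(yc[: n - tau] * yc[tau:]) / n
    print(tau, theory, round(float(estimate), 3))  # ~ 1.5, 1.0, 0.25, 0.0
```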

Calculating turbulent diffusivity

Autocovariance can be used to calculate turbulent diffusivity.[4] Turbulence in a flow can cause the fluctuation of velocity in space and time. Thus, it is possible to identify turbulence through the statistics of those fluctuations.

Reynolds decomposition is used to define the velocity fluctuations $u'(x,t)$ (assume we are now working with a 1D problem and that $U(x,t)$ is the velocity along the $x$ direction):

$U(x,t) = \langle U(x,t) \rangle + u'(x,t),$

where $U(x,t)$ is the true velocity and $\langle U(x,t) \rangle$ is the expected value of the velocity. If we choose a correct $\langle U(x,t) \rangle$, all of the stochastic components of the turbulent velocity will be included in $u'(x,t)$. To determine $\langle U(x,t) \rangle$, a set of velocity measurements assembled from points in space, moments in time, or repeated experiments is required.

If we assume the turbulent flux $\langle u'c' \rangle$ (where $c' = c - \langle c \rangle$ and $c$ is the concentration term) can be caused by a random walk, we can use Fick's laws of diffusion to express the turbulent flux term:

$J_{\text{turbulence}_x} = \langle u'c' \rangle \approx D_{T_x} \frac{\partial \langle c \rangle}{\partial x}.$

The velocity autocovariance is defined as

$K_{XX} \equiv \langle u'(t_0)\, u'(t_0 + \tau) \rangle$ or $K_{XX} \equiv \langle u'(x_0)\, u'(x_0 + r) \rangle,$

where $\tau$ is the lag time and $r$ is the lag distance.

The turbulent diffusivity $D_{T_x}$ can be calculated using the following three methods:

  1. If we have velocity data along a Lagrangian trajectory (see the sketch after this list):
    $D_{T_x} = \int_{\tau}^{\infty} u'(t_0)\, u'(t_0 + \tau)\, d\tau.$
  2. If we have velocity data at one fixed (Eulerian) location:
    $D_{T_x} \approx [0.3 \pm 0.1] \left[ \frac{\langle u'u' \rangle + \langle u \rangle^2}{\langle u'u' \rangle} \right] \int_{\tau}^{\infty} u'(t_0)\, u'(t_0 + \tau)\, d\tau.$
  3. If we have velocity information at two fixed (Eulerian) locations:
    $D_{T_x} \approx [0.4 \pm 0.1] \left[ \frac{1}{\langle u'u' \rangle} \right] \int_{r}^{\infty} u'(x_0)\, u'(x_0 + r)\, dr,$
    where $r$ is the distance between the two fixed locations.
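
A sketch of the first (Lagrangian) method, approximating the lag integral of the velocity autocovariance by a trapezoidal sum truncated at a finite maximum lag (the synthetic velocity record, time step, and truncation point are illustrative assumptions, not from the article):

```python
import numpy as np

def turbulent_diffusivity_lagrangian(u_prime, dt, max_lag):
    """Approximate D_Tx as the integral over lag of the sample velocity
    autocovariance <u'(t0) u'(t0 + tau)>, truncated at max_lag * dt."""
    u = np.asarray(u_prime, dtype=float) - np.mean(u_prime)
    n = len(u)
    k = np.array([np.mean(u[: n - m] * u[m:]) for m in range(max_lag + 1)])
    return dt * (0.5 * (k[0] + k[-1]) + np.sum(k[1:-1]))  # trapezoidal rule

# Synthetic exponentially correlated velocity fluctuations:
rng = np.random.default_rng(2)
u = np.zeros(100_000)
for t in range(1, len(u)):
    u[t] = 0.9 * u[t - 1] + rng.standard_normal()
print(turbulent_diffusivity_lagrangian(u, dt=1.0, max_lag=200))
```

In practice the truncation lag should be chosen well past the point where the autocovariance has decayed to zero, since the integral converges only once the velocity decorrelates.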

Auto-covariance of random vectors

Main article: Auto-covariance matrix

References

  1. ^ a b Hsu, Hwei (1997). Probability, Random Variables, and Random Processes. McGraw-Hill. ISBN 978-0-07-030644-8.
  2. ^ Lapidoth, Amos (2009). A Foundation in Digital Communication. Cambridge University Press. ISBN 978-0-521-19395-5.
  3. ^ a b Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
  4. ^ Taylor, G. I. (1922). "Diffusion by Continuous Movements". Proceedings of the London Mathematical Society. s2-20 (1): 196–212. Bibcode:1922PLMS..220S.196T. doi:10.1112/plms/s2-20.1.196. ISSN 1460-244X.
