Order statistic

Kth smallest value in a statistical sample
[Figure: Probability density functions of the order statistics for a sample of size n = 5 from an exponential distribution with unit scale parameter.]

In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value.[1] Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.

Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles.

When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.

Notation and examples

For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are

6, 9, 3, 7,

the order statistics would be denoted

$$x_{(1)}=3,\qquad x_{(2)}=6,\qquad x_{(3)}=7,\qquad x_{(4)}=9,$$

where the subscript (i) enclosed in parentheses indicates the ith order statistic of the sample.

The first order statistic (or smallest order statistic) is always the minimum of the sample, that is,

$$X_{(1)}=\min\{X_{1},\ldots ,X_{n}\}$$

where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values.

Similarly, for a sample of size n, the nth order statistic (or largest order statistic) is the maximum, that is,

$$X_{(n)}=\max\{X_{1},\ldots ,X_{n}\}.$$

The sample range is the difference between the maximum and minimum. It is a function of the order statistics:

$$\operatorname{Range}\{X_{1},\ldots ,X_{n}\}=X_{(n)}-X_{(1)}.$$

A similar important statistic in exploratory data analysis that is simply related to the order statistics is the sample interquartile range.

The sample median may or may not be an order statistic, since there is a single middle value only when the number n of observations is odd. More precisely, if n = 2m+1 for some integer m, then the sample median is $X_{(m+1)}$ and so is an order statistic. On the other hand, when n is even, n = 2m and there are two middle values, $X_{(m)}$ and $X_{(m+1)}$, and the sample median is some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles.
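
In software, the whole vector of order statistics is obtained simply by sorting. A minimal Python sketch using the sample above:

```python
import numpy as np

x = np.array([6, 9, 3, 7])
order_stats = np.sort(x)        # x_(1), ..., x_(4) = [3, 6, 7, 9]

sample_range = order_stats[-1] - order_stats[0]   # max - min = 6

# With an even sample size, the median is the average of the two middle
# order statistics, and hence not itself an order statistic:
median = 0.5 * (order_stats[1] + order_stats[2])  # (6 + 7) / 2 = 6.5
print(order_stats, sample_range, median)
```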

Probabilistic analysis

Given any random variables X1, X2, ..., Xn, the order statistics X(1), X(2), ..., X(n) are also random variables, defined by sorting the values (realizations) of X1, ..., Xn in increasing order.

When the random variables X1, X2, ..., Xn form a sample they are independent and identically distributed. This is the case treated below. In general, the random variables X1, ..., Xn can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem.

From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a probability density function (PDF), that is, they are absolutely continuous. The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions) are discussed at the end.

Cumulative distribution function of order statistics

For a random sample as above, with cumulative distribution $F_{X}(x)$, the order statistics for that sample have cumulative distributions as follows[2] (where r specifies which order statistic):

$$F_{X_{(r)}}(x)=\sum _{j=r}^{n}{\binom {n}{j}}\left[F_{X}(x)\right]^{j}\left[1-F_{X}(x)\right]^{n-j}$$

The proof of this formula is pure combinatorics: for the $r$th order statistic to be $\leq x$, the number of samples that are $>x$ has to be between $0$ and $n-r$. In the case that $X_{(j)}$ is the largest order statistic $\leq x$, there have to be $j$ samples $\leq x$ (each with an independent probability of $F_{X}(x)$) and $n-j$ samples $>x$ (each with an independent probability of $1-F_{X}(x)$). Finally, there are $\binom{n}{j}$ different ways of choosing which of the $n$ samples are of the $\leq x$ kind.

The corresponding probability density function may be derived from this result, and is found to be

$$f_{X_{(r)}}(x)={\frac {n!}{(r-1)!\,(n-r)!}}\,f_{X}(x)\left[F_{X}(x)\right]^{r-1}\left[1-F_{X}(x)\right]^{n-r}.$$

Moreover, there are two special cases, which have CDFs that are easy to compute.

$$F_{X_{(n)}}(x)=\Pr(\max\{X_{1},\ldots ,X_{n}\}\leq x)=[F_{X}(x)]^{n}$$

$$F_{X_{(1)}}(x)=\Pr(\min\{X_{1},\ldots ,X_{n}\}\leq x)=1-[1-F_{X}(x)]^{n}$$

Both can be derived by direct consideration of probabilities: the maximum is at most $x$ exactly when all $n$ independent observations are, and the minimum exceeds $x$ exactly when all $n$ observations do.
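
As a numerical sanity check, the general CDF formula can be evaluated and compared against a Monte Carlo estimate. A minimal Python sketch (the normal parent, sample size, and rank are arbitrary illustrative choices):

```python
import numpy as np
from scipy.stats import norm
from scipy.special import comb

def order_stat_cdf(x, n, r, F):
    """CDF of the r-th order statistic of n i.i.d. draws with marginal CDF F."""
    p = F(x)
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(r, n + 1))

n, r, x = 5, 2, 0.3
exact = order_stat_cdf(x, n, r, norm.cdf)

rng = np.random.default_rng(0)
samples = np.sort(rng.standard_normal((100_000, n)), axis=1)
mc = np.mean(samples[:, r - 1] <= x)  # column r-1 holds the r-th order statistic

print(exact, mc)  # the two values should agree to about three decimals
```

Setting r = n or r = 1 reproduces the two special cases above.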

Probability distributions of order statistics

Order statistics sampled from a uniform distribution

In this section we show that the order statistics of the uniform distribution on the unit interval have marginal distributions belonging to the beta distribution family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf.

We assume throughout this section that $X_{1},X_{2},\ldots ,X_{n}$ is a random sample drawn from a continuous distribution with cdf $F_{X}$. Denoting $U_{i}=F_{X}(X_{i})$ we obtain the corresponding random sample $U_{1},\ldots ,U_{n}$ from the standard uniform distribution. Note that the order statistics also satisfy $U_{(i)}=F_{X}(X_{(i)})$.

The probability density function of the order statistic $U_{(k)}$ is equal to[3]

$$f_{U_{(k)}}(u)={n! \over (k-1)!\,(n-k)!}\,u^{k-1}(1-u)^{n-k}$$

that is, the kth order statistic of the uniform distribution is a beta-distributed random variable,[3][4]

$$U_{(k)}\sim \operatorname {Beta} (k,\,n+1-k).$$

The proof of these statements is as follows. For $U_{(k)}$ to be between $u$ and $u+du$, it is necessary that exactly $k-1$ elements of the sample are smaller than $u$, and that at least one is between $u$ and $u+du$. The probability that more than one is in this latter interval is already $O(du^{2})$, so we have to calculate the probability that exactly $k-1$, $1$ and $n-k$ observations fall in the intervals $(0,u)$, $(u,u+du)$ and $(u+du,1)$ respectively. This equals (refer to multinomial distribution for details)

$${n! \over (k-1)!\,(n-k)!}\,u^{k-1}\cdot du\cdot (1-u-du)^{n-k}$$

and the result follows.

The mean of this distribution is k / (n + 1).
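
The beta form and its moments are easy to check empirically. A small Python sketch (the sample size and rank are illustrative choices):

```python
import numpy as np
from scipy.stats import beta

n, k = 5, 2
rng = np.random.default_rng(1)
# k-th order statistic of 200,000 uniform samples of size n
u_k = np.sort(rng.uniform(size=(200_000, n)), axis=1)[:, k - 1]

print(u_k.mean(), k / (n + 1))              # empirical vs. exact mean k/(n+1)
print(u_k.var(), beta(k, n + 1 - k).var())  # empirical vs. Beta variance
```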

The joint distribution of the order statistics of the uniform distribution

Similarly, for i < j, the joint probability density function of the two order statistics U(i) < U(j) can be shown to be

$$f_{U_{(i)},U_{(j)}}(u,v)=n!\,{u^{i-1} \over (i-1)!}\cdot {(v-u)^{j-i-1} \over (j-i-1)!}\cdot {(1-v)^{n-j} \over (n-j)!}$$

which is (up to terms of higher order than $O(du\,dv)$) the probability that $i-1$, $1$, $j-1-i$, $1$ and $n-j$ sample elements fall in the intervals $(0,u)$, $(u,u+du)$, $(u+du,v)$, $(v,v+dv)$, $(v+dv,1)$ respectively.

One reasons in an entirely analogous way to derive the higher-order joint distributions. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant:

$$f_{U_{(1)},U_{(2)},\ldots ,U_{(n)}}(u_{1},u_{2},\ldots ,u_{n})=n!.$$

One way to understand this is that the unordered sample does have constant density equal to 1, and that there are $n!$ different permutations of the sample corresponding to the same sequence of order statistics. This is related to the fact that $1/n!$ is the volume of the region $0<u_{1}<\cdots <u_{n}<1$. It is also related to another particularity of order statistics of uniform random variables: it follows from the BRS-inequality that the maximum expected number of uniform U(0,1] random variables one can choose from a sample of size $n$ with a sum not exceeding $0<s<n/2$ is bounded above by $\sqrt{2sn}$, which is thus invariant on the set of all $s,n$ with constant product $sn$.

Using the above formulas, one can derive the distribution of the range of the order statistics, that is, the distribution of $U_{(n)}-U_{(1)}$: the maximum minus the minimum. More generally, for $n\geq k>j\geq 1$, $U_{(k)}-U_{(j)}$ also has a beta distribution:

$$U_{(k)}-U_{(j)}\sim \operatorname {Beta} (k-j,\,n-(k-j)+1).$$

From these formulas we can derive the covariance between two order statistics:

$$\operatorname {Cov} (U_{(k)},U_{(j)})={\frac {j(n-k+1)}{(n+1)^{2}(n+2)}}.$$

The formula follows from noting that

$$\begin{aligned}\operatorname {Var} (U_{(k)}-U_{(j)})&=\operatorname {Var} (U_{(k)})+\operatorname {Var} (U_{(j)})-2\operatorname {Cov} (U_{(k)},U_{(j)})\\&={\frac {k(n-k+1)}{(n+1)^{2}(n+2)}}+{\frac {j(n-j+1)}{(n+1)^{2}(n+2)}}-2\operatorname {Cov} (U_{(k)},U_{(j)})\end{aligned}$$

and comparing it with

$$\operatorname {Var} (U)={\frac {(k-j)(n-(k-j)+1)}{(n+1)^{2}(n+2)}},$$

where $U\sim \operatorname {Beta} (k-j,\,n-(k-j)+1)$, which is the actual distribution of the difference.
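
Both the beta law of the difference and the covariance formula can be checked by simulation. A Python sketch (n, j, k are arbitrary choices):

```python
import numpy as np
from scipy.stats import beta, kstest

n, j, k = 10, 3, 7
rng = np.random.default_rng(2)
u = np.sort(rng.uniform(size=(200_000, n)), axis=1)

# Covariance: empirical vs. j(n-k+1) / ((n+1)^2 (n+2))
empirical = np.cov(u[:, k - 1], u[:, j - 1])[0, 1]
exact = j * (n - k + 1) / ((n + 1) ** 2 * (n + 2))
print(empirical, exact)

# Difference U_(k) - U_(j) vs. Beta(k-j, n-(k-j)+1)
diff = u[:, k - 1] - u[:, j - 1]
print(kstest(diff, beta(k - j, n - (k - j) + 1).cdf))  # large p-value expected
```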

Order statistics sampled from an exponential distribution

For $X_{1},X_{2},\ldots ,X_{n}$ a random sample of size $n$ from an exponential distribution with parameter $\lambda$, the order statistics $X_{(i)}$ for $i=1,2,\ldots ,n$ each have distribution

$$X_{(i)}\ {\stackrel {d}{=}}\ {\frac {1}{\lambda }}\left(\sum _{j=1}^{i}{\frac {Z_{j}}{n-j+1}}\right)$$

where the Zj are i.i.d. standard exponential random variables (i.e. with rate parameter 1). This result was first published by Alfréd Rényi.[5] [6]
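
Rényi's representation gives a way to generate exponential order statistics without sorting. A Python sketch comparing it against the naive sort (λ and n are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n, reps = 2.0, 6, 200_000

# Naive construction: sort n exponential draws.
sorted_draws = np.sort(rng.exponential(scale=1 / lam, size=(reps, n)), axis=1)

# Renyi construction: partial sums of Z_j / (n - j + 1), Z_j ~ Exp(1).
z = rng.exponential(size=(reps, n))
weights = 1.0 / (n - np.arange(1, n + 1) + 1)   # 1/n, 1/(n-1), ..., 1/1
renyi = np.cumsum(z * weights, axis=1) / lam

# Equal in distribution: the mean of each X_(i) should match.
print(sorted_draws.mean(axis=0))
print(renyi.mean(axis=0))
```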

Order statistics sampled from an Erlang distribution

The Laplace transform of the order statistics of an Erlang distribution may be obtained via a path-counting method.[7]

The joint distribution of the order statistics of an absolutely continuous distribution

If $F_{X}$ is absolutely continuous, it has a density such that $dF_{X}(x)=f_{X}(x)\,dx$, and we can use the substitutions

$$u=F_{X}(x)$$

and

$$du=f_{X}(x)\,dx$$

to derive the following probability density functions for the order statistics of a sample of size n drawn from the distribution of X:

$$f_{X_{(k)}}(x)={\frac {n!}{(k-1)!\,(n-k)!}}[F_{X}(x)]^{k-1}[1-F_{X}(x)]^{n-k}f_{X}(x)$$

$$f_{X_{(j)},X_{(k)}}(x,y)={\frac {n!}{(j-1)!\,(k-j-1)!\,(n-k)!}}[F_{X}(x)]^{j-1}[F_{X}(y)-F_{X}(x)]^{k-1-j}[1-F_{X}(y)]^{n-k}f_{X}(x)f_{X}(y)\quad {\text{where }}x\leq y$$

$$f_{X_{(1)},\ldots ,X_{(n)}}(x_{1},\ldots ,x_{n})=n!\,f_{X}(x_{1})\cdots f_{X}(x_{n})\quad {\text{where }}x_{1}\leq x_{2}\leq \dots \leq x_{n}.$$
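
The single-order-statistic density can be checked against an empirical estimate. A Python sketch (the exponential parent and the evaluation point are arbitrary choices):

```python
import numpy as np
from math import factorial
from scipy.stats import expon

def order_stat_pdf(x, n, k, F, f):
    """Density of the k-th order statistic of n i.i.d. draws with CDF F, pdf f."""
    c = factorial(n) / (factorial(k - 1) * factorial(n - k))
    return c * F(x) ** (k - 1) * (1 - F(x)) ** (n - k) * f(x)

n, k, x = 5, 2, 0.5
rng = np.random.default_rng(7)
x_k = np.sort(rng.exponential(size=(200_000, n)), axis=1)[:, k - 1]

h = 0.02   # narrow window for a crude empirical density at x
empirical = np.mean(np.abs(x_k - x) < h) / (2 * h)
print(empirical, order_stat_pdf(x, n, k, expon.cdf, expon.pdf))
```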

Application: confidence intervals for quantiles

An interesting question is how well the order statistics perform as estimators of the quantiles of the underlying distribution.

A small-sample-size example

The simplest case to consider is how well the sample median estimates the population median.

As an example, consider a random sample of size 6. In that case, the sample median is usually defined as the midpoint of the interval delimited by the 3rd and 4th order statistics. However, we know from the preceding discussion that this interval contains the population median precisely when exactly three of the six observations fall below it, an event of probability

$${6 \choose 3}(1/2)^{6}={5 \over 16}\approx 31\%.$$

Although the sample median is probably among the best distribution-independent point estimates of the population median, what this example illustrates is that it is not a particularly good one in absolute terms. In this particular case, a better confidence interval for the median is the one delimited by the 2nd and 5th order statistics, which contains the population median with probability

$$\left[{6 \choose 2}+{6 \choose 3}+{6 \choose 4}\right](1/2)^{6}={25 \over 32}\approx 78\%.$$

With such a small sample size, if one wants at least 95% confidence, one is reduced to saying that the median is between the minimum and the maximum of the 6 observations with probability 31/32 or approximately 97%. Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median.
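
These coverage probabilities are binomial tail sums, so they are easy to compute directly. A Python sketch reproducing the numbers above:

```python
from math import comb

def coverage(n, lo, hi):
    """P(X_(lo) < population median < X_(hi)) for a continuous distribution:
    the interval covers the median iff the number of observations below it
    is at least lo and at most hi - 1 (a Binomial(n, 1/2) count)."""
    return sum(comb(n, i) for i in range(lo, hi)) * 0.5 ** n

print(coverage(6, 3, 4))   # 5/16  ~ 31%: interval (X_(3), X_(4))
print(coverage(6, 2, 5))   # 25/32 ~ 78%: interval (X_(2), X_(5))
print(coverage(6, 1, 6))   # 31/32 ~ 97%: interval (X_(1), X_(6))
# n = 6 is the smallest n for which coverage(n, 1, n) >= 0.95.
```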

Large sample sizes

For the uniform distribution, as n tends to infinity, the pth sample quantile is asymptotically normally distributed, since it is approximated by

$$U_{(\lceil np\rceil )}\sim AN\left(p,\,{\frac {p(1-p)}{n}}\right).$$

For a general distribution $F$ with a continuous non-zero density at $F^{-1}(p)$, a similar asymptotic normality applies:

$$X_{(\lceil np\rceil )}\sim AN\left(F^{-1}(p),\,{\frac {p(1-p)}{n[f(F^{-1}(p))]^{2}}}\right)$$

where $f$ is the density function and $F^{-1}$ is the quantile function associated with $F$. One of the first people to mention and prove this result was Frederick Mosteller in his seminal 1946 paper.[8] Further research led in the 1960s to the Bahadur representation, which provides information about the error bounds. The convergence to the normal distribution also holds in a stronger sense, such as convergence in relative entropy or KL divergence.[9]

An interesting observation can be made in the case where the distribution is symmetric, and the population median equals the population mean. In this case, the sample mean, by the central limit theorem, is also asymptotically normally distributed, but with variance σ2/n instead. This asymptotic analysis suggests that the mean outperforms the median in cases of low kurtosis, and vice versa. For example, the median achieves better confidence intervals for the Laplace distribution, while the mean performs better for X that are normally distributed.
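
A short simulation illustrates this comparison of asymptotic variances. A Python sketch (the sample size is an arbitrary choice; both parents are in their standard forms):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 1001, 20_000

for name, draw in [("normal", lambda s: rng.standard_normal(s)),
                   ("laplace", lambda s: rng.laplace(size=s))]:
    x = draw((reps, n))
    print(name,
          "var(mean):", x.mean(axis=1).var(),
          "var(median):", np.median(x, axis=1).var())

# Expected asymptotics with p = 1/2: var(median) ~ 1/(4 n f(m)^2), so
# normal: pi/(2n) ~ 1.57/n vs. var(mean) = 1/n  -> the mean wins;
# Laplace(b=1): 1/n vs. var(mean) = 2/n          -> the median wins.
```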

Proof

It can be shown that

$$B(k,\,n+1-k)\ {\stackrel {\mathrm {d} }{=}}\ {\frac {X}{X+Y}},$$

where

$$X=\sum _{i=1}^{k}Z_{i},\qquad Y=\sum _{i=k+1}^{n+1}Z_{i},$$

with Zi being independent identically distributed exponential random variables with rate 1. Since X/n and Y/n are asymptotically normally distributed by the CLT, our results follow by application of the delta method.
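
The gamma-ratio representation is itself easy to verify by simulation. A Python sketch (n and k are arbitrary choices):

```python
import numpy as np
from scipy.stats import beta, kstest

n, k = 9, 4
rng = np.random.default_rng(6)
z = rng.exponential(size=(100_000, n + 1))    # Z_1, ..., Z_{n+1} ~ Exp(1)
ratio = z[:, :k].sum(axis=1) / z.sum(axis=1)  # X / (X + Y)

print(kstest(ratio, beta(k, n + 1 - k).cdf))  # large p-value: same distribution
```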

Mutual information of order statistics

The mutual information and f-divergence between order statistics have also been considered.[10] For example, if the parent distribution is continuous, then for all $1\leq r,m\leq n$

$$I(X_{(r)};X_{(m)})=I(U_{(r)};U_{(m)}).$$

In other words, the mutual information is independent of the parent distribution. For discrete random variables, the equality need not hold, and we only have

$$I(X_{(r)};X_{(m)})\leq I(U_{(r)};U_{(m)}).$$

The mutual information between uniform order statistics with $r<m$ is given by

$$I(U_{(r)};U_{(m)})=T_{m-1}+T_{n-r}-T_{m-r-1}-T_{n},$$

where $T_{k}=\log(k!)-kH_{k}$ and $H_{k}$ is the $k$-th harmonic number.
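
Given the closed form, these quantities are simple to tabulate. A Python sketch (the indices are illustrative choices):

```python
from math import lgamma

def T(k):
    """T_k = log(k!) - k * H_k, with H_k the k-th harmonic number."""
    return lgamma(k + 1) - k * sum(1.0 / i for i in range(1, k + 1))

def mi_uniform(r, m, n):
    """I(U_(r); U_(m)) for uniform order statistics, r < m."""
    return T(m - 1) + T(n - r) - T(m - r - 1) - T(n)

print(mi_uniform(1, 2, 5))   # adjacent order statistics: strong dependence
print(mi_uniform(1, 5, 5))   # minimum vs. maximum: much weaker dependence
```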

Application: Non-parametric density estimation

Moments of the distribution of the first order statistic can be used to develop a non-parametric density estimator.[11] Suppose we want to estimate the density $f_{X}$ at the point $x^{*}$. Consider the random variables $Y_{i}=|X_{i}-x^{*}|$, which are i.i.d. with density function $g_{Y}(y)=f_{X}(y+x^{*})+f_{X}(x^{*}-y)$. In particular, $f_{X}(x^{*})={\frac {g_{Y}(0)}{2}}$.

The expected value of the first order statistic $Y_{(1)}$ based on a sample of $N$ total observations is

$$E(Y_{(1)})={\frac {1}{(N+1)g(0)}}+{\frac {1}{(N+1)(N+2)}}\int _{0}^{1}Q''(z)\,\delta _{N+1}(z)\,dz$$

where $Q$ is the quantile function associated with the distribution $g_{Y}$, and $\delta _{N}(z)=(N+1)(1-z)^{N}$. This equation, in combination with a jackknifing technique, becomes the basis for the following density estimation algorithm:

 Input: A sample of $N$ observations. $\{x_{\ell }\}_{\ell =1}^{M}$ points of density evaluation. Tuning parameter $a\in (0,1)$ (usually 1/3).
 Output: $\{{\hat {f}}_{\ell }\}_{\ell =1}^{M}$ estimated density at the points of evaluation.
 1: Set $m_{N}=\operatorname {round} (N^{1-a})$
 2: Set $s_{N}={\frac {N}{m_{N}}}$
 3: Create an $s_{N}\times m_{N}$ matrix $M_{ij}$ which holds $m_{N}$ subsets with $s_{N}$ observations each.
 4: Create a vector ${\hat {f}}$ to hold the density evaluations.
 5: for $\ell =1\to M$ do
 6:   for $k=1\to m_{N}$ do
 7:     Find the nearest distance $d_{\ell k}$ to the current point $x_{\ell }$ within the $k$th subset
 8:   end for
 9:   Compute the subset average of distances to $x_{\ell }$: $d_{\ell }=\sum _{k=1}^{m_{N}}{\frac {d_{\ell k}}{m_{N}}}$
 10:  Compute the density estimate at $x_{\ell }$: ${\hat {f}}_{\ell }={\frac {1}{2(1+s_{N})d_{\ell }}}$
 11: end for
 12: return ${\hat {f}}$
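
A compact Python transcription of the algorithm (a sketch: the partition into subsets is taken in input order, any remainder observations are dropped so that the subset size is an integer, and the nearest-distance search is a brute-force scan):

```python
import numpy as np

def mld_density(sample, eval_points, a=1/3):
    """Minimum-local-distance density estimate at eval_points (sketch)."""
    sample = np.asarray(sample)
    N = len(sample)
    m_N = int(round(N ** (1 - a)))                   # number of subsets
    s_N = N // m_N                                   # observations per subset
    subsets = sample[: s_N * m_N].reshape(m_N, s_N)  # one subset per row

    f_hat = np.empty(len(eval_points))
    for l, x in enumerate(eval_points):
        d_lk = np.abs(subsets - x).min(axis=1)  # nearest distance per subset
        d_l = d_lk.mean()                       # subset-averaged distance
        f_hat[l] = 1.0 / (2.0 * (1.0 + s_N) * d_l)
    return f_hat

# Example: heavy-tailed Cauchy data, no IQR-based bandwidth required.
rng = np.random.default_rng(5)
data = rng.standard_cauchy(10_000)
xs = np.linspace(-3, 3, 7)
print(mld_density(data, xs))
print(1 / (np.pi * (1 + xs ** 2)))   # true Cauchy density for comparison
```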

In contrast to the bandwidth/length-based tuning parameters for histogram and kernel-based approaches, the tuning parameter for the order-statistic-based density estimator is the size of the sample subsets. Such an estimator is more robust than histogram and kernel-based approaches; for example, densities like the Cauchy distribution (which lack finite moments) can be inferred without the need for specialized modifications such as IQR-based bandwidths. This is because the first moment of the first order statistic always exists if the expected value of the underlying distribution does, but the converse is not necessarily true.[12]

Dealing with discrete variables

Suppose $X_{1},X_{2},\ldots ,X_{n}$ are i.i.d. random variables from a discrete distribution with cumulative distribution function $F(x)$ and probability mass function $f(x)$. To find the probabilities of the $k$th order statistic, three values are first needed, namely

$$p_{1}=\Pr(X<x)=F(x)-f(x),\qquad p_{2}=\Pr(X=x)=f(x),\qquad p_{3}=\Pr(X>x)=1-F(x).$$

The cumulative distribution function of the $k$th order statistic can be computed by noting that

$${\begin{aligned}\Pr(X_{(k)}\leq x)&=\Pr({\text{there are at least }}k{\text{ observations less than or equal to }}x)\\&=\Pr({\text{there are at most }}n-k{\text{ observations greater than }}x)\\&=\sum _{j=0}^{n-k}{\binom {n}{j}}p_{3}^{j}(p_{1}+p_{2})^{n-j}.\end{aligned}}$$

Similarly, $\Pr(X_{(k)}<x)$ is given by

$${\begin{aligned}\Pr(X_{(k)}<x)&=\Pr({\text{there are at least }}k{\text{ observations less than }}x)\\&=\Pr({\text{there are at most }}n-k{\text{ observations greater than or equal to }}x)\\&=\sum _{j=0}^{n-k}{n \choose j}(p_{2}+p_{3})^{j}(p_{1})^{n-j}.\end{aligned}}$$

Note that the probability mass function of $X_{(k)}$ is just the difference of these values, that is to say

$${\begin{aligned}\Pr(X_{(k)}=x)&=\Pr(X_{(k)}\leq x)-\Pr(X_{(k)}<x)\\&=\sum _{j=0}^{n-k}{\binom {n}{j}}\left[p_{3}^{j}(p_{1}+p_{2})^{n-j}-(p_{2}+p_{3})^{j}(p_{1})^{n-j}\right]\\&=\sum _{j=0}^{n-k}{\binom {n}{j}}\left[\left(1-F(x)\right)^{j}F(x)^{n-j}-\left(1-F(x)+f(x)\right)^{j}\left(F(x)-f(x)\right)^{n-j}\right].\end{aligned}}$$
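
These sums are straightforward to evaluate numerically. A Python sketch for a Binomial(10, 0.4) parent (the parent distribution, n, and k are arbitrary choices):

```python
from math import comb
from scipy.stats import binom

def order_stat_pmf(x, n, k, F, f):
    """P(X_(k) = x) for n i.i.d. discrete draws with CDF F and PMF f."""
    p1, p2, p3 = F(x) - f(x), f(x), 1 - F(x)
    cdf_le = sum(comb(n, j) * p3**j * (p1 + p2)**(n - j) for j in range(n - k + 1))
    cdf_lt = sum(comb(n, j) * (p2 + p3)**j * p1**(n - j) for j in range(n - k + 1))
    return cdf_le - cdf_lt

parent = binom(10, 0.4)
n, k = 5, 3   # median of five draws
pmf = [order_stat_pmf(x, n, k, parent.cdf, parent.pmf) for x in range(11)]
print(pmf, sum(pmf))   # the probabilities should sum to 1
```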

Computing order statistics

The problem of computing the kth smallest (or largest) element of a list is called the selection problem and is solved by a selection algorithm. Although this problem is difficult for very large lists, sophisticated selection algorithms have been created that can solve this problem in time proportional to the number of elements in the list, even if the list is totally unordered. If the data is stored in certain specialized data structures, this time can be brought down to O(log n). In many applications all order statistics are required, in which case a sorting algorithm can be used and the time taken is O(n log n).
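
In Python, for instance, a single order statistic can be found without a full sort: `heapq.nsmallest` runs in O(n log k) time, and `numpy.partition` uses introselect with linear expected time. A small sketch:

```python
import heapq
import numpy as np

data = [6, 9, 3, 7, 1, 8]
k = 3

# O(n log k): keep a bounded heap of the k smallest elements.
kth_smallest = heapq.nsmallest(k, data)[-1]                    # 6

# Linear expected time: partition places the k-th smallest at index k-1.
kth_smallest_np = np.partition(np.array(data), k - 1)[k - 1]   # 6

print(kth_smallest, kth_smallest_np)
```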

Applications

Order statistics have many applications in areas such as reliability theory, financial mathematics, survival analysis, epidemiology, sports, quality control, and actuarial risk. There is an extensive literature devoted to studies on applications of order statistics in these fields.

For example, a recent application in actuarial risk can be found in [13], where some weighted premium principles in terms of record claims and kth record claims are provided.

See also

Examples of order statistics

References

  1. ^ David, H. A.; Nagaraja, H. N. (2003). Order Statistics. Wiley Series in Probability and Statistics. doi:10.1002/0471722162. ISBN 9780471722168.
  2. ^ Casella, George; Berger, Roger (2002). Statistical Inference (2nd ed.). Cengage Learning. p. 229. ISBN 9788131503942.
  3. ^ a b Gentle, James E. (2009), Computational Statistics, Springer, p. 63, ISBN 9780387981444 .
  4. ^ Jones, M. C. (2009), "Kumaraswamy's distribution: A beta-type distribution with some tractability advantages", Statistical Methodology, 6 (1): 70–81, doi:10.1016/j.stamet.2008.04.001. As is well known, the beta distribution is the distribution of the m'th order statistic from a random sample of size n from the uniform distribution (on (0,1)).
  5. ^ David, H. A.; Nagaraja, H. N. (2003), "Chapter 2. Basic Distribution Theory", Order Statistics, Wiley Series in Probability and Statistics, p. 9, doi:10.1002/0471722162.ch2, ISBN 9780471722168
  6. ^ Rényi, Alfréd (1953). "On the theory of order statistics". Acta Mathematica Hungarica . 4 (3): 191–231. doi:10.1007/BF02127580 .
  7. ^ Hlynka, M.; Brill, P. H.; Horn, W. (2010). "A method for obtaining Laplace transforms of order statistics of Erlang random variables". Statistics & Probability Letters. 80: 9–18. doi:10.1016/j.spl.2009.09.006.
  8. ^ Mosteller, Frederick (1946). "On Some Useful "Inefficient" Statistics". Annals of Mathematical Statistics . 17 (4): 377–408. doi:10.1214/aoms/1177730881 . Retrieved February 26, 2015.
  9. ^ M. Cardone, A. Dytso and C. Rush, "Entropic Central Limit Theorem for Order Statistics," in IEEE Transactions on Information Theory, vol. 69, no. 4, pp. 2193-2205, April 2023, doi: 10.1109/TIT.2022.3219344.
  10. ^ A. Dytso, M. Cardone and C. Rush, "Measuring Dependencies of Order Statistics: An Information Theoretic Perspective," in 2020 IEEE Information Theory Workshop, 2021, doi: 10.1109/ITW46852.2021.9457617.
  11. ^ Garg, Vikram V.; Tenorio, Luis; Willcox, Karen (2017). "Minimum local distance density estimation". Communications in Statistics - Theory and Methods. 46 (1): 148–164. arXiv:1412.2851 . doi:10.1080/03610926.2014.988260. S2CID 14334678.
  12. ^ David, H. A.; Nagaraja, H. N. (2003), "Chapter 3. Expected Values and Moments", Order Statistics, Wiley Series in Probability and Statistics, p. 34, doi:10.1002/0471722162.ch3, ISBN 9780471722168
  13. ^ Castaño-Martínez, A.; López-Blázquez, F.; Pigueiras, G.; Sordo, M.A. (2020). "A method for constructing and interpreting some weighted premium principles". ASTIN Bulletin. 50 (3): 1037–1064. doi:10.1017/asb.2020.15.