
Kumaraswamy distribution

From Wikipedia, the free encyclopedia
Family of continuous probability distributions
For other uses, see Kumaraswamy (disambiguation).
Kumaraswamy
Probability density function (plot)
Cumulative distribution function (plot)
Parameters: a > 0 (real), b > 0 (real)
Support: x ∈ (0, 1)
PDF: a b x^{a-1} (1 - x^a)^{b-1}
CDF: 1 - (1 - x^a)^b
Quantile: (1 - (1 - u)^{1/b})^{1/a}
Mean: b Γ(1 + 1/a) Γ(b) / Γ(1 + 1/a + b)
Median: (1 - 2^{-1/b})^{1/a}
Mode: ((a - 1)/(ab - 1))^{1/a} for a ≥ 1, b ≥ 1, (a, b) ≠ (1, 1)
Variance: (complicated; see text)
Skewness: (complicated; see text)
Excess kurtosis: (complicated; see text)
Entropy: (1 - 1/b) + (1 - 1/a) H_b - ln(ab)

In probability and statistics, Kumaraswamy's double-bounded distribution is a family of continuous probability distributions defined on the interval (0,1). It is similar to the beta distribution, but it is much simpler to use, especially in simulation studies, since its probability density function, cumulative distribution function and quantile function can all be expressed in closed form. The distribution was originally proposed by Poondi Kumaraswamy[1] for variables that are bounded below and above and exhibit zero-inflation. In that first article, the natural lower bound of zero for rainfall was modelled with a discrete probability mass, since rainfall in many places, especially in the tropics, has a significant probability of being exactly zero. This discrete probability is now called zero-inflation. The idea was extended to inflation at both extremes of [0,1] in the work of Fletcher and Ponnambalam.[2] A good example of inflation at both extremes is the probability of a reservoir being full or empty, which is important in reservoir design.

Characterization


Probability density function


The probability density function of the Kumaraswamy distribution without considering any inflation is

f(x; a, b) = a b x^{a-1} (1 - x^a)^{b-1},   where x ∈ (0, 1),

and where a and b are positive shape parameters.
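A minimal Python sketch of this density (the helper name kumaraswamy_pdf is purely illustrative and not taken from any particular library):

```python
import numpy as np

def kumaraswamy_pdf(x, a, b):
    """Kumaraswamy density a*b*x^(a-1)*(1-x^a)^(b-1) on (0, 1)."""
    x = np.asarray(x, dtype=float)
    return a * b * x**(a - 1) * (1 - x**a)**(b - 1)

# Density of Kumaraswamy(2, 5) at a few points inside (0, 1).
print(kumaraswamy_pdf([0.1, 0.5, 0.9], a=2.0, b=5.0))
```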

Cumulative distribution function


The cumulative distribution function is

F(x; a, b) = ∫_0^x f(ξ; a, b) dξ = 1 - (1 - x^a)^b.
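A quick numerical sanity check of this closed form, sketched with SciPy's quad integrator (the name kumaraswamy_cdf is illustrative):

```python
from scipy.integrate import quad

def kumaraswamy_cdf(x, a, b):
    """Closed-form CDF: 1 - (1 - x^a)^b."""
    return 1.0 - (1.0 - x**a)**b

a, b, x = 2.0, 5.0, 0.3
# Integrate the density from 0 to x and compare with the closed form (both ~0.376 here).
integral, _ = quad(lambda t: a * b * t**(a - 1) * (1 - t**a)**(b - 1), 0.0, x)
print(integral, kumaraswamy_cdf(x, a, b))
```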

Quantile function


The inverse cumulative distribution function (quantile function) is

F^{-1}(y; a, b) = (1 - (1 - y)^{1/b})^{1/a}.
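Because the quantile function has a closed form, samples can be drawn by inverse-transform sampling. A minimal sketch (the helper kumaraswamy_rvs is an illustrative name, not a library function):

```python
import numpy as np

def kumaraswamy_rvs(a, b, size, rng=None):
    """Draw Kumaraswamy(a, b) samples by applying the closed-form quantile to uniforms."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    return (1.0 - (1.0 - u)**(1.0 / b))**(1.0 / a)

samples = kumaraswamy_rvs(a=2.0, b=5.0, size=100_000, rng=np.random.default_rng(0))
print(samples.mean())  # close to the theoretical mean (~0.37 for a=2, b=5)
```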

Generalizing to arbitrary interval support


In its simplest form, the distribution has support (0, 1). In a more general form, the normalized variable x is replaced by the unshifted and unscaled variable z, where:

x = (z - z_min) / (z_max - z_min),   z_min ≤ z ≤ z_max.
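In simulation terms, a draw on an arbitrary interval is obtained by drawing x on (0, 1) and inverting this relation; a sketch with illustrative parameter values:

```python
import numpy as np

def kumaraswamy_rvs_scaled(a, b, z_min, z_max, size, rng=None):
    """Draw on (0, 1) via the closed-form quantile, then rescale to (z_min, z_max)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    x = (1.0 - (1.0 - u)**(1.0 / b))**(1.0 / a)   # Kumaraswamy(a, b) on (0, 1)
    return z_min + (z_max - z_min) * x            # invert x = (z - z_min) / (z_max - z_min)

# Illustrative numbers only: a quantity bounded between 0 and an upper limit of 1e6.
print(kumaraswamy_rvs_scaled(a=2.0, b=5.0, z_min=0.0, z_max=1e6, size=5))
```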

Properties


The raw moments of the Kumaraswamy distribution are given by:[3] [4]

m_n = b Γ(1 + n/a) Γ(b) / Γ(1 + b + n/a) = b B(1 + n/a, b)

where B is the beta function and Γ(·) denotes the gamma function. The variance, skewness, and excess kurtosis can be calculated from these raw moments. For example, the variance is:

σ^2 = m_2 - m_1^2.
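The mean and variance follow directly from the raw-moment formula; a sketch using the standard-library gamma function:

```python
from math import gamma

def kumaraswamy_raw_moment(n, a, b):
    """n-th raw moment: m_n = b * Gamma(1 + n/a) * Gamma(b) / Gamma(1 + b + n/a)."""
    return b * gamma(1 + n / a) * gamma(b) / gamma(1 + b + n / a)

a, b = 2.0, 5.0
m1 = kumaraswamy_raw_moment(1, a, b)
m2 = kumaraswamy_raw_moment(2, a, b)
print("mean     :", m1)          # ~0.37 for these parameters
print("variance :", m2 - m1**2)
```

For large parameter values it is numerically safer to work with log-gamma (math.lgamma) and exponentiate the difference, since Γ(b) overflows quickly.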

The Shannon entropy (in nats) of the distribution is:[5]

H = (1 - 1/b) + (1 - 1/a) H_b - ln(ab)

where H_b is the harmonic number function; for non-integer b it can be evaluated as H_b = ψ(b + 1) + γ, with ψ the digamma function and γ the Euler–Mascheroni constant.
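A sketch of this formula, assuming the harmonic number is evaluated as H_b = ψ(b + 1) + γ via SciPy's digamma function:

```python
import numpy as np
from scipy.special import digamma

def kumaraswamy_entropy(a, b):
    """Entropy in nats: (1 - 1/b) + (1 - 1/a) * H_b - ln(a*b)."""
    harmonic_b = digamma(b + 1) + np.euler_gamma   # H_b = psi(b + 1) + gamma, valid for real b > 0
    return (1 - 1 / b) + (1 - 1 / a) * harmonic_b - np.log(a * b)

print(kumaraswamy_entropy(2.0, 5.0))
```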

Relation to the Beta distribution


The Kumaraswamy distribution is closely related to the beta distribution.[6] Assume that X_{a,b} is a Kumaraswamy-distributed random variable with parameters a and b. Then X_{a,b} is the a-th root of a suitably defined beta-distributed random variable. More formally, let Y_{1,b} denote a beta-distributed random variable with parameters α = 1 and β = b. One then has the following relation between X_{a,b} and Y_{1,b}:

X_{a,b} = Y_{1,b}^{1/a},

with equality in distribution.

P{X_{a,b} ≤ x} = ∫_0^x a b t^{a-1} (1 - t^a)^{b-1} dt = ∫_0^{x^a} b (1 - t)^{b-1} dt = P{Y_{1,b} ≤ x^a} = P{Y_{1,b}^{1/a} ≤ x}.
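This identity can be checked empirically by comparing the a-th root of Beta(1, b) draws against inverse-transform Kumaraswamy draws; a sketch using SciPy's two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy import stats

a, b = 2.0, 5.0
rng = np.random.default_rng(1)

# Kumaraswamy(a, b) via the closed-form quantile function.
u = rng.uniform(size=50_000)
kuma = (1.0 - (1.0 - u)**(1.0 / b))**(1.0 / a)

# The a-th root of a Beta(1, b) random variable.
beta_root = stats.beta(1, b).rvs(size=50_000, random_state=rng)**(1.0 / a)

# A two-sample Kolmogorov-Smirnov test should not reject equality in distribution.
print(stats.ks_2samp(kuma, beta_root))
```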

One may introduce generalized Kumaraswamy distributions by considering random variables of the form Y_{α,β}^{1/γ}, with γ > 0, where Y_{α,β} denotes a beta-distributed random variable with parameters α and β. The raw moments of this generalized Kumaraswamy distribution are given by:

m_n = Γ(α + β) Γ(α + n/γ) / (Γ(α) Γ(α + β + n/γ)).

Note that the original moments are recovered by setting α = 1, β = b and γ = a. However, in general, the cumulative distribution function does not have a closed-form solution.
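A sketch checking the generalized-moment formula against a Monte Carlo estimate (variable names follow the notation above; gam stands for γ):

```python
import numpy as np
from math import gamma

def generalized_kumaraswamy_moment(n, alpha, beta, gam):
    """n-th raw moment of Y_{alpha, beta}^(1/gamma)."""
    return (gamma(alpha + beta) * gamma(alpha + n / gam)) / (
        gamma(alpha) * gamma(alpha + beta + n / gam))

alpha, beta, gam, n = 2.0, 3.0, 1.5, 2
rng = np.random.default_rng(2)
samples = rng.beta(alpha, beta, size=200_000) ** (1.0 / gam)
print(generalized_kumaraswamy_moment(n, alpha, beta, gam))  # exact value from the formula
print(np.mean(samples**n))                                   # Monte Carlo estimate
```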

Related distributions

  • If X ∼ Kumaraswamy(1, 1), then X ∼ U(0, 1) (uniform distribution)
  • If X ∼ U(0, 1), then (1 - X^{1/b})^{1/a} ∼ Kumaraswamy(a, b)[6]
  • If X ∼ Beta(1, b) (beta distribution), then X ∼ Kumaraswamy(1, b)
  • If X ∼ Beta(a, 1) (beta distribution), then X ∼ Kumaraswamy(a, 1)
  • If X ∼ Kumaraswamy(a, 1), then (1 - X) ∼ Kumaraswamy(1, a)
  • If X ∼ Kumaraswamy(1, a), then (1 - X) ∼ Kumaraswamy(a, 1)
  • If X ∼ Kumaraswamy(a, 1), then -log(X) ∼ Exponential(a) (checked empirically in the sketch after this list)
  • If X ∼ Kumaraswamy(1, b), then -log(1 - X) ∼ Exponential(b)
  • If X ∼ Kumaraswamy(a, b), then X ∼ GB1(a, 1, 1, b), the generalized beta distribution of the first kind
  • If X ∼ Kumaraswamy(a, b), then {1/(1 - X)} ∼ MK(a, b), the Modified Kumaraswamy distribution
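For instance, the relation -log(X) ∼ Exponential(a) for X ∼ Kumaraswamy(a, 1) can be verified empirically; a minimal sketch using a Kolmogorov-Smirnov goodness-of-fit test:

```python
import numpy as np
from scipy import stats

a = 2.0
rng = np.random.default_rng(3)

# Kumaraswamy(a, 1) has CDF x^a, so inverse-transform sampling is u**(1/a).
x = rng.uniform(size=50_000) ** (1.0 / a)

# -log(X) should follow an Exponential(a) distribution (rate a, i.e. scale 1/a).
print(stats.kstest(-np.log(x), stats.expon(scale=1.0 / a).cdf))
```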


Example


An example of the use of the Kumaraswamy distribution is the storage volume z of a reservoir of capacity z_max, so that z is bounded above by z_max and below by 0. This is also a natural example of inflation at both extremes, since many reservoirs have nonzero probabilities of being both empty and full.[2]

References

  1. ^ Kumaraswamy, P. (1980). "A generalized probability density function for double-bounded random processes". Journal of Hydrology. 46 (1–2): 79–88. Bibcode:1980JHyd...46...79K. doi:10.1016/0022-1694(80)90036-0. ISSN 0022-1694.
  2. ^ a b Fletcher, S.G.; Ponnambalam, K. (1996). "Estimation of reservoir yield and storage distribution using moments analysis". Journal of Hydrology. 182 (1–4): 259–275. Bibcode:1996JHyd..182..259F. doi:10.1016/0022-1694(95)02946-x. ISSN 0022-1694.
  3. ^ Lemonte, Artur J. (2011). "Improved point estimation for the Kumaraswamy distribution". Journal of Statistical Computation and Simulation. 81 (12): 1971–1982. doi:10.1080/00949655.2010.511621. ISSN 0094-9655.
  4. ^ Cribari-Neto, Francisco; Santos, Jéssica (2019). "Inflated Kumaraswamy distributions" (PDF). Anais da Academia Brasileira de Ciências. 91 (2): e20180955. doi:10.1590/0001-3765201920180955. ISSN 1678-2690. PMID 31141016. S2CID 169034252.
  5. ^ Michalowicz, Joseph Victor; Nichols, Jonathan M.; Bucholtz, Frank (2013). Handbook of Differential Entropy. Chapman and Hall/CRC. p. 100. ISBN 978-1-4665-8317-7.
  6. ^ a b Jones, M.C. (2009). "Kumaraswamy's distribution: A beta-type distribution with some tractability advantages". Statistical Methodology. 6 (1): 70–81. doi:10.1016/j.stamet.2008.04.001. ISSN 1572-3127.

