Uniform integrability
In mathematics, uniform integrability is an important concept in real analysis, functional analysis and measure theory, and plays a vital role in the theory of martingales.
Uniform integrability is also known as equi-integrability.[1]: Definition 2.23
Measure-theoretic definition
Uniform integrability is an extension of the notion of a family of functions being dominated in {\displaystyle L_{1}}, which is central to the dominated convergence theorem. Several textbooks on real analysis and measure theory use the following definition:[2]
Definition A: Let {\displaystyle (X,{\mathfrak {M}},\mu )} be a positive measure space. A set {\displaystyle \Phi \subset L^{1}(\mu )} is called uniformly integrable if {\displaystyle \sup _{f\in \Phi }\|f\|_{L_{1}(\mu )}<\infty }, and to each {\displaystyle \varepsilon >0} there corresponds a {\displaystyle \delta >0} such that
- {\displaystyle \int _{E}|f|\,d\mu <\varepsilon }
whenever {\displaystyle f\in \Phi } and {\displaystyle \mu (E)<\delta .}
Definition A is rather restrictive for infinite measure spaces. A more general definition[3] of uniform integrability that works well in general measure spaces was introduced by G. A. Hunt.
Definition H: Let {\displaystyle (X,{\mathfrak {M}},\mu )} be a positive measure space. A set {\displaystyle \Phi \subset L^{1}(\mu )} is called uniformly integrable if and only if
- {\displaystyle \inf _{g\in L_{+}^{1}(\mu )}\sup _{f\in \Phi }\int _{\{|f|>g\}}|f|\,d\mu =0}
where {\displaystyle L_{+}^{1}(\mu )=\{g\in L^{1}(\mu ):g\geq 0\}}.
Since Hunt's definition is equivalent to Definition A when the underlying measure space is finite (see Theorem 2 below), Definition H is widely adopted in mathematics.
The following result[4] provides another notion equivalent to Hunt's. This equivalence is sometimes given as the definition of uniform integrability.
Theorem 1: If {\displaystyle (X,{\mathfrak {M}},\mu )} is a (positive) finite measure space, then a set {\displaystyle \Phi \subset L^{1}(\mu )} is uniformly integrable if and only if
- {\displaystyle \inf _{g\in L_{+}^{1}(\mu )}\sup _{f\in \Phi }\int (|f|-g)^{+}\,d\mu =0}
If in addition {\displaystyle \mu (X)<\infty }, then uniform integrability is equivalent to either of the following conditions:
1. {\displaystyle \inf _{a>0}\sup _{f\in \Phi }\int (|f|-a)^{+}\,d\mu =0}.
2. {\displaystyle \inf _{a>0}\sup _{f\in \Phi }\int _{\{|f|>a\}}|f|\,d\mu =0}
When the underlying space {\displaystyle (X,{\mathfrak {M}},\mu )} is {\displaystyle \sigma }-finite, Hunt's definition is equivalent to the following:
Theorem 2: Let {\displaystyle (X,{\mathfrak {M}},\mu )} be a {\displaystyle \sigma }-finite measure space, and {\displaystyle h\in L^{1}(\mu )} be such that {\displaystyle h>0} almost everywhere. A set {\displaystyle \Phi \subset L^{1}(\mu )} is uniformly integrable if and only if {\displaystyle \sup _{f\in \Phi }\|f\|_{L_{1}(\mu )}<\infty }, and for any {\displaystyle \varepsilon >0}, there exists {\displaystyle \delta >0} such that
- {\displaystyle \sup _{f\in \Phi }\int _{A}|f|\,d\mu <\varepsilon }
whenever {\displaystyle \int _{A}h\,d\mu <\delta }.
The equivalence of Definitions A and H for finite measures is thus a consequence of Theorems 1 and 2. Indeed, the statement in Definition A is obtained by taking {\displaystyle h\equiv 1} in Theorem 2.
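As a worked illustration of Definition A, a single function {\displaystyle f\in L^{1}(\mu )} always forms a uniformly integrable family {\displaystyle \Phi =\{f\}}: this is the absolute continuity of the integral. The following is a standard sketch of the argument (the particular choice of {\displaystyle \delta } is one convenient option, not taken from the cited texts):

```latex
\text{Given } \varepsilon > 0, \text{ pick } K \text{ with }
\int_{\{|f|>K\}} |f| \, d\mu < \tfrac{\varepsilon}{2}
\quad (\text{possible since } |f|\, I_{\{|f|>K\}} \downarrow 0 \text{ as } K \to \infty).
\]
\[
\text{Then for } \delta = \tfrac{\varepsilon}{2K} \text{ and any } E \text{ with } \mu(E) < \delta:
\quad
\int_E |f| \, d\mu
\;\le\; \int_{E \cap \{|f| \le K\}} |f| \, d\mu + \int_{\{|f|>K\}} |f| \, d\mu
\;\le\; K\,\mu(E) + \tfrac{\varepsilon}{2} \;<\; \varepsilon.
```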
Probability definition
In probability theory, Definition A or the statement of Theorem 1 is often presented as the definition of uniform integrability, using the expectation of random variables:[5] [6] [7]
1. A class {\displaystyle {\mathcal {C}}} of random variables is called uniformly integrable if:
- There exists a finite {\displaystyle M} such that, for every {\displaystyle X} in {\displaystyle {\mathcal {C}}}, {\displaystyle \operatorname {E} (|X|)\leq M} and
- For every {\displaystyle \varepsilon >0} there exists {\displaystyle \delta >0} such that, for every measurable {\displaystyle A} such that {\displaystyle P(A)\leq \delta } and every {\displaystyle X} in {\displaystyle {\mathcal {C}}}, {\displaystyle \operatorname {E} (|X|I_{A})\leq \varepsilon }.
or alternatively
2. A class {\displaystyle {\mathcal {C}}} of random variables is called uniformly integrable (UI) if for every {\displaystyle \varepsilon >0} there exists {\displaystyle K\in [0,\infty )} such that {\displaystyle \operatorname {E} (|X|I_{|X|\geq K})\leq \varepsilon \ {\text{ for all }}X\in {\mathcal {C}}}, where {\displaystyle I_{|X|\geq K}} is the indicator function {\displaystyle I_{|X|\geq K}={\begin{cases}1&{\text{if }}|X|\geq K,\\0&{\text{if }}|X|<K.\end{cases}}}
Tightness and uniform integrability
Another concept associated with uniform integrability is that of tightness. In this article, tightness is taken in a more general setting.
Definition: Let {\displaystyle (X,{\mathfrak {M}},\mu )} be a measure space. Let {\displaystyle {\mathcal {K}}\subset {\mathfrak {M}}} be a collection of sets of finite measure. A family {\displaystyle \Phi \subset L_{1}(\mu )} is tight with respect to {\displaystyle {\mathcal {K}}} if
- {\displaystyle \inf _{K\in {\mathcal {K}}}\sup _{f\in \Phi }\int _{X\setminus K}|f|\,d\mu =0}
A family that is tight with respect to the collection {\displaystyle {\mathcal {K}}=\{A\in {\mathfrak {M}}:\mu (A)<\infty \}} of all sets of finite measure is simply said to be tight.
When the measure space {\displaystyle (X,{\mathfrak {M}},\mu )} is a metric space equipped with the Borel {\displaystyle \sigma }-algebra, {\displaystyle \mu } is a regular measure, and {\displaystyle {\mathcal {K}}} is the collection of all compact subsets of {\displaystyle X}, the notion of {\displaystyle {\mathcal {K}}}-tightness discussed above coincides with the well-known concept of tightness used in the analysis of regular measures in metric spaces.
For {\displaystyle \sigma }-finite measure spaces, it can be shown that if a family {\displaystyle \Phi \subset L_{1}(\mu )} is uniformly integrable, then {\displaystyle \Phi } is tight. This is captured by the following result, which is often used as the definition of uniform integrability in the analysis literature:
Theorem 3: Suppose {\displaystyle (X,{\mathfrak {M}},\mu )} is a {\displaystyle \sigma }-finite measure space. A family {\displaystyle \Phi \subset L_{1}(\mu )} is uniformly integrable if and only if the following three conditions hold:
1. {\displaystyle \sup _{f\in \Phi }\|f\|_{1}<\infty },
2. {\displaystyle \inf _{a>0}\sup _{f\in \Phi }\int _{\{|f|>a\}}|f|\,d\mu =0},
3. {\displaystyle \Phi } is tight.
When {\displaystyle \mu (X)<\infty }, condition 3 is redundant (see Theorem 1 above).
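To see that condition 3 is not redundant when {\displaystyle \mu (X)=\infty }, consider the translates of an indicator function on the real line (a standard example, not drawn from the sources cited here):

```latex
\text{On } (\mathbb{R}, \mathcal{B}, \lambda) \text{ let } f_n = \mathbf{1}_{[n,\, n+1]}.
\text{ Then } \|f_n\|_1 = 1, \text{ and } \{|f_n| > a\} = \varnothing \text{ for } a \ge 1,
\text{ so conditions 1 and 2 hold.}
\]
\[
\text{But for any } K \text{ with } \lambda(K) < \infty, \quad
\sum_n \lambda\big(K \cap [n, n+1]\big) \le \lambda(K) < \infty,
\]
\[
\text{so } \lambda\big(K \cap [n, n+1]\big) \to 0 \text{ and }
\int_{\mathbb{R} \setminus K} f_n \, d\lambda \to 1:
\text{ the family is not tight, hence not uniformly integrable.}
```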
Uniform absolute continuity
There is another notion of uniformity, slightly different from uniform integrability, which also has many applications in probability and measure theory and which does not require random variables to have a finite integral.[8]
Definition: Suppose {\displaystyle (\Omega ,{\mathcal {F}},P)} is a probability space. A class {\displaystyle {\mathcal {C}}} of random variables is uniformly absolutely continuous with respect to {\displaystyle P} if for any {\displaystyle \varepsilon >0}, there is {\displaystyle \delta >0} such that {\displaystyle E[|X|I_{A}]<\varepsilon } whenever {\displaystyle P(A)<\delta }.
It is equivalent to uniform integrability if the measure is finite and has no atoms.
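A minimal sketch of how the two notions separate when atoms are present (an illustrative example, with a one-point space chosen for simplicity):

```latex
\text{Let } \Omega = \{\omega\},\ P(\{\omega\}) = 1,\ X_n \equiv n.
\text{ For } \delta < 1,\ P(A) < \delta \implies A = \varnothing \implies E[|X_n| I_A] = 0,
\]
\[
\text{so the class } \{X_n\} \text{ is uniformly absolutely continuous with respect to } P.
\text{ Yet } E|X_n| = n \text{ is unbounded, so the class is not uniformly integrable.}
```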
The term "uniform absolute continuity" is not standard,[citation needed ] but is used by some authors.[9] [10]
Related corollaries
The following results apply to the probabilistic definition.[11]
- Definition 2 can be rewritten as a limit: {\displaystyle \lim _{K\to \infty }\sup _{X\in {\mathcal {C}}}\operatorname {E} (|X|\,I_{|X|\geq K})=0.}
- A non-UI sequence. Let {\displaystyle \Omega =[0,1]\subset \mathbb {R} }, and define {\displaystyle X_{n}(\omega )={\begin{cases}n,&\omega \in (0,1/n),\\0,&{\text{otherwise.}}\end{cases}}} Clearly {\displaystyle X_{n}\in L^{1}}, and indeed {\displaystyle \operatorname {E} (|X_{n}|)=1} for all n. However, {\displaystyle \operatorname {E} (|X_{n}|I_{\{|X_{n}|\geq K\}})=1{\text{ for all }}n\geq K,} and comparing with Definition 2, it is seen that the sequence is not uniformly integrable.
- Using Definition 1 in the above example, the first clause is satisfied, since the {\displaystyle L^{1}} norm of every {\displaystyle X_{n}} is 1, i.e., bounded. But the second clause fails: for any positive {\displaystyle \delta }, there is an interval {\displaystyle (0,1/n)} with measure less than {\displaystyle \delta } and {\displaystyle \operatorname {E} [|X_{m}|I_{(0,1/n)}]=1} for all {\displaystyle m\geq n}.
- By splitting {\displaystyle \operatorname {E} (|X|)=\operatorname {E} (|X|I_{\{|X|\geq K\}})+\operatorname {E} (|X|I_{\{|X|<K\}})} and bounding each of the two terms, it can be seen that a uniformly integrable class of random variables is bounded in {\displaystyle L^{1}}.
- If a sequence of random variables {\displaystyle X_{n}} is dominated by an integrable, non-negative {\displaystyle Y}: that is, for all {\displaystyle \omega } and {\displaystyle n}, {\displaystyle |X_{n}(\omega )|\leq Y(\omega ),\ Y(\omega )\geq 0,\ \operatorname {E} (Y)<\infty ,} then the class {\displaystyle {\mathcal {C}}=\{X_{n}\}} is uniformly integrable.
- A class of random variables bounded in {\displaystyle L^{p}} ({\displaystyle p>1}) is uniformly integrable.
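The non-UI example above can be checked numerically. The sketch below is illustrative code (the Riemann-sum discretization and the step count are arbitrary choices): it approximates {\displaystyle \operatorname {E} (|X_{n}|I_{\{|X_{n}|\geq K\}})} on {\displaystyle \Omega =[0,1]} with Lebesgue measure and confirms that the tail expectation stays at 1 whenever {\displaystyle n\geq K}.

```python
def X(n, x):
    """X_n on the sample space [0, 1]: equal to n on (0, 1/n), else 0."""
    return n if 0 < x < 1.0 / n else 0

def tail_expectation(n, K, steps=200_000):
    """Midpoint Riemann sum approximating E(|X_n| I_{|X_n| >= K})."""
    dx = 1.0 / steps
    return sum(abs(X(n, (i + 0.5) * dx)) * dx
               for i in range(steps)
               if abs(X(n, (i + 0.5) * dx)) >= K)

# E|X_n| = 1 for every n, yet the K-tail stays near 1 whenever n >= K,
# so sup_n E(|X_n| I_{|X_n| >= K}) does not shrink as K grows: not UI.
```

For example, `tail_expectation(10, 1)` and `tail_expectation(50, 20)` are both approximately 1, while `tail_expectation(4, 5)` is 0 because {\displaystyle |X_{4}|} never reaches 5.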
Relevant theorems
In the following we use the probabilistic framework; the results remain valid regardless of the finiteness of the measure, once the boundedness condition is imposed on the chosen subset of {\displaystyle L^{1}(\mu )}.
- Dunford–Pettis theorem[12] [13]: A class of random variables {\displaystyle \Phi \subset L^{1}(\mu )} is uniformly integrable if and only if it is relatively compact for the weak topology {\displaystyle \sigma (L^{1},L^{\infty })}.
- de la Vallée-Poussin theorem[14] [15] The family {\displaystyle \{X_{\alpha }\}_{\alpha \in \mathrm {A} }\subset L^{1}(\mu )} is uniformly integrable if and only if there exists a non-negative increasing convex function {\displaystyle G(t)} such that {\displaystyle \lim _{t\to \infty }{\frac {G(t)}{t}}=\infty {\text{ and }}\sup _{\alpha }\operatorname {E} (G(|X_{\alpha }|))<\infty .}
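For instance, taking {\displaystyle G(t)=t^{p}} with {\displaystyle p>1} in the de la Vallée-Poussin theorem recovers the {\displaystyle L^{p}}-boundedness criterion stated above; the tails can also be bounded directly:

```latex
G(t) = t^p,\ p > 1:\qquad \frac{G(t)}{t} = t^{p-1} \to \infty,
\qquad \sup_\alpha \operatorname{E}\big(|X_\alpha|^p\big) < \infty
\implies \{X_\alpha\} \text{ uniformly integrable.}
\]
\[
\text{Directly: }
\operatorname{E}\big(|X|\, I_{\{|X| \ge K\}}\big)
\;\le\; \operatorname{E}\!\left(\frac{|X|^p}{K^{p-1}}\, I_{\{|X| \ge K\}}\right)
\;\le\; \frac{\operatorname{E}(|X|^p)}{K^{p-1}}
\;\xrightarrow[K \to \infty]{}\; 0
\quad \text{uniformly over an } L^p\text{-bounded class.}
```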
Uniform integrability and stochastic ordering
[edit ]A family of random variables {\displaystyle \{X_{i}\}_{i\in I}} is uniformly integrable if and only if[16] there exists a random variable {\displaystyle X} such that {\displaystyle EX<\infty } and {\displaystyle |X_{i}|\leq _{\mathrm {icx} }X} for all {\displaystyle i\in I}, where {\displaystyle \leq _{\mathrm {icx} }} denotes the increasing convex stochastic order defined by {\displaystyle A\leq _{\mathrm {icx} }B} if {\displaystyle E\phi (A)\leq E\phi (B)} for all nondecreasing convex real functions {\displaystyle \phi }.
Relation to convergence of random variables
A sequence {\displaystyle \{X_{n}\}} converges to {\displaystyle X} in the {\displaystyle L_{1}} norm if and only if it converges in measure to {\displaystyle X} and is uniformly integrable. In probability terms, a sequence of random variables converging in probability also converges in mean if and only if it is uniformly integrable.[17] This is a generalization of Lebesgue's dominated convergence theorem; see the Vitali convergence theorem.
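As a quick numerical illustration (a hypothetical example, not from the cited sources): on {\displaystyle \Omega =[0,1]} with Lebesgue measure, {\displaystyle X_{n}(\omega )=\omega ^{n}} is dominated by the constant 1, hence uniformly integrable, and converges to 0 in probability; the Vitali convergence theorem then gives {\displaystyle \operatorname {E} |X_{n}|\to 0}, which matches the exact value {\displaystyle \operatorname {E} |X_{n}|=1/(n+1)}:

```python
def l1_norm(n, steps=200_000):
    """Midpoint Riemann sum approximating E|X_n| for X_n(w) = w**n on [0,1]."""
    dx = 1.0 / steps
    return sum(((i + 0.5) * dx) ** n * dx for i in range(steps))

# The exact value is 1/(n+1), so the L^1 norms shrink to 0 as n grows,
# exactly as the Vitali convergence theorem predicts for a UI sequence.
values = [l1_norm(n) for n in (1, 3, 9)]  # approx. 1/2, 1/4, 1/10
```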
Citations
- ^ Fonseca, Irene; Leoni, Giovanni (2007). Modern Methods in the Calculus of Variations: Lp Spaces. New York: Springer. ISBN 978-0387690063.
- ^ Royden, H.L. & Fitzpatrick, P.M. (2010). Real Analysis (4 ed.). Boston: Prentice Hall. p. 93. ISBN 978-0-13-143747-0.
- ^ Hunt, G. A. (1966). Martingales et Processus de Markov. Paris: Dunod. p. 254.
- ^ Klenke, A. (2008). Probability Theory: A Comprehensive Course. Berlin: Springer Verlag. pp. 134–137. ISBN 978-1-84800-047-6.
- ^ Williams, David (1997). Probability with Martingales (Repr. ed.). Cambridge: Cambridge Univ. Press. pp. 126–132. ISBN 978-0-521-40605-5.
- ^ Gut, Allan (2005). Probability: A Graduate Course. Springer. pp. 214–218. ISBN 0-387-22833-0.
- ^ Bass, Richard F. (2011). Stochastic Processes. Cambridge: Cambridge University Press. pp. 356–357. ISBN 978-1-107-00800-7.
- ^ Bass 2011, p. 356.
- ^ Benedetto, J. J. (1976). Real Variable and Integration. Stuttgart: B. G. Teubner. p. 89. ISBN 3-519-02209-5.
- ^ Burrill, C. W. (1972). Measure, Integration, and Probability. McGraw-Hill. p. 180. ISBN 0-07-009223-0.
- ^ Gut 2005, pp. 215–216.
- ^ Dunford, Nelson (1938). "Uniformity in linear spaces". Transactions of the American Mathematical Society. 44 (2): 305–356. doi:10.1090/S0002-9947-1938-1501971-X . ISSN 0002-9947.
- ^ Dunford, Nelson (1939). "A mean ergodic theorem". Duke Mathematical Journal. 5 (3): 635–646. doi:10.1215/S0012-7094-39-00552-1. ISSN 0012-7094.
- ^ Meyer, P.A. (1966). Probability and Potentials, Blaisdell Publishing Co, N. Y. (p.19, Theorem T22).
- ^ De La Vallée Poussin, C. (1915). "Sur L'Integrale de Lebesgue". Transactions of the American Mathematical Society. 16 (4): 435–501. doi:10.2307/1988879. hdl:10338.dmlcz/127627 . JSTOR 1988879.
- ^ Leskelä, L.; Vihola, M. (2013). "Stochastic order characterization of uniform integrability and tightness". Statistics and Probability Letters. 83 (1): 382–389. arXiv:1106.0607. doi:10.1016/j.spl.2012.09.023.
- ^ Bogachev, Vladimir I. (2007). "The spaces Lp and spaces of measures". Measure Theory Volume I. Berlin Heidelberg: Springer-Verlag. p. 268. doi:10.1007/978-3-540-34514-5_4. ISBN 978-3-540-34513-8.
References
- Shiryaev, A.N. (1995). Probability (2 ed.). New York: Springer-Verlag. pp. 187–188. ISBN 978-0-387-94549-1.
- Diestel, J. and Uhl, J. (1977). Vector measures, Mathematical Surveys 15, American Mathematical Society, Providence, RI ISBN 978-0-8218-1515-1