Exercise 1.5.19 (Convergence in distribution)

Let $(X, \mathcal{B}, P)$ be a probability space. Given any real-valued measurable function $f : X \to \mathbb{R}$, we define the cumulative distribution function of $f$ to be the function

$$F : \mathbb{R} \to [0,1], \qquad F(\lambda) := P(\{x \in X : f(x) \leq \lambda\}).$$

Given a sequence $f_n : X \to \mathbb{R}$ of real-valued measurable functions, we say that $(f_n)$ converges in distribution to $f$ if the cumulative distribution functions $F_n$ of $f_n$ converge pointwise to the cumulative distribution function $F$ of $f$ at every $\lambda$ at which $F$ is continuous.

(i)
Show that if $(f_n)$ converges to $f$ in any of the seven senses discussed in this chapter, then it converges in distribution to $f$.
(ii)
Give an example in which $(f_n)$ converges to $f$ in distribution, but not in any of the above seven senses.
(iii)
Give an example in which $(f_n)$ converges to $f$ in distribution and $(g_n)$ converges to $g$ in distribution, but $(f_n + g_n)$ does not converge in distribution to $f + g$.
(iv)
Give an example in which a sequence $(f_n)$ converges in distribution to two different limits $f, g$ that are not equal almost everywhere.

Answers

(i)
We will demonstrate this for the weakest modes of convergence discussed in the chapter: convergence in measure and almost everywhere pointwise convergence; on a probability space, the remaining senses all imply one of these two.

First suppose $f_n \to f$ in measure, and let $\lambda$ be a point at which $F$ is continuous. For any two events we have the rule $P(A) - P(B) \leq P(A) - P(A \cap B) = P(A \setminus B)$, so for any $c > 0$,
$$F_n(\lambda) - F(\lambda) \leq P(\{f_n \leq \lambda,\ f > \lambda\}) \leq P(\{|f_n - f| > c\}) + P(\{\lambda < f \leq \lambda + c\}),$$
since on $\{f_n \leq \lambda,\ f > \lambda + c\}$ we have $f - f_n > c$. By the same reasoning with the roles of $f_n$ and $f$ exchanged,
$$F(\lambda) - F_n(\lambda) \leq P(\{f \leq \lambda,\ f_n > \lambda\}) \leq P(\{|f_n - f| > c\}) + P(\{\lambda - c < f \leq \lambda\}).$$
Since $P(\{\lambda < f \leq \lambda + c\}) = F(\lambda + c) - F(\lambda)$ and $P(\{\lambda - c < f \leq \lambda\}) = F(\lambda) - F(\lambda - c)$, combining the two bounds gives
$$|F_n(\lambda) - F(\lambda)| \leq P(\{|f_n - f| > c\}) + \max\big(F(\lambda + c) - F(\lambda),\ F(\lambda) - F(\lambda - c)\big).$$
Letting $n \to \infty$, the first term vanishes because $f_n \to f$ in measure; letting $c \to 0^+$, the second term vanishes because $F$ is continuous at $\lambda$. Hence $F_n(\lambda) \to F(\lambda)$.

Now consider almost everywhere pointwise convergence. On a finite measure space, pointwise a.e. convergence is equivalent to almost uniform convergence by Exercise 1.5.18, and almost uniform convergence implies convergence in measure by Exercise 1.5.2, so this case reduces to the previous one.
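As a purely numerical sanity check (not part of the proof), the Python sketch below picks an arbitrary sequence converging in measure, namely $f_n(x) = x + (-1)^n/n$ on $([0,1], \mathcal{B}, m)$, and verifies on a grid that $|F_n(\lambda) - F(\lambda)|$ stays below the bound $P(\{|f_n - f| \geq c\}) + \max(F(\lambda+c) - F(\lambda),\, F(\lambda) - F(\lambda-c))$ derived above. The choice of sequence, the grid resolution, and the helper names `cdf`, `f`, `fn` are illustrative assumptions, not taken from the exercise.

```python
import numpy as np

# Grid approximation of the probability space ([0,1], B, m).
# The sequence f_n = x + (-1)^n / n, the grid size, and the helper
# names below are illustrative assumptions, not part of the exercise.
x = np.linspace(0.0, 1.0, 100_001)

def cdf(values, lam):
    """Approximate F(lam) = m({x : values(x) <= lam}) on the grid."""
    return np.mean(values <= lam)

f = x                                  # the limit f(x) = x
lam, c = 0.3, 0.05                     # a continuity point of F and a small c > 0

for n in (10, 100, 1000):
    fn = x + (-1) ** n / n             # f_n -> f uniformly, hence in measure
    lhs = abs(cdf(fn, lam) - cdf(f, lam))
    bound = (np.mean(np.abs(fn - f) >= c)
             + max(cdf(f, lam + c) - cdf(f, lam),
                   cdf(f, lam) - cdf(f, lam - c)))
    print(f"n={n:5d}  |F_n(lam) - F(lam)| = {lhs:.4f}  <=  bound = {bound:.4f}")
```

As $n$ grows, the observed difference of the CDFs shrinks while the bound settles at the fixed $\max$ term, matching the two-step limit ($n \to \infty$, then $c \to 0^+$) in the argument.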

(ii)
Consider the unit Lebesgue measure space $([0,1], \mathcal{B}, m)$, and define $f_n : [0,1] \to [0,1]$ by
$$f_n(x) = \begin{cases} x, & \text{if } n \text{ is even},\\ 1 - x, & \text{if } n \text{ is odd}.\end{cases}$$

We then have

$$F_n(\lambda) = P(\{x \in [0,1] : f_n(x) \leq \lambda\}) = \begin{cases} P(\{x \in [0,1] : x \leq \lambda\}) = \lambda, & n \text{ even},\\ P(\{x \in [0,1] : 1 - x \leq \lambda\}) = P(\{x \in [0,1] : x \geq 1 - \lambda\}) = \lambda, & n \text{ odd},\end{cases}$$
for every $\lambda \in [0,1]$. In particular all the $F_n$ coincide with the cumulative distribution function of $f(x) := x$, so $(f_n)$ converges in distribution to $f$.

Yet when it comes to convergence in probability (convergence in measure), the even and odd subsequences stay a fixed distance apart:

$$P(\{x \in [0,1] : |f_{2n}(x) - f_{2n+1}(x)| \geq \varepsilon\}) = P(\{x \in [0,1] : |x - (1-x)| \geq \varepsilon\}) = P(\{x \in [0,1] : |2x - 1| \geq \varepsilon\}) = 1 - \varepsilon$$
for every $0 < \varepsilon < 1$, and this does not tend to $0$ as $n \to \infty$.

Thus $(f_n)$ is not Cauchy in measure, so it cannot converge in measure to any limit, and hence it does not converge to $f$ in any of the seven senses, each of which implies convergence in measure on a probability space.
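A small Python sketch (a rough grid-based check, not part of the argument; the helper names and the grid resolution are arbitrary choices) illustrates both halves of the example: every $F_n$ agrees with the uniform CDF $F(\lambda) = \lambda$, while consecutive terms remain a fixed distance apart in measure.

```python
import numpy as np

# Grid approximation of ([0,1], B, m); the helper names and the grid
# resolution are illustrative choices, not part of the exercise.
x = np.linspace(0.0, 1.0, 100_001)

def f(n, x):
    """f_n(x) = x for even n and 1 - x for odd n."""
    return x if n % 2 == 0 else 1.0 - x

def cdf(values, lam):
    """Approximate F_n(lam) = m({x : f_n(x) <= lam})."""
    return np.mean(values <= lam)

lam, eps = 0.4, 0.1
for n in (2, 3, 10, 11):
    print(f"F_{n}({lam}) = {cdf(f(n, x), lam):.3f}")   # always close to lam

# Consecutive terms never get close in measure:
gap = np.mean(np.abs(f(2, x) - f(3, x)) >= eps)        # close to 1 - eps
print(f"m(|f_2 - f_3| >= {eps}) = {gap:.3f}")
```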

(iii)
Consider the functions $f(x) = x$ and $-f(x) = -x$ on the probability space $([-1,1], \mathcal{B}, m/2)$, where $m/2$ is normalized Lebesgue measure. It is easy to verify that both functions have the same cumulative distribution function $F(\lambda) = P(\{f \leq \lambda\}) = (\lambda + 1)/2$ for $\lambda \in [-1,1]$, symmetric around $0$. Hence the constant sequences $f_n := f$ and $g_n := -f$ converge in distribution to $f$ and to $g := f$, respectively. Their sum $f_n + g_n = f + (-f) = 0$, however, has the cumulative distribution function $F_0(\lambda) = P(\{0 \leq \lambda\})$, which equals $0$ for $\lambda < 0$ and $1$ for $\lambda \geq 0$; this is not the cumulative distribution function of $f + g = 2x$, so $(f_n + g_n)$ does not converge in distribution to $f + g$.
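The sketch below, under the same kind of grid approximation and assuming the normalized measure $m/2$ on $[-1,1]$ as above (names and grid size are illustrative), shows the failure numerically: $f$ and $-f$ share a CDF, but the CDF of $f_n + g_n = 0$ differs from that of $f + g = 2x$.

```python
import numpy as np

# Grid approximation of ([-1,1], B, m/2), i.e. normalized Lebesgue measure;
# the grid size and helper names are illustrative assumptions.
x = np.linspace(-1.0, 1.0, 200_001)

def cdf(values, lam):
    """Approximate P({values <= lam}) under the normalized measure."""
    return np.mean(values <= lam)

f, g_n = x, -x                     # f_n := f = x and g_n := -x for every n

for lam in (-0.5, 0.0, 0.5):
    # F_f and F_{-f} agree, but F_{f_n+g_n} (a step at 0) and F_{f+g} do not.
    print(f"lam={lam:+.1f}: F_f={cdf(f, lam):.3f}  F_-f={cdf(g_n, lam):.3f}  "
          f"F_sum={cdf(f + g_n, lam):.3f}  F_f+g={cdf(2 * x, lam):.3f}")
```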
(iv)
As in the example above, the constant sequence $f_n := f$ (with $f(x) = x$) converges in distribution both to $f$ and to $-f$, since the two functions share the same cumulative distribution function; yet $f$ and $-f$ agree only at $x = 0$, so they are certainly not equal almost everywhere.
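Continuing the previous sketch (same illustrative grid and normalization assumptions), one can also check numerically that $f$ and $-f$ have identical CDFs at the sampled points while differing on a set of full measure.

```python
import numpy as np

# Same illustrative grid approximation of ([-1,1], B, m/2) as in part (iii).
x = np.linspace(-1.0, 1.0, 200_001)
f = x

# Identical CDFs at a few test points (up to grid discretization error) ...
lams = np.linspace(-1.0, 1.0, 9)
same_cdf = all(abs(np.mean(f <= lam) - np.mean(-f <= lam)) < 1e-3 for lam in lams)
print("F_f and F_{-f} agree on the test points:", same_cdf)

# ... yet f and -f disagree everywhere except at x = 0.
print("m({f != -f}) =", float(np.mean(f != -f)))   # essentially 1
```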