STAT 330 Lecture, June 28, 2012
1. Convergence in distribution
2. Convergence in probability
(a) Definition of convergence in probability to a random variable
(b) Theorem: convergence in probability implies convergence in distribution
(c) Definition of convergence in probability to a constant
(d) Theorem: convergence in probability to a constant is equivalent to convergence
in distribution to a degenerate distribution.
1 Example 2. (Example 1 Revisited.)
Example 3. Suppose X_1, ..., X_n are iid random variables with pdf
f(x) = e^{-(x - θ)}, x > θ.
Let X_(1) = min(X_1, ..., X_n). Show that X_(1) → θ in probability.
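A quick simulation of Example 3 (illustrative, not part of the lecture): assuming the pdf is the shifted exponential f(x) = e^{-(x - θ)} for x > θ, each X_i can be drawn as θ plus a standard exponential, and the sample minimum should settle down near θ as n grows. The value θ = 2 below is an arbitrary choice.

```python
import random

# Illustration of Example 3 (assumed pdf: f(x) = e^{-(x - theta)}, x > theta):
# X_i = theta + Exp(1), so X_(1) = min(X_1, ..., X_n) should converge in
# probability to theta. theta = 2 is an arbitrary choice for the demo.
random.seed(330)
theta = 2.0

def sample_min(n):
    """Smallest of n iid draws from the shifted exponential."""
    return min(theta + random.expovariate(1.0) for _ in range(n))

for n in [10, 100, 1000, 10000]:
    print(n, sample_min(n))
```

Since X_(1) - θ is exponential with rate n, the gap above θ shrinks like 1/n, which is what the printed values show.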
2 An important inequality: Markov Inequality
Suppose X is a random variable. Then for any k > 0 and c > 0, we have
P(|X| ≥ c) ≤ E(|X|^k) / c^k.
Proof. See Page 16 of the supplementary notes for the proof.
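For reference, the bound comes from a one-line truncation argument (the supplementary notes give the full details):

```latex
E\left(|X|^k\right) \;\ge\; E\left(|X|^k \,\mathbf{1}\{|X| \ge c\}\right) \;\ge\; c^k \, P(|X| \ge c),
```

and dividing both sides by c^k gives the inequality.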
A most commonly used special case (k = 2):
P(|X| ≥ c) ≤ E(X^2) / c^2.
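Because Markov's inequality holds for every distribution, it holds exactly for the empirical distribution of any sample, so it can be checked numerically. A minimal sketch, using standard normal draws (an arbitrary choice not tied to the lecture):

```python
import random

# Numerical check of Markov's inequality on a sample: for the empirical
# distribution, P(|X| >= c) <= E(|X|^k) / c^k holds exactly, since Markov
# applies to every distribution (including the empirical one).
random.seed(330)
xs = [random.gauss(0, 1) for _ in range(100_000)]

def markov_gap(c, k):
    """Return (empirical tail probability, Markov bound E(|X|^k) / c^k)."""
    tail = sum(abs(x) >= c for x in xs) / len(xs)
    bound = sum(abs(x) ** k for x in xs) / len(xs) / c ** k
    return tail, bound

for c in [1.0, 2.0, 3.0]:
    tail, bound = markov_gap(c, 2)   # k = 2: the commonly used special case
    print(f"c = {c}: P(|X| >= c) = {tail:.4f} <= {bound:.4f}")
```

The bound is loose for small c and tightens as c grows, which is typical of Markov-type inequalities.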
Example 4. (Weak law of large numbers (WLLN)) Suppose X_1, ..., X_n are independent
random variables with the same mean μ and the same variance σ^2. Show that
X̄ = (1/n) Σ_{i=1}^n X_i → μ in probability.
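The WLLN can also be seen in a simulation; a minimal sketch using iid Uniform(0, 1) draws (an arbitrary choice, with μ = 0.5), where the sample mean concentrates around μ as n grows:

```python
import random

# WLLN illustration: sample means of iid Uniform(0, 1) draws (mu = 0.5)
# concentrate around mu as the sample size n grows.
random.seed(330)
mu = 0.5

def sample_mean(n):
    """Mean of n iid Uniform(0, 1) draws."""
    return sum(random.random() for _ in range(n)) / n

for n in [10, 100, 10_000]:
    print(n, sample_mean(n))
```

By Chebyshev's inequality (the k = 2 bound above applied to X̄ - μ), the deviation P(|X̄ - μ| ≥ ε) is at most σ²/(nε²), which goes to 0 as n → ∞.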
3 Example 5. (Example 1 revisited for X_(n).)
Recall that X_(n) has cdf
F(x) = 0 for x < 0,