
STAT 464 Lecture Notes - Lecture 15: Orthogonality Principle


Department: Statistics
Course Code: STAT 464
Professor: David Riegert
Lecture: 15

STAT 464/864 - Lecture Notes
Thursday, January 29th, 2019
Consider the weakly-stationary process $X = (X_n)^T_{n \in T_D}$ with $T_D$ finite. For the Durbin-Levinson algorithm, the starting point was to impose a condition on the coefficient solutions of the Yule-Walker equations, where $a_n$ depends on $a_{n-1}$ through the iteration equation we saw in Step iii)b). This was a rather ad hoc approach, but it gave us a good estimator whose mean-squared error converges to $\sigma^2$ when $X = X^{(AR_p)}$ and the innovations correspond to $X^{(WN)}$. The innovations algorithm is a more intuitive way to obtain a recursive algorithm for solving for $\hat{X}^{(lin)}_n$ because, essentially, the equations in the algorithm are obtained by applying the orthogonality principle for the best linear predictor a number of times.
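For comparison, the Durbin-Levinson recursion referred to above can be sketched as follows. This is the standard textbook form rather than the exact notation of Step iii)b), and the function name and the AR(1) autocovariances in the illustration are my own choices:

```python
import numpy as np

def durbin_levinson(gamma, N):
    """Durbin-Levinson recursion (standard textbook form): returns the
    AR coefficients phi[n, 1..n] of the order-n best linear predictor
    and the one-step prediction MSEs v[n], from autocovariances gamma[h]."""
    phi = np.zeros((N + 1, N + 1))
    v = np.zeros(N + 1)
    v[0] = gamma[0]
    for n in range(1, N + 1):
        acc = sum(phi[n - 1, j] * gamma[n - j] for j in range(1, n))
        phi[n, n] = (gamma[n] - acc) / v[n - 1]  # partial autocorrelation at lag n
        for j in range(1, n):
            phi[n, j] = phi[n - 1, j] - phi[n, n] * phi[n - 1, n - j]
        v[n] = v[n - 1] * (1 - phi[n, n] ** 2)  # MSE never increases
    return phi, v

# Illustration of the convergence claim: for an AR(1) with phi = 0.6 and
# sigma^2 = 1, gamma(h) = 0.6**h / (1 - 0.36), and the MSE v[n] reaches
# sigma^2 = 1 already at n = 1 (an AR(p) needs only p past values).
gamma = np.array([0.6 ** h / (1 - 0.36) for h in range(6)])
phi, v = durbin_levinson(gamma, 5)
```

With these autocovariances, `phi[1, 1]` recovers the AR coefficient 0.6 and all higher-lag partial autocorrelations vanish, which is exactly the sense in which the mean-squared error converges to $\sigma^2$ for an AR($p$) process.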
The best linear predictor of $X_n$ based on $(X_0, X_1, \cdots, X_{n-1})^T$ can be written as
$$\hat{X}^{(lin)}_n = \sum_{j=-\infty}^{\infty} \Psi^{(n)}_j \hat{X}^{(PE)}_{n-j}$$
for all $n \in T_D$. The coefficients, $\{\Psi^{(n)}_j\}_{j \in \mathbb{Z}}$, are defined as follows.

$n = 0$:
$$\Psi^{(n)}_j = 0 \quad \text{for all } j \in \mathbb{Z}.$$

$n \neq 0$:
$$\Psi^{(n)}_j = \begin{cases} \theta^{(n)}_j, & j \in \{1, 2, \cdots, n\} \\ 0, & \text{otherwise.} \end{cases}$$

This choice of parameterization yields
$$\hat{X}^{(lin)}_n = \begin{cases} 0, & n = 0 \\ \displaystyle\sum_{j=1}^{n} \theta^{(n)}_j \hat{X}^{(PE)}_{n-j}, & n \neq 0, \end{cases}$$
although we must not forget that there are, in fact, infinitely many terms in the sum defining $\hat{X}^{(lin)}_n$.
The innovations algorithm:
1. Initialization: $v_0 = \gamma_{XX}(0)$.
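The preview cuts off after the initialization step. For context, the full recursion can be sketched in the standard Brockwell-Davis form (an assumption here, since the remaining steps are not shown in this preview):

```python
import numpy as np

def innovations(gamma, N):
    """Innovations algorithm (standard Brockwell-Davis form) for a
    mean-zero weakly stationary series with autocovariances gamma[h]:
    returns the coefficients theta[n, j] and one-step MSEs v[n]."""
    theta = np.zeros((N + 1, N + 1))
    v = np.zeros(N + 1)
    v[0] = gamma[0]  # the initialization step shown above
    for n in range(1, N + 1):
        for k in range(n):
            s = sum(theta[k, k - j] * theta[n, n - j] * v[j] for j in range(k))
            theta[n, n - k] = (gamma[n - k] - s) / v[k]
        v[n] = gamma[0] - sum(theta[n, n - j] ** 2 * v[j] for j in range(n))
    return theta, v

# Illustration: for an MA(1) process X_t = W_t + 0.5 W_{t-1} with
# sigma^2 = 1, we have gamma(0) = 1.25, gamma(1) = 0.5, gamma(h) = 0
# otherwise; then v[n] -> sigma^2 = 1 and theta[n, 1] -> 0.5 as n grows.
gamma = np.zeros(21)
gamma[0], gamma[1] = 1.25, 0.5
theta, v = innovations(gamma, 20)
```

The MA(1) illustration makes the contrast with Durbin-Levinson concrete: the innovations coefficients truncate after one lag for a moving-average process, just as the AR coefficients truncate after $p$ lags for an AR($p$).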