STAT 464/864 - Lecture Notes
Lecture 15, David Riegert, Queen's University

Thursday, January 29th, 2019

Consider the weakly-stationary process $X = (X_n)_{n \in T_D}$ with $T_D$ finite. For the Durbin-Levinson algorithm, the starting point was to impose a condition on the coefficient solutions of the Yule-Walker equations, where $a_n$ depends on $a_{n-1}$ through the iteration equation we saw in Step iii)b). This was a rather ad hoc approach, but it gave us a good estimator whose mean-squared error converges to $\sigma^2$ when $X = X(\mathrm{AR}_p)$ and the innovations correspond to $X(\mathrm{WN})$. The innovations algorithm is a more intuitive way to obtain a recursive algorithm for computing $\hat{X}^{(lin)}_n$ because, essentially, the equations in the algorithm are obtained by applying the orthogonality principle for the best linear predictor a number of times.
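For reference, the Durbin-Levinson recursion mentioned above can be sketched as follows. This is the standard textbook form of the recursion; the function name, array layout, and test values are illustrative choices of mine, and the indexing may differ slightly from the lecture's Step iii)b):

```python
import numpy as np

def durbin_levinson(gamma):
    """Recursively solve the Yule-Walker equations for the coefficients of
    the best linear predictor of a zero-mean stationary process.

    gamma : autocovariances gamma(0), ..., gamma(n), with n >= 1.
    Returns (phi, v): phi[k-1, :k] holds the order-k predictor
    coefficients and v[k] the corresponding mean-squared prediction error.
    """
    n = len(gamma) - 1
    phi = np.zeros((n, n))
    v = np.zeros(n + 1)
    v[0] = gamma[0]
    phi[0, 0] = gamma[1] / gamma[0]
    v[1] = v[0] * (1 - phi[0, 0] ** 2)
    for k in range(2, n + 1):
        # new partial autocorrelation phi_{kk}, built from the previous iterate:
        # this is the step where a_n depends on a_{n-1}
        a = (gamma[k] - phi[k - 2, :k - 1] @ gamma[k - 1:0:-1]) / v[k - 1]
        phi[k - 1, k - 1] = a
        # update the remaining coefficients using the reversed previous row
        phi[k - 1, :k - 1] = phi[k - 2, :k - 1] - a * phi[k - 2, :k - 1][::-1]
        v[k] = v[k - 1] * (1 - a ** 2)
    return phi, v
```

For an AR(1) process with coefficient $0.5$ and unit white-noise variance, $\gamma(h) = (4/3)\, 0.5^{|h|}$, the recursion recovers the AR coefficient and the partial autocorrelation at lag 2 vanishes, as expected.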

The best linear predictor of $X_n$ based on $(X_0, X_1, \dots, X_{n-1})^T$ can be written as
$$\hat{X}^{(lin)}_n = \sum_{j=-\infty}^{\infty} \Psi^{(n)}_j \hat{X}^{(PE)}_{n-j}$$
for all $n \in T_D$. The coefficients, $\{\Psi^{(n)}_j\}_{j \in \mathbb{Z}}$, are defined as follows.

• $n = 0$: $\Psi^{(n)}_j = 0$ for all $j \in \mathbb{Z}$.
• $n \neq 0$:
$$\Psi^{(n)}_j = \begin{cases} \theta^{(n)}_j, & j \in \{0, 1, \dots, n-1\} \\ 0, & \text{otherwise.} \end{cases}$$

This choice of parameterization yields
$$\hat{X}^{(lin)}_n = \begin{cases} 0, & n = 0 \\ \displaystyle\sum_{j=1}^{n-1} \theta^{(n)}_j \hat{X}^{(PE)}_{n-j}, & n \neq 0, \end{cases}$$
although we must not forget that there are, in fact, infinitely many terms in the sum defining $\hat{X}^{(lin)}_n$.

The innovations algorithm:

1. Initialization: $v_0 = \gamma_{XX}(0)$.
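Only the initialization step appears above. As a hedged sketch of where the recursion goes, the standard Brockwell-Davis form of the innovations algorithm is given below; it uses double-index coefficients $\theta_{m,j}$ rather than the notes' $\theta^{(n)}_j$, and the function name and test values are illustrative assumptions of mine:

```python
import numpy as np

def innovations(gamma, n):
    """Innovations recursion (standard Brockwell-Davis form) for a
    zero-mean stationary process with autocovariances gamma(0..n).

    Returns (theta, v): theta[m-1, j-1] = theta_{m,j}, and v[m] is the
    one-step mean-squared prediction error after m observations.
    """
    theta = np.zeros((n, n))
    v = np.zeros(n + 1)
    v[0] = gamma[0]  # step 1, initialization: v_0 = gamma_XX(0)
    for m in range(1, n + 1):
        for k in range(m):
            # theta_{m, m-k} from previously computed rows and errors
            s = sum(theta[k - 1, k - 1 - j] * theta[m - 1, m - 1 - j] * v[j]
                    for j in range(k))
            theta[m - 1, m - 1 - k] = (gamma[m - k] - s) / v[k]
        # updated one-step prediction error
        v[m] = gamma[0] - sum(theta[m - 1, m - 1 - j] ** 2 * v[j]
                              for j in range(m))
    return theta, v
```

For an MA(1) process $X_t = Z_t + 0.5 Z_{t-1}$ with unit noise variance ($\gamma(0) = 1.25$, $\gamma(1) = 0.5$, $\gamma(h) = 0$ otherwise), the errors $v_m$ decrease toward the noise variance $\sigma^2 = 1$ as $m$ grows, consistent with the convergence discussed at the start of the lecture.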

