Statistics 108
Spring 2018
Handout 4
Multiple linear regression
A response variable $Y$ is linearly related to $p-1$ different explanatory variables $X_1, \ldots, X_{p-1}$ (where $p \ge 2$). The regression model is given by
\[
Y_i = \beta_0 + \beta_1 X_{i1} + \cdots + \beta_{p-1} X_{i(p-1)} + \varepsilon_i, \qquad i = 1, \ldots, n, \tag{1}
\]
where the $\varepsilon_i$ have mean zero, variance $\sigma^2$, and are uncorrelated. Equation (1) can be expressed in matrix notation as
\[
Y = X\beta + \varepsilon, \qquad \text{where } Y = \begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{pmatrix}, \quad \varepsilon = \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix},
\]
\[
X = \begin{pmatrix}
1 & X_{11} & X_{12} & \cdots & X_{1(p-1)} \\
1 & X_{21} & X_{22} & \cdots & X_{2(p-1)} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & X_{n1} & X_{n2} & \cdots & X_{n(p-1)}
\end{pmatrix}, \qquad \text{and} \quad \beta = \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_{p-1} \end{pmatrix}.
\]
So $X$ is an $n \times p$ matrix.
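As a quick illustration of the matrix form above, the following NumPy sketch assembles the $n \times p$ design matrix $X$ by prepending a column of ones (for the intercept $\beta_0$) to an $n \times (p-1)$ array of explanatory variables. The data values and variable names here are invented for illustration.

```python
import numpy as np

# Made-up data: n = 4 observations on p - 1 = 2 explanatory variables.
predictors = np.array([[2.0, 1.0],
                       [3.0, 5.0],
                       [4.0, 2.0],
                       [6.0, 7.0]])

n = predictors.shape[0]

# Prepend the column of ones, giving the n x p design matrix (p = 3).
X = np.column_stack([np.ones(n), predictors])

print(X.shape)  # (4, 3)
```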
Estimation problem
Note that $\beta$ is estimated by the least squares procedure, that is, by minimizing the sum of squared errors
\[
\sum_{i=1}^{n} \left( Y_i - \beta_0 - \beta_1 X_{i1} - \cdots - \beta_{p-1} X_{i(p-1)} \right)^2 .
\]
If the $p \times p$ matrix $X^T X$ is invertible (as we shall assume), then the least squares estimate of $\beta$ is given by
\[
\hat{\beta} = (X^T X)^{-1} X^T Y.
\]
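The estimate $\hat{\beta} = (X^T X)^{-1} X^T Y$ can be computed directly, as in this NumPy sketch on simulated data (all values are invented for illustration). Solving the normal equations with `np.linalg.solve` avoids forming the explicit inverse, and the result is cross-checked against NumPy's built-in least squares routine.

```python
import numpy as np

# Simulate data from the model Y = X beta + epsilon.
rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
Y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Least squares estimate via the normal equations (X^T X) beta_hat = X^T Y;
# np.linalg.solve is preferred to computing (X^T X)^{-1} explicitly.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Cross-check against NumPy's least squares solver.
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(np.allclose(beta_hat, beta_lstsq))  # True
```

With small noise ($\sigma = 0.1$) the estimate lands close to the true coefficients, as the theory predicts.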
Expected value and variance of random vectors
For an $m \times 1$ vector $Z$ with coordinates $Z_1, \ldots, Z_m$, the expected value (or mean) and variance of $Z$ are defined as
\[
E(Z) = E\begin{pmatrix} Z_1 \\ Z_2 \\ \vdots \\ Z_m \end{pmatrix} = \begin{pmatrix} E(Z_1) \\ E(Z_2) \\ \vdots \\ E(Z_m) \end{pmatrix}
\quad \text{and} \quad
\operatorname{Var}(Z) = \begin{pmatrix}
\operatorname{Var}(Z_1) & \operatorname{Cov}(Z_1, Z_2) & \cdots & \operatorname{Cov}(Z_1, Z_m) \\
\operatorname{Cov}(Z_2, Z_1) & \operatorname{Var}(Z_2) & \cdots & \operatorname{Cov}(Z_2, Z_m) \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{Cov}(Z_m, Z_1) & \operatorname{Cov}(Z_m, Z_2) & \cdots & \operatorname{Var}(Z_m)
\end{pmatrix}.
\]
Observe that $\operatorname{Var}(Z)$ is an $m \times m$ matrix. Also, since $\operatorname{Cov}(Z_i, Z_j) = \operatorname{Cov}(Z_j, Z_i)$ for all $1 \le i, j \le m$, $\operatorname{Var}(Z)$ is a symmetric matrix. Moreover, it can be checked, using the relationship
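The two properties just noted, that $\operatorname{Var}(Z)$ is $m \times m$ and symmetric, can be seen numerically in this NumPy sketch, which estimates the variance matrix of a random vector from simulated draws (the sample size and dimension are arbitrary choices for illustration).

```python
import numpy as np

# Draw 1000 samples of a random vector Z in R^3.
rng = np.random.default_rng(1)
m = 3
draws = rng.normal(size=(1000, m))

# Sample covariance matrix: rowvar=False treats columns as the
# coordinates Z_1, ..., Z_m, giving an m x m matrix.
V = np.cov(draws, rowvar=False)

print(V.shape)              # (3, 3)
print(np.allclose(V, V.T))  # True: Var(Z) is symmetric
```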