CMDA 4654 Study Guide - Final Guide: Design Matrix, Identity Matrix, Maximum Likelihood Estimation


Document Summary

The multiple linear regression model is

$$y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_d x_{id} + \varepsilon_i.$$

The above can also be written as a vectorized multivariate normal expression:

$$y \sim N_n(X\beta, \sigma^2 I_n),$$

where $y = (y_1, \ldots, y_n)^T$ is an $n$-vector, $\beta = (\beta_0, \beta_1, \ldots, \beta_d)^T$ is a $p = (d+1)$-vector, and $X$ is the $n \times p$ design matrix. Its column of 1s is for the intercept.

1.1 Assumptions: the conditional mean of $y$ is linear in the $x_j$ variables, and the additive errors (deviations from the line) are (a) normally distributed, (b) independent of each other, and (c) identically distributed (i.e., constant variance). Holding all other variables constant, $\beta_j$ is the average change in $y$ per unit change in $x_j$.

Least squares is unchanged: fitted values $\hat{y}_i = b_0 + b_1 x_{i1} + \cdots + b_d x_{id}$, residuals $e_i = y_i - \hat{y}_i$; minimize the residual sum of squares $\sum_i e_i^2$. Taking the gradient and solving for the $b$-vector gives the normal equations $(X^T X)b = X^T y$, so $b = (X^T X)^{-1} X^T y$. You will obtain the same result from maximizing the log likelihood.
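The least-squares fit described above can be sketched in Python. This is a minimal illustration on made-up data (the sample size, predictors, and coefficient values below are arbitrary assumptions, not from the notes); numpy's built-in least-squares routine is used as a cross-check on the normal-equations solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: n observations, d predictors (values chosen arbitrarily)
n, d = 100, 2
x = rng.normal(size=(n, d))
beta_true = np.array([1.0, 2.0, -0.5])   # intercept plus d slopes
eps = rng.normal(scale=0.3, size=n)      # additive iid normal errors

# Design matrix X: a column of 1s for the intercept, then the predictors
X = np.column_stack([np.ones(n), x])
y = X @ beta_true + eps

# Setting the gradient of the RSS to zero gives the normal equations
#   (X^T X) b = X^T y,  solved here directly for b
b = np.linalg.solve(X.T @ X, X.T @ y)

# Fitted values, residuals, and the residual sum of squares
y_hat = X @ b
e = y - y_hat
rss = np.sum(e**2)

# Cross-check: numpy's least-squares routine recovers the same b
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(b, b_lstsq))  # True
```

Solving the normal equations with `np.linalg.solve` is shown for transparency; in practice `np.linalg.lstsq` (or a QR decomposition) is numerically safer when $X^T X$ is ill-conditioned.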