STAT444 Lecture Notes - Lecture 4: Maximum Likelihood Estimation, Robust Regression, Diagonal Matrix
Document Summary
Let $\rho$ be a real-valued loss function and $x_i$ the $i$-th row of the design matrix $X$. Define $S(\beta) = \sum_{i=1}^n \rho(r_i)$, where $r_i = y_i - x_i^T\beta$, and let $\psi(r) = \rho'(r)$ be the derivative function. Compare with maximum likelihood: $\ell_i = -\log f(y_i \mid x_i; \theta)$ is the $i$-th observation's contribution to the negative log-likelihood under the ordinary linear model. If $r \sim N(0,1)$, then $\ell(r) = r^2/2$ up to a constant. Thus minimizing $L(\beta) = \sum_i \ell(r_i)$ is equivalent to minimizing $S(\beta)$ with $\rho(r_i) = \ell(r_i)$. Setting the gradient of $S(\beta)$ to zero gives $\sum_i \psi(r_i)x_i = 0$; writing the weights as $w_i = \psi(r_i)/r_i$, this becomes $\sum_i w_i r_i x_i = 0$. Comparing to the weighted least squares normal equations, it is easy to see that the solution is WLS, i.e. $\hat\beta = (X^T W X)^{-1} X^T W y$, where $W = \mathrm{diag}(w_1, \dots, w_n)$ is a diagonal matrix. However, the weights depend on the residuals, which in turn depend on $\hat\beta$. Given an initial estimate $\hat\beta^{(0)}$, we can iteratively update the residuals and weights (iteratively reweighted least squares):
Step 1: compute residuals $r_i^{(j)} = y_i - x_i^T \hat\beta^{(j)}$.
Step 2: compute weights $w_i^{(j)} = \psi(r_i^{(j)})/r_i^{(j)}$ and form $W^{(j)}$.
Step 3: update $\hat\beta^{(j+1)} = (X^T W^{(j)} X)^{-1} X^T W^{(j)} y$.
Step 4: set $j = j + 1$ and go to Step 1 if the convergence condition is not satisfied.
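The iteration above can be sketched in NumPy. This is an illustrative implementation, not code from the lecture: the Huber $\psi$ with cutoff $c = 1.345$, the tolerance, and the iteration cap are all assumed choices, and the function names are made up.

```python
import numpy as np

def psi_huber(r, c=1.345):
    """Huber psi function: identity for |r| <= c, clipped at +/- c beyond."""
    return np.clip(r, -c, c)

def irls(X, y, psi=psi_huber, tol=1e-8, max_iter=100):
    """Iteratively reweighted least squares for an M-estimator with psi = rho'."""
    # Step 0: initial estimate via ordinary least squares.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(max_iter):
        r = y - X @ beta                     # Step 1: residuals r_i = y_i - x_i' beta
        safe = np.where(r == 0.0, 1.0, r)    # guard 0/0; gives weight psi(1)/1 = 1
                                             # at r = 0, the correct limit for Huber
        w = psi(safe) / safe                 # Step 2: weights w_i = psi(r_i) / r_i
        W = np.diag(w)                       # diagonal weight matrix
        beta_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # Step 3: WLS update
        if np.max(np.abs(beta_new - beta)) < tol:             # Step 4: converged?
            return beta_new
        beta = beta_new
    return beta
```

With the least-squares choice $\psi(r) = r$ all weights are 1 and the procedure reduces to OLS in one step; with the Huber $\psi$, a gross outlier receives weight $c/|r_i|$ and is heavily downweighted, so the fit tracks the bulk of the data.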