IND ENG 160 Lecture Notes - Lecture 10: Convex Optimization, Unimodality, Tangent Space


Document Summary

Numerical algorithms begin with a guess and update it, making it more accurate with each iteration. We start with x^(0), then update it by some rule. The complexity (the number of iterations) is roughly log(1/ε) for the gradient algorithm, instead of log(log(1/ε)). If the function is unimodal, there is no issue with picking an initial point: choose any point on the curve and you can converge to the optimal solution. However, consider the case where a saddle point is taken as the initial point. In this case, the gradient algorithm would never move. Additionally, the algorithm may get stuck at a saddle point because the zero gradient will satisfy the stopping condition. For all these algorithms, the best you can converge to is a local minimum or a saddle point. For almost all problems, doing better than that is too difficult to do efficiently; many problems cannot be solved due to this exponential time constraint.
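As a minimal sketch of the update rule described above (assuming a fixed step size alpha and a gradient-norm stopping condition, neither of which is spelled out in the notes), the following illustrates how a zero gradient at a saddle point triggers the stopping condition immediately:

```python
import numpy as np

def gradient_descent(grad, x0, alpha=0.1, tol=1e-8, max_iter=10_000):
    """Iterate x^(k+1) = x^(k) - alpha * grad(x^(k)) until the gradient is (near) zero."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        # Stopping condition: a (near-)zero gradient. A saddle point also
        # satisfies this, so the algorithm can stop there.
        if np.linalg.norm(g) < tol:
            break
        x = x - alpha * g
    return x

# Example (illustrative): f(x, y) = x^2 - y^2 has a saddle point at the origin.
saddle_grad = lambda v: np.array([2 * v[0], -2 * v[1]])

# Starting exactly at the saddle point: the gradient is zero, so the iterate
# never moves and the algorithm terminates at once.
print(gradient_descent(saddle_grad, [0.0, 0.0]))  # -> [0. 0.]
```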
