CS486 Lecture Notes - Lecture 12: Supervised Learning, Overfitting, Unsupervised Learning


Document Summary

Learning (11/12/18): we want agents to learn from experience to improve their performance over time. In supervised learning, the problem is: from a collection of input-output pairs, learn a function that predicts the output for new inputs. Overfitting means we do better on the training data but worse on the test data.

History of deep learning (11/21/18): a perceptron takes binary inputs (either data or another perceptron's output) and models a neuron in the brain; the brain learns by changing the strength of its synapses. The perceptron's output depends only on its inputs: 1 if the weighted sum is above a threshold, 0 otherwise. Simple algorithm to learn a perceptron: start with random weights; for a training example, compute the output of the perceptron. If the output does not match the correct output: if the correct output was 0 but the actual output was 1, decrease the weights that have an input of 1; if the correct output was 1 but the actual output was 0, increase the weights that have an input of 1. A sketch of this update rule follows below.
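A minimal sketch of the perceptron update rule described above, assuming binary inputs, a fixed threshold of 0.5, and a learning rate of 0.1; the threshold, the learning rate, and the AND-function training set are illustrative assumptions, not values taken from the notes.

    import random

    def perceptron_output(weights, threshold, inputs):
        # Output is 1 if the weighted sum of the inputs is above the threshold, 0 otherwise.
        weighted_sum = sum(w * x for w, x in zip(weights, inputs))
        return 1 if weighted_sum > threshold else 0

    def train_perceptron(examples, n_inputs, threshold=0.5, learning_rate=0.1, epochs=50):
        # Start with random weights.
        weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        for _ in range(epochs):
            for inputs, correct in examples:
                actual = perceptron_output(weights, threshold, inputs)
                if actual == correct:
                    continue  # output already matches the target; no update needed
                for i, x in enumerate(inputs):
                    if x != 1:
                        continue  # only adjust weights whose input is 1
                    if correct == 0 and actual == 1:
                        weights[i] -= learning_rate  # output too high: decrease these weights
                    elif correct == 1 and actual == 0:
                        weights[i] += learning_rate  # output too low: increase these weights
        return weights

    # Usage example (assumption): learn the binary AND function.
    if __name__ == "__main__":
        and_examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
        print(train_perceptron(and_examples, n_inputs=2))

The fixed threshold stands in for a bias term here; the standard perceptron learning rule also updates a bias weight, which is omitted to stay close to the notes' description.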
