CGSC170 Chapter Notes - Chapter 8.2 pt 4: Hebbian Theory, Linear Separability, Activation Function

Document Summary

The goal is a learning rule that lets a network starting from random weights and a random threshold settle on a configuration of weights and thresholds that solves a given problem. Solving the problem means producing the correct output for every input; if an output is wrong, the fault lies in the weights or the threshold. Learning in neural networks means that the weights change in response to error, and learning succeeds when these changes converge on a configuration that always yields the desired output for each input. The rule relies on a basic principle of locality: each weight changes only as a function of what happens locally, that is, at its own input and output. It also requires feedback about the correct solution to the problem the network is working to solve. The units have a binary threshold activation function, so they output only 1 or 0. The supervisor knows the correct solution to the problem and uses it to train the network to compute that solution.
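The supervised procedure described above can be sketched as the classic perceptron learning rule: a single binary threshold unit whose weights and threshold are nudged toward the supervisor's known answer whenever its output is wrong. This is a minimal illustrative sketch, not the chapter's exact algorithm; the function names, learning rate, and training set are assumptions for the example.

```python
import random

def step(z, threshold):
    """Binary threshold activation: output 1 if the weighted sum
    reaches the threshold, otherwise 0."""
    return 1 if z >= threshold else 0

def train_perceptron(examples, n_inputs, lr=0.1, epochs=100, seed=0):
    """Start from random weights and a random threshold, then adjust
    them in response to error until every input gives the desired output.
    Each weight change is local: it depends only on that weight's own
    input and the error at the output."""
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(n_inputs)]
    threshold = rng.uniform(-1, 1)
    for _ in range(epochs):
        converged = True
        for inputs, target in examples:
            z = sum(w * x for w, x in zip(weights, inputs))
            output = step(z, threshold)
            error = target - output  # supervisor supplies the correct answer
            if error != 0:
                converged = False
                for i, x in enumerate(inputs):
                    weights[i] += lr * error * x  # local weight update
                threshold -= lr * error  # lower threshold to raise output
        if converged:  # correct output for every input: problem solved
            break
    return weights, threshold

# Logical AND is linearly separable, so the rule converges on a solution.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, t = train_perceptron(AND, 2)
```

Because a single threshold unit can only draw one linear boundary, this rule succeeds on linearly separable problems such as AND but cannot converge on a non-separable problem such as XOR, which is the limitation the chapter's discussion of linear separability addresses.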
