CGSC170 Chapter Notes - Chapter 8.2 pt 4: Hebbian Theory, Linear Separability, Activation Function
Document Summary
The goal is a learning rule that lets a network, starting from random weights and a random threshold, settle on a configuration of weights and thresholds that solves a given problem. Solving the problem means producing the correct output for every input; if an output is wrong, the fault must lie in the weights or the threshold. Learning in neural networks therefore means that the weights change in response to error, and learning succeeds when these changes in weights and threshold converge on a configuration that always produces the desired output for each input. The rule relies on the basic principle that a change in a weight is determined only by what happens locally, at the input and the output of the unit. It also requires feedback about the correct solution to the problem the network is working to solve: a supervisor who knows the correct answer signals the error. The units have a binary threshold activation function, so they output only 1 or 0, and the supervisor uses the error signal to train the network to perform the desired computation.
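A minimal sketch of this idea, assuming the classic single-unit perceptron learning rule (the chapter describes the general setup; the specific learning-rate value, the AND-gate training set, and the function names here are illustrative choices, not from the text):

```python
import random

def step(x):
    # binary threshold activation: the unit outputs only 1 or 0
    return 1 if x >= 0 else 0

def train_perceptron(samples, lr=0.1, epochs=100, seed=0):
    rng = random.Random(seed)
    n = len(samples[0][0])
    # start from random weights and a random threshold (bias)
    w = [rng.uniform(-1, 1) for _ in range(n)]
    b = rng.uniform(-1, 1)
    for _ in range(epochs):
        mistakes = 0
        for x, target in samples:
            out = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = target - out  # the supervisor supplies the correct output
            if err != 0:
                mistakes += 1
                # local update: each weight changes using only its own
                # input value and the unit's output error
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        if mistakes == 0:
            # converged: correct output for every input in the training set
            break
    return w, b

# AND is linearly separable, so the rule is guaranteed to converge on it
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([step(sum(wi * xi for wi, xi in zip(w, x)) + b) for x, _ in data])
# -> [0, 0, 0, 1]
```

The update is "local" in exactly the sense the notes describe: each weight change depends only on that weight's input and the error at the output, and a network can learn XOR-style (non-linearly-separable) problems only with architectures beyond this single unit.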