PSY370: Thinking and Reasoning
LEC 10 Creativity & How we Learn
Nov 19, 2009
How do neural networks learn?
- Processing is parallel.
- The weights of the connections are altered as the network learns.
- This is done through back-propagation of error.
- Target value - performance value = error.
- A learning algorithm uses the statistical contribution of each connection to assign blame for the error.
- The weight of each connection is then altered slightly, to the degree that it carries blame; you do not turn a connection all the way off, because the blame calculation is very probabilistic.
- This is back-propagation of error, and it is repeated many times until the network can do the task well (a minimal sketch follows this list).
- But this theory requires an independent ability to learn, because it presupposes the thing it is trying to explain.
- It presumes that we already know the target value.
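A minimal sketch of the back-propagation idea described above, for a single layer of connections: compute the error as target minus performance, assign each connection blame in proportion to its statistical contribution, and nudge the weights slightly, many times over. The network size, data, and learning rate here are illustrative assumptions, not from the lecture.

```python
# Minimal back-propagation-of-error sketch (single layer of weights).
# Sizes, data, and learning rate are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.random((8, 3))         # 8 training patterns, 3 input units
targets = rng.random((8, 1))        # supervised target values
weights = rng.normal(size=(3, 1))   # connection weights to be learned

for step in range(1000):            # repeat many times until the task is done well
    performance = inputs @ weights  # parallel processing across all connections
    error = targets - performance   # target value - performance value = error
    # each connection's statistical contribution to the error ("blame")
    blame = inputs.T @ error / len(inputs)
    # alter each weight only slightly, in proportion to its blame
    weights += 0.1 * blame

print("mean squared error:", float(np.mean((targets - inputs @ weights) ** 2)))
```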
How do we get unsupervised learning?
Geoffrey Hinton: Internalization
The terms used below are not from Hinton but from Vervaeke.
- The network is broken down into 2 parts; letters label the parts (A, B) and numbers label the stages.
- Stage 1 [Wake]: B samples the WORLD, but the world provides only weak reinforcement, because B does not know how big its sample is relative to the population.
- Stage 2(a) [Sleep]: B's sample acts as A's population. B provides the back-propagation for A, because B holds the target value of what the world is in A's eyes.
- Stage 2(b): A is now really good at solving the problem, and it teaches B how to model.
- Go back to stage 1: the loop recurs indefinitely, until B has a good model of the world (a toy sketch of this loop follows the list).
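The following is a loose toy sketch of the A/B internalization loop above, not Hinton's actual wake-sleep algorithm: the world function, the polynomial-fit stand-ins for A and B, the mixing step, and all sample sizes are assumptions made only to show the cycle of B sampling the world, B's sample serving as A's population, and A then teaching B.

```python
# Toy illustration of the A/B loop in the notes (assumed details throughout).
import numpy as np

rng = np.random.default_rng(1)

def world(x):
    # the real environment, which B can only sample, never see in full
    return 2.0 * x + 1.0 + rng.normal(scale=0.3, size=x.shape)

a_coef = np.zeros(2)   # part A: the inner model
b_coef = np.zeros(2)   # part B: the world-facing model

for cycle in range(20):                       # the loop recurs indefinitely
    # stage 1 [Wake]: B draws a small, weakly reinforcing sample of the world
    x = rng.uniform(-1, 1, size=15)
    y = world(x)

    # stage 2(a) [Sleep]: B's sample acts as A's population and supplies
    # the target values A is trained toward
    a_coef = np.polyfit(x, y, 1)

    # stage 2(b): A, now good at the problem, teaches B how to model
    x_dream = rng.uniform(-1, 1, size=200)
    y_dream = np.polyval(a_coef, x_dream)
    b_coef = 0.5 * b_coef + 0.5 * np.polyfit(x_dream, y_dream, 1)

print("B's model of the world (slope, intercept):", b_coef)
```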
- But the criticism might be that the machine will never truly achieve a real representation of the world, only a good model of it.
- This is a bad criticism, because humans do not have that either (we are not God, and it is unfair to ask the machine to be one).
Ito et al.
- The cerebellum is structurally dense.