
PSY260H1F Lecture 5 (Summer)


Department: Psychology
Course: PSY260H1
Professor: Daniela Bellicoso
Semester: Summer

PSY260H1F L5; May 30, 13
Generalization & Discrimination Learning: Ch. 6

- Generalization: transfer of past learning to novel events & problems
  o Application of past knowledge to new info
  o Core issue of generalization: need to find an appropriate balance btwn specificity & generality
- Discrimination: recognizing that 2 stimuli or situations are different, and knowing which to prefer
  o Depends on criteria, rules
- Experience (learning) determines our ability to generalize &/or discriminate btwn stimuli
  o Learning predisposes us to look at stimuli in dif ways

Bhvr'al Processes (vegetable example)
- Similar stimuli, same outcome: broccoli & cauliflower are both nasty (same cruciferous veggie grp)
- Similar stimuli, dif outcomes: broccoli is nasty, cauliflower is tasty (might dislike green veggies & like white veggies like potatoes)
- Dissimilar stimuli, same outcome: broccoli & red pepper are both nasty (maybe you just dislike all vegetables)
- Dissimilar stimuli, dif outcomes: broccoli is nasty, red pepper is tasty

Generalization Gradient
- Generalization gradient: graph showing how physical changes in stimuli (x-axis) correspond to changes in bhvr'al response (y-axis)
  o Key feature: rapid exponential decline on either side of the peak (the target stimulus response point)
  o Stimulus similarity determines how quickly the graph tapers off
  o Physical traits influence degree of perceived similarity
  o Easy to judge what the animal saw as similar or dissimilar
- If flat line at top: lots of generalization; responses to all stimuli, since an outcome is expected from each
- If flat line at bottom: lots of generalization; no responses to any stimuli, since no outcome is expected from any of them
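The shape of such a gradient can be illustrated with a few lines of Python. This is a minimal sketch, not from the lecture: it assumes a single stimulus dimension (light wavelength in nm), an arbitrary trained "yellow" value of 580 nm, and an arbitrary scale parameter controlling how quickly responding tapers off.

```python
import math

def generalization_gradient(stimulus, target, scale=25.0):
    """Response strength as a function of physical distance from the trained
    (target) stimulus: it peaks at the target and declines exponentially on
    either side. All parameter values here are illustrative assumptions."""
    return math.exp(-abs(stimulus - target) / scale)

# Test wavelengths (nm) around a trained "yellow" light at ~580 nm (illustrative).
for wavelength in range(500, 661, 20):
    print(f"{wavelength} nm -> response {generalization_gradient(wavelength, 580.0):.2f}")
```

A very large scale flattens the curve toward the top (responding to everything, as in the "flat line at top" case); a response stuck near zero everywhere corresponds to the "flat line at bottom" case.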
Generalization as a Search for Similar Conseqs
- Generalized responding suggests that the responder (person, animal, etc.) is a good estimator of future event probability
- Key issue for generalization: identifying an inclusive set or range of stimuli w/ the same consequences as the training or target stimulus
  o The consequential region: how far we generalize from the target (& still get the same outcome)
  o Previous ex.: yellow-green & yellow-orange
  o Not over-including, not over-discriminating: balancing generality & specificity
- Generalization gradients represent an organism's attempt to predict the likelihood of a connection existing btwn a target stimulus & similar stimuli, based on experience

Cars for a bachelor & his decisions (all from General Motors Chevrolet)
- Similar stimuli, same outcome: Malibu & Impala (both 4-door Chevrolet sedans) are both bad, since neither is a 2-door
- Similar stimuli, dif outcomes: Malibu & Impala differ (mid-size vs full-size), so they lead to dif decisions
- Dissimilar stimuli, same outcome: Malibu & Corvette (utility vehicle vs sportscar, but both GM cars); he trusts GM, so thinks both cars are equally good
- Dissimilar stimuli, dif outcomes: Malibu & Corvette (more functional vs more sporty); he wants the Corvette since he doesn't need fn'ality

When Similar Stimuli Predict Similar Conseqs
- Guttman & Kalish (1956)
  o Phase 1: trained pigeons to peck at a yellow light for food
  o Phase 2: tested pigeons on dif trials showing single coloured lights & measured the pecking response (up to the bird which colour it pecked at)
  o Result: most pecking occurred for the trained (yellow) colour; 2nd-most pecking to similar colours
  o Pigeons generalized because they saw similarity btwn colours

The Challenge of Incorporating Similarity into Learning Models
- Rescorla-Wagner model: used to show simple associations in classical conditioning paradigms
- The Rescorla-Wagner model predicts a discrete-component representation model for representing stimulus-outcome associations
  o Discrete-Component Representation: representation where each indiv stimulus (or stimulus feature) corresponds to 1 element (node) of the model
  o The model contains 1 output node for the reinforcement response; each stimulus corresponds to 1 input node
  o Good for specific targets that need a high level of discrimination to pick out from a crowd
  o No generalization: doesn't account for / learn about similarities
- The Rescorla-Wagner rule indicates that the weights from the input nodes are modifiable by learning
  o Good for showing associations btwn highly distinguishable / dissimilar stimuli (looking for details)
  o Doesn't generalize well; not good for showing associations btwn different but closely related stimuli
- Discrete-component representations only work for understanding learning paradigms where stimuli have little shared similarity
- Considering the pigeon paradigm: training various light-pecking-outcome associations produces dif input node values
  o To get into the model, input nodes must be coded; dif input patterns produce dif kinds of activity in the model

After Training: R-W says...
- Thru experience, the weight for yellow becomes strong: strong association btwn yellow & food; all other weights stay weak by default
  o The other inputs can still be perceived, but produce a "No Response" type of bhvr
- Yellow node to output node: weight 1.0, but this association is not active when the yellow stimulus light is not present
- Yellow-orange node is active, but its weight is 0.0, so no activity occurs in the output node
- According to the R-W model: 100% responding to the yellow light, 0% responding both to similar yellow-based stimuli & to completely dissimilar stimuli
  o The R-W generalization gradient doesn't have the usual exponential curve
  o The original results of the pecking study show this is actually not the case; the gradient from this overly simplified model does not reflect Guttman & Kalish's actual findings
  o Suggests simple models based on the R-W model have limited scope
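To make the "After Training" prediction concrete, here is a small sketch (not from the lecture) of the Rescorla-Wagner update with a discrete-component (one-hot) stimulus code. The stimulus set, learning rate, and number of trials are illustrative assumptions; the point is only that training on yellow leaves every other input node's weight at zero, so the model predicts no generalization at all.

```python
import numpy as np

# Discrete-component coding: one input node per stimulus (illustrative stimulus set).
stimuli = ["green", "yellow-green", "yellow", "yellow-orange", "orange"]
index = {name: i for i, name in enumerate(stimuli)}

def one_hot(name):
    x = np.zeros(len(stimuli))
    x[index[name]] = 1.0
    return x

w = np.zeros(len(stimuli))    # modifiable weights: input nodes -> output (response) node
alpha, lam = 0.2, 1.0         # illustrative learning rate & max associative strength

# Training: the yellow light is always followed by food.
for _ in range(30):
    x = one_hot("yellow")
    prediction = w @ x                    # response the model currently makes to this input
    w += alpha * (lam - prediction) * x   # Rescorla-Wagner / delta-rule weight update

# Test: only the yellow node carries any weight, so similar colours get no response at all.
for name in stimuli:
    print(f"{name:>13}: {w @ one_hot(name):.2f}")
```

Running this prints a response near 1.0 for yellow and 0.00 for every other colour, i.e., the flat, non-exponential gradient described above rather than Guttman & Kalish's graded one.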
The Limitations of Discrete-Component Representations of Stimuli
- Discrete-component representations are best for predicting response rates to very different stimuli
  o They are not effective or accurate models for describing highly related sets of stimuli
- Keep these important points in mind:
  o Representations of stimuli are chosen to suit a particular need
    - Ex. Call a person's house & ask for them by name, but at school you're a number (since you're within a large group of ppl)
  o Representations are context-specific; they might vary in their appropriateness depending on the situation or context
    - Ex. Students call a teacher "Mrs. Smith", friends call her "Slugger", her husband calls her "Hunny"
  o Dif representations in dif contexts yield dif patterns of similarity
    - Ex. 2 employees, Katherine & Catherine, w/ many similarities (similar hobbies & employment #s): better to have a discrete-component model that says they are different ppl even tho they are similar
    - So it is sometimes good to have a high level of discrimination ability

Shared Elements & Distributed Representations
- Law of Effect: the conseqs of an action affect whether you repeat that action
  o Thorndike proposed that stimulus generalization is due to the shared elements of stimuli
  o Ppl might still produce the same response to dif stimuli if they think the same outcome will occur
- Distributed Representations: representation where info is coded as a pattern of activation distributed across many dif nodes
  o What is learned from 1 stimulus will likely be generalized to other stimuli activating the same nodes
  o Suggests there is overlap in how the nodes are activated; the organism will respond to stimuli similar to the target stimulus
- The model requires a representational transformation of a stimulus btwn the input nodes & the internal representation nodes (0 = not activated, 1 = activated)
  o Yellow light representation at the input node level: 0 0 1 0 0 (at this level: discrete-component representation)
  o Yellow light representation at the internal representation node level: 0 0 1 1 1 0 0 (at this level: distributed representation; things similar to yellow are also activated)
- The model has 3 node layers & 2 weight layers
  o Each stimulus activates 1 input node, which connects to the internal representation layer via fixed, nonmodifiable weights
  o A single input node can activate up to 3 internal representation nodes
  o The internal representation nodes are connected to the single output node via modifiable weights
  o Now similar things can signal that we should respond to get the same outcome
- If a particular stimulus is always followed by a reward, the network learns to generate an appropriate response
  o Once processed, overlapping nodes can be activated, strengthening connections in the network
  o We learn that this stimulus leads to the strongest outcome (is most likely to produce a food response)
  o Similar stimuli produce possibly less responding, but still a response
- The layout of these nodes represents a topographic representation
  o Topographic Representation: stimulus representation in which nodes that respond to physically similar stimuli are placed in close proximity to each other in the model; physical info is coded
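Below is a sketch of the 3-layer network just described, under the same illustrative assumptions as the earlier Rescorla-Wagner snippet: 5 input nodes, 7 internal representation nodes, each input node activating 3 neighbouring internal nodes (so yellow's 0 0 1 0 0 becomes 0 0 1 1 1 0 0), fixed input-to-internal weights, and modifiable internal-to-output weights trained with the same delta rule. The learning rate, trial count, and stimulus names are assumptions for illustration.

```python
import numpy as np

stimuli = ["green", "yellow-green", "yellow", "yellow-orange", "orange"]
n_in, n_internal = 5, 7

# Fixed, nonmodifiable weights: input node i activates internal nodes i, i+1, i+2.
# Physically similar colours therefore overlap in the internal (topographic) layer.
F = np.zeros((n_in, n_internal))
for i in range(n_in):
    F[i, i:i + 3] = 1.0

def internal(stimulus_index):
    x = np.zeros(n_in)
    x[stimulus_index] = 1.0          # e.g. yellow -> 0 0 1 0 0 at the input layer
    return x @ F                     # e.g. yellow -> 0 0 1 1 1 0 0 internally

w = np.zeros(n_internal)             # modifiable weights: internal nodes -> output node
alpha, lam = 0.2, 1.0                # illustrative learning rate & max associative strength

# Training: yellow (index 2) is always followed by food.
for _ in range(30):
    h = internal(2)
    w += alpha * (lam - w @ h) * h   # delta-rule update on the modifiable weight layer

# Test: colours sharing internal nodes with yellow get a partial, graded response.
for i, name in enumerate(stimuli):
    print(f"{name:>13}: {w @ internal(i):.2f}")
```

Because yellow-green & yellow-orange share 2 of yellow's 3 internal nodes, they now produce roughly two-thirds of the trained response, giving the graded (rather than all-or-none) generalization that the discrete-component model misses.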