PSYC 325 Lecture Notes - Lecture 9: Electronic Component, Artificial Neural Network, Fokker E.Ii

Published on 1 Oct 2016
Generalization and Discrimination Learning
1. Generalization: transfer of past learning to new situations, problems, and stimuli
2. Perception of differences between similar stimuli will be reflected in behavior
                     Same outcome                     Different outcomes
Similar stimuli      Broccoli + cauliflower → nasty   Broccoli → nasty; cauliflower → yummy
Dissimilar stimuli   Broccoli + red peppers → nasty   Broccoli → nasty; red peppers → yummy
Similar stimuli, similar outcomes
Generalization gradient
1. A curve showing how changes in the physical properties of stimuli correspond to changes in responding
2. Responding changes in a graded fashion that depends on the degree of similarity between a
test stimulus and the original training stimulus
3. After training in which a single stimulus has been reinforced repeatedly,
generalization gradients around that trained stimulus show a peak, or point of maximum responding,
corresponding to the original stimulus on which the animal was trained
4. Responding drops off rapidly as the test stimuli become less similar to the trained stimulus
5. Shows the degree to which animals expect similar outcomes for stimuli that vary in some physical property
6. A measure of an animal's or person's perception of similarity
a. If 2 stimuli are perceived as highly similar, there will be significant generalization between them
7. Gradients decline rapidly on either side of the peak
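The peaked, rapidly declining shape described above can be sketched numerically. A Gaussian-shaped curve is one common idealization; the 580/600 nm values echo the pigeon example in the next section, and the width parameter is chosen purely so that responding at 600 nm comes out near 50% of the peak, as in that example.

```python
# Illustrative sketch of a generalization gradient: responding peaks at the
# trained stimulus and declines on either side. All numbers are made up for
# illustration; the notes do not specify a functional form.
import math

def generalization_gradient(test_wavelength, trained_wavelength=580.0,
                            peak_rate=100.0, width=24.0):
    """Response rate to a test stimulus, peaked at the trained stimulus."""
    distance = test_wavelength - trained_wavelength
    return peak_rate * math.exp(-(distance / width) ** 2)

for nm in (540, 560, 580, 600, 620):
    print(nm, round(generalization_gradient(nm), 1))
```

Responding is maximal at the trained 580 nm stimulus and falls off symmetrically with distance, reproducing the peak-plus-rapid-decline shape of the gradient.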
Consequential Region
1. The set of all stimuli that have the same consequence as the training stimulus
2. Reflects the best estimate of the probability that novel stimuli will have the same O as a
training stimulus
a. Pigeons are not confusing a yellow-orange 600 nm light with the original yellow 580 nm light
b. Responding at 50% of the original rate to the novel 600 nm light shows that the pigeon expects,
based on what it learned from pecking the yellow light (which always resulted in food),
that there is an even chance that pecking the yellow-orange light will yield the same food
3. The shape of the gradient suggests that animals consistently expect the chance that 2 stimuli will have the
same consequence to drop off sharply as the stimuli become more distinct
4. Gradient = an attempt to predict, based on past experience, the likelihood that the consequences of
one stimulus will also hold for other, similar stimuli
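The consequential-region idea can be sketched as a simple estimate: the probability that a novel stimulus shares the training stimulus's outcome drops off sharply with distance in the stimulus dimension. An exponential fall-off is one standard idealization of this sharp decline; the scale value here is illustrative, not from the notes.

```python
# Sketch of the consequential-region estimate: the probability that a novel
# stimulus has the same outcome as the training stimulus falls off sharply
# (here, exponentially) with distance. The scale parameter is illustrative.
import math

def p_same_consequence(novel_nm, trained_nm, scale=30.0):
    """Estimated probability that the novel stimulus shares the outcome."""
    return math.exp(-abs(novel_nm - trained_nm) / scale)

print(round(p_same_consequence(600, 580), 2))  # novel yellow-orange light
print(round(p_same_consequence(700, 580), 2))  # very distinct stimulus
```

Identical stimuli give probability 1, and the estimate shrinks toward 0 as the stimuli become more distinct, matching point 3 above.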
Discrete component vs Distributed representations
1. Discrete-component representation
a. Stimuli are categorical, with unique mental input-nodes
b. No response generalization
c. 2 layers of nodes and 1 layer of weights
c.i. Input nodes
c.ii. Modifiable weights
c.iii. Output nodes
d. Rescorla-Wagner model
d.i. One-layer network with links from various cues to the possible outcomes
d.ii. Each trial is modeled by activating the corresponding input node for each stimulus cue
present; each activation is multiplied by the associative weight of its link
d.iii. When activation from a given input node reaches an output node, it is added to the
incoming activations (multiplied by their weights) of all the other stimulus
cues activated on that trial
d.iv. Learning results when the associative weights of the links are modified at the
end of each trial→ to reduce the likelihood of a future mismatch between the
network’s prediction of an outcome (the activation in the output nodes) and the
actual outcome presented as feedback on that trial by the experimenter
d.v. People learn to minimize the difference between what actually happens and their
expectation of that outcome
e. Limitation
e.i. Useful for describing, understanding, and predicting how organisms learn about
highly dissimilar stimuli
e.ii. Does not work as well with stimuli that have some inherent similarity
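The trial-by-trial update in d.i–d.v can be sketched in a few lines of code. The cue names, learning rate, and outcome coding (1 = outcome occurred) are illustrative choices, not from the notes; the broccoli/cauliflower cues echo the table at the top of this lecture.

```python
# Minimal sketch of the Rescorla-Wagner model over a discrete-component
# representation: one input node per cue, one modifiable weight per cue.
# Learning rate and outcome coding are illustrative assumptions.

def rw_trial(weights, cues_present, outcome, lr=0.2):
    """One trial: prediction = sum of the weights of the active cues;
    each active cue's weight is nudged to reduce the prediction error."""
    prediction = sum(weights[c] for c in cues_present)
    error = outcome - prediction              # actual minus expected outcome
    for c in cues_present:
        weights[c] += lr * error              # modify the associative weights
    return prediction

weights = {"broccoli": 0.0, "cauliflower": 0.0}
for _ in range(50):
    rw_trial(weights, {"broccoli"}, outcome=1.0)   # broccoli -> nasty

print(round(weights["broccoli"], 3), weights["cauliflower"])
```

Because each cue has its own unique node, training broccoli leaves cauliflower's weight untouched: the discrete-component limitation in e.ii, with no generalization between similar stimuli.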
2. Distributed representation
a. Stimuli share elements, with overlapping mental input nodes
b. Linear response generalization
c. What is learned about 1 stimulus will transfer or generalize to other stimuli that activate
some of the same nodes
d. 2 layer network: 3 layers of nodes and 2 layers of weights
d.i. Input nodes
d.ii. Fixed weights: do not change during learning
d.iii. Internal representation nodes
d.iv. Modifiable weights
d.v. Output nodes
e. Guttman and Kalish operant pigeon paradigm: modeled with an additional input-node layer
e.i. Needs 5 input nodes, one for each of the 5 discrete colors that might be presented
e.ii. Each possible stimulus is represented by its own unique node in the model
e.iii. A single output node, with weights that are modifiable by learning
e.iv. Activity may be evoked in the output node; strong activity in this output node causes
the model to generate a response
f. Topographic representation
f.i. Nodes responding to physically similar stimuli are placed next to each other in the
network
f.ii. The degree of overlap between the representations of 2 stimuli reflects their physical
similarity
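A minimal sketch of such a topographic, distributed layer follows. The node count, overlap pattern, and learning rate are illustrative assumptions: each stimulus activates its own internal node plus its neighbors through fixed weights, and only the second layer of weights is modified by learning.

```python
# Sketch of a distributed, topographic representation: neighboring internal
# nodes overlap, so training one stimulus partially transfers to similar
# ones. Node count, overlap strength, and learning rate are illustrative.

N_NODES = 7

def distributed_input(center):
    """Fixed-weight first layer: the center node and its neighbors activate."""
    act = [0.0] * N_NODES
    for i in range(N_NODES):
        if abs(i - center) <= 1:                 # topographic overlap
            act[i] = 1.0 if i == center else 0.5
    return act

def train(weights, center, outcome, lr=0.1, trials=200):
    """Modifiable second layer: error-correcting updates to the output node."""
    for _ in range(trials):
        act = distributed_input(center)
        error = outcome - sum(w * a for w, a in zip(weights, act))
        for i in range(N_NODES):
            weights[i] += lr * error * act[i]

def respond(weights, center):
    """Output-node activation for the stimulus centered on this node."""
    return sum(w * a for w, a in zip(weights, distributed_input(center)))

weights = [0.0] * N_NODES
train(weights, center=3, outcome=1.0)            # reinforce the middle color

print(round(respond(weights, 3), 2))   # trained stimulus
print(round(respond(weights, 2), 2))   # similar stimulus: shared nodes
print(round(respond(weights, 0), 2))   # distinct stimulus: no shared nodes
```

A neighboring stimulus shares internal nodes with the trained one, so responding partially generalizes to it, while a distant stimulus sharing no nodes evokes no response: the overlap between representations reproduces the generalization gradient.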