Chapter 1: History and Approaches
Most of the information early on will not necessarily be tested material (there are a lot of bonus slides in
the “full slides” section).
Conscious experience is a reconstructive process. What we sense from the world is not always an
accurate account of what is in the real world. We cannot see certain wavelengths and cannot
always hear the frequencies around us.
Cognitive psychologists use a lot of models (we do not need to explain or memorize this model).
Psychology is unique in that it is an empirically based study of many sub-units. Science is not just facts; we
need more than a compilation of facts. We need a systematic approach to organizing and finding facts, which
is done through methodology.
There is a paradigm shift when the new addresses the limitations of the old. Understanding the mind
has been a long process ever since the mind became a topic of discussion.
Antecedent Philosophies and Traditions
Intuitions were used to explain thoughts. This is also known as folk psychology, which is based on
subjective ideas about how our minds work. The problem is that there were many different ideas and
no testing over time; we could not tell who was right and who was wrong.
Logic was used quite widely for a period. The problem here is that the underlying assumptions were
usually based on intuition and were usually wrong.
Reliance on (cultural) authority was predominant. This includes Galileo, whose findings people would not
accept because they were afraid the findings would be wrong according to their religion and
culture. Even Cambridge University would not have a psychophysics lab because they believed it would
insult religion by putting the human soul on a scale. If culture provides a framework that does not allow
evidence, it can impede the progress of science.
Empiricism was founded by Locke, Hume, and John Stuart Mill. They thought we could only understand the
mind by getting people to describe their experiences. They focused on the associations between
experiences, which they saw as how we learn. Therefore they saw people as blank slates. This position
relies on observation and believes in learning over the lifetime.
Nativism, on the other hand, held that people are born with innate characteristics that simply take
time to appear, regardless of life experiences. Nativists included Plato, Descartes,
and Kant. What you can psychologically do is already there.
Now the focus is on looking at particular behaviours and testing to see whether each is learned or innate.
Individual differences were the basis of another philosophy. Galton was a nativist and believed that
intelligence, morality, and personality are innate. He studied mental imagery in both the lab and natural
settings. He then measured the individual differences in these cognitive faculties. He found, for example,
that people born of smart parents were also smart, and concluded it was hereditary.
However, he did not consider cultural influences. The important thing he did was evaluate people with
tests and questionnaires in order to assess mental abilities, which was the start for those that followed.
Structuralism is the start of what we consider modern and scientific psychology. The name most
associated with this is Wundt. They focussed on the elemental components of the mind. This was a
novel thing; most people believed it to be a unitary structure. Their main method was introspection
which was problematic. It was not internal perception but experimental self-observation. They believed
that studying must be done in the lab under controlled conditions. What he had right was that the brain
can be compartmentalized according to its processes and that controlled settings are useful. What he
had wrong was the reliance on reflection on one’s own consciousness.
Functionalism wanted to address the limitations of structuralism. So instead of focusing on the small
parts, they thought about why our minds should work a certain way (function, not content). William
James is most associated with this paradigm. He thought the mission of psychology was to explain our experiences based on “Why does our mind work the way it does?” He used introspection in natural
settings; he believed that they needed to study the whole organism in real-life situations and that
psychology must get out of the laboratory.
Gestalt psychology also wanted to do the opposite of structuralism. They wanted to look at the big picture,
because by decomposing the mind, what we find does not necessarily add up. They looked at holistic
aspects of consciousness (what order is imposed on perception). Again, they used introspection. The
whole is different than the sum of the parts. The parts of a bike, looked at separately and decomposed, are not
necessarily a bike. So even in modern psychology the question still arises of how much we should
actually break things down into constituents.
Behaviourism was hugely influential to “the real world”. The founder was John Watson. They moved
toward a more objective and experimental branch of research instead of focussing on the introspection
that the previous paradigms did. They made “behaviour, not consciousness, the objective point of our
attack”. Skinner was a major influence here as well. He viewed learning as the relation between inputs
and outputs. They stayed out of the “mind” and didn’t discuss mental representations or consciousness
because they had no ways of studying them (this may have been different if they had access to brain
imaging techniques). Mostly they cleaned up the mess of introspection (even if introspection was
correct in its results the methods were flawed).
The problem with paradigms is that they are one-sided; we now know that there are things going on in
our mind and consciousness that the behaviourists did not account for. Cognitive psychology decided to
address this limitation.
The Cognitive Revolution
Eventually after a couple decades of behaviourism being the dominant perspective people discovered
things that it just couldn't account for. What was kept is the idea that objective science is a necessary
method for gathering data. The problems with behaviourism include:
1. Human factors engineering presented new problems. During WWII they needed to create
equipment that would work the way the human mind does so that the machine would be
compatible. We learned that humans are limited-capacity processors, and by understanding
cognition we can understand the best way to design a machine.
2. Behaviourism failed to adequately explain language. Skinner said that children learn language by
imitation and reinforcement but Chomsky questioned this operant conditioning way of learning
language. He said that children say sentences they have never heard before and they often use
incorrect grammar which no one says or reinforces. Therefore he believed that children are taking
what they heard and processing it into other things. He proposed the idea of a universal grammar.
3. Localization of function in the brain forced discussion of the mind. We learned that some functions
are based on cell assemblies and there was the demonstrated importance of early experience on
the development of the nervous system.
4. Development of computers and artificial intelligence gave a very strong metaphor of thinking about
the mind. By looking at how the input interacts with memory already in the mind, we got a new
perspective on how we behave (output).
Behaviourism is still so predominant because it still has so much practical evidence for controlling
behaviour; it just lacked the ability to explain the mind in its entirety.
Paradigms of Cognitive Psychology
Information processing is the current dominant perspective in cognitive psychology. There are a few
current models within this perspective.
Localized (symbolic): This is typically a serial process where a single node corresponds to something symbolic.
Connectionism (parallel distributed processing): This paradigm describes how our connections can
be modified as we learn. It says that knowledge is distributed, and the weights on connections
determine what will or will not be activated.
We can see that these models developed the way that computers developed; at first they were simple and
slow. So the analogy has changed over time.
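The connectionist idea can be given a minimal sketch in code. Everything below (the feature names, the weight values, the "dog"/"cat" output nodes) is invented purely for illustration, not a model from the course; it only shows how distributed weights, rather than a single symbolic node, determine which unit becomes most active:

```python
# Minimal connectionist sketch: activation spreads from input features to
# output nodes through learned connection weights. All names and numbers
# here are hypothetical; "knowledge" lives distributed across the weights.
inputs = {"has_fur": 1.0, "barks": 1.0, "meows": 0.0}

weights = {
    "dog": {"has_fur": 0.5, "barks": 0.9, "meows": -0.8},
    "cat": {"has_fur": 0.5, "barks": -0.8, "meows": 0.9},
}

def activation(node):
    # Weighted sum of the input activations feeding this node.
    return sum(weights[node][feature] * a for feature, a in inputs.items())

most_active = max(weights, key=activation)
print(most_active)  # dog
```

Changing the weights (i.e. "learning") would change which node wins for the same input, which is the sense in which connections modified by experience carry the knowledge.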
In the lab, the belief is that cognition will be understood best by uncovering the basic mechanisms or
processes underlying it. A related belief is that mechanisms are stable across situations and can only
be revealed under rigorously controlled conditions. We want to hold everything constant except for a
few things to see their effect on our learning.
Out of the lab, we know that cognitive activities are shaped by culture and situation. These are more
prone to natural observation in an everyday context. An example of this is a line-tracking experiment.
Convergent validity is the idea that from only one study we do not know much about a
phenomenon. It describes the fact that we want to see convergence between the findings of lab and
out-of-lab studies of cognition.
Chapter 2: The Brain
We do not have to know the details of the anatomy of the brain; instead, we want to appreciate the
ways in which brain knowledge can branch into the theory of cognition.
The Structure of the Brain
There are three planes we use when looking at the brain. The coronal plane splits front from back. The
horizontal plane separates top from bottom. The sagittal plane splits the sides from left to right; the mid
sagittal looks into the brain.
Major brain structures are not of much concern because we ar focussing on high-order complex
systems. However, the hindbrain has the vital biological functions such as breathing. As we move
forward to the front part of the brain we are more concerned with cognition. Medulla oblongata
(beneath the pons) transmits from body to brain and regulates life support. The pons is responsible for
neural relay from left to right. The cerebellum works to coordinate muscular activity, general motor
behaviour, and balance. The midbrain is responsible for neural relay from cerebellum to forebrain. The
corpus callosum connects the two hemispheres.
We are mostly interested in the cerebral cortex.
The occipital lobe is concerned with vision. It holds the visual cortex and handles visuospatial relations and colour.
The temporal lobe processes auditory information (primary is the reception of info and secondary is the
processing of the info), higher-order visual functions, language, memory encoding and retention, and
has some aspects of emotion (the hippocampus).
The parietal lobe is known for the somatosensory cortex (pain, pressure, and touch), homunculus,
integration of sensory information, language, and spatial sense such as navigation.
The frontal lobe is concerned with cognition (prefrontal), higher order thinking, attention, executive
functions (planning and decision making), and the motor cortex.
Different activities are always occurring and become convergent to make a unified whole; nothing is in
isolation even though we are presenting the lobes separately.
Localization of Function
The beginning of this was linked to the ideas of faculty psychology and phrenology. Phrenology has since
been disproven; it claimed that a difference in the size of a certain area meant a difference in
functioning. This is not true, but localization obviously has merit. There are sub-components that
serve certain tasks: different mental abilities were seen as independent and autonomous functions carried out
by different parts of the brain. The phrenologists were correct in the fundamental idea but out to lunch on these
two points:
1. Brain location size = power. They thought there was a direct linkage between the size of an area
and the abilities related to that area.
2. Independence of functions was another faulty assumption. There are in fact interactions between brain areas.
We can now return to the homunculus idea. There is the goofy-looking little man that demonstrates that
the areas enlarged on the body are those that have more sensory or motor real estate in the brain. For
example, there is more area dedicated to the hands, so they are enlarged in the picture.
Cognitive neuropsychology is focused on the mind, meaning the general notion that we can map
function, whereas a neuropsychologist looks at the physical mapping of the brain. We can do this by
looking at damage. Cognitive neuropsychologists study the deficits in brain-damaged individuals (stroke
patients or accident victims).
Lesions occur in two ways (natural or surgical), and we can study them using two approaches:
1. What function is supported by a given brain region? Look at damage and then behaviour.
2. What brain region supports a given function? Look at behaviour and find location.
This means we can make different inferences based on what kind of patients we are studying.
Associations: two impairments are both present in the same patient; this means we can make
inferences about the location of and relationship between the two impairments.
Dissociation: one impairment is present and the other is absent in one patient; the inference we can
make here is that the neural substrate is separate for each task (for example, a patient has a problem
comprehending but not producing language). If the two tasks shared a substrate, both would be impaired.
Double dissociations: Typically two patients, both with a dissociation but each in the opposite
direction. Then we can see which functions are independent. P1 has damage in area X, so impairment
in A but not B. P2 has damage in area Y, so impairment in B but not A. The inference we can make here
is similar to the dissociation, but this provides stronger evidence that the neural substrates are separate.
Like Gage: by knowing that he had damage to the frontal regions, we can learn what areas are
responsible for personality by observing his changes, and infer that the cognitive processes that were
spared (like speech production) did not depend on the damaged regions.
Damage to Broca’s area (frontal) impairs speech production (fluidity and intonation).
Damage to Wernicke’s area (posterior, in the temporal lobe) impairs comprehension.
Cognitive Neuropsychology: Delusional Belief
Max Coltheart had a two-factor theory of delusional belief (an incorrect inference about the external
world maintained despite evidence against it). He said that there must be both content-specific brain
damage and right-hemisphere frontal lobe damage (believed to house the belief evaluation system). The
delusion will not be held without both.
“This is not my arm” involves damage that precludes the movement of the arm (the content-specific
factor) plus damage to the belief evaluation system.
Alien control delusion relates to a failure of correspondence between expected and obtained feedback,
plus belief-system damage.
Mirrored-self misidentification relates to a deficit in face-processing regions (they cannot recognize
other people either) and mirror agnosia; the person cannot match the self to the image.
Capgras delusions (people think friends and family have been replaced by imposters) come from
damage that disconnects face recognition from the autonomic nervous system.
Fregoli delusions (people I know are following me around in disguise, so I cannot recognize them)
occur because of autonomic hyper-reactivity to faces, so even strangers feel familiar.
Cotard delusion (I’m dead) is the result of pure autonomic failure. There is no emotional or visceral gut
response to anything so they logically conclude that they must be dead.
Modern research says that we actually do have the ability to regrow certain areas of the brain even after
extreme damage. Monkeys were able to regain motor control of a damaged arm that they couldn’t feel
after the good one was restrained.
Brain Imaging Techniques
Transcranial magnetic stimulation involves externally applying a magnetic current to “turn off” regions
of the brain. TMS has been linked to the claim that we are all savants and just need to knock out other
regions.
Neuroanatomical images are those that are static pictures. The ones we should focus on are:
1. CAT scans- computerized axial tomography
2. MRI- magnetic resonance imaging
Functional neuroimaging allows us to see the brain in motion. These techniques include:
1. fMRI- functional magnetic resonance imaging uses the idea that areas in action use
more blood, so by tracking the blood flow we can tell what is being stimulated.
2. ERP- event related potentials
3. EEG- electroencephalography studies states of consciousness
If we subtract the time taken for a simple activity from the time taken for the full decision task, we get
the time the decision itself took. The same applies to brain imaging. If we take the blood-flow activity
when someone is at rest and subtract it from the blood-flow activity recorded while doing an activity,
we can see what activation was unique to the activity.
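The subtraction logic above can be sketched in a few lines. The region names and activation values below are invented purely for illustration; the point is only that task minus rest isolates the task-specific activation:

```python
# Subtraction-method sketch: task activation minus rest activation leaves
# the activation unique to the task (all region names and numbers are
# hypothetical, chosen only to illustrate the arithmetic).
rest = {"motor": 2.0, "visual": 1.5, "prefrontal": 1.0}
task = {"motor": 2.0, "visual": 3.5, "prefrontal": 1.25}

difference = {region: task[region] - rest[region] for region in rest}
print(difference)  # the visual region shows the largest task-specific change
```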
Comparison is another avenue we use to study the brain; it involves studying the differences in
structures between human and animal brains.
To take away: We have developed a lot more in the way of brain imaging but we must be careful in how
we evaluate these new measures.
Chapter 3: Perception
Conscious experience involves the active participation of the individual. It involves the distal stimulus (the
thing out in the world), the proximal stimulus (the image on the retina), and finally the percept (the
recognition of the object). The distal stimulus does not necessarily match the other two.
Bottom up Processes
These are also known as stimulus- or data-driven processes. This system is one-directional (going from
stimulus to output). This involves building percepts from small perceptual units. There are several
different classes that look at the way you could go about this:
1. Template matching: This is a matching process between an external stimulus and a stored
pattern in memory. The incoming pattern is compared to stored templates and identified by
the template that best matches it. The problems with template matching are that it doesn’t
explain where we get templates from, that there would have to be impossibly many templates
stored, and that it does not deal with surface variation very well (for example, the same letter
in different fonts or handwriting).
2. Feature analysis: This says that objects are a combination of features; features are all
small, local templates that can be combined in many different ways. So we recognize the
features and then recognize the combination. Pandemonium discusses letter identification
with “voice demons”, as does Recognition by Components (RBC). Advantages include greater
flexibility and a reduced number of stored templates. Evidence consistent with this class
includes visual search and cortical regions. Visual search describes how it is not more
difficult to locate a specific object among a bunch of others if there are no shared features (X
and O). However, it becomes increasingly difficult when an object shares features and there
are more distractors (O and Q). Shared features mean things are harder, which would not
be accounted for in the template model. Also, in the frog visual system there are cells
that activate due to specific features like borders (“bug detectors”).
3. Prototype matching: This involves matching a pattern to a representation (idealized
prototype). A prototype is an idealized representation of a class of objects. The more
features it shares with a prototype, the more likely it is a match. As we experience things we
store representations, and later when we want to decide we draw on an average. This
model is much more flexible than the others.
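A rough flavour of feature-based matching can be shown in code. The letters, feature names, and scoring rule below are all made up for illustration (this is not a model from the course); it shows how a small set of reusable features can identify many patterns, and why shared features make discrimination harder:

```python
# Feature-analysis sketch: each letter is a set of small features; we score
# each candidate by shared features minus features the candidate has but
# the input lacks. All feature names and letters are hypothetical.
prototypes = {
    "E": {"vertical_bar", "top_bar", "mid_bar", "bottom_bar"},
    "F": {"vertical_bar", "top_bar", "mid_bar"},
    "L": {"vertical_bar", "bottom_bar"},
}

def best_match(features):
    def score(letter):
        shared = len(prototypes[letter] & features)
        missing = len(prototypes[letter] - features)
        return shared - missing
    return max(prototypes, key=score)

print(best_match({"vertical_bar", "top_bar", "mid_bar"}))  # F
```

Note how "E" and "F" share three features, so they compete closely, while "L" is easy to separate; this mirrors the visual-search finding that shared features (O vs. Q) make targets harder to find than distinct ones (X vs. O).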
Top down Processes
This is also called theory driven or conceptually-driven processes. This highlights the fact that a person’s
knowledge, theories, and expectations influence perception. In broad terms, perception can be based
on experiences and context. Change blindness is an example; we leave out information that is not
essential and base our perception on expected outcomes. Illusions could not be accounted for if we
were purely bottom-up processors.
START OF MIDTERM NUMBER TWO!
Chapter 4: Attention
Now we start learning theories and seeing the experimental accumulation of evidence that shapes these
theories.
Pashler went the opposite direction of James and said that we do not even know what attention is; there
may not even be an “it” that we can know about.
At any given time, there are numerous things we could or could not pick up on. The way we can think of
this is an air-traffic controller. We have to allocate our resources appropriately to get the best response.
We are only focussing on auditory senses, but the main ideas can apply to all senses.
The analogy here is a bottleneck, because we are limited in our attention capacity. Where the selective
process occurs differs between theories. An early selection model says that we select the message
prior to extracting meaning. Late selection says that we identify meaning pre-attentively.
Early selection theories:
1. Broadbent’s Filter Model: This was studied with dichotic listening. There is information coming into
both ears and the participant will be told to listen only to one ear (shadow by repeating, count verbs
or nouns, and comprehension tests after). They look at how much meaning is being pulled out before
attention. Cherry found that people follow the attended message but miss much about the unattended
message (if there was backward speech in the unattended ear, they reported only that it may have been
slightly odd). Moray looked at the idea of early selection by having the unattended ear repeat a
list of words over and over again. If we heard everything and only then selected a message by
meaning, we would have noticed the list; participants did not. This caused them to push to the far extreme and say we
hear nothing that we are not attending to. The early filtering out of everything accounts for the idea
that they don’t hear anything in the unattended. However, later findings demonstrated that people
can report the gender of the voice in the unattended ear, but not semantic meaning (so the superficial level is
observed, semantics is not). So filter selects what will be processed and the filtration occurs prior to
extraction of meaning. The limit is on the amount of information that can be processed
simultaneously (Broadbent said that filters protect us from information overload).
2. Cocktail Party Effect: This started to be studied when we wanted to test the limits of the filter;
because evidently there is some semantic meaning and superficial meaning that gets through. So
shadowing is disrupted when one’s own name is embedded in either the attended or unattended
message. So important information can be processed pre-attentively. The explanation that they only
notice their name during an attention lapse is not feasible, because then they should hear
other words by chance just as often; since the name is noticed more often than other
words, it suggests the effect has to do with semantics.
3. Treisman did studies where the attended information is switched between ears part way through
the study. The fact that people continued on with the attended information despite being switched
implies that the unattended message is seeping through and the person is receiving some meaning
about it in order to know the difference between messages. There is some information that is
semantic and processed pre-attentively. There was little subjective awareness of the switch further
suggesting that the meaning was processed pre-attentively.
4. Corteen and Wood paired a shock with city names in the unattended ear. They then presented
new city names, along with the old shock-paired ones, as unattended information in a
shadowing task. Results: all city names elicited a GSR although unattended. This shows that
associations are learned unconsciously.
5. Treisman’s Attenuation Model: This model has two critical stages. The first stage is an attenuator
instead of a filter. The attenuator analyzes physical characteristics, language, and meaning. It
only analyzes to the level necessary to identify which message is the one to pay attention to. That
message is then turned up, and unattended information is turned down. The second
stage is a dictionary unit. It contains stored words and important items. Each word has a
threshold depending on its importance and frequency of use. Our name is very important, so it
has a low threshold, while an unimportant word like “boat” would have a high one.
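The two stages can be sketched as a toy calculation. The words, threshold values, and attenuation factor below are invented for illustration; the point is only that an attenuated (not blocked) signal can still cross a low threshold like one's own name:

```python
# Attenuation-model sketch: the unattended channel is turned down, not
# blocked; a word reaches awareness when its (possibly attenuated) signal
# exceeds its dictionary-unit threshold. All numbers are hypothetical.
thresholds = {"own_name": 0.2, "boat": 0.9}  # important words sit low
ATTENUATION = 0.5  # strength of the unattended channel

def reaches_awareness(word, attended):
    strength = 1.0 if attended else ATTENUATION
    return strength > thresholds[word]

print(reaches_awareness("own_name", attended=False))  # True
print(reaches_awareness("boat", attended=False))      # False
```

This captures why the cocktail party effect is a problem for a strict filter but not for an attenuator: the name gets through from the unattended channel while ordinary words do not.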
Late selection theories include the ideas that we process all information (attended and unattended) for
meaning. The selection of what to pay attention to happens during the response output stage and the
human limitation of processing two streams lies in making a conscious response to each stimulus.
Therefore, all information activates corresponding representations in LTM.
So keep in mind there are three approaches: at one extreme we process nothing unattended, in the
middle we process some, and at the other extreme we process everything.
Selective attention from the bottom up approach says that powerful stimuli can alter the focus of
attention involuntarily (attention capture). This is exogenous. Top-down attentional selection says that
we have strategic efforts that help us decide ways to seek out information. This is called endogenous. So
we choose what to attend to by long-term goals or over-arching themes.
Inattention blindness describes the idea that when we are focusing on something else we will miss quite
noticeable things. This effect is increased when the task being focused on is more difficult (counting aerial
versus bounce passes is more difficult than counting passes in general). The illusion of attention is the
belief that we can pay attention to everything when we cannot. A trainer can use this to show how we
miss infractions and many other events.
Over time, the attentional capacity required for a given task decreases. This is the idea of practice.
The downside of automatic processing is that we have increased susceptibility to certain types of
interference. The Stroop task highlights this: the interference is the automatic reading of words when
trying to focus on the colours. This is evident in that children who do not know how to read are very fast
at naming the colours, but by high school, once reading is fluent, people slip up between naming the
colour and reading the word.
There are exceptions like the hypnotic suggestion and the modulation of the Stroop effect. The
implication that hypnosis can wipe out the Stroop effect is that even what is considered automatic can
be influenced by top-down processes.
The differences between automatic and controlled processing:
1. Controlled processing is: serial, requires attention, limited in capacity, and under conscious control.
2. Automatic processing is: parallel, without conscious awareness, without intention, does not
interfere with other mental activities, and does not have any capacity limitations.
Evolved automaticity describes abilities that are not learned but have evolved over time to be innate. An
example of this is face recognition.
Disorders of Attention
Damage in the posterior parietal lobe is often comorbid with frontal lobe damage. This results in visual
neglect. It is characterized by right-hemisphere parietal lesions, an attention deficit rather than a sensory
one, neglect of contralateral hemi-space (the left side), and sometimes contralateral neglect of the body as
well. A way to tell if a patient is suffering from this is line bisection, where the individual is asked to draw a
line through the middle of another line. They draw it skewed. They do not have conscious/explicit
knowledge of information in the neglected field but do have unconscious/implicit knowledge. So a word
in the neglected field will not be noticed, but the word can prime responses to words in the attended field.
Attention in the Real World
Cell phones are the most commonly studied distraction. Researchers conducted a pursuit tracking task
which involved keeping a cursor on a moving target. The target also flashed either red (push a button) or
green (keep going). One condition was tracking alone, one was a dual task of listening to the radio, and
another was a dual task of talking on a cell phone. The cell phone produced more errors and slower
reaction times over and above the control condition and even the radio condition.
There are actually some people who are not affected by this and are extreme multi-taskers.
Chapter 5: Memory Structures
We often discuss the different types of memory by sensory, short-term, and long-term.
Three definitions of memory:
1. A location where information is kept (a memory store/place).
2. A record that holds the contents of experience (a memory trace)
3. The mental processes used to acquire, store, and retrieve previous experiences (a memory
process).
We blur these ideas throughout our study.
How do we go about studying memory?
Antecedent philosophies: For a long time questions about the mind were largely the domain of
philosophy because psychology is relatively new. They looked at the level of associations (how physical
and temporal ideas connect to something else we have learned about).
Then there was a need for a systematic and empirical approach, achieved by changing the way we think about the mind.
Ebbinghaus used a lot of self-experimentation to determine how long and how well he could remember
lists of syllables. This was one of the first attempts to measure memory systematically. He learned lists
of nonsense syllables in serial order, then recorded the time or number of trials required to recite the
syllables in their entirety. He also studied the ability to relearn. He developed his famous forgetting
curve. He discovered that the fastest learning occurs in the first pass. There are remote associations
(separated by more than one item). Serial position effects like primacy and recency effects were
discovered from his research. He looked at savings as well, which is the idea that we can learn
something faster the second time. The key points are that he emphasized learning in the laboratory
under tightly controlled conditions and tried to eliminate all meaning from his experiments to eliminate
confounding errors in his research. He used a bottom up approach; how we learn something novel and
store the pieces into a meaningful whole.
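The savings idea lends itself to a one-line calculation. The trial counts below are invented for illustration; the formula simply expresses how much less effort relearning takes compared with original learning:

```python
# Ebbinghaus's savings score: how much faster relearning is, expressed as
# a proportion of the original learning effort (numbers are hypothetical).
def savings(original_trials, relearning_trials):
    return (original_trials - relearning_trials) / original_trials

print(savings(20, 5))  # 0.75 -> relearning took 75% fewer trials
```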
Bartlett’s work contrasts Ebbinghaus in major ways. He emphasized learning under natural conditions
without tight controls. He emphasized looking at meaning by using real-world text materials and stories.
This meant it was a top-down and semantic approach to memory by looking at how we modify what
applies to our life in order to get the most out of it. So he had people perform repeated reproduction.
He noticed that people left out details: they dropped mood and detail in order to preserve overall
meaning. People also engaged in rationalization: they added connections to make sense where the story
jumped. The dominant detail was the focus; people transformed minor details and changed the order
(especially in descriptions). So he found that literal recall is really unimportant in everyday life; people
modify memories so they get the most functional use out of them. So memory is not pure reproduction
as Ebbinghaus suggested.
Again there is the trade-off. Ebbinghaus couldn’t tie his findings to the real world and Bartlett couldn’t
find specific details of memory.
Distinct Types of Memory
William James promoted the distinction between primary and secondary memory before there were
ways to lay it out. Primary memory is related to sensory memory, and secondary memory is knowledge of
something we have let out of consciousness but can retrieve.
The modal model discusses how information flows through different types of processes depending on
what we want to do with the information. It focuses on stages.
Sensory memory is very brief and basic. It operates at the perceptual level, and information must be transformed before it enters short-term memory. There is a sensory store for each sensory modality; the two we focus on are the visual and the auditory. The visual store is called iconic memory and the auditory store is called echoic memory. We want to look at quantity, duration, and content.
Iconic memory: Quantity research was done by Sperling. He briefly presented people with a matrix of twelve letters and was interested in how much people could actually remember. There were two recall conditions: whole report and partial report. In the partial-report condition, he told them that after the matrix flashed they would only have to report one row. In the whole report, people could report about four items (i.e. one of the rows). Yet in the partial report they remembered about three of the four items in whichever row was cued, which extrapolates to roughly 9 of the 12 items. This indicates a performance advantage for the partial report. He then manipulated the time between stimulus and response and examined how partial-report performance was affected. In the original experiments, people reported the items immediately after viewing them. As the time between stimulus and response increased, partial-report recall declined. This indicates that sensory memory is fleeting and dissipates over time. The partial-report advantage arises because in the whole report, the after-image had already faded by the time people worked through the later items. So unless we act on sensory memory or report it right away, we lose it.
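The partial-report extrapolation can be sketched in a few lines. This is a back-of-envelope illustration using only the numbers from the notes, not any real analysis code:

```python
# Back-of-envelope logic behind Sperling's partial report (numbers from the
# notes, for illustration only): participants report ~3 of the 4 items in
# whichever row is cued. Since they cannot predict the cued row, roughly
# that fraction of EVERY row must have been available in iconic memory.
rows, items_per_row = 3, 4
reported_in_cued_row = 3

estimated_available = reported_in_cued_row * rows  # extrapolate to all rows
total_items = rows * items_per_row

print(f"{estimated_available} of {total_items} items available")  # 9 of 12
```

This is why the partial report implies more was available than the whole report's four items could show: the whole report is limited by how fast the fading icon can be read out, not by what was initially registered.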
Iconic memory lasts less than a second, its capacity is the whole visual field, and it contains physical features.
Echoic memory lasts 4-5 seconds, its capacity is smaller than iconic memory’s, and it contains general information but not details.
Proposed functions vary: some say sensory memory has no function, some say it plays a role in the integration of information, some say it allows us to process the entire visual field in order to direct attention, and some say it ensures the system has time to process at least some of the information.
Short Term Memory
The span of short-term memory is seven plus or minus two, according to Miller. This is the maximum number of items correctly recalled across tone discrimination, spatial discrimination, and letter or digit span tasks. We can increase this capacity by chunking (re-organizing items into meaningful units). The ability to chunk depends on what is in LTM (this is where the two systems interact). NFLCBCFBIMTV can be remembered as NFL-CBC-FBI-MTV because of the meaning we have behind the acronyms.
How did SF get up to remembering 80 digits?
He was not coached in any way. He was highly motivated and constantly tried different methods to improve his span, so his skill was self-taught. In the fifth session he noticed that some digit groups reminded him of running times. As soon as he thought of them as running times, his ability to remember digits significantly improved. He used three or four digits to create groups; these groups were combined into supergroups, and supergroups were combined into higher-level groups. However, at the end of each sequence, he kept the last five to seven digits and rehearsed them normally (without grouping) until he was asked to recall.
This does not increase the capacity of the short-term store; it reveals efficient use of long-term memory to recode information into larger chunks.
Forgetting in short-term memory is explained by two theories:
Trace decay theory: the memory trace fades automatically over time. Researchers use an intervening task so that the person cannot use rehearsal to maintain the words, and memory for the material learned before the task then suffers. The longer the intervening task, the less of the previous material is available. However, this is not a pure index of decay, because the intervening task could itself be considered new, interfering information.
Interference theory: memories are not fading naturally but are being pushed out by more recent material. The more similar the interfering material is, the greater the memory loss. Proactive interference is when something learned in the past interferes with something you just learned: you learned A, then learned B, and when you try to recall B, A interferes. Retroactive interference is when later learning interferes with earlier material: you learned A, then B, and when you try to recall A, B interferes. Look at the grid and remember it for the test!
Release from proactive interference was studied by giving several trials of the same type of material (e.g. letters) with intervals between them, then switching to a different type (e.g. digits) on the last trial. Performance degraded over the repeated same-type trials, but when the last trial switched to something new, memory recovered. So it is not the case that memory is simply degrading over time; similarity between new and old information is what interferes. This supported interference theory and moved us away from trace decay theory.
Over time, people started to look at short-term memory differently; there seems to be “work going on in there”. In a dual-task methodology, participants memorized digits, were quizzed on a reasoning problem, and then recalled the digits. The reasoning task did not impair digit recall nearly as badly as expected. Therefore, short-term memory does not appear to be one unified system: the digit task seems to use one subsystem while the reasoning task can use another. It was hypothesized that we have a visuospatial sketchpad and a phonological loop, both controlled by a central executive. On this view, the digits could be held in the phonological loop while the reasoning task drew on other components.
The verbal and visual stores were found to be distinct because saying something over and over (articulatory suppression) while memorizing letters prevented people from remembering the letters, but they could still remember pictures, since the two codes did not interfere.
Working memory is a great predictor of general intelligence, educational achievement, and even
income. See the book for more examples.
Long Term Memory
We break long-term memory down in several ways:
Episodic memory is autobiographical memory for life events, while semantic memory is memory for factual knowledge.
Procedural memory is concerned with skills and motor movements, while declarative memory is for facts and events that can be consciously stated.
Explicit memory is conscious memory and implicit memory is unconscious memory.
Long-term memory is a storage system that saves information for later.
Capacity: The capacity of long term memory is huge and maybe even infinite (there is debate).
Coding: We organize long-term memory in a semantic space. So apple is connected to red, pie, fruit, and food; fruit would connect to seeds and salad, and food would connect to pizza. This is the idea of connectionism. The more you can bind what you are learning to what you already know, the better you can remember the new information.
Duration: How long memories stay in the long-term store is debatable. Permastore is the idea that large portions of originally acquired information can remain accessible for over fifty years. The general trend is that we lose or change the most in the first 3-6 years; there is then little forgetting for the next three decades. The final decline reflects aging more than forgetting of the content itself.
Forgetting: The processes of losing information from the long-term store are the same processes hypothesized for the short-term store, and interference is better supported than decay. We are more likely to forget where we parked our car in a parkade we have used every day for ten years than in a new parkade, because the more often we park in the same parkade, the more similar events there are to interfere.
Encoding: There are different depths of encoding. Maintenance rehearsal is repetition that holds information without transferring it into a deeper code. Elaborative rehearsal connects the information to meaning; this helps transfer it into a deeper code because it creates more connected, multi-modal codes. People then wanted to know how to encode better! The generation effect is the finding that merely reading word pairs is not as good as having people generate the associated terms themselves, which increases their ability to remember.
Retrieval: Context-dependent memory states that information learned in a certain context is retrieved better in the same context. The effect can be seen with location (recall is affected by where encoding took place), physiological state (information learned while intoxicated is remembered best when intoxicated, although being sober at both encoding and retrieval is always best!), and even personality (people with dissociative identity disorder remember words better when in the same personality).
Why do we find it necessary to divide memory from a unitary structure into separate short-term and long-term systems? Neuropsychological evidence and two-component task effects provide the evidence.
Recency effects and primacy effects together make up serial position effects. In primacy, we have better memory for early items because we can rehearse them while we still have plenty of capacity and time, putting them into long-term memory. Recency arises because items near the end of the list may not make it into the long-term store but are still available in the short-term store and can be reported directly.
Further studies looked at how both effects can be manipulated. The recency effect can be eliminated by decay (inserting a delay between the list and recall). Amnesic patients do not show primacy effects because they do not encode items into LTM (e.g. Clive Wearing).
These lectures may seem all over the place, but try to keep in mind how the earlier lectures relate to these more applied topics.
Schemata: In the reconstructive memory topic, we come back to Bartlett’s top-down approach. Remember how the participants in his study changed details to suit their culture? He said we organize information based on schemata in the mind, making information more applicable to our lives: we deal with the present based on what we have learned in the past. Retelling in a culturally consistent manner increased over time, which indicates that we organize information according to schemata.
Can memories be recreated?
Roediger gave a 12-word list, then a recall test, followed by a recognition test. In the recognition test, participants rated items 1-4, with 1 indicating a new word and 4 indicating an old word. The point is that we don’t need to memorize everything to have a good memory; rather, we retain the overarching themes of what we learn. In recall testing, people get about .65 of the words they studied but make many mistakes on words merely related to what they saw (about .40 falsely recalled), indicating that we keep the “gist” of words in order to be memory-efficient. This is in contrast to the very few mistakes made on non-studied, unrelated words. This has been extended in further studies with real-world implications.
Eyewitness testimony
In many cases, eyewitness testimony is the only evidence against the accused. Eyewitness testimony is very persuasive but not very accurate. People testifying can easily be influenced by the types of (misleading) questions asked, and memory can integrate suggested details. An example of misleading questions comes from a lab study asking “How fast were the cars going when they smashed into each other?” versus “How fast were the cars going when they contacted each other?” The verb changed not only the estimated speed of the cars but also the perception of the entire situation: a week later, participants were asked whether there was broken glass at the scene, and more people responded yes if they had been asked the question with the more heavily weighted verb. Another study found that those given consistent information answered around .80 correct, those given neutral information around .60, and those given inconsistent information less than that.
Repressed memories of sexual assault
Loftus showed that pretty much anyone could be misled into believing that they were a victim of childhood abuse. This was very controversial because people’s memories, even when false, could feel extremely strong.
Flashbulb Memories
The term was coined by Brown and Kulik. Flashbulb memories are memories for the circumstances in which one first learned of a very surprising and consequential or emotionally arousing event. People claim to remember with extreme clarity where they were and what they were doing. In terms of details recalled, flashbulb memories are actually a bit lower than everyday memories. Note the upward trend in inconsistent memories and the downward trend of consistent memories over time. Neither visceral nor emotional ratings were related to consistency. People report flashbulb memories as more vivid and are more confident in them, but they most likely involve reconstruction of overarching themes.
False memory and the Brain
Different areas of the brain become activated in a word-recognition task for true versus false words (words that were not presented but are semantically related to the presented words). The parahippocampal gyrus was more active for true words than for false words, suggesting that memories have a different neural signature depending on whether the item was actually presented.
Amnesia
Damage to the hippocampal system (the hippocampus and the amygdala) and/or the midline diencephalic region causes this disorder. It usually results from head trauma, stroke, or disease. Anterograde amnesia is the inability to form new memories; it affects long-term but not working memory, and memory for general knowledge and skill performance is intact. Clive Wearing and H.M. are good case studies to look at. Retrograde amnesia is the inability to recall old memories from before the damage. It is almost always present alongside anterograde amnesia, it varies in the span of time lost and in duration (some people recover the memories), and it does not affect skill performance. Studies have compared explicit (directly querying memory) and implicit (indirectly assessing memory) memory tasks between normal and amnesic individuals: amnesiacs score much lower on the explicit tests but are on par with normal individuals on implicit memory.
Episodic and Semantic Memory
Tulving believed that LTM consists of two distinct yet interacting systems. One is episodic memory, which is information about one’s personal experiences; these memories have a time and place. The second is semantic memory, which is general knowledge of language and the world (more like facts). They can go together in the sense that you can have an episodic memory of when you learned a semantic fact.
A case study we will look at is KC. He had damage to the frontal and temporal lobes, including the left hippocampus. His intellectual functioning was preserved, he had a normal vocabulary and intact linguistic knowledge, and his short-term memory functioned normally. This means his semantic memory was intact. However, he could not remember anything episodic: he did not remember breaking his arm or his brother drowning, and there was no way to bring those memories back. This shows the two systems are distinct, because it is a dissociation.
We get stronger evidence from another example: a patient with damage to the frontotemporal lobes. She could not recall the normal, common meanings of words and could not recall basic attributes of objects, yet she could produce many details about her wedding, honeymoon, and father’s death. She preserved semantic memory for things she had episodic memory for (they are integrated).
So this double dissociation represents strong evidence that these two types of long-term memory are distinct.
Cabeza and Nyberg conducted a meta-analysis and found that language and semantic retrieval engage regions of the prefrontal and temporal lobes, while episodic memory retrieval engages specific prefrontal, medial temporal, and posterior midline regions. So episodic retrieval is associated with right frontal lobe activity more than semantic retrieval is. (ON EXAM)
Tulving wanted to map forms of consciousness onto memory systems. Episodic memory is mapped to autonoetic consciousness (remembering a personal event in a way that lets one be aware of it as a veridical part of one’s own past), and semantic memory is mapped to noetic consciousness (an organism is aware of, and can cognitively operate on, things that are not present, flexibly acting on symbols).
The hierarchical semantic network model includes cognitive economy, which minimizes redundancy by storing properties and facts at the highest possible level. Semantic knowledge is treated as a network: a series of nodes connected to each other. The superordinate level stores properties that apply to everything beneath it, so that not every specific animal node needs to repeat the information; the subordinate level holds specific details. This model predicts that people should be faster at verifying statements whose representation spans fewer levels. So “a canary can sing” should take less time, because the property is stored at the canary node itself, while “a canary has skin” takes longer, because that property is stored at the highest, superordinate level (animal).
Problems include the typicality effect: we verify “a robin is a bird” faster than “a turkey is a bird” because robins are more typical and more frequently encountered, even though both statements span the same number of levels. So factors other than network distance come into play.
It was also highly based on logic, so they didn’t talk about weighting and how knowledge is spread.
Spreading activation theory, by Collins and Loftus, has no hierarchy. Instead, concepts are connected in a web-like fashion. When a node (a concept) is activated, activation travels along its connections, so related nodes become activated as well.
Evidence comes from priming experiments. People were shown two items on a trial and asked to decide whether the second item spelled a word. If shown “bread” first, they were faster to decide that a related word such as “butter” was a word than an unrelated one.
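The priming idea can be sketched as a small graph algorithm. The network below is illustrative (a handful of made-up associations), and the halving-per-link decay is an arbitrary choice to keep the sketch simple:

```python
# Minimal spreading-activation sketch: activating one node passes a decaying
# share of activation to its neighbours, so related words start a lexical
# decision already "warmed up". The network and decay rate are illustrative.
NETWORK = {
    "bread": {"butter", "food"},
    "butter": {"bread", "food"},
    "food": {"bread", "butter", "fruit"},
    "fruit": {"food"},
    "nurse": {"doctor"},
    "doctor": {"nurse"},
}

def spread(source: str, decay: float = 0.5, steps: int = 2) -> dict[str, float]:
    """Spread activation outward from source, halving it at each link."""
    activation = {source: 1.0}
    frontier = {source}
    for _ in range(steps):
        next_frontier = set()
        for node in frontier:
            for neighbour in NETWORK.get(node, set()):
                gained = activation[node] * decay
                if gained > activation.get(neighbour, 0.0):
                    activation[neighbour] = gained
                    next_frontier.add(neighbour)
        frontier = next_frontier
    return activation

act = spread("bread")
print(act.get("butter", 0.0) > act.get("nurse", 0.0))  # related word is primed
```

After activating "bread", "butter" holds residual activation while an unrelated word like "nurse" holds none, which is the mechanism the priming experiments are taken to reveal.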
Memory Research and Studying
1. The working memory model: There are three components. The central executive controls the other two and directs the flow of attention, which determines whether information will make it into LTM. The visuospatial sketchpad rehearses visual information and the phonological loop handles auditory information. Putting material into both a visual and an auditory code is doubly helpful!
2. Levels of processing: We must engage in elaborative rehearsal, which connects the facts to an overarching perspective, theory, and our own experiences. It is better if we make up our own connections. This takes more attention (central executive). Maintenance rehearsal is not as effective for LTM encoding.
3. The generation effect: Instead of reading someone else’s material over and over, we should create our own items.
4. Encoding specificity: Context effects apply here, so we should mirror our studying environment to the testing environment.
5. Distribution: Distributed practice far outperforms any type of massed practice.
6. Testing effects: We should test ourselves instead of only studying and rereading. Testing yourself makes it easier to reactivate material in long-term memory.
START OF MIDTERM #3
Chapter 7: Concepts and Categorization
Concepts and categories are blurry in their definitions. So we won’t be tested exactly on these, but
they are useful. Concepts are mental representations that have relevant information for the
particular object or event. Categorization is a process of placing things into categories and a
category is a class of similar things (concepts that go together).
We dynamically group things according to function; it depends on what we want to accomplish with the classification. The way we categorize things shapes the way society sees them and influences our perception (we categorize people by where they’re from, and this changes what sorts of things are allowed and what we think of them).
Categorization depends on both short-term and long-term memory.
Functions of Categorization:
Categorization lets us understand individual cases we have not seen before and make inferences about them. It requires less learning and memorization, can guide appropriate courses of action, and helps reduce the complexity of the environment.
Psychological essentialism: People act as if things have underlying natures that make them what they are, but then often rely on superficial cues. The cues used depend on the type of concept:
Nominal kinds have clear definitions with necessary and sufficient features (e.g. triangle).
Natural kinds are those that exist in nature (e.g. biological kinds).
Artifact kinds are objects defined by the tasks they are used for (a toothbrush can brush your teeth or polish other small objects).
There are a few dominant views of categorization processes.
The classical (definitional) view was assumed for a very long time. Here, category membership is defined by a set of defining features that are necessary and sufficient. Necessary means every member must have all of them, and sufficient means that having all of them is enough to warrant inclusion in the category.