Psychology 2134A/B Lecture Notes - Special Functions, Lexical Decision Task, Context-Dependent Memory
LECTURE 2.3 – Theories of Language Processing
The Modularity Hypothesis...
o Modularity theories hold that our mental operations are divided into separate modules.
o Modules have three characteristics...
Domain Specific: Each module only processes certain types of information.
Information Encapsulation: Data is processed bottom-up only.
Localization of Function: Each module is located in a specific brain region.
Modular theories hypothesize that language is a “special” function and is processed only by its own dedicated module.
When considering top-down versus bottom-up processing, it is important to distinguish between
low-level processing (perception of stimuli) and high-level processing (use of LTM and context).
Bottom-up processing relies solely on low-level information, whereas top-down processing is an
interaction between high-level and low-level processing.
The Connectionist/PDP theories stand in opposition to modular theories. They hold that speech
processing, like other functions, is distributed across the brain, with the neuron as its basic unit of processing.
o There are processing units called nodes (representation of neurons).
o The nodes are connected in a network; these connections become stronger the more
frequently they are used.
o This model supports top-down processing.
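The idea that connections strengthen with use can be sketched in a few lines of code. This is a toy illustration only (the network class, node labels, and weight values are all assumptions for the example, not part of any real PDP implementation):

```python
# Toy sketch of a PDP-style idea: connections between nodes grow
# stronger each time they are used, so frequent pathways dominate.
from collections import defaultdict

class ToyNetwork:
    def __init__(self):
        # (node_a, node_b) -> connection strength, starting at 0.0
        self.weights = defaultdict(float)

    def activate(self, node_a, node_b, increment=1.0):
        """Using a connection strengthens it."""
        self.weights[(node_a, node_b)] += increment

    def strength(self, node_a, node_b):
        return self.weights[(node_a, node_b)]

net = ToyNetwork()
for _ in range(5):                  # a frequently used pathway...
    net.activate("/k/", "cat")
net.activate("/k/", "canal")        # ...versus a rarely used one

print(net.strength("/k/", "cat"))   # the well-used link is stronger
print(net.strength("/k/", "canal"))
```

In this sketch, the frequently exercised /k/→cat link ends up with a higher weight than the rarely used /k/→canal link, mirroring the claim that connections become stronger the more they are used.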
How do we recognize spoken words? Words are stored individually; we recognize auditory input
and use fast mapping to facilitate lexical access. This is a fast and largely automatic process.
Language processing and lexical access is tested primarily through measurements of reaction
times in lexical decision tasks. There have been many interesting effects which we will consider.
Kucera & Francis studied the relationship between how common a word is and how quickly it can
be named. They found that the more common a word is, the faster a person could name it. This
finding is called the Word Frequency Effect.
Another interesting finding is something called the Neighbourhood Effect: increased difficulty in
recognizing a word because many other words sound similar to it.
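One common way to make "sounds similar" concrete is to count neighbours as words of the same length differing in exactly one position. The mini-lexicon below is an assumption for illustration, with letters standing in for phonemes:

```python
# Toy sketch of a phonological neighbourhood: neighbours are words of
# the same length that differ in exactly one position (letters stand
# in for phonemes; the lexicon is a made-up example).
LEXICON = ["cat", "bat", "hat", "cot", "cap", "dog"]

def neighbours(word):
    return [w for w in LEXICON
            if w != word
            and len(w) == len(word)
            and sum(a != b for a, b in zip(w, word)) == 1]

# "cat" sits in a dense neighbourhood, which the Neighbourhood Effect
# predicts should slow its recognition.
print(neighbours("cat"))
```

A word like "cat" has many neighbours ("bat", "hat", "cot", "cap"), while "dog" has none in this lexicon, so the effect predicts "cat" would be the harder of the two to recognize.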
The Cohort Model...
o Argues that we recognize words phoneme by phoneme and that a lexical decision is
made when we reach the uniqueness point.
o A very literal approach; it doesn’t incorporate semantic or contextual knowledge.
How does this theory account for the word frequency effect and the neighbourhood effect? It
assumes that our cohort “lists” are organized in terms of frequency or expectations. As for the
neighbourhood effect, it argues that words with many neighbours will have a later uniqueness point.
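The phoneme-by-phoneme narrowing of the cohort can be sketched directly. The mini-lexicon is an assumption for the example, and letters stand in for phonemes:

```python
# Toy sketch of the Cohort Model: as each phoneme arrives, the cohort
# of candidate words narrows; the uniqueness point is reached when
# only one candidate remains. (Made-up lexicon; letters = phonemes.)
LEXICON = ["candle", "candy", "canal", "cattle"]

def cohort_trace(word):
    """Return (prefix, remaining cohort) after each incoming phoneme."""
    trace = []
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        cohort = [w for w in LEXICON if w.startswith(prefix)]
        trace.append((prefix, cohort))
        if len(cohort) == 1:   # uniqueness point: decision is made here
            break
    return trace

for prefix, cohort in cohort_trace("candle"):
    print(prefix, cohort)
```

Tracing "candle" shows why neighbours matter on this account: the cohort stays large through "can" and "cand" because of the similar-sounding candidates, so the uniqueness point (and hence the lexical decision) arrives late.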