PS366 Chapter Notes - Chapter 3: Spreading Activation, Syntactic Category, Affix

16 pages · 6 Feb 2016
CHAPTER 3 - PS366 - WORD PROCESSING
-One principle that guides traditional linguistic theories is that language consists of 2
components:
1) the lexicon, which captures information about words, their components, and their meanings
2) the grammar, which lays out the principles governing how words can be combined into phrases
-a great deal of word-processing theory has been built on the basis of English and other analytic
languages
-to understand how words are represented/processed, we need to subject them to several different
kinds of analysis (separate kinds of analysis are required because we represent info about words
in at least 2 ways):
1) we mentally represent the form that words take, the way they sound and the way they look.
The way they sound is reflected in a phonetic/phonological code, and the way they look is
represented in an orthographic code
2) we also represent the meaning that words convey, which is referred to as a semantic coding
system.
-words may be related because they sound similar, look similar, or have similar meaning
-Prominent accounts of word processing propose that word forms are represented in lexical
networks and word meanings are stored in a separate, but linked, semantic memory or
conceptual store.
-NOTE: to understand how words are represented and processed, we have to be clear whether we
are talking about form or meaning.
THE ANATOMY OF A WORD: HOW WE MENTALLY REPRESENT WORD FORM
-analysis of word form starts with an analysis of subcomponents (its parts).
-Classical linguistic approaches to word form representation view words as involving a
hierarchical arrangement of components
-In speech, the lowest level is the phonetic feature (like place and manner of articulation) which
combine to produce next level
-next level is the phoneme-phonemes can be combined to make up bigrams (pairs of phonemes)
and trigrams (triplets), or these combinations of phonemes can be thought of as composing
syllables.
-these syllables tend to follow consonant-vowel (CV) or consonant-vowel-consonant (CVC)
patterns because when we talk, we open and close our jaws, starting and stopping the flow of air
-Syllables themselves can be divided into onsets (the initial consonant or consonant cluster, like
sp in spam) and rimes (the vowel plus any final consonants, like am in spam)
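As a rough illustration (a toy sketch, not from the chapter), splitting a written syllable at its first vowel recovers the onset (the consonants before the vowel) and the rime (the vowel plus what follows):

```python
VOWELS = set("aeiou")

def split_syllable(syllable):
    """Split a written syllable into onset (consonants before the
    first vowel) and rime (the vowel plus everything after it)."""
    for i, letter in enumerate(syllable):
        if letter in VOWELS:
            return syllable[:i], syllable[i:]
    return syllable, ""  # no vowel found: treat the whole string as onset

# split_syllable("spam") -> ("sp", "am")
```

This ignores complications like the letter y and silent vowels; it is only meant to make the onset/rime division concrete.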
-One or more speech sounds can combine to produce a morpheme-the smallest unit in a
language that can be assigned a meaning. One or more morphemes can be combined to produce
a word. Cat is a monomorphemic (one-morpheme) word because there is only ONE morpheme
that makes up that word.
-blackboard would be an example of polymorphemic (more than one morpheme)
-We can alter the meaning of cat by adding a bound morpheme, -s, resulting in the
polymorphemic word cats (because -s is its own morpheme too!)
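The mono- vs. polymorphemic distinction can be sketched as a deliberately tiny morpheme analyzer (assumed example, not from the text; the word list is hypothetical):

```python
# A toy lexicon of known free morphemes (hypothetical, for illustration only)
FREE_MORPHEMES = {"cat", "black", "board"}

def morphemes(word):
    """Decompose a word into known free morphemes plus the bound
    plural morpheme -s (a deliberately tiny toy analyzer)."""
    if word in FREE_MORPHEMES:
        return [word]                      # monomorphemic, e.g. cat
    if word.endswith("s") and word[:-1] in FREE_MORPHEMES:
        return [word[:-1], "-s"]           # cat + -s -> cats
    # try splitting a compound into two free morphemes (e.g. blackboard)
    for i in range(1, len(word)):
        if word[:i] in FREE_MORPHEMES and word[i:] in FREE_MORPHEMES:
            return [word[:i], word[i:]]
    return [word]  # fall back: treat as monomorphemic

# morphemes("cats") -> ["cat", "-s"]; morphemes("blackboard") -> ["black", "board"]
```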
LEXICAL SEMANTICS
-2 different definitions of the term meaning:
-when we talk about word meanings, we can differentiate between sense and reference
1) sense: refers to the dictionary-like knowledge that we have about words (i.e. cat - cats are
mammals, they have fur, they are kept as pets, etc.)
2) reference: another form of meaning that words are involved in
when we use words to refer to people, objects or ideas, the words themselves have senses (like
she is lazy=slow, not hardworking), but their specific meaning in a given context depends on
what the words point to (what they refer to)
-different expressions that have the same sense can have different referents in different contexts
(to fully understand-read pg.81 mid paragraph)
-rest of chapter revolves solely around the SENSE meanings of words.
-one approach to investigating word meaning relies on introspection: thinking about word
meanings and drawing conclusions from subjective experience. Based on introspection, it seems
plausible that entries in the mental lexicon are close analogs to dictionary entries. If this IS true,
the lexical representation of a given word would incorporate info about its grammatical function
(what category it belongs to, e.g. verb), which determines how it can combine with other words.
^Using words in this sense involves the assumption that individual words refer to types-meaning
the core meaning of the word is a pointer to a completely interchangeable set of objects
-each individual example of a category is a token (example: team is a type, the Yankees are a token)
-when deciding what words define something in the mental dictionary, we could just choose its
core or essential properties, BUT this runs into problems quickly, because many easy-to-
understand concepts do not have consistent core properties across various versions of the
concept. And some tokens are better examples of their type than others.
-These ^ are the kinds of problems that have led many language scientists to abandon the
defining/core features approach to lexical semantics. Thus, dictionary definitions do NOT seem
to be a good way of explaining how word meanings are represented in the mental lexicon.
-one way to avoid these problems is to operationalize word meaning as reflecting collections of
associated concepts- thus it would be whatever comes to mind when someone says the word.
This approach, exemplified by semantic network theory, has been the dominant theory in
artificial intelligence approaches to semantics for the past 30 years. The GOAL of semantic
network theory is to explain how word meanings are encoded in the mental lexicon and to explain
certain patterns of behaviour that people exhibit when responding to words.
-Semantic network theory proposes that a word’s meaning is represented by a set of nodes and
the links between them. (fig.3.3)
-The nodes represent concepts whose meaning the network is trying to capture, and the links
represent relationships between concepts. Ex. The concept goose would be represented as an
address in memory (a node) connected to other addresses in memory by different kinds of links
-one of the important kinds of links in semantic network theory is the “is a” type, which encodes
relationships between general categories and the concepts that fall within the category. (i.e. goose
is a waterfowl)
-according to this view, subordinate concepts like goose inherit the properties of superordinate
nodes via transitive inference (a goose is a waterfowl, a waterfowl is a bird, therefore a goose
is a bird). This means that you can conserve memory resources, since there is no need to directly
connect the concept goose to the more general concept bird.
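This inheritance-by-"is a"-links idea can be sketched as a toy semantic network (assumed structure and property names, for illustration only):

```python
# Toy semantic network: "is a" links plus property links. Properties are
# stored once, on the superordinate node, and inherited via transitive
# inference instead of being duplicated on every subordinate concept.
IS_A = {"goose": "waterfowl", "waterfowl": "bird"}
PROPERTIES = {
    "bird": {"can fly", "has feathers"},
    "waterfowl": {"has webbed feet"},
}

def has_property(concept, prop):
    """Climb the 'is a' links until the property is found (or we run
    out of superordinate nodes)."""
    while concept is not None:
        if prop in PROPERTIES.get(concept, set()):
            return True
        concept = IS_A.get(concept)  # move up to the superordinate node
    return False

# has_property("goose", "can fly") -> True, even though nothing is
# stored directly on the goose node
```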
-In early work, Collins and Quillian showed that statements such as “A canary can fly” primed
responses to statements such as “A canary is a bird”. The explanation for this is that the first
statement caused activation to spread from “canary is a bird” to “a bird can fly”.
-other links, such as has or can, connect concepts to their components (e.g. has feathers). The
meaning of a word is captured by the pattern of activated nodes and links.
-The idea of spreading activation is used to explain how information represented in the
semantic network is accessed, and why words that are related to one another facilitate access to
one another. Spreading activation is a hypothesized process that takes place when one of the
nodes in the network is activated.
-Spreading Activation has 2 important properties:
1) It is automatic: it happens very fast and we cannot control it
2) It diminishes the further it has to go (closely connected nodes are strongly and quickly
activated, while more distant ones are activated more slowly and weakly); beyond a couple of
degrees of separation, no changes in activation should occur.
-these ^ two properties of SA help explain how people respond during priming tasks. Priming
occurs when presenting one stimulus at time 1 helps people respond to another stimulus at time
2.
-In classic work on word processing, people respond faster in lexical decision and naming
experiments when a target word like duck is preceded by a related word like goose in
comparison to a control condition where it is preceded by an unrelated word.
-^this kind of priming is referred to as semantic priming, which is explained through semantic
network theory as resulting from the spread of activation in the semantic network. But if you
hear horse before you hear duck, the pattern of activation representing the meaning of the word
duck starts from 0 (or normal resting activation), and your behavioural response is slower.
-Faster response time to primed words is also associated with decreased neural activity when a
target word is preceded by a related prime word compared to an unrelated word.
-Spreading activation is thought to diminish substantially beyond one or two links in the
network. Evidence for this comes from mediated priming studies involving pairs of words like
lion-stripes (they are related through the mediating word tiger).
-According to semantic network theory, what prevents activation from spreading all over the
network is that the total amount of activation that can be spread is limited (lion primes
stripes much less than it primes tiger).
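The two properties above - automatic spread and decay with distance - can be sketched as a toy simulation (the network, decay rate, and two-link cutoff are all assumed for illustration, not taken from the text):

```python
# Toy semantic network as an adjacency list (hypothetical links)
NETWORK = {
    "goose": ["duck"],
    "duck": ["goose"],
    "lion": ["tiger"],
    "tiger": ["lion", "stripes"],
    "stripes": ["tiger"],
}

def spread(start, decay=0.5, max_links=2):
    """Breadth-first spread of activation from a primed node, shrinking
    at every link and stopping after a couple degrees of separation."""
    activation = {start: 1.0}
    frontier = [start]
    for step in range(1, max_links + 1):
        level = decay ** step          # activation diminishes with distance
        next_frontier = []
        for node in frontier:
            for neighbour in NETWORK[node]:
                if neighbour not in activation:
                    activation[neighbour] = level
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return activation

# spread("lion"): tiger gets 0.5, stripes only 0.25 - mediated priming is
# real but weaker, and nothing beyond two links is activated at all.
```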
-In Neely's study (1977), people were told to expect a particular kind of word after they heard a
category label. The category label might be BODY PARTS, but the target word would then be a bird
name. Neely found there was no priming of the expected (bird) words immediately after the label,
which shows that the initial spread of activation is an unconscious process, not one we can
control. However, if there was a short delay between the label and the target - giving people time
to shift to thinking of birds - priming did occur (as if they were thinking of different types of
birds). This pattern of response is consistent with two processes: fast, automatic activation
spreading from the cue to related concepts, AND a slower, non-automatic (strategic) attention
shift to a short list of likely bird names. This is also supported by data showing that some
aphasic patients appear to have intact automatic priming but impaired strategic priming.
-According to semantic network theory, words are related to one another by virtue of having
links to shared nodes (duck and goose both connect to the bird node).
-Two words can ALSO be related to one another, whether they share nodes or not, if the two
words co-occur in the language. (i.e. police and jail will prime one another, not because they
resemble one another, but because they appear together often, so the presence of one may be used
to predict the presence of the other in the near future).
-priming is harder to detect when pairs of words share elements of meaning but are not
associated, especially when the semantic relationship consists of belonging to the same general
category.
