PSY493H1 Lecture 4: Audition & Language
Auditory system
- Receive input:
  ○ Light -> sound
  ○ Photoreceptors -> auditory receptors
- Differentiate stimuli:
  ○ Brightness, color -> loudness, pitch
- Allows us to interact with the environment
- Neural signals are sent from the inner ear to:
  ○ Brain stem nuclei
  ○ Medial geniculate nucleus (MGN) in the thalamus
  ○ Auditory cortex
    § Auditory receptors in cochlea -> brain stem neurons -> MGN -> auditory cortex
The ear
- Sound is a series of pressure waves
- The Middle Ear
  ○ Ossicles: amplify sound pressure onto the oval window
    § The oval window is much smaller than the tympanic membrane (ear drum)
    § The ossicles act like levers, concentrating pressure onto a smaller area
    § Malleus (hammer) -> incus (anvil) -> stapes (stirrup)
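The lever/area argument above can be checked with quick arithmetic. A minimal sketch, assuming typical textbook values (tympanic membrane ~55 mm², oval window ~3.2 mm², ossicular lever ratio ~1.3; these numbers are assumptions, not from the lecture):

```python
import math

# Hypothetical but typical anatomical values (assumptions, not lecture data)
tympanic_area_mm2 = 55.0      # effective area of the tympanic membrane
oval_window_area_mm2 = 3.2    # area of the oval window
lever_ratio = 1.3             # mechanical advantage of the ossicular chain

# The same force focused onto a smaller area gives a higher pressure,
# multiplied further by the ossicles' lever action
pressure_gain = (tympanic_area_mm2 / oval_window_area_mm2) * lever_ratio
gain_db = 20 * math.log10(pressure_gain)

print(f"pressure gain: ~{pressure_gain:.0f}x (~{gain_db:.0f} dB)")
```

With these values the gain works out to roughly 22x (about 27 dB), which is why the middle ear matters: without this amplification most sound energy would simply reflect off the cochlear fluid.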
- The Cochlea (inner ear)
  ○ Spiral shape; unwound, it forms a hollow tube
  ○ Two membrane-covered holes at the base: the oval window (in contact with the ossicles) and the round window
  ○ A cross section reveals 3 fluid-filled chambers: scala vestibuli, scala media, scala tympani. Scala vestibuli and scala tympani are continuous
    § Different fluid in scala vestibuli/tympani vs. scala media
    § Base: high frequency
    § Apex: low frequency
  ○ Unusual ionic concentrations in the endolymph are generated by transport processes in the stria vascularis; these concentrations are responsible for the endocochlear potential
    § Stria vascularis: pumps lots of K+
    § Endolymph: high K+, low Na+
    § Perilymph: low K+, high Na+
- Organ of Corti
  ○ Auditory receptors: hair cells
    § Arranged into a single row of inner hair cells and 3 rows of outer hair cells; roughly 3 times more outer than inner hair cells
    § Hair cells make synapses onto spiral ganglion cells, whose cell bodies form the spiral ganglion. Axons from the spiral ganglion form the auditory nerve
  ○ Movement of the basilar membrane due to sound bends the stereocilia (leaning toward the longest stereocilium)
    § Changes in hair cell membrane potential result from the opening of K+ channels located at the tips of the stereocilia
    § K+ influx from the surrounding endolymph depolarizes the cell, opening Ca2+ channels and releasing neurotransmitter onto spiral ganglion neurites
  ○ Bending toward the longest stereocilium: depolarization; bending in the opposite direction: hyperpolarization
    § Movements of the stereocilia are very small (~10^-9 m); a movement of 0.3 nm is sufficient to produce the perception of sound
    § The hair cell membrane potential "tracks" variations in sound pressure
  ○ Spiral ganglion neurons make synaptic contacts with hair cells
    § Inner hair cells connect to 95% of spiral ganglion cells
    § Outer hair cells connect to 5% of spiral ganglion cells
    § Most spiral ganglion cells receive input from a single inner hair cell at a particular location on the basilar membrane
      □ They generate action potentials in response to sound of a specific frequency: the neuron's characteristic frequency
      □ Tuning curves: the more narrowly tuned, the more frequency-specific the neuron
  ○ Movement of the oval window is accompanied by a corresponding movement at the round window
    § Outer hair cells contain motor proteins that actively contribute to the movement of the basilar membrane
      □ Motor proteins make us more sensitive to sound: they amplify the movement by expanding or contracting with changes in membrane potential
    § Otoacoustic emissions are sounds produced by movements of the basilar membrane in the absence of an external auditory stimulus
  ○ 2 main properties determine how the basilar membrane responds to sound: width and stiffness
    § The basilar membrane is organized according to a place code for frequency (tonotopic map)
    § High frequency: base (narrow, stiff)
    § Low frequency: apex (wide, floppy)
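The tonotopic place code can be made concrete with the Greenwood function, a standard empirical fit for the human cochlea (the function and its constants are textbook values, not from the lecture):

```python
def greenwood_hz(x: float) -> float:
    """Characteristic frequency at relative position x along the human
    basilar membrane (0 = apex, 1 = base), using the commonly cited
    human constants A = 165.4, a = 2.1, k = 0.88."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# The base should map to high frequencies and the apex to low ones,
# matching "high frequency: base; low frequency: apex"
for label, x in [("apex", 0.0), ("middle", 0.5), ("base", 1.0)]:
    print(f"{label:6s} x={x:.1f}: {greenwood_hz(x):8.0f} Hz")
```

Note the roughly logarithmic spacing: each equal step along the membrane multiplies the characteristic frequency by a constant factor, so much of the membrane's length is devoted to the low end of the hearing range.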
Auditory pathway
- Each ear projects to both cerebral hemispheres
- Cells in the brain stem respond to specific interaural time differences
  ○ These cells are called coincidence detectors: by comparing the sounds arriving at the 2 ears and their relative delay, we can detect the location of a sound
  ○ Superior olivary nucleus: the first point where input from both ears is compared!
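The coincidence-detector idea can be sketched as a toy Jeffress-style delay-line model (an illustration under simplifying assumptions, not the lecture's own model): each detector internally delays one ear's spike train by a different candidate amount and responds most when the two inputs line up.

```python
import random

random.seed(0)
n = 2000                       # samples at a hypothetical 100 kHz rate
left = [1 if random.random() < 0.02 else 0 for _ in range(n)]  # sparse spike train
true_itd = 30                  # right ear hears the sound 30 samples (0.3 ms) later
right = [left[(i - true_itd) % n] for i in range(n)]

def coincidences(delay: int) -> int:
    """Detector tuned to `delay`: counts spikes that coincide after the
    left input is internally delayed by that amount (Jeffress delay line)."""
    return sum(left[(i - delay) % n] & right[i] for i in range(n))

# The detector whose internal delay matches the true ITD fires the most
best = max(range(61), key=coincidences)
print(f"best-matching delay: {best} samples ({best / 100_000 * 1e3:.1f} ms)")
```

The winning detector's position along the delay line is itself a place code for where the sound came from.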
Sound localization
- Horizontal plane: interaural time delays provide an important cue
  ○ For sudden sounds, the interaural time delay ranges between 0 ms (sound coming from straight ahead) and ~0.6 ms (sound coming from one side)
- The head casts an acoustic shadow, blocking some of the sound intensity that reaches an ear when the sound source is located on the opposite side
  ○ These interaural intensity differences are greater for higher frequencies than for lower frequencies, since lower-frequency waves diffract around the head more easily due to their longer wavelength
  ○ To localize:
    § Low-frequency sounds (20-2000 Hz): interaural time delay
    § High-frequency sounds (2000-20000 Hz): interaural intensity difference
  ○ These two processes constitute the duplex theory of sound localization
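Both the ~0.6 ms maximum delay and the ~2000 Hz crossover fall out of simple geometry. A quick check, using assumed round numbers for head width and the speed of sound (not lecture values):

```python
speed_of_sound = 343.0   # m/s in air at room temperature
ear_separation = 0.20    # m, a rough head width (assumed)

# Largest possible ITD: sound from directly to one side travels the full
# extra distance between the ears
max_itd_ms = ear_separation / speed_of_sound * 1e3
print(f"max ITD: ~{max_itd_ms:.2f} ms")   # close to the 0.6 ms in the notes

# Timing (phase) cues become ambiguous once the wavelength is shorter than
# the head; that happens near the duplex theory's ~2000 Hz boundary
crossover_hz = speed_of_sound / ear_separation
print(f"wavelength equals head width at ~{crossover_hz:.0f} Hz")
```

So the duplex boundary is not arbitrary: below it timing cues are unambiguous, above it the head shadow provides the more reliable intensity cue.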
Auditory cortex
- Medial geniculate nucleus cells project to the primary auditory cortex (A1; Brodmann area 41, Heschl's gyrus)
  ○ Cortical neurons in A1 are sharply frequency-tuned
  ○ They form a tonotopic map and are organized into columns
  ○ Receptive fields are defined not by space but by frequency!
- Hierarchy within auditory cortex:
  ○ Left auditory cortex -> language
  ○ Right auditory cortex -> music, environmental sounds (non-speech sounds)
  ○ Core -> simple sounds (a single sine wave)
  ○ Parabelt -> complex patterns (combinations of different sine waves)
  ○ Caudal -> where
  ○ Rostral -> what
-
Some neurons in A1 have complex response characteristics, such as responses to
vocalizations
Wernicke's area: language comprehension; loss of this area results in serious
disruption linguistic function
○
Deafness can result from
Damage to the ears or cochlea (bilateral): peripheral hearing loss
§
Damage to the auditory cortex (bilateral): central hearing loss
§
Unilateral lesion of auditory cortex DO NOT result in deafness.
§
○
-
How do we separate sounds?
- Sounds originate from different sources, but they all sum into a single pressure wave
- We can decompose complex sounds into individual frequency components using Fourier analysis
  ○ A similar process happens in the brain for initially separating sounds
- Gestalt principle: good continuation
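The Fourier decomposition step can be demonstrated directly: two pure tones (frequencies chosen arbitrarily for illustration) sum into one "pressure wave", and an FFT recovers the individual components.

```python
import numpy as np

fs = 8000                        # sample rate in Hz
t = np.arange(fs) / fs           # one second of time points

# One pressure wave that is really the sum of two sources
wave = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), d=1 / fs)

# The two largest spectral peaks sit at the original component frequencies
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
print(f"recovered components: {peaks[0]:.0f} Hz and {peaks[1]:.0f} Hz")
```

The cochlea performs a rough mechanical analog of this analysis: each place on the basilar membrane responds to one band of the spectrum.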
Auditory system summary:
- Sound processing starts at the cochlea
- Hair cells translate mechanical vibrations into electrochemical signals
- Inner and outer hair cells project to spiral ganglion cells (which form the auditory nerve)
- Auditory nerve -> brainstem -> auditory cortex
- Localization processing begins in the brain stem, using interaural time delays and intensity differences
- Auditory cortex retains the tonotopic map
- Auditory cortex processes simple and complex sounds
Speech-in-noise processing
- One of the ways we can fill in degraded speech is our knowledge of the conversation and of the language, i.e., context
Language:
- Goal of language
  ○ Communicate ideas (abstract or concrete) to others
  ○ Requires coherent encoding/decoding of semantic info into verbalized/written/signed units for language production and understanding
- 3 main components
  ○ Phonology -> sound
  ○ Syntax -> grammar
  ○ Semantics -> meaning
- Phonology
  ○ Rules of the sounds of a language
  ○ Related to motor control
- Phonemes
  ○ Smallest unit of sound that can signal meaning
  ○ e.g., pill vs. bill
- Phonetics
  ○ How is the unit produced?
  ○ e.g., pill vs. spill (the /p/ is aspirated in "pill" but not in "spill")
- Syntax
  ○ Rules of language organization: grammar
  ○ In English: subject-verb-object
    § "The student got a B" vs. "The student got a bee": identical sounds, disambiguated by context
- Semantics
  ○ Meaning of language
Control of speech output, as determined by the Wada technique:
- Right-handed: left-hemisphere speech representation
- Left-handed: lateralization not as strong as in right-handers; some have right-hemisphere representation
- Wada procedure: "putting one brain hemisphere to sleep" by injecting sodium amytal into the carotid artery. The left hemisphere is found to be dominant for language in most, but not all, subjects. Right-handed subjects: 95% LH-dominant. Left-handed subjects: 19% RH-dominant
Is lateralization of function present only in adult humans?
- No! Evidence of lateralization in a variety of species
- It's evident early in development
  ○ Hemispheres are not equipotent at birth
  ○ Neuroanatomical asymmetries are present at birth
  ○ As early as 1 week old:
    § ERPs greater over LH to speech than to nonverbal material (might take longer to produce sounds)
  ○ Children with early LH hemispherectomy:
    § Regain language in RH, but not as fluently
  ○ Genetic basis
    § Need to consider prenatal factors (positioning, etc.)
Right hemisphere contributions to language
- Evidence from split-brain patients suggests that the right hemisphere contributes to written and auditory language processing
  ○ But:
    § Poor understanding of complex syntax
    § Cannot produce speech output
    § Cannot process phonological info
    § Vocabulary restricted to concrete words
- RH seems specialized for aspects of language not processed well in LH:
  ○ Prosody (melody)
  ○ Narrative (story line)
  ○ Inferences (filling in the blanks)
Prosody
- Role in conveying emotional tone
- Additional role in conveying syntactically helpful info
  ○ Difference in voice pitch (intonation) at the end of a declarative statement vs. a question
    § "This class is awesome!" vs. "This class is awesome?"
- Perception / comprehension of prosody
  ○ Aphasic patients with LH lesions can distinguish between questions and statements using prosody
  ○ Patients with RH lesions show impaired perception of prosodic cues related to tonal aspects of verbal stimuli (often together with poor tonal memory)
- Production of prosody
  ○ May be supported by the RH homolog of "Broca's" region
  ○ Lesions to right frontal cortex can lead to a complete lack of prosody, i.e. aprosodic speech (similar to the monotonous voice of a telephone salesperson)
Inference and Narrative
- Patients with right-hemisphere damage can have difficulty with integration of heard or written text:
  ○ Making inferences based on incomplete info
  ○ Following the thread of a story
  ○ Understanding non-literal aspects of language (jokes and metaphors)
- Examples of problems observed in patients with right-hemisphere lesions:
  ○ Determining whether a sentence is relevant to the gist of a story
  ○ Making inferences using incomplete info
    § "John walked in the water near some glass. He grabbed his foot and called the lifeguard for help." (requires inferring that John cut his foot)
- Difficulty with metaphors, jokes, and indirect requests (when info needs to be interpreted non-literally)
- Generally speaking: RH language problems are less apparent in daily language use than aphasias after LH lesions
How does the brain process language?
- Broca's area
  ○ Speech production
  ○ Lesion: Broca's aphasia (non-fluent aphasia)
    § Patients know what they want to say but cannot get it out
    § Speech quality:
      □ Paucity of output (not related to motor problems): not talking much
      □ Telegraphic, agrammatical speech
        ® Has content words (nouns)
        ® Limited function words (but, behind, and)
        ® Limited word endings (-ing, -s)
      □ Great difficulty naming objects
      □ Difficulties repeating words
      □ Limited writing abilities
    § Motor functions intact:
      □ No paralysis of face or vocal musculature; can blow out candles
      □ Utter non-linguistic sounds with ease
    § Relatively spared comprehension
    § Broca's aphasia: a deficit in programming speech output
- Wernicke's area
  ○ Speech comprehension
  ○ Lesion: Wernicke's aphasia (fluent aphasia)
    § Continuous talking, but it doesn't make sense
    § Impaired comprehension:
      □ Difficulty following simple commands ("Pick up the spoon", "Show me the red square")
    § Speech quality:
      □ Sounds are well formed
      □ All parts of speech present
      □ But makes little sense: "word salad"
    § Reading and writing often severely impaired
    § Motor functions intact
    § Also exhibit paraphasia:
      □ Errors in producing specific words, even during simple repetition
      □ Words are replaced with similar words:
        ® Phonemic paraphasia: similar sounds (fable for table)
        ® Semantic paraphasia: similar meaning (barn for house)
        ® Neologisms: follow the structure of the language but are not in the lexicon (paffle, contrap); making up new words
    § Wernicke's aphasia: inability to link the "sound image" to meaning
Double Dissociation
- Broca's area lesion -> impaired speech production, intact comprehension
- Wernicke's area lesion -> intact speech production, impaired comprehension
Syntactic & phonological processing in Broca's aphasia
- Patients with Broca's aphasia are insensitive to grammatical markers
- Patients use the basic subject-verb-object ordering rule to figure out who chases whom
- Impaired at matching actions with pictures
- But not all deficits seen in Broca's aphasia can be explained as a syntax-processing impairment
  ○ Syntactic and phonological deficits likely co-occur in patients with Broca's aphasia because the lesion is usually large (typically a stroke)
    § Deficits in the two domains may rely on neighbouring but not identical regions in anterior parts of the left hemisphere (inferior frontal, anterior superior temporal, insular cortex)
  ○ General conclusion:
    § Broca's aphasia can be decomposed into different, more specific impairments, each caused by damage to neighbouring but distinct brain regions
      □ This accounts for the variability between different patients with Broca's aphasia
Anterior aphasias (Broca's)
- Impaired:
  ○ Phonological production
  ○ Syntax comprehension
- Intact:
  ○ Semantic comprehension

Posterior aphasias (Wernicke's)
- Impaired:
  ○ Phonological comprehension (phonological paraphasias)
  ○ Semantic comprehension
- Intact:
  ○ Syntax comprehension
Arcuate fasciculus: a bundle of axons running from Wernicke's area to Broca's area
- Conduction Aphasia
  ○ Lesion location:
    § Speech output and comprehension intact: not Broca's, not Wernicke's
  ○ Difficulty repeating what was just heard: sound images cannot be conducted forward to be produced
    § Severed connection between Broca's and Wernicke's areas: the arcuate fasciculus
  ○ Disconnection syndrome: no direct route from sound image to speech output
Global aphasia
- Damage to multiple components of the system
- Damage to both the sound-image and output regions
- No ability to comprehend or produce speech
- Extensive lesion of the left hemisphere
Brain network of language
- What? Ventral stream: links phonological and semantic info
- Where? Dorsal stream: sensorimotor interface, articulation
ERPs of semantics and syntax
- N400: index of semantic retrieval
- P600: ongoing sentence-level integration
  ○ Syntactic violations
  ○ Garden-path sentences: you think the sentence is going to end one way, but it doesn't
- P600s were initially seen as markers of syntactic violation, and N400s as indices of semantic integration
  ○ Two streams of processing
- Recent models have moved to integrate these 2 markers as part of the same ongoing processing of a sentence
  ○ Functionally integrated but still anatomically distinct
- Musical analog: memory for tunes ("semantics") vs. whether a note sounds right ("syntax")
  ○ In-key violation (memory violation): semantic, should elicit an N400
  ○ Out-of-key violation (memory and rule violation): syntactic, should elicit a P600
Melodic Intonation Therapy (MIT)
- Patients have been observed to produce words well when singing
- LH lateralization means that left-hemisphere strokes severely impair speech production, while the right hemisphere is spared
- MIT combines exaggerated prosody, using musical tones, with rhythmic tapping of the left hand to engage right-hemisphere analogs of the left-hemisphere language systems
- Structural changes in the connections between right-hemisphere analogs were measured using diffusion tensor imaging (DTI), looking at the arcuate fasciculus
  ○ The right arcuate fasciculus is normally smaller than the left because it is less engaged
- Summary: tapping the left hand and engaging melody recruit the right hemisphere to compensate for damage in the left hemisphere
Lecture 4 (Saturday, May 26, 2018, 1:13 PM)
![](https://new-preview-html.oneclass.com/Exbq3r4gwdYONPXbe2PxNy1MLBo2plvz/bg2.png)
Auditory system
Receive input:
Light -> sound
○
Photoreceptors -> auditory receptors:
○
-
Differentiate stimuli:
Brightness, color -> loudness, pitch
○
-
Allow us to interact with environment
-
Neural signals are sent from the inner ear to
Brain stem nuclei
○
Medial geniculate nucleus (MGN) - thalamus
○
Auditory cortex
Auditory receptors in cochlea -> brain stem neurons -> MGN -> auditory
cortex
§
○
-
The ear
Sound is a bunch of pressure waves
-
The Middle Ear
Ossicles: amplification of sound pressure onto oval window
Oval window is much smaller than tympanic membrane (ear drum)
§
Ossicles act like levers to increase pressure onto smaller space
§
Malleus (hammer) -> incus (anvil) -> stapes (stirrup)
§
○
The Cochlea (inner ear)
Spiral shape, unwound forms a hollow tube
○
Two membrane covered holes at the base: oval window (contact with ossicles)
and round window
○
Cross section revels 3 fluid-filled chambers: scala vestibuli, scala media, scala
tympani. Scala vestibuli and tympani are continuous
Have different fluid in scala vestibuli/tympani vs. media
§
Apex: high frequency
§
Base: low frequency
§
○
Unusual ionic conc. In the endolymph are generated by transport process in the
stria vascularis. These ionic con. are responsible for the endocochlear potential.
Stria vascularis: pumps lots of K+
§
Endolymph: high K+, low Na+
§
Perilymph: low K+, high Na+
§
○
Organ of Corti
Auditory receptors: hair cells
Arranged into a single row of inner hair cells and 3 rows of outer
hair cells. 3 times more outer hair than inner hair cells.
□
Makes synapses onto spiral ganglion cells, whose cell bodies form
the spiral ganglion. Axons from the spiral ganglion form the
auditory nerve.
□
§
Movement of the basilar membrane due to sound results in the bending
of the stereocilia. (leaning towards the longest stereocilia)
Changes in the membrane potential of the hair cells are the result
of opening K+ channels located at the tip of the stereocilia.
□
K+ influx into the cell from the surrounding endolymph results in
depolarization, the opening of Ca2+ channels and the release of
neurotransmitters onto spiral ganglion neurites.
□
§
Bending in one direction (to long): depolarize. Bending in the opposite
direction: hyperpolarize
Movements of stereocilia are very small (10^-9m). Movement of 0.3
nm is sufficient to produce the perception of sound
□
Hair cell membrane potential "tracks" variations in sound pressure
□
§
Spiral ganglion neurons make synaptic contacts with hair cells
Inner hair cells connect to 95% spiral ganglion cells
□
Outer hair cells connect to 5% spiral ganglion cells
Most spiral ganglion cells receive input from a single inner
hair cell at a particular location on the basilar membrane
®
They generation AP in the response to the sound of a specific
frequency: the neuron's characteristic frequency
®
Tuning curves
More narrowly tuned more specific
◊
®
□
§
Movement of the oval window is accompanied by a corresponding
movement at the round window
Outer hair cells contain motor protein that actively contribute to
the movement of the basilar membrane
Motor proteins help us become more sensitive to sound,
amplify the movement by expand or contrast
(depolarization)
®
□
Otoacoustic emissions are sounds produced by movements of the
basilar membrane in the absence of an external auditory stimulus
□
§
2 main properties determine how the basilar membrane responds to
sound: width and stiffness
The basilar membrane is organized according to place code for
frequency (tonotopic map)
□
High frequency: base, narrow, stiff
□
Low frequency: apex, wide, floppy
□
§
○
Auditory pathway
Each ear is projection to both cerebral hemispheres
-
-
Cells in bran stem respond to specific time differences
These cells are called coincidence detectors (localize sounds by comparing
sounds from 2 ears with the delay, we can detect the localization of sound)
○
Superior olivary nucleus
First point where both ears compared!
§
§
○
-
Sound localization
Horizontal plane: interaural time delays provide an important cue
For sudden sounds, the interaural time delay ranges b/w 0 msec (sounds is
coming from a location straight ahead) to 0.6 (sound comes from a location to
one side)
§
○
-
The head casts an acoustic shadow or block some of the sound intensity that reaches
an ear when the sound source is located on the opposite side
These interaural intensity differences are greater for higher frequencies than for
lower frequencies, since lower frequency waves can diffract around the head
more easily due to their longer wavelength
To localize:
Low frequency sounds (20-2000Hz): interaural time delay□
High frequency sounds (2000-20000Hz): interaural intensity
difference
□
§
These two processes constitute the duplex theory of sound localization
§
○
-
Auditory cortex
Medial geniculate nucleus cells project to the primary auditory cortex (A1, Brodmann
area 41, Heschl's gyrus)
Cortical neurons in A1 are sharply frequency-tuned
○
They form a tonotopic map and are organized into columns.
○
Receptive fields: not defined by space but frequency!
○
-
Hierarchy within auditory cortex
Left auditory cortex -> language
○
Right auditory cortex -> music environment sounds (non-speech sound)
○
Core -> simple sounds (one wave)
○
Parabelt -> complex patterns (combinations of different sine waves)
○
Causal -> where
○
Rostral -> what
○
-
Some neurons in A1 have complex response characteristics, such as responses to
vocalizations
Wernicke's area: language comprehension; loss of this area results in serious
disruption linguistic function
○
Deafness can result from
Damage to the ears or cochlea (bilateral): peripheral hearing loss
§
Damage to the auditory cortex (bilateral): central hearing loss
§
Unilateral lesion of auditory cortex DO NOT result in deafness.
§
○
-
How do we separate sounds?
Sound originate from different sources, but all sum up to one pressure wave
-
We can decompose complex sounds into individual frequency components using
Fourier analysis
A similar process happens in the brain for initially separating sounds
○
-
Gestalt: good continuation
-
Auditory system summer:
Sound processing starts at the cochlea
-
Uses hair cells to translate mechanic vibrations to electrochemical stimulus
-
Hair cells inner and outer project to spiral ganglion cells (from auditory nerve)
-
Auditory nerve -> brainstem -> auditory cortex
-
Localization begins processing in the brain stem uses interaural time delay and
intensity differences
-
Auditory cortex retains tonotopic map
-
Auditory cortex processes simple and complex sounds
-
Speech-in-noise processing
One of the ways we can figure things out is our knowledge of the conversation of
language - context
-
Language:
Goal of language
Communicate ideas (abstract or concrete) to others
○
Requires coherent encoding/decoding of semantic info into
verbalized/written/signed units for language production and understanding
○
-
3 main components
Phonology -> sound
○
Syntax -> grammar
○
Semantics -> meaning
○
-
Phonology
Rules of sounds of language
Related to motor control
○
-
Phonemes
Smallest unit of sound that can signal meaning
○
Pill vs. bill
○
-
Phonetics
How is the unit produced?
○
Pill vs. spill
○
-
Syntax
Rules of language organization
Grammar
○
-
In English:
Subject-verb-object
○
The student got a b
○
The student got a bee
○
-
Semantics
Meaning of language
-
Control of speech output determined by Wada technique:
Right handed - left hemisphere - speech representation
-
Left handed - lateralization not as strong as right handed, some are at right
hemisphere
-
Wada procedure
"putting one brain hemisphere to sleep" by injecting sodium amytal into the carotid
artery. The left hemisphere is found to be dominant for language in most but not all
subjects. Right handed subjects: 95% LH-dominant. Left handed subjects: 19% RH-
dominant
-
Is lateralization of function present only in adult humans?
No! evidence of lateralization in variety of species
-
It's evident early in development
Hemispheres are not equipotent at birth
○
Neuroanatomical asymmetries present at birth
○
As early as 1 week old
ERPs greater over LH to speech than nonverbal material (might take
longer to produce sounds)
§
○
Children with early LH hemispherectomy
Regain language in RH, but not as fluent
§
○
Genetic basis
Need to consider prenatal factors (positioning, etc.)
§
○
-
Right hemisphere contributions to language
Evidence from split-brain patient suggests that right hemisphere contributes to
written and auditory language processing
But
Poor understanding of complex syntax
§
Cannot produce speech output
§
Cannot process phonological info
§
Vocabulary restricted to concrete words
§
○
-
RH seems to specialized for aspects of language not processed well in LH
Prosody (melody)
○
Narrative (story line)
○
Inferences (filling in blanks)
○
-
Prosody
Role for conveying emotional tone
-
Additional role for conveying syntactically helpful info
Difference in voice path (intonation) at end of declarative statement vs.
question
This class is awesome! Vs. This class is awesome?
§
○
-
Perception / comprehension of prosody
Aphasic patients with LH lesion can distinguish b/w questions and statements
using prosody
○
Patients with RH lesion show impairment in perception of prosodic cues related
to tonal aspects of verbal stimuli (often together with poor tonal memory)
○
-
Production of prosody:
May be supported by RH homolog of "Broca's" region
○
Lesion to right frontal cortex can lead to complete lack of prosody, i.e. aprosodic
speech (similar to monotonous voice of sales person of telephone)
○
-
Inference and Narrative
Patients with right hem. Damage can have difficulty with integration of heard or
written text
Making inferences based on incomplete info
○
Following thread of a story
○
Understanding non-literal aspects of language (jokes and metaphors)
○
-
Examples of problems that can be observed in patients with right hem. Lesions:
Determining whether sentence is relevant to gist of story
○
Making inferences using incomplete info
John walked In the water near some glass. He grabbed his foot and called
the lifeguard for help
§
○
-
Difficulty with metaphors, comprehending jokes and indirect requests (when info
needs to be interpreted non-literally)
-
Generally speaking:
RH language problems less apparent in daily language use than aphasias after
LH lesions
○
-
How does the brain process language?
Broca's area
Speech production
○
Lesion: Broca's aphasia (non-fluent aphasia)
Patient know what they want to say but they cannot get it out
§
Speech quality
Paucity of output (not related to motor problems) - not talking
much
□
Telegraphic, agrammatical speech
Has content words (noun)
®
Limited function words (but, behind, and)
®
Limited word endings (ing, s)
®
□
Great difficulty naming objects □
Difficulties repeating words □
Limited writing abilities □
§
Motor functions intact
No paralysis of face, vocal musculature, can blow out candles □
Utter non-linguistic sounds w/ ease □
§
Relatively spared comprehension
§
Broca: Deficit in programming speech output
§
○
-
Wernicke's area
Speech comprehension
○
Lesion: Wernicke's aphasia (fluent aphasia)
Continuous talking but doesn’t make sense
§
Impaired comprehension
Difficulty following simple commands
"Pick up spoon", "Show me the red square"
®
□
§
Speech quality
Sounds are well formed □
All part of speech present□
But makes little sense" "word salad"□
§
Reading and writing often severely impaired
§
Motor functions intact
§
Also exhibit paraphasia
Errors in producing specific word even during simple repetition □
Instead words are replaced with similar words
Phonemic paraphasia: similar sounds (fable for table)
®
Semantic paraphasia: similar meaning (barn for house)
®
Neologisms: follow structure of language, but not in lexicon
(paffle, contrap) making up new words
®
□
§
Wernicke's aphasia: inability to link "sound image" to meaning
§
○
-
Double Dissociation
Broca's area -> impaired speech production, intact comprehension
-
Wernicke's area -> intact speech production, impaired comprehension
-
Syntactic & phonological processing in Broca's aphasia
Patients with Broca's aphasia insensitive to grammatical markers
-
Patients use basic subject-verb-object to ordering rule to figure out who chases whom
-
Impaired at matching action with picture
-
But not all deficits seen in Broca's aphasia can be explained as syntax processing
impairment
Syntactic and phonological deficits likely co-occur in patients with Broca's
aphasia due to large lesion (usually stroke)
Deficits in both domains may rely on neighbouring but not identical
regions in anterior parts of left hemisphere (inferior frontal, anterior
superior temporal, insular cortex
§
○
General conclusion
Broca's aphasia can be decomposed into different more specific
impairments each caused by damage to neighbouring but distinct brain
regions
Accounts for variability b/w different patients with Broca's aphasia □
§
○
-
Anterior aphasias (Broca')
Impaired
Phonological production
§
Syntax comprehension
§
○
Intact
Semantic comprehension
§
○
-
Posterior aphasia (Wernicke's)
Impaired
Phonological comprehension (phonological paraphasias)
§
Semantic comprehension
§
○
Intact
Syntax comprehension
§
○
-
Arcuate fasciculus - bundle of axons from Wernicke's to Broca's
Conduction Aphasia
Lesion location
Speech output and comprehension intact
Not Broca's, not Wernicke's
§
○
Difficulty repeating what was just heard: sound images could not be conducted
forward to be produced
Severed connection b/w Broca's area and Wernicke's area
§
Arcuate fasciculus
§
○
Disconnection syndrome: no direct route from sound image to speech output
○
-
Global aphasia
Damage to multiple components of system
-
Damage to sound image and output
-
No ability to comprehend or produce speech
-
Extensive lesion to left hemisphere
-
Brain network of language
What? Ventral stream: link phonological and semantic info
-
Where? Dorsal stream: sensorimotor interface, articulation
-
ERPs of semantics and syntax
N400: index of semantic retrieval
-
P600: ongoing sentence-level integration
Syntactic violation
○
Garden path sentence: where you think it's gonna end but it doesn't
○
-
P600s were initially seen as markers of syntactic violation, and N400s indices of semantic
integration
-Two streams of processing
Recent models have moved to integrate these 2 markers as part of the same ongoing
processing of a sentence
-Functionally integrated but still anatomically distinct
Memory of tunes (semantics) vs. whether a note sounds right or not (syntax)
-In key violation (memory violation): semantic should elicit N400
-Out of key violation (memory and rule violation): syntax should elicit P600
Melodic Intonation Therapy (MIT)
-Patient have been observed to have good production of words when singing
-LH lateralization means that left hemisphere stokes severely impair speech
production, but right hemisphere is spared
-MIT combines an exaggerated prosody using musical tones and the rhythmic tapping
of the left hand to engage right hemisphere analogs of the left hemisphere language
systems
-Measured structural changes in connections b/w right hemisphere analogs using
Diffusion tensor imaging (DTI) looking at arcuate fasciculus
○Right arcuate fasciculus is smaller than left because less engage
-Summary: tap left hand, engage right hemisphere (engage in melody) so compensate
damage in left hemisphere
Turn sound into electrical signals
Lecture 4
Saturday, May 26, 2018 1:13 PM