•Adults can even integrate visual and auditory speech information that is not temporally coordinated (similar to the McGurk effect, but not identical to it)
•Authors concluded that ‘our brain can transfer familiarity with the way a person talks into
familiarity with the sound of his/her voice’
Prosodic information can be gleaned from visual speech
•Sentence intonation (question vs. statement)
•Pitch changes associated with lexical tone
oE.g., Mandarin and Cantonese
•Even before they begin speaking, infants detect characteristics of visual speech:
oMatching audio to one of two talking faces
oTelling languages apart based on videos alone
oSusceptible to McGurk effect
this is somewhat in contrast to the other study with the preschool kids, but the findings aren’t completely different
**Question: What implications do these findings have for theories of speech?
•Auditory-based theories of speech perception cannot account for things like the McGurk effect
**Question: What is the relationship between audio and visual speech?
•Amodal accounts of multimodal speech perception claim that, in an important way, speech information is the same whether instantiated as acoustic or optical energy
oThis does not necessarily mean that speech information is equally available in both modalities
oAmodal basically means that even if you know about the McGurk effect, you can’t stop yourself from still falling for it
•Automatic integration supports amodal accounts.
•Late integration theories argue that auditory and visual streams of information are analyzed separately before being combined
•Top-down effects of lexical status support late integration theories
**Question: Then why is it easy to talk on the phone?
•Visual information strongly influences listeners’ perception of auditory speech
•Classic theories of speech perception have a difficult time explaining this
Somatosensory information also plays a role in speech perception:
•Syllables heard simultaneously with cutaneous air puffs were more likely to be heard as aspirated (for example, causing participants to hear ‘b’ as ‘p’)
•This was observed in the absence of training