recognize speech sounds.
•Like our ability to recognize faces visually, the auditory system recognizes the
patterns underlying speech rather than just the sounds themselves.
•Using fMRI scans, Belin, Zatorre, and Ahad found that some regions of the
brain responded more when people heard human vocalizations (both speech and
non-speech) than when they heard only natural, non-vocal sounds. The regions showing
the largest difference were located in the temporal lobe, on the auditory cortex.
•When it comes to analyzing the detailed information for speech, the left
hemisphere plays a larger role.
•The analysis of speech usually begins with its elements, or phonemes.
•Phonemes are the elements of speech – the smallest units of sound that
allow us to distinguish the meaning of a spoken word.
•Voice-onset time – the delay between the initial sound of a consonant and
the onset of vibration of the vocal cords.
•Voicing is the vibration of your vocal cords. The distinction between voiced
and unvoiced consonants permits us to distinguish between /p/ and /b/,
between /k/ and /g/, and between /t/ and /d/.
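•A minimal Python sketch of the voiced/unvoiced distinction above: a stop consonant with a short voice-onset time is heard as voiced, one with a long delay as unvoiced. The 30 ms boundary and the example VOT values are illustrative assumptions, not measurements from the studies in these notes.

# Toy classifier: voiced vs. unvoiced stop consonants by voice-onset time (VOT).
# The boundary and the example values below are assumed for illustration only.

VOT_BOUNDARY_MS = 30  # assumed perceptual boundary, in milliseconds

def classify_by_vot(vot_ms):
    """Short VOT: vocal cords vibrate almost immediately -> voiced.
    Long VOT: vibration is delayed after the consonant's release -> unvoiced."""
    return "voiced" if vot_ms < VOT_BOUNDARY_MS else "unvoiced"

# Illustrative VOTs (ms) for the pairs mentioned above: /b/-/p/, /g/-/k/, /d/-/t/
examples = {"/b/": 5, "/p/": 60, "/g/": 10, "/k/": 70, "/d/": 8, "/t/": 65}
for phoneme, vot in examples.items():
    print(f"{phoneme}: VOT = {vot} ms -> {classify_by_vot(vot)}")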
•Phonemic discrimination begins with auditory processing of the sensory
differences, and this occurs in both hemispheres. However, regions of the left
auditory cortex seem to specialize in recognizing the special aspects of speech.
•Ganong found that the perception of a phoneme is affected by the sounds that
follow it. We recognize speech sounds in pieces larger than individual phonemes.
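•A rough Python sketch of this Ganong-style lexical effect: an ambiguous initial sound is heard as whichever phoneme turns the following sounds into a real word. The gift/kiss example and the tiny word list are assumptions for illustration, not stimuli from Ganong's study.

# Toy illustration of lexical bias in phoneme perception: an onset that is
# ambiguous between two phonemes is resolved toward the reading that forms a
# known word. The word list is an illustrative assumption.

KNOWN_WORDS = {"gift", "kiss", "goat", "coat"}

def resolve_ambiguous_onset(candidates, rest):
    """Return the candidate phoneme that makes a known word with the sounds
    that follow; fall back to the first candidate if the context is no help."""
    matches = [c for c in candidates if c + rest in KNOWN_WORDS]
    return matches[0] if len(matches) == 1 else candidates[0]

print(resolve_ambiguous_onset(("g", "k"), "ift"))  # -> g ("gift" is a word, "kift" is not)
print(resolve_ambiguous_onset(("g", "k"), "iss"))  # -> k ("kiss" is a word, "giss" is not)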
•Phonemes are combined to form morphemes, which are the smallest units of
meaning in language.
•The rules of a particular language determine how phonemes can be
combined to form morphemes. Ex: the word fastest contains two morphemes: fast,
which is a free morpheme, because it can stand on its own and still have
meaning, and -est (pronounced /ist/), which is a bound morpheme. Bound morphemes cannot
stand on their own and must be attached to other morphemes to provide