PSYB51H3 Textbook Notes — Chapters 9–15
Department: Psychology
Course Code: PSYB51H3
Professor: Matthias Niemeier
CHAPTER 9: HEARING: PHYSIOLOGY AND PSYCHOACOUSTICS

What is Sound?
- Sounds are created when objects vibrate
- Vibrations of an object cause its surrounding medium to vibrate as well, and this vibration in turn causes pressure changes in the medium
- Pressure changes are best described as waves
- Sound waves travel at a particular speed depending on the medium, moving faster thru denser substances
  o Sound travels faster in water than in air

Basic Qualities of Sound Waves: Frequency and Amplitude
- Amplitude or intensity: the magnitude of displacement of a sound pressure wave. Amplitude is perceived as loudness.
- Frequency: for sound, the number of times per second that a pattern of pressure change repeats. Frequency is perceived as pitch.
- Hertz (Hz): a unit of measure for frequency. One hertz equals one cycle per second.
- Loudness: the psychological aspect of sound related to perceived intensity (amplitude)
- Pitch: the psychological aspect of sound related mainly to perceived frequency
- Low-frequency sounds correspond to low pitches, and high-frequency sounds correspond to high pitches
- Humans can detect sounds that vary from about 20 to 20,000 Hz
- Elephants hear vibrations at very low frequencies that help detect the presence of large animals
- Sonar systems used by some bats use sound frequencies above 60,000 Hz
- Decibel (dB): a unit of measure for the physical intensity of sound. Decibels define the difference between two sounds as the ratio between two sound pressures. Each 10:1 sound pressure ratio equals 20 dB, and a 100:1 ratio equals 40 dB
  o dB = 20 log(p/p₀)
  o p is the pressure of the sound being described
  o p₀ is a reference pressure, typically defined in auditory research contexts to be 0.0002 dyne/cm²
- Levels are defined as dB SPL (sound pressure level)
- Relatively small decibel changes can correspond to large physical changes
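A quick numerical check of the decibel formula above (a minimal sketch; the function name and pressure values are just for illustration):

```python
import math

def spl_db(p, p0=0.0002):
    """Sound pressure level in dB SPL; p and p0 in dyne/cm^2."""
    return 20 * math.log10(p / p0)

# Each 10:1 pressure ratio adds 20 dB; a 100:1 ratio gives 40 dB
print(spl_db(0.002))  # 10:1 ratio  -> 20.0 dB SPL
print(spl_db(0.02))   # 100:1 ratio -> 40.0 dB SPL
```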
Sine Waves and Complex Sounds
- Sine wave or pure tone: a waveform for which variation as a function of time is a sine function
- Spectrum: a representation of the relative energy present at each frequency
- Harmonic spectrum: the spectrum of a complex sound in which energy is at integer multiples of the fundamental frequency
- Fundamental frequency: the lowest frequency component of a complex periodic sound
- Timbre: the psychological sensation by which a listener can judge that two sounds with the same loudness and pitch are dissimilar. Timbre quality is conveyed by harmonics and other high frequencies.

Basic Structure of the Mammalian Auditory System

Outer Ear
- Sounds are first collected from the environment by the pinna: the outer, funnel-like part of the ear
- Ear canal: the canal that conducts sound vibrations from the pinna to the tympanic membrane and prevents damage to the tympanic membrane
- Tympanic membrane: the eardrum; a thin sheet of skin at the end of the outer ear canal. The tympanic membrane vibrates in response to sound
- While a ruptured eardrum can be excruciating, in most cases a damaged tympanic membrane will heal itself, but it is possible to damage it beyond repair

Middle Ear
- Pinna and ear canal make up the outer ear: the external sound-gathering portion of the ear
- Tympanic membrane is the border between the outer ear and the middle ear: an air-filled chamber containing the middle bones, or ossicles. The middle ear conveys and amplifies vibration from the tympanic membrane to the oval window.
- Ossicle: any of three tiny bones of the middle ear
- Malleus: the first ossicle (connected to the tympanic membrane); the malleus receives vibration from the tympanic membrane and is attached to the incus
- Incus: the middle of the three ossicles, connecting the malleus and the stapes
- Stapes: the last ossicle, connected to the incus on one end; the stapes presses against the oval window of the cochlea on the other end
  o Stapes transmits vibrations of sound waves to the oval window: the flexible opening to the cochlea thru which the stapes transmits vibration to the fluid inside
- Inner ear: a hollow cavity in the temporal bone of the skull, and the structures within this cavity; the cochlea and the semicircular canals of the vestibular system
- Ossicles are the smallest bones in the human body
- Amplify sound vibrations in two ways:
  o Joints are hinged in a way that makes them work like levers
  o Ossicles increase the energy transmitted to the inner ear by concentrating energy from a larger to a smaller surface area
- Tensor tympani: the muscle attached to the malleus; tensing the tensor tympani decreases vibration
- Stapedius: the muscle attached to the stapes; tensing the stapedius decreases vibration
- Main purpose of these muscles is to tense when sounds are very loud, restricting the movement of the ossicles and thus muffling pressure changes that might be large enough to cause damage
- Acoustic reflex: a reflex that protects the ear from intense sounds, via contraction of the stapedius and tensor tympani muscles
  o Follows the onset of loud sounds by 1/5 of a second

Inner Ear
- It is here that the fine changes in sound pressure are translated into neural signals that inform the listener about the world

Cochlear Canals and Membranes
- Cochlea: a spiral structure of the inner ear containing the organ of Corti
- Tympanic canal: one of three fluid-filled passages in the cochlea. The tympanic canal extends from the round window at the base of the cochlea to the helicotrema at the apex. Also called scala tympani.
- Vestibular canal: extends from the oval window at the base of the cochlea to the helicotrema at the apex. Also called scala vestibuli.
- Middle canal: sandwiched between the tympanic and vestibular canals; contains the cochlear partition. Also called scala media.
- Helicotrema: the opening that connects the tympanic and vestibular canals at the apex of the cochlea
- Three canals of the cochlea are separated by two membranes
  o Reissner's membrane: a thin sheath of tissue separating the vestibular and middle canals
  o Basilar membrane: a plate of fibers that forms the base of the cochlear partition and separates the middle and tympanic canals in the cochlea
    - Not really a membrane, because it's not a thin, pliable sheet; it's a plate made up of fibers that have some stiffness
- Cochlear partition: the combined basilar membrane, tectorial membrane, and organ of Corti, which are together responsible for the transduction of sound waves into neural signals
- Vibrations transmitted thru the tympanic membrane and middle-ear bones cause the stapes to push and pull the flexible oval window in and out of the vestibular canal at the base of the cochlea
- This movement of the oval window causes "traveling waves" to flow thru the fluid in the vestibular canal
- A displacement forms in the vestibular canal and travels from the base of the cochlea down to the apex
- By the time the traveling wave reaches the apex, its displacement has mostly dissipated
- Round window: a soft area of tissue at the base of the tympanic canal that releases excess pressure remaining from extremely intense sounds

The Organ of Corti
- Organ of Corti: a structure on the basilar membrane of the cochlea that is composed of hair cells and dendrites of auditory nerve fibers
- Hair cell: any cell that has stereocilia for transducing mechanical movement in the inner ear into neural activity sent to the brain; some hair cells also receive inputs from the brain
- Auditory nerve fibers: a collection of neurons that convey information from hair cells in the cochlea to (afferent) and from (efferent) the brain stem
- Stereocilia: hairlike extensions on the tips of hair cells in the cochlea that, when flexed, initiate the release of neurotransmitters
  o Inner hair cells are arranged in straight rows with shorter stereocilia in front and taller ones in back
  o Outer hair cells stand in rows that form the shape of a V or W
- Tectorial membrane: a gelatinous structure, attached on one end, that extends into the middle canal of the ear, floating above inner hair cells and touching outer hair cells
  o Floats atop the organ of Corti

Inner and Outer Hair Cells
- Hair cells are specialized neurons that transduce one kind of energy into another form
- Deflection of a hair cell's stereocilia causes a change in voltage potential that initiates the release of neurotransmitters, which encourage firing by auditory nerve fibers that have dendritic synapses on hair cells
- Cochlea has only 14,000 hair cells
- Tip link: a tiny filament that stretches from the tip of a stereocilium to the side of its neighbour
- When a stereocilium deflects, the tip link pulls on the taller stereocilium in a way that opens an ion pore, somewhat like opening a gate, for just a tiny fraction of a second
  o Allows potassium ions to flow rapidly into the hair cell, causing rapid depolarization
  o Depolarization leads to a rapid influx of calcium ions and initiation of the release of neurotransmitters from the base of the hair cell to stimulate dendrites of the auditory nerve
  o Firing of auditory nerve fibers completes the process of translating sound waves into patterns of neural activity

Coding of Amplitude and Frequency in the Cochlea
- The larger the amplitude, the higher the firing rate of the neurons that communicate with the brain
- Different parts of the cochlear partition are displaced to different degrees by different sound wave frequencies
- High frequencies cause displacements closer to the oval window; low frequencies cause displacements nearer the apex
- Place code: tuning of different parts of the cochlea to different frequencies, in which information about the particular frequency of an incoming sound wave is coded by the place along the cochlear partition that has the greatest mechanical displacement
- Cochlea as a whole narrows from base to apex, but the basilar membrane widens toward the apex
- Basilar membrane is thick at the base and thinner at the apex
- Cochlea separates frequencies along its length like an acoustic prism
- Afferent fiber: a neuron that carries sensory info to the CNS
- Efferent fiber: a neuron that carries info from the CNS to the periphery
  o When these become active, outer hair cells with which they synapse become physically longer, making the nearby cochlear partition stiffer
  o Outer hair cells make the cochlea more sensitive and more sharply tuned to particular frequencies

Auditory Nerve
- Frequency selectivity
- Threshold tuning curve: a graph plotting the thresholds of a neuron or fiber in response to sine waves with varying frequencies at the lowest intensity that will give rise to a response
  o Researchers insert an electrode very close to a single AN fiber, and then measure how intense the sine waves of different frequencies must be for the neuron to fire faster than its normal firing rate
- Characteristic frequency: the frequency to which a particular AN fiber is most sensitive

Two-Tone Suppression
- When a second tone of a slightly different frequency is added, the rate of neural firing for the first one actually decreases – this is called two-tone suppression
- Suppression effects are pronounced when the second tone has a lower frequency than the first tone

Rate Saturation
- Isointensity curve: a map plotting the firing rate of an auditory nerve fiber against varying frequencies at a steady intensity
- Frequencies such as 1000 Hz, to which the AN fiber had almost no response at low intensity levels, evoke quite substantial responses when intensity is increased
- Rate saturation: the point at which a nerve fiber is firing as rapidly as possible and further stimulation is incapable of increasing the firing rate
- Rate-intensity function: a graph plotting the firing rate of an auditory nerve fiber in response to a sound of constant frequency at increasing intensities
- Low-spontaneous fiber: an AN fiber that has a low rate (less than 10 spikes per second) of spontaneous firing; low-spontaneous fibers require relatively intense sound before they will fire at higher rates. (They are like cones – require more energy)
- High-spontaneous fiber: an AN fiber that has a high rate (more than 30 spikes per second) of spontaneous firing; high-spontaneous fibers increase their firing rate in response to relatively low levels of sound. (They are like the rods of the retina – low light)
- Mid-spontaneous fiber: an AN fiber that has a medium rate (10–30 spikes per second) of spontaneous firing. The characteristics of mid-spontaneous fibers are intermediate between low- and high-spontaneous fibers

Temporal Code for Sound Frequency
- Phase locking: firing of a single neuron at one distinct point in the period of a sound wave at a given frequency. (The neuron need not fire on every cycle, but each firing will occur at the same point in the cycle.)
- Existence of phase locking means that the firing pattern of an AN fiber carries a temporal code: info about the particular frequency of an incoming sound wave is coded by the timing of neural firing as it relates to the period of the sound
- Volley principle: the idea that multiple neurons can provide a temporal code for frequency if each neuron fires at a distinct point in the period of a sound wave but does not fire on every period
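A toy illustration of phase locking and the volley principle (purely illustrative numbers; a sketch, not a model of real AN fibers):

```python
# Phase locking / volley principle as a toy sketch: each of 5 "neurons"
# fires at the same phase of a 1000-Hz tone but only on every 5th cycle;
# pooled across the group, some neuron marks every cycle of the tone.
f = 1000.0            # tone frequency (Hz)
period = 1.0 / f      # 1 ms per cycle
n_neurons = 5

spike_times = {
    i: [cycle * period for cycle in range(i, 20, n_neurons)]
    for i in range(n_neurons)
}

pooled = sorted(t for times in spike_times.values() for t in times)
# Intervals between pooled spikes equal the tone's period (1 ms), even
# though each individual neuron fires at only 200 spikes per second.
print(all(abs((b - a) - period) < 1e-9 for a, b in zip(pooled, pooled[1:])))
```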
Auditory Brain Structure
- Cochlear nucleus: the first brain stem nucleus at which afferent AN fibers synapse
  o Consists of 3 subnuclei
- Superior olive: an early brain stem region in the auditory pathway where inputs from both ears converge
- Neurons from the cochlear nucleus and superior olive travel up the brain stem to the inferior colliculus: a midbrain nucleus in the auditory pathway
- Medial geniculate nucleus: the part of the thalamus that relays auditory signals to the temporal cortex and receives input from the auditory cortex
- Tonotopic organization: an arrangement in which neurons that respond to different frequencies are organized anatomically in order of frequency
- Tonotopic organization is maintained by the primary auditory cortex: the first area within the temporal lobes of the brain responsible for processing acoustic information
- Belt area: a region of cortex, directly adjacent to the primary auditory cortex (A1), with inputs from A1, where neurons respond to more complex characteristics of sounds
- Parabelt area: a region of cortex, lateral and adjacent to the belt area, where neurons respond to more complex characteristics of sounds, as well as to input from other senses

Basic Operating Characteristics of the Auditory System
- Psychoacoustics: the study of psychological correlates of the physical dimensions of acoustics; a branch of psychophysics
- Sounds are measured with respect to frequency, but listeners hear pitch
- Intensity of sound is measured as sound pressure in decibels, but listeners hear loudness

Intensity and Loudness
- Audibility threshold: the lowest sound pressure level that can be reliably detected at a given frequency
- Equal-loudness curves: a graph plotting sound pressure level against the frequency for which a listener perceives constant loudness
  o Equal-amplitude sounds can be perceived as softer or louder than each other, depending on the frequencies of the sound waves
- Temporal integration: the process by which a sound at a constant level is perceived as being louder when it is of greater duration. The term also applies to perceived brightness, which depends on the duration of light

Frequency and Pitch
- For any given frequency increase, listeners will perceive a greater rise in pitch for lower frequencies than they do for higher frequencies
- Listeners can discriminate between tones of 999 and 1000 Hz
- Masking: using a second sound, frequently noise, to make the detection of another sound more difficult
- White noise: noise consisting of all audible frequencies in equal amounts. White noise in hearing is analogous to white light in vision, for which all wavelengths are present
- Critical bandwidth: the range of frequencies conveyed within a channel in the auditory system
- Results from the masking paradigm helped cement the role of place coding in pitch perception by revealing similarities between perceptual effects and physiological findings:
  o Width of the critical bandwidth changes depending on the frequency of the test tone; these widths correspond to the physical spacing of frequencies along the basilar membrane
  o Masking effects are asymmetrical
  o "Upward spread of masking" – for a mask whose bandwidth is below the critical bandwidth of a test tone, the mask is more effective if it is centered on a frequency below the test tone's frequency

Hearing Loss
- 30 million Americans suffer from some form of hearing impairment
- Simplest way to introduce some hearing loss is to obstruct the ear canal, thus inhibiting the ability of sound waves to exert pressure on the tympanic membrane
  o Build-up of ear wax in the ear canal
- Conductive hearing loss: hearing loss caused by problems with the bones of the middle ear
- Otitis media: inflammation of the middle ear, common in children as a result of infection
- Otosclerosis: abnormal growth of the middle-ear bones that causes hearing loss
  o Surgery can improve hearing
- Most common, and most serious, auditory impairment is sensorineural hearing loss: hearing loss due to defects in the cochlea or AN
  o Most often occurs when hair cells are injured
  o Many cancer drugs are ototoxic: producing adverse effects on cochlear or vestibular organs or nerves
- More common cause of hearing loss is damage to hair cells by excessive exposure to noise
- Earliest devices for helping ppl with hearing loss were simple horns
- Today we use hearing aids – can be tuned to provide the greatest amplification only for frequencies in the region of greatest loss (for most ppl, higher frequencies will need to be amplified more)
- Advantage of old horns – allowed listeners to direct their hearing toward the sound source they were most interested in
- Electronic hearing aids amplify everything
- Cochlear prosthetics provide some relief to deaf ppl
  o Artificial cochlear implants with tiny flexible coils with about two dozen miniature electrode contacts

CHAPTER 10: HEARING IN THE ENVIRONMENT

Sound Localization
- For most positions in space, a sound source will be closer to one ear than to the other
- Even though sound travels fast, the pressure wave will not arrive at both ears at the same time
- Intensity of a sound is greater at the ear closer to the source

Interaural Time Difference
- Interaural time difference (ITD): the difference in time between a sound arriving at one ear versus the other
- Can tell whether a sound is coming from our right or left by determining which ear receives the sound first
- Azimuth: the angle of a sound source on the horizontal plane relative to a point in the center of the head between the ears. Azimuth is measured in degrees, with 0 degrees being straight ahead. The angle increases clockwise toward the right, with 180 degrees being directly behind.
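The ITD values in the next bullets can be approximated from head geometry. A minimal sketch using Woodworth's spherical-head formula (the formula and the head-radius value are textbook-standard assumptions, not taken from these notes):

```python
import math

def itd_microseconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's approximation: ITD = (r/c) * (sin(theta) + theta),
    for azimuth theta in radians, head radius r, speed of sound c (m/s)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (math.sin(theta) + theta) * 1e6

print(round(itd_microseconds(0)))   # 0 us   -- straight ahead
print(round(itd_microseconds(90)))  # ~656 us -- directly to one side
```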
- ITDs are largest, about 640 microseconds, when sounds come directly from the left or directly from the right, although this value varies depending on the size of your head
- Sound coming from directly in front of or directly behind the listener produces an ITD of 0; the sound reaches both ears simultaneously

The Physiology of ITDs
  o Medial superior olives (MSOs): a relay station in the brain stem where inputs from both ears contribute to detection of the interaural time difference
    - First places in the auditory system where inputs from both ears converge
  o T.C. Yin and Chan: found neurons in the MSOs whose firing rates increase in response to very brief time differences between inputs from the two ears of cats

Interaural Level Difference
- Interaural level difference (ILD): the difference in level (intensity) between a sound arriving at one ear versus the other
- Sounds are more intense at the ear closer to the sound source because the head blocks the sound pressure wave from reaching the opposite ear
- Properties of the ILD relevant for auditory localization are similar to those of the ITD:
  o Sounds are more intense at the ear that is closer to the sound source and less intense at the ear farther away from the source
  o ILD is largest at 90 and -90 degrees, and it is nonexistent at 0 degrees and 180 degrees
  o Between these two extremes, ILD generally correlates with the angle of the sound source, but because of the irregular shape of the head, the correlation is not quite as precise as it is with ITDs
- Important difference between ITD and ILD: the head blocks high-frequency sounds much more effectively than it does low-frequency sounds, because the long wavelengths of low-frequency sounds "bend around" the head in much the same way a large ocean wave crashes over a piling near the shore

The Physiology of ILDs
  o Neurons sensitive to intensity differences between the two ears can be found in the lateral superior olive (LSO): a relay station in the brain stem where inputs from both ears contribute to detection of the interaural level difference
  o Excitatory connections to the right LSO come from the ipsilateral ear
  o Inhibitory inputs come from the contralateral ear via the medial nucleus of the trapezoid body

Cones of Confusion
- Cone of confusion: a region of positions in space where all sounds produce the same time and level (intensity) differences (ITDs and ILDs)
- As soon as you move your head, the ITD and ILD of a sound source shift, and only one spatial location will be consistent with the ITDs and ILDs perceived at both head positions

Pinna and Head Cues
- Cones of confusion are not a major practical problem for the auditory system because time and intensity are not the only cues for pinpointing the location of sources
- Because of their complex shape, pinnae funnel certain sound frequencies more efficiently than others
- Size and shape of the rest of the body affect which frequencies reach the ear most easily
- Directional transfer function (DTF): a measure that describes how the pinna, ear canal, head, and torso change the intensity of sounds with different frequencies that arrive at each ear from different locations in space (azimuth and elevation)
- Listening to a live orchestra versus listening to an orchestra on headphones
  o Situation similar to visual depth perception of three-dimensionality: pictorial cues can give a limited sense of depth, but to get a true perception of three-dimensionality, we need the binocular disparity info that we normally get only when we're seeing real objects
- Researchers suggest listeners learn about the way DTFs relate to places in the environment thru their extensive experience listening to sounds, while other sources of info, such as vision, provide feedback about location
- Children may update the way they use DTF info during development, and it appears that such learning can continue into adulthood
- Hofman, Van Riswick, and Van Opstal: inserted plastic molds into the folds of adults' pinnae
  o Listeners immediately became much poorer at localizing sounds; after 6 weeks of living with these molds in their ears, the subjects' localization abilities had greatly improved

Auditory Distance Perception
- Simplest cue for judging the distance of a sound source is the relative intensity of the sound
  o Sound becomes less intense with greater distance, so listeners have little difficulty perceiving the relative distance of two identical sound sources
  o However, our assumptions may turn out to be false
- Inverse-square law: a principle stating that as distance from a source increases, intensity initially decreases much faster than distance increases, such that the decrease in intensity is equal to the increase in distance squared. This general law also applies to optics and other forms of energy (see the sketch below)
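A minimal numerical sketch of the inverse-square law for sound (the 20·log10 distance rule follows from the dB formula in Chapter 9; the function name is illustrative):

```python
import math

def level_drop_db(d1, d2):
    """Change in sound pressure level when moving from distance d1 to d2.
    Pressure falls off as 1/d, so intensity (pressure squared) falls off
    as 1/d^2 -- the inverse-square law."""
    return 20 * math.log10(d1 / d2)

print(level_drop_db(1, 2))   # -6.02 dB: doubling the distance costs ~6 dB
print(level_drop_db(1, 10))  # -20.0 dB: 10x the distance costs 20 dB
```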
- Listeners are fairly good at using intensity differences to determine distance when sounds are presented within 1 meter of the head
- Intensity works best as a distance cue when the sound source or listener is moving
  o Sounds that are farther away do not seem to change direction in relation to the listener as much as nearer sounds do
- Another cue for auditory distance is the spectral composition of sounds
  o Sound-absorbing qualities of air dampen high frequencies more than low frequencies, so when sound sources are far away, high frequencies decrease in energy more than lower frequencies as sound waves travel from source to ear
  o Change in spectral composition is noticeable only for distances greater than 1000 meters
  o Similar to aerial perspective
- Another distance cue, the relative amounts of direct versus reverberant energy, informs the listener about distance: when a sound source is close to a listener, most of the energy reaching the ear is direct, whereas reverberant energy provides a greater proportion of the total when the sound source is farther away

Complex Sounds

Harmonics
- Harmonic sounds are among the most common types of sounds in the environment
- Fundamental frequency: the lowest frequency component of a complex periodic sound
- Auditory system is acutely sensitive to the natural relationships between harmonics
- If the first harmonic is removed from a series of harmonics, and only the others are presented, the pitch that listeners hear corresponds to the fundamental frequency, even though it is not part of the sound
- One thing all harmonics have in common is fluctuations in sound pressure at regular intervals corresponding to the fundamental frequency
- Every harmonic of 250 Hz will have an energy peak every 4 ms
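A small sketch of the "missing fundamental" point above: summing harmonics 2–10 of 250 Hz (with the 250-Hz component itself absent) still yields a waveform that repeats every 1/250 s = 4 ms, which is why listeners hear a 250-Hz pitch. The sampling rate and harmonic count here are illustrative choices.

```python
import numpy as np

fs = 48000                      # sampling rate (Hz); chosen divisible by f0
f0 = 250.0                      # fundamental frequency (Hz)
t = np.arange(0, 0.02, 1 / fs)  # 20 ms of signal

# Harmonics 2..10 only -- the fundamental itself is missing
x = sum(np.sin(2 * np.pi * n * f0 * t) for n in range(2, 11))

# The waveform still repeats with the fundamental's period (4 ms = 192 samples)
period_samples = int(fs / f0)
print(np.allclose(x[:period_samples], x[period_samples:2 * period_samples]))  # True
```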
Timbre
- Timbre: the psychological sensation by which a listener can judge that two sounds with the same loudness and pitch are dissimilar. Timbre quality is conveyed by harmonics and other high frequencies
- Perception of timbre is related to the relative energy of different acoustic spectral components

Auditory "Colour" Constancy
- Perception of timbre depends on the environment in which a sound is heard
  o i.e., higher frequencies tend to be reinforced by hard surfaces such as tile floors/concrete walls, but are dampened by soft surfaces such as carpet/curtains
- This is like the problem of colour constancy (chapter 5): in vision, the goal is to perceive the same colors even though the spectrum of illumination can be quite different depending on the type of lighting
- In hearing, surfaces in the environment reflect and absorb energy at different frequencies in ways that change the spectral shape that finally arrives at your ears
- Kiefte and Kluender: used different spectral shapes of the vowel sounds "ee" and "oo" to learn how hearing calibrates for changes in the listening environment
  o Created stimuli that enabled them to separately measure the contributions of tilt and frequency of the second peak when perceiving these vowels
  o Had listeners identify vowels following a sentence like "you will not hear the vowel", but with an interesting twist
  o They created some sentences so that the overall tilt of the sentence was the same as the tilt of the following vowel, either ee-like or oo-like
  o To other sentences they added a peak in the spectrum all the way thru the sentence at the same frequency as the second peak in the vowel that listeners would identify
  o Listeners heard the very same vowels in dramatically different ways depending on which type of manipulated sentence preceded the vowels
  o When tilt stayed the same for both the preceding sentence and the vowel, listeners used only the frequency of the second peak to identify the vowel; when the second peak was present all the way thru the preceding sentence, listeners relied mostly on tilt to identify the vowel

Attack and Decay
- Attack: the part of a sound during which amplitude increases (onset)
- Decay: the part of a sound during which amplitude decreases (offset)
- How quickly a sound decays depends on how long it takes for the vibrating object creating the sound to dissipate energy and stop moving

Auditory Scene Analysis
- Few truly quiet places exist outside the laboratories of hearing scientists and the testing chambers of audiologists
- For an auditory scene, the situation is greatly complicated by the fact that all the sound waves from all the sound sources in the environment are summed together in a single complex sound wave
- Source segregation or auditory scene analysis: processing an auditory scene consisting of multiple sound sources into separate sound images

Spatial, Spectral, and Temporal Segregation
- Spatial segregation – sounds that emanate from the same location in space can typically be treated as if they arose from the same source; a sound that is perceived to move in space can more easily be separated from background sounds that are relatively stationary
- Spectral segregation – sounds with the same or similar pitch are more likely to be treated as coming from the same source and to be segregated from other sounds
- Auditory stream – sounds perceived to emanate from the same source
- Auditory stream segregation: the perceptual organization of a complex acoustic signal into separate auditory events, for which each stream is heard as a separate event
- Johann Sebastian Bach: before stream segregation was discovered by auditory scientists, Bach exploited these auditory effects in his compositions
- Gestalt principle of similarity: sounds that are similar to each other tend to be grouped together into streams

Grouping by Timbre
- Tones that deviate from a rising/falling pattern are heard to "pop out" of the sequence
- Grouping by timbre is particularly robust because sounds with similar timbre usually arise from the same sound source
- Neural processes that give rise to stream segregation can be found throughout the auditory system, from the first stages of auditory processing to the primary auditory cortex to secondary areas of the auditory cortex, such as the belt and parabelt areas

Grouping by Onset
- Sound components that begin at the same time, or nearly the same time, such as harmonics of a musical or speech sound, will also tend to be heard as coming from the same sound source
- Grouping different harmonics into a single complex sound
- If a single harmonic of a vowel sound begins before the rest of the harmonics of the vowel, that lone harmonic is less likely to be perceived as part of the vowel than if the onsets are simultaneous
- R.A. Rasch: showed it's much easier to distinguish two notes from one another when the onset of one precedes the onset of the other by at least 30 ms
  o Noted that musicians playing together in an ensemble such as a string quartet don't begin playing notes at exactly the same time, even when the musical score instructs them to do so
  o Instead, they begin notes slightly before or after one another, and this staggered start helps listeners pick out the individual instruments in the group
- Grouping of sounds with common onsets is consistent with the Gestalt principle of common fate

When Sounds Become Familiar
- Listeners make use of experience and familiarity to separate different sound sources
- It's easier to pick out sounds from a background when you know what you're listening for
- McDermott, Wroblewski, and Oxenham: created complex novel sounds by combining natural sound characteristics in ways that listeners had never heard before
  o Repeatedly played these sounds at the same time and intensity as a background of other novel sounds that did not repeat
  o Listeners could segregate and identify the sound when it repeated

Continuity and Restoration Effects
- Often have to deal with the total masking of one sound source by another for brief periods
- Gestalt principle of good continuation
- Continuous auditory stream is heard to continue behind the masking sound
- Labeled by auditory researchers as "continuity effects" or "perceptual restoration effects"
- Signal detection task
- Kluender and Jenison: used signal detection methodology with a slightly more complex version of the continuity effect
  o Listeners heard tone glides, in which a sine wave tone varies continuously in frequency over time; when noise is superimposed over part of the glide, listeners report hearing the glide continue behind the noise
  o Kluender and Jenison created stimuli in which the middle portion of the glide either was present with the noise or was completely removed
  o In trials in which the noise was shortest and most intense, the signal detection measure of discriminability dropped to 0
  o Perceptual restoration was complete: listeners had no idea whether or not the glide was actually present with the noise
- Imaging studies of humans, who can report when they do and don't hear the tone thru the noise, show metabolic activity in the primary auditory cortex (A1) that is consistent with what listeners report hearing, whether or not the tone is present
Restoration of Complex Sounds
- DeWitt and Samuel: played familiar melodies with notes excised and replaced by noise; listeners perceived the missing notes as having been present and could not report which notes had been removed and replaced with noise; listeners were much less likely to "hear" a missing note in an unfamiliar melody
- Seeba and Klump: trained European starlings to peck when they heard a difference between two parts of starling song, called motifs; when starlings heard motifs with short snippets filled with noise or with silence, they were more likely to peck as if there was a difference between an intact and an interrupted motif when silence filled the gap; starlings were more likely to restore missing bits of familiar song
- Perceptual restoration of speech is more compelling than restoration of music
- R.M. Warren and Obusek: played a sentence with a letter removed and replaced by noise, and listeners still heard the sentence as if it were intact and complete, even when listeners were explicitly warned that a small part of the sentence had been removed (restoration failed when the missing letter was replaced with silence)
- Warren and Sherman: listeners restored a missing sound on the basis of linguistic info that followed the deletion; hints that meaningful sentences actually become more intelligible when gaps are filled with intense noise than when gaps are left silent

CHAPTER 11: MUSIC AND SPEECH PERCEPTION

Music
- Ppl have been using music as a way to express themselves and influence the thoughts and emotions of others for a very long time
- Oldest musical instrument – flutes carved out of vulture bones (30,000 years old)

Music and Emotion
- When listeners hear pleasant-sounding chords preceding a word, they are faster to respond that a word such as charm is positive and slower to respond that a word such as evil is negative
- Some clinical psychologists practice music therapy, thru which ppl sing, listen, play, and move to music to improve mental and physical health
- Music can reduce pain, promote positive emotions, alleviate stress, and improve resistance to disease
- When ppl listen to pleasurable music, it produces changes in heart rate, muscle electrical activity, and respiration, and blood flow increases in brain regions associated with reward and motivation

Musical Notes
- Most important characteristic of any acoustic signal is frequency
- Pitch: the psychological aspect of sound related mainly to perceived frequency

Tone Height and Tone Chroma
  o Important concept in understanding musical pitch is the octave: the interval between two sound frequencies having a ratio of 2:1
  o When one of two periodic sounds is double the frequency of the other, those two sounds are one octave apart
  o "Just intonation" – frequencies of sounds are in simple ratios with one another
  o Set of notes used commonly in Western music is called "equal temperament"
  o Musical pitch is described as having two dimensions:
    - Tone height: a sound quality corresponding to the level of pitch. Tone height is monotonically related to frequency
    - Tone chroma: a sound quality shared by tones that have the same octave interval
  o Visualize musical pitch as a helix; frequency and tone height increase with increasing height on the helix
  o Both a place code and a temporal code can be used in the perception of pitch
  o Neurons in the auditory nerve signal frequency both by their location in the cochlea and by the timing of their firing
  o For frequencies greater than 5000 Hz, temporal coding doesn't contribute to the perception of pitch, and pitch discrimination becomes appreciably worse because only place coding can be used
  o Most musical instruments produce notes that are below 4000 Hz

Chords
  o Chord: a combination of three or more musical notes with different pitches played simultaneously
  o Simultaneous playing of two notes is a dyad
  o Major distinction between chords is whether they are consonant or dissonant
  o One consonant relationship is the octave; other consonant intervals are the perfect fifth (3:2) and the perfect fourth (4:3)
  o Dissonant intervals are defined by less elegant ratios; i.e., the minor second (16:15) and the augmented fourth (45:32) do not sound very pleasing
  o Augmented fourth was called the "devil in music" in the Middle Ages
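A small sketch tying together the octave, equal temperament, and the interval ratios above (the 2^(1/12) semitone rule and the A4 = 440 Hz reference are standard conventions, not stated in these notes):

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12),
# so 12 semitones give exactly one octave (a 2:1 ratio).
A4 = 440.0  # Hz, standard tuning reference

def semitones_up(f, n):
    """Frequency n equal-tempered semitones above f."""
    return f * 2 ** (n / 12)

print(semitones_up(A4, 12) / A4)  # 2.0     -- octave (2:1), consonant
print(semitones_up(A4, 7) / A4)   # ~1.4983 -- close to the perfect fifth (3:2)
print(semitones_up(A4, 5) / A4)   # ~1.3348 -- close to the perfect fourth (4:3)
print(semitones_up(A4, 1) / A4)   # ~1.0595 -- near the minor second (16:15 = 1.0667)
```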
Cultural Differences
  o All of our discussion thus far has concerned the heptatonic (seven-note) scale
  o The pentatonic scale has five notes per octave
    - In gospel, jazz, rock, blues
    - Traditional in Asia
  o Mandarin, Thai, and Vietnamese are tone languages, giving a singsongy impression to some English listeners
  o Speakers of tone languages use changes in voice pitch to distinguish words in their language
  o In scales in which octaves contain fewer notes, the notes may be more loosely tuned than are notes in the heptatonic Western scale, so a wider range of pitches could qualify for a given note
  o Ppl hear musical notes in different ways
  o When Javanese and Western musicians hear intervals between notes, their estimates of the intervals vary according to how well those notes correspond to Javanese versus Western scales, respectively
  o Lynch and Eilers: tested the degree to which 6-month-old infants notice inappropriate notes within both the traditional Western scale and the Javanese pélog scale
    - Infants appeared to be equally good at detecting such "mistakes" within both scales, but adults from Florida were better at detecting deviations from the Western scale

Making Music
- Notes or chords can form a melody: a sequence of notes or chords perceived as a single coherent structure
- Melody is defined by its contour – the pattern of rises and declines in pitch – rather than by an exact sequence of sound frequencies
- Tempo: the perceived speed of the presentation of sounds
- Any melody can be played at either a fast or a slow tempo
- If the notes of a given sequence are played with different relative durations, we will hear completely different melodies

Rhythm
  o Most activities have rhythm – walking, galloping, finger tapping, waving, swimming
  o Thaddeus Bolton conducted experiments in which he played a sequence of identical sounds perfectly spaced in time; they had no rhythm
    - His listeners readily reported that the sounds occurred in groups of 2, 3, or 4
    - They reported hearing the first sound of a group as "accented", or "stressed", while the remaining sounds were not
  o Listeners are predisposed to grouping sounds into rhythmic patterns
  o Sounds that are louder, longer, and higher in pitch are more likely to be heard as leading their group
  o Syncopation: any deviation from a regular rhythm
    - Accenting a note when it is expected to be unaccented; not playing a note when it is expected
    - When we listen to syncopated polyrhythms, one of the two rhythms becomes the dominant or controlling rhythm; the other rhythm tends to be perceptually adjusted to accommodate the first
    - Accented beat of the subordinate rhythm shifts in time; syncopation is the perception that beats in the subordinate rhythm have actually traveled backward or forward in time

Melody Development
  o Like rhythm, melody is a psychological entity
  o Saffran et al.: created 6 deliberately novel "melodies" composed of sequences of three tones
    - Infants heard only 3 minutes of continuous random repetitions of the melodies; next, infants heard both the original melodies and a series of new three-tone sequences
    - New sequences contained the same notes as the originals, but one part of the sequence was taken from one melody and another part from another melody
    - Because infants responded differently to the new melodies, we can deduce that they had learned something about the original melodies

Speech
- Kenny Muhammad – the human orchestra
- Vocal tract: the airway above the larynx used for the production of speech. The vocal tract includes the oral tract and nasal tract.
- Unlike other animals, the human larynx is positioned low in the throat – disadvantages are that humans choke on food more easily, and we cannot swallow and breathe at the same time

Speech Production
- 3 basic components: respiration, phonation, and articulation

Respiration and Phonation
  o To initiate a speech sound, the diaphragm pushes air out of the lungs, thru the trachea, up to the larynx
  o At the larynx, air passes thru the 2 vocal folds, which are made up of muscle tissue that can be adjusted to vary how freely air passes thru the opening between them
  o Phonation: the process thru which the vocal folds are made to vibrate when air pushes out of the lungs
  o Rate at which vocal folds vibrate depends on their stiffness and mass
  o Vibration of the vocal folds creates a harmonic spectrum, as described in chapter 10
  o First harmonic corresponds to the actual rate of physical vibration of the vocal folds – the fundamental frequency

Articulation
  o Vocal tract – area above the larynx; oral tract and nasal tract combined
  o Humans can change the shape of the vocal tract by manipulating the jaw, lips, tongue body, tongue tip, etc.
  o Articulation: the act or manner of producing a speech sound using the vocal tract
  o Formant: a resonance of the vocal tract. Formants are specified by their center frequency and are denoted by integers that increase with relative frequency
    - Peaks in the speech spectrum
    - Labeled by number, from lowest frequency to highest
  o For short vocal tracts, formants are at higher frequencies than for longer vocal tracts
  o One of the most distinctive characteristics of speech sounds is that their spectra change over time
  o Spectrogram: in sound analysis, a three-dimensional display that plots time on the horizontal axis, frequency on the vertical axis, and amplitude (intensity) on a color or gray scale
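A minimal sketch of computing such a spectrogram, assuming SciPy is acceptable; the rising tone glide here stands in for a speech-like frequency sweep:

```python
import numpy as np
from scipy import signal

fs = 16000                                    # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
x = signal.chirp(t, f0=300, f1=3000, t1=1.0)  # a rising tone glide

# Rows are frequency bins, columns are time frames, values are intensity --
# the three dimensions of a spectrogram
freqs, times, Sxx = signal.spectrogram(x, fs)
print(Sxx.shape)  # (frequency bins, time frames)
```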
Classifying Speech Sounds
  o Vowel sounds are made with an open vocal tract, and they vary mostly in how high or low and how far forward or back the tongue is placed in the oral tract, along with whether or not the lips are rounded
  o Consonants are produced by obstructing the vocal tract in some way, and each consonant sound can be classified according to three articulatory dimensions:
    - 1. Place of articulation – airflow can be obstructed at the lips, at the alveolar ridge just behind the teeth, or at the soft palate
    - 2. Manner of articulation – airflow can be: totally obstructed; partially obstructed; only slightly obstructed; first blocked and then allowed to sneak thru; or blocked at first from going thru the mouth but allowed to go thru the nasal passage
    - 3. Voicing – the vocal cords may be vibrating or not vibrating
  o Speech sound repertoires of languages have developed over generations to include only sounds that are relatively easy to tell apart

Speech Perception
- We produce about 10–15 consonants and vowels per second, and can double this rate if we're in a hurry
- Coarticulation: the phenomenon in speech whereby attributes of successive speech units overlap in articulatory or acoustic patterns

Coarticulation and Lack of Invariance
  o Explaining how listeners understand speech despite all this variation has been one of the most significant challenges for speech researchers
  o Context sensitivity due to coarticulation presents one of the greatest difficulties in developing computer recognition of speech
  o We cannot program or train a computer to recognize a speech sound without also taking into consideration which speech sounds precede and follow that sound
  o We cannot identify those preceding and following sounds without also taking into consideration which sounds precede and follow them

Categorical Perception
  o Categorical perception: for speech as well as other complex sounds and images, the phenomenon by which the discrimination of items is no better than the ability to label items
  o 3 qualities define categorical perception: a sharp labeling (identification) function; discontinuous discrimination performance; and the fact that researchers can predict discrimination performance on the basis of labeling data
  o Listeners report hearing differences between sounds only when those differences would change the label of the sound, so the ability to discriminate sounds can be predicted by how listeners label the sounds

How Special is Speech?
  o Researchers have suspected that humans evolved special mechanisms just for perceiving speech
  o "Motor theory" of speech perception holds that the processes used to produce speech sounds can somehow be run in reverse to understand the acoustic speech signal
  o Problems with the motor theory:
    - Turns out that speech production is at least as complex as speech perception, if not more so
    - Numerous demonstrations have shown that nonhuman animals can learn to respond to speech signals in much the same way that human listeners do
    - Japanese quail can be taught to tell "d" from "b" and "g"

Coarticulation and Spectral Contrast
  o The perception of coarticulated speech appears to be at least partially explained by some fundamental principles of perception that you've already read about
  o Because coarticulation always causes a speech sound to become more like the previous speech sound, auditory processes that enhance the contrast between successive sounds undo this assimilation
  o Listeners are more likely to perceive "bah" when it is preceded by the vowel sound "ee" and to perceive "dah" when it is preceded by "oo"

Using Multiple Acoustic Cues
  o We spend a large chunk of our waking lives listening to speech and identifying people by their faces
  o Many small differences must be used together in order to discriminate different speech sounds and different faces
  o At the same time, other stimulus differences must be ignored so that multiple instances of the same speech sound or multiple images of the same face can be classified properly
  o Speech is special because humans have evolved unique anatomical machinery for producing it, and we spend a great deal of time practicing the perception of speech
  o Listeners can make use of their experience with the co-occurrence of these multiple acoustic differences to understand speech
  o We don't need individual acoustic invariants to distinguish speech sounds; we just need to be as good at pattern recognition for sounds as we are for visual images
  o Neurons in the brain are best at integrating multiple sources of info to recognize patterns

Learning to Listen
- Experience is important for auditory perception
- Unlike vision, experience with speech begins very early in development; babies gain significant experience with speech even before they're born
- Measurements of heart rate as an indicator of the ability to notice a change between speech sounds have revealed that late-term foetuses can discriminate between different vowel sounds
- Newborns prefer hearing their mother's voice over other women's voices
- 4-day-old French babies preferred hearing French instead of Russian; newborns prefer hearing particular children's stories that were read aloud by their mothers during the third trimester of pregnancy

Becoming a Native Listener
  o Acoustic differences that matter critically for one language may be irrelevant or even distracting in another language
  o Spanish uses only the 5 vowel sounds "ee", "oo", "ah", "ay", "oh" – English uses numerous other vowels
  o The "r/l" distinction is difficult for Japanese ppl to pick up when they learn English as a second language
  o Because the difference between "l" and "r" is irrelevant to native Japanese speakers when they're learning their native language, it is adaptive for them to learn to ignore it
  o The difference between "ee" and "ih" is less perceptible to the Spanish speaker because both of these English sounds are similar to the Spanish "ee"
  o One study found that by 6 months of age, infants from Seattle were more likely to notice acoustic differences that distinguish two English vowels than to notice equivalent differences between Swedish vowels, and infants from Stockholm were more likely to notice the difference between two Swedish vowels than between two English vowels
  o Learning is most difficult when both of the sounds in the second language are similar to a single sound in the first language (i.e., "r" and "l" for Japanese speakers learning English)
  o Learning is easier if the two new sounds are both unlike any sound in the native language (i.e., English listeners have no problem distinguishing click sounds from Zulu because Zulu clicks are unlike any English sounds)
  o Picking up the distinctions in a second language is easiest if the second language is learned at the same time as the first

Learning Words
  o The whole point of producing and perceiving speech sounds is to put them together to form words, which are the units of language that convey meaning
  o No string of speech sounds is inherently meaningful
  o 8-month-old infants can learn to pick out words from streams of continuous speech based on the extent to which successive syllables are predictable or unpredictable. While sitting on a parent's lap, infants heard 2-minute sequences of syllables. In the second part of the experiment, infants were familiar with 3-syllable sequences that they had heard before, but they noticed that they had never heard other syllable combinations.
  o Saffran et al. suggest that infants in their study learned the words by being sensitive to the statistics of the sequences of sounds that they had heard in the first part of the experiment
  o When sounds that are rarely heard together occur in combination, that's a sign that there is a break between two words

Speech in the Brain
- For most ppl, the left hemisphere is dominant for language processing
- Hearing sounds of any kind activates the primary auditory cortex
- Rosen et al.: developed a clever way to tease apart cortical responses to acoustic complexity from responses to speech per se
  o Began with complete sentences that included natural changes in both amplitude and frequency, and replaced voicing vibration with noise
  o Next, the researchers took away changes in frequency, leaving behind only changes in amplitude
  o Finally, to match the amount of acoustic complexity, they created hybrid "sentences" by adding the amplitude changes of one sentence to the frequency changes of another
  o Neural activity was found bilaterally in response to unintelligible hybrid "sentences"
  o Responses in the left temporal lobe became dominant only when sentences were intelligible because amplitude and frequency changes coincided properly
  o Appears that language-dominant hemisphere responses depend on listeners using speech for linguistic understanding, and not on acoustic complexity alone
- Cortical processes related to speech perception can be distinguished from brain processes that contribute to the perception of other complex sounds in two other ways:
  o Listeners have a wealth of experience simultaneously hearing speech and viewing talkers' faces, and the McGurk effect is evidence of the profound effects that visual cues can have on the way speech sounds are perceived
- Zatorre: studied a group of ppl with cochlear implants who previously had been deaf
  o These listeners exhibited increased brain activity in the visual cortex when listening to speech
  o Zatorre hypothesized that this activation of visual areas of the brain is the result of increased experience and ability with lip-reading for these previously deaf individuals

CHAPTER 13: TOUCH
- Term touch is used to refer to the sensations caused by mechanical displacements of the skin
- Use the term tactile to refer to the mechanical interactions of touch
- Kinesthesis: perception of the position and movement of our limbs in space
- Proprioception: perception mediated by kinaesthetic and vestibular receptors
- Somatosensation: collectively, all the sensory signals from the body
- Pain serves as a warning system that tells us when something might be internally wrong or when an external stimulus might be dangerous
- Temperature sensations enable us to seek or create a thermally safe environment
- Mechanical sensations provide a powerful means of communicating our thoughts and emotions nonverbally
- We must always be in direct contact with an object to perceive it by touch (excluding the sun and a jackhammer)

Touch Physiology

The Skin and Its Tactile Receptors
- Skin is the largest and heaviest organ, approximately 1.8 square meters and 4 kilos
- Epidermis: the outer of two major layers of skin
- Dermis: the inner of two major layers of skin, consisting of nutritive and connective tissues, within which lie the mechanoreceptors
- Each type of receptor can be characterized by three attributes:
  o 1. Type of stimulation to which the receptor responds
  o 2. Size of the receptive field
  o 3. Rate of adaptation (fast versus slow)
- Tactile Receptors
  o Mechanoreceptor: a sensory receptor that responds to mechanical stimulation (pressure, vibration, movement)
  o Consists of a nerve fiber and an associated expanded ending
  o Fall into a class called A-beta fibers, which have wide diameters that permit very fast neural conduction
  o 4 populations found in the palm: Meissner corpuscles, Merkel cell neurite complexes, Ruffini endings, Pacinian corpuscles
  o Meissner corpuscle: a specialized nerve ending associated with fast-adapting fibers that have small receptive fields (FA I)
  o Merkel cell neurite complex: a specialized nerve ending associated with slowly adapting fibers that have small receptive fields (SA I)
  o Pacinian corpuscle: a specialized nerve ending associated with fast-adapting fibers that have large receptive fields (FA II)
  o Ruffini ending: a specialized nerve ending associated with slowly adapting fibers that have large receptive fields (SA II)
  o Meissner and Merkel receptor endings are located at the junction of the epidermis and dermis
  o Ruffini and Pacinian endings are embedded deeply in the dermis
  o SA I fibers respond to steady downward pressure, fine spatial details, and low-frequency vibrations < 5 Hz
    - Important for texture and pattern perception
  o SA II fibers respond to sustained downward pressure and lateral skin stretch
    - More than one SA II fiber must be stimulated to perceive a tactile sensation
  o FA I fibers respond to low-frequency vibrations from 5 to 50 Hz
  o FA II fibers respond to high-frequency vibrations from 50 to 700 Hz
    - Such vibrations occur whenever an object first makes contact with the skin
  o K.O. Johnson: feeling the shape of a key in your pocket requires the SA I channel; shaping fingers to grasp the key involves the SA II channel; inserting the key and increasing grip force so the key doesn't slip involves the FA I channel; the FA II channel tells you when the key has hit the end of the keyhole
- Kinesthetic Receptors
  o Kinesthetic: referring to perception involving sensory mechanoreceptors in muscles, tendons, and joints
  o Play a role in sensing where our limbs are and what kinds of movements we're making
  o Muscle spindle: a sensory receptor located in a muscle that senses its tension; muscle spindles convey the rate at which the muscle fibers are changing in length
  o Receptors in tendons provide signals about the tension in muscles attached to the tendons, and receptors directly in the joints themselves come into play particularly when a joint is bent to an extreme angle
  o Patient Ian Waterman: the cutaneous nerves that connected Waterman's kinesthetic and other mechanoreceptors to his brain were destroyed; Waterman is now completely dependent on vision to tell him about the positions of his limbs in space
- Thermoreceptors
  o Thermoreceptor: a sensory receptor that signals info about changes in skin temperature
  o Warmth fiber: a sensory nerve fiber that fires when skin temperature increases
  o Cold fiber: a sensory nerve fiber that fires when skin temperature decreases
  o Under normal conditions, the skin is kept between 30 and 36 degrees Celsius
  o Also kick in when we make contact with an object that is warmer or colder than our skin
- Nociceptors
  o Nociceptor: a sensory receptor that transmits info about noxious stimulation that causes damage or potential damage to the skin
  o Pain begins with nociceptors, which can be divided into two types of nerve fibers
  o A-delta fiber: an intermediate-sized, myelinated sensory nerve fiber that transmits pain and temperature signals
  o C fiber: a narrow-diameter, unmyelinated sensory nerve fiber that transmits pain and temperature signals
  o A-delta fibers respond to strong pressure or heat
  o C fibers respond to intense stimulation of various sorts: pressure, heat, cold, noxious chemicals
  o Hansen's disease (leprosy) and diabetes are characterized by loss of pain sensation
  o Patient Miss C: lacked pain sensation; died at 29 from infections that could have been prevented in someone who was alerted to injury by painful sensations

From Skin to Brain
- Touch messages must travel as far as 2 meters to get from the skin and muscles of the feet to the brain
- Info moves up thru the spinal cord
- There are a number of somatosensory nerve trunks, arising in the hands, arms, feet, legs, and other areas of skin
- Axons in these nerve trunks synapse first in the spinal cord
- Once in the spinal cord, touch info proceeds upward toward the brain via two pathways:
  o The evolutionarily older spinothalamic pathway: the route from the spinal cord to the brain that carries most of the info about skin temperature and pain; the slower of the two pathways
  o Dorsal column–medial lemniscal (DCML) pathway: the route from the spinal cord to the brain that carries signals from skin, muscles, tendons, and joints; conveys info more quickly
- Neurons in the DCML pathway first synapse in the cuneate and gracile nuclei; activity is then passed on to neurons that synapse in the ventral posterior nucleus of the thalamus
- From the thalamus, much of the touch info is carried up to the cortex into somatosensory area 1 (S1, the primary receiving area for touch in the cortex), located in the parietal lobe, in the postcentral gyrus just behind the central sulcus
- Neurons in S1 communicate with somatosensory area 2 (S2, the secondary receiving area for touch in the cortex), which lies in the upper bank of the lateral sulcus, and with other cortical areas
- Somatotopic: spatially mapped in the somatosensory cortex in correspondence to spatial events on the skin
- Homunculus: a maplike representation of regions of the body in the brain
- Each map has a twin homunculus
- Sensory homunculus derived from Wilder Penfield's studies of exposed brains during surgery
- Tight correspondence between body parts and areas of S1 can have unfortunate side effects for amputees
- Phantom limb: sensation perceived from a physically amputated limb of the body
- Ramachandran: made the observation that amputees often report feeling sensations in their phantom arms and hands when their faces or remaining limbs are touched; the source of confusion was traced to an idiosyncrasy in the homunculus; apparently the hand and arm areas of S1 are invaded by neurons carrying info from touch receptors in the face, and the brain attributes the activity to stimulation from the missing limbs
- Sense of touch is divided between what and where systems in higher cortical areas:
  o Patient studied by Reed et al.: showed impairment in the ability to recognize objects by touch but showed no deficit in her spatial ability; another patient could locate and manipulate objects by touch without recognizing them
- Pascual-Leone and Hamilton: deprived normal, sighted volunteers of visual stimulation by blindfolding them for 5 days; each day, volunteers participated in an fMRI study during which pairs of Braille patterns were presented to the right index finger; subjects were required to judge whether the two patterns in each pair were the same or different; on the first day, only the left side of S1 was activated, but as the days progressed, activation in S1 declined while increasing in V1; apparently V1 took over processing the spatial patterns; removing the blindfold resulted in a full return to what had been neurally observed before blindfolding
- Neural plasticity: the ability of neural circuits to undergo changes in function or organization as a result of previous activity

Pain
- Multiple Levels of Pain
  o Substantia gelatinosa: a jellylike region of interconnecting neurons in the dorsal horn of the spinal cord
  o Dorsal horn: a region at the rear of the spinal cord that receives inputs from receptors in the skin
  o Gate control theory: a description of the pain-transmitting system that incorporates modulating signals from the brain
  o Nociceptive signals arrive at the substantia gelatinosa
  o According to gate control theory, the bottom-up signals from the nociceptors can be blocked via a feedback circuit located in the dorsal horn
  o When gate neurons send excitatory signals, the sensory info is allowed to go thru, but inhibitory signals from the gate neurons cancel transmissions to the brain; the results of these interactions at the spinal cord are tra