Exam Review.docx


Department
Psychology
Course
PSY280H5
Professor
Giampaolo Moraglia
Semester
Fall

The World of Sounds

Characteristics of Sound
 Two definitions of sound:
  o Physical: sound is pressure changes in the air or other medium
  o Perceptual: sound is the experience we have when we hear
 Physical dimensions of sound
 Amplitude/loudness is determined by sound pressure level
  o Difference between high and low peaks in the sound wave
     Higher amplitude = louder sound
  o A diaphragm makes the area vibrate and causes pressure on the eardrum that produces sound
  o Condensation = pushing surrounding molecules together, causing increased density and air pressure
  o Rarefaction = wave going down (molecules spread out, causing decreased density and air pressure)
  o Sound wave = pattern of air pressure changes
     Air pressure changes move outwards, but air molecules move back and forth (stay in the same place)
  o Sound pressure level is expressed in decibels (a log scale)
     Increasing the sound level by 10 dB almost doubles the sound's loudness
     0 dB = relative amplitude 1 → quiet midnight
     120 dB = relative amplitude 1,000,000 → gunshot
     Above 140 dB will harm/damage the inner ear
 Frequency/pitch has an audible range (in humans) of 20–20,000 Hz
  o Number of cycles per second at which the change in pressure repeats
     Higher frequency = higher pitch
  o A tuning fork causes periodic condensation and rarefaction of air molecules
     Produces a sinusoidal wave (smooth/pure)
  o Sounds in real life are complex, unlike the tuning fork
  o Tone height: the increase in perceived pitch that accompanies increases in a tone's fundamental frequency
  o Tone chroma = notes with the same letter
  o Octave = interval between notes (frequency doubles for each octave)
  o Tones separated by octaves have the same tone chroma
  o Pitch is determined not by the presence of the fundamental frequency, but by the information that indicates the fundamental frequency (i.e. spacing of harmonics and repetition rate)
  o Effect of the missing fundamental: constancy of pitch, even when the fundamental is removed
  o Periodicity pitch = the pitch we perceive in tones that have had harmonics removed
  o Pitch neurons: respond to stimuli associated with a certain pitch even if these stimuli have different harmonics
 Timbre is the quality that distinguishes between two tones that have the same loudness, pitch and duration, but still sound different
  o Differences in the harmonics of different instruments are one factor that causes musical instruments to have different timbres
  o Timbre also depends on the time course of the tone's attack (build-up of sound at the beginning of the tone) and on the time course of its decay (decrease in sound at the end of the tone)
 Complex tone: the repetition rate of a complex tone is called the fundamental frequency of the tone
  o Periodic complex tones consist of a number of pure tones
  o Frequency spectra provide a way of indicating a complex tone's fundamental frequency and harmonics without drawing the waveform
     Auditory nerves are only equipped to conduct specific frequencies, not complex wave patterns
 Principles of additive synthesis and Fourier analysis
  o Ear → Fourier analysis: break down a complex waveform into its pure tone components
  o Brain → additive synthesis: add pure tones to create a complex tone

External (Outer) and Middle Ear
 Outer ear – pinna and auditory canal
  o Pinna helps with sound localization
  o Auditory canal – tube-like, 3 cm long structure
     It protects the tympanic membrane (eardrum)
     Resonance occurs when sound waves reflected back from the closed end of the auditory canal interact with sound waves entering the canal
     The resonant frequency of the canal amplifies frequencies between 1000–5000 Hz
     Resonant frequency – the frequency reinforced the most
       Depends on the length of the canal
 Middle ear – two cubic cm separating the inner from the outer ear
  o It contains the three ossicles (the tiniest bones in the body)
     Malleus – moves due to the
vibration of the tympanic membrane
     Incus – transmits the vibrations of the malleus
     Stapes – transmits the vibrations of the incus to the inner ear via the oval window of the cochlea
  o There are synovial joints between the ossicles, controlled by skeletal muscles
 Sound has to travel through different media before reaching the receptors of the inner ear
  o 90% of the sound energy is lost in the passage from air to the denser medium
  o The ear must compensate for this

Functions of the Middle Ear
 Impedance matching: increasing the amplitude of pressure caused by sound (on the internal ear) to compensate for the loss of energy incurred during the transmission of sound
  o Impedance = resistance to flow
  o Ratio of tympanic membrane to stapes footplate = 17:1
     A large vibration is the outcome
  o Ratio of the stapes lever to the malleus = 1.3:1
     The stapes moves significantly, while the malleus moves a little
 Muscles of the ossicles and the attenuation reflex
  o Protective function for the internal ear
     Reduces the impact of sound that could be harmful
     2 muscles → by pulling on the stapes, the outward impact is reduced
  o Masks background noise
  o Reduces sensitivity to hearing one's own voice
     May also occur through higher neural processes (masking)
  o Hyperacusis → increased sensitivity and intolerance to sounds within the normal range of amplitude (occasionally one's own voice)
     Commonly caused by paralysis of the stapedius muscle
       Innervated by the VII cranial nerve (facial nerve)

Function of the Ossicles
 The outer and middle ear are filled with air
 The inner ear is filled with fluid that is much denser than air
 Pressure changes in air transmit poorly into the denser medium
 The ossicles act to amplify the vibration for better transmission to the fluid
 Middle ear muscles dampen the ossicles' vibrations to protect the inner ear from potentially damaging stimuli

Organ of Corti
 The petrous temporal bone – the hardest bone in the body
  o A very delicate organ sits within this bone
 Inner ear → cochlea and vestibular apparatus
  o Cochlea for hearing
  o Vestibular apparatus for posture
  o Nerve = 8th cranial nerve (vestibulocochlear nerve)
 The inner ear structure important for hearing is the cochlea
  o Fluid-filled snail-like structure (35 mm long) set into vibration by the stapes
  o Divided into the scala vestibuli (upper ½) and scala tympani (lower ½) by the cochlear partition (membranous cochlea)
     Scala media is the medium within the membranous cochlea
  o The cochlear partition extends from the base (stapes end) almost to the apex (far end)
  o The organ of Corti is inside the cochlear partition
  o The stapes opens into the scala vestibuli
  o The scala tympani ends at the secondary tympanic membrane in the round window
  o The cochlea coils 3 times
  o Outermost layer = bony cochlea
  o The helicotrema at the apex is where the two scalae connect
 Organ of Corti → key structures
  o The basilar membrane vibrates in response to sound and supports the organ of Corti
     The organ of Corti rests on the basilar membrane
  o Inner and outer hair cells are the receptors for hearing
     Cilia, which protrude from the tops of the cells, are where sound acts to produce electrical signals
  o The tectorial membrane extends over the hair cells
  o Pillars: support the reticular lamina
  o The reticular lamina anchors the outer hair cells
     Towards the apex there are more outer hair cells
  o Supporting cells are very important → offer metabolic and physical support
  o Inner and outer hair cells have stereocilia
     Inner: one row
     Outer: W shape
 Hair cell "tip links"
  o Extensions that connect the tips of cilia
  o Extremely sensitive to vibrations
  o Depending on the deflection → different action potentials
  o Usher's syndrome = defective tip links
 Hair cell transduction: conversion of vibrations into electrical signals
  o Cilia of the IHC bend in response to movement of the organ of Corti and the tectorial membrane
  o Movement in one direction opens ion channels (depolarization)
     Relaxed = potassium channels closed
     When larger amounts of K+ and Ca2+ enter the channel, the tip link stretches
  o Movement in the other direction closes the channels
  o Back-and-forth bending of hair cells causes bursts of electrical
signals

 Cochlear microphonic potentials
  o Potassium is continuously pumped into the endolymph, keeping it extremely positive (140 mV)
     The only place in the body where the voltage is so high
  o Perilymph is a fluid like extracellular fluid

Cochlear Microphone
 Basilar membrane and resonating frequencies
  o The base is narrow and the membrane widens towards the apex
  o Fibers run across the basilar membrane
     Apex = longer and thinner, less tightly packed
       Respond to low frequencies
     Base = shorter and thicker, densely packed
       Respond to high frequencies
 Outer hair cells: muscles inside hair cells
  o Efferent (motor) fibres were found connected to the hair cell
  o The hair cell contracts (contains contractile protein)
     Stimulate the hair cell (either via an efferent fibre or directly on the hair cell) and it will contract
 "Prestin" is the ultra-fast contractile protein
  o Prestin knock-out mice have impaired hearing
  o Prestin spans the cell membrane of the hair cell
     12 transmembrane domains (i.e. crossings of the membrane)
  o Many congenital hearing problems are due to prestin problems
 Innervation patterns of IHC and OHC
  o Outer = efferent and afferent (sensory)
     Convergent innervation → more spiral ganglion cells per hair cell
  o Inner = afferent only
     Divergent innervation → 1:1 ratio of spiral ganglion cells to hair cells
  o 95% of the nerve fibres in the auditory nerve are from IHC, only 5% from OHC

Dynamic Interactions Between the Inner and Outer Hair Cells
 Fine-tuning of IHC
  o Stereocilia are attached to the tectorial membrane at OHC, not IHC
  o IHC in a less sensitive state = OHC relaxed (not contracted)
  o IHC in a more sensitive state = OHC contracted
     The basilar membrane moves upward and the stereocilia come closer to the tectorial membrane, therefore more sensitive
  o Dynamic response/"microphone" because the OHC response makes the IHC more sensitive and able to respond
  o Motile response = contract
  o Mechanical amplifier = amplify
  o Even in the absence of OHC, the IHC will respond
 Effect of OHC damage on the frequency tuning curve of IHC
  o If the OHC are damaged, the IHC are almost useless → significantly impaired
  o With prestin knocked out, a similar response/curve
 Inner vs. outer hair cells

The "Fine-Tuned" Cochlea
 Tuning curves along the basilar membrane
  o Different IHC respond to different frequencies
 Sharp frequency bandwidth → the cochlea has a bank of filters, each with a specific bandwidth, owing to:
  o OHC that selectively enhance the activity of particular IHC while relatively suppressing neighbouring segments (reduces noise)
  o This is akin to lateral inhibition between IHC
  o K+ and Ca2+ channels close rapidly after activation → cuts down spatial and temporal noise
  o Red lines = filters

Tests for Hearing
 Otoacoustic emissions → the ear produces sounds
  o Extremely low-amplitude sounds produced by contractions of the OHC
  o Require powerful amplifiers introduced into the external ear to record them
  o May be induced by an external click (as an echo) OR occur spontaneously, caused by contraction of OHC due to efferent nerve impulses
  o Often used to test ear function in the newborn (show the OHC are functioning)
  o Not to be confused with tinnitus (ringing in the ear) or hallucinations
 Weber's test: a tuning fork is placed on the forehead
  o Normal: equal vibration heard on both sides
  o Nerve deafness: no vibration on one side
  o Air-conduction deafness: the person seems to hear better in the blocked ear
     The sound is heard more on the blocked side due to the lack of air flow
     Outside sounds mask the vibration in the open ear, but not in the blocked one
 Rinne's test
  o Bone conduction is tested by placing the tuning fork on the bone behind the ear
  o Air conduction is tested by holding the vibrating tuning fork close to the ear
  o Normally air conduction is better than bone conduction because of the ossicles
 Audibility curve in humans: loudness depends on sound pressure and frequency
  o Keep one constant and change the other to get the curve
  o Green = as frequency increases, the threshold decreases (i.e.
more sensitive), until around 4000 Hz, where the threshold increases again
  o Red = adjusting frequency (keeping dB constant)
     We feel almost the same intensity along the curve even at different dB
  o Blue = no matter what the frequency, it makes no difference → equally bad and painful (can damage the ear)
  o dB = sound pressure level
  o Most sensitive at frequencies between 2000–4000 Hz
  o Auditory response area – we can hear tones that fall within this area
 Two types of hearing loss:
  o Conductive hearing loss: blockage of sound from the receptor cells
  o Sensorineural hearing loss:
     Damage to hair cells (from age, loud noise, drugs)
     Damage to the auditory nerve or brain
     The most common type is age-related hearing loss
       >80% of hearing loss in a population
     Perception of high-frequency sound, especially speech, is the major impairment
 Presbycusis: the most common form of sensorineural hearing loss
  o Loss of sensitivity is greatest at higher frequencies, accompanies age, and affects males more severely than females
  o Caused by factors in addition to aging
 Noise-induced hearing loss: occurs when loud noises cause degeneration of hair cells
  o Damage to the organ of Corti is often observed
  o Leisure noise: listening to an iPod, recreational gun use, etc.
  o The amount of hearing loss depends on the level of sound intensity and the duration of exposure
 Tinnitus: ringing, swishing, cooing and other noises appearing to be heard in the ear or head
  o Incidence: 10–15% of the population
     20% of these seek help
  o Characteristics: the sound can annoy and interfere with daily functioning
  o Cause
     Subjective: no cause detected
     Objective/secondary: blocked/infected ear; damaged, inflamed or degenerated hair cells or auditory nerve; aneurysm of blood vessels close to the auditory nerve; brain aneurysm; tumors (rare)
     Hair cell damage due to loud sounds is becoming a common cause
  o Treatment: noise maskers, implants and cognitive behavioural therapy, steroids, music therapy (reorganizing brain maps)
     Depends on the cause
     Noise maskers = find out which frequency is heard and mask/block it

Theories of Hearing

Rutherford's Telephone Theory
 Depending upon the frequency of the sound, the entire basilar membrane will vibrate (i.e. like a diaphragm)
 Ex. for a 40 Hz tone, the whole BM will vibrate at 40 Hz

Place Theory (von Békésy)
 Determined that not all of the membrane vibrated; depending on the frequency, different places on the BM vibrated
 The frequency of a sound is indicated by the place along the cochlea at which nerve firing is highest
 Different hair cells vibrate/are activated, which activates different neurons and sends particular signals to the brain; therefore we know the frequency
 Evidence for place theory:
  o The tonotopic map shows that the cochlea has an orderly map of frequencies along its length
     The apex responds best to low frequencies
     The base responds best to high frequencies
  o Neural frequency tuning curves
     Pure tones are used to determine the threshold for specific frequencies measured at single neurons
     Plotting thresholds for frequencies results in tuning curves
     The frequency to which the neuron is most sensitive is the characteristic frequency
     Frequency tuning curves of cat auditory nerve fibers
       The characteristic frequency of each fiber is indicated by the arrows along the frequency axis
       Sharply tuned by OHC to
certain frequency
 Physical properties of the basilar membrane:
  o The base of the membrane (by the stapes) is:
     3–4× narrower than at the apex
     100 times stiffer than the apex
  o Both the model and direct observation showed that the vibrating motion of the membrane is a travelling wave
     It vibrates most at the area corresponding to the frequency, but the wave travels along the full BM
 Updating place theory:
  o Békésy used BM isolated from cadavers, and his results showed no difference in response for close frequencies that people can distinguish
  o New research with live membranes shows that entire OHC respond to sound by slightly tilting and changing length
     For this reason these cells are called the cochlear amplifier
  o Live cochleas are more sensitive to sharp frequencies because the OHC contract and expand in response to vibration of the BM

Travelling Wave Theory of Békésy
 A travelling wave is generated along the BM depending upon the frequency of the sound
 Envelope of the travelling wave
  o Indicates the point of maximum displacement of the BM
  o Hair cells at this point are stimulated most strongly, leading to the nerve fibers firing most strongly at this location
  o The position of the peak is a function of frequency
 Travelling waves move along the cochlea (base → apex)
 Shortcomings of Békésy's experiments
  o The cadaveric experiments did not explain how the BM and hair cells respond to different overlapping frequencies (i.e. cochlear filters)

Evidence for Place Theory
 Cadaver recordings of BM vibrations and the travelling wave model
 Tonotopic hair cell and nerve fiber responses; a tonotopic/place map on the auditory cortex
 Auditory masking experiment: frequency-specific masking
  o Couple a masking sound with a certain frequency and it cannot be heard → the threshold increases

The Volley Theory of Hearing (Wever)
 During a series of sounds, a given neuron need not respond to every single wave (unlike in place theory)
 Different sets of neurons respond to different successive waves, and each set sends a burst of synchronous discharges (a volley)
  o Eventually, with many sets, all waves can be responded to
 This theory accounts for responses of neurons to high-frequency sound waves above 2000 Hz
  o Problem with the telephone (frequency) theory: it cannot explain how we hear above 1000 Hz, because neurons can only fire up to about that rate
 A complex periodic sound is composed of several harmonics
  o Different hair cells pick up different components of the complex sound and send them to the brain in the form of volleys
 The BM breaks down the complex periodic sound into its harmonics (simple parts/sounds)
 Hair cells respond to generate action potentials
  o When pressure increases, cilia bend to the right and firing occurs
  o When pressure decreases, cilia bend to the left and no firing occurs
  o Hair cells fire in synchrony with the rising and falling pressure of the sound stimulus
  o Phase locking = the property of firing at the same place in the sound stimulus
  o Temporal coding = the connection between the frequency of a sound stimulus and the timing of the auditory nerve firing
  o Frequency is coded in the cochlea and auditory nerve based both on which fibers are firing (place coding) and on the timing of nerve impulses in auditory nerve fibers (temporal coding)
  o Place coding is effective across the entire range of hearing; temporal coding works up to 4000 Hz (the frequency at which phase locking stops operating)

Cochlear Implants
 The device is made up of:
  o A microphone worn behind the ear, which receives sounds → electrical signals and sends them to
  o A sound processor, which breaks down the complex sound signal (Fourier analysis) into frequency bands and sends them to
  o A transmitter, which sends electrical signals to spiral multichannel (different-frequency) electrodes implanted inside the cochlea
 Because they are implanted, they bypass the hair cells and directly stimulate the spiral ganglion cells
 Implants used to lack multiple channels, so users could only hear certain sounds → enhanced today
 Implants stimulate the cochlea at different places on the tonotopic map according to the specific frequencies of the stimulus
 These devices help deaf people to hear some sounds and to understand language

Localization of Sounds

Auditory Localization
 Auditory space – surrounds an observer and exists wherever there is sound
 Coordinates:
  o Azimuth coordinates – position left to right
  o Elevation coordinates – position up and down
  o Distance coordinates – position from the observer
 On average, people can localize sounds:
  o Directly in front of them most accurately
  o To the sides and behind their heads least accurately
 Location cues are not contained in the receptor cells as they are on the retina in vision; thus, locations for sounds must be calculated

Cues for Localization of Sounds
 Binaural cues: location cues based on the comparison of the signals received by the left and right ears
 Monaural cues: not comparing the two ears, just looking at differences within one ear
 Interaural time difference (ITD): especially useful for sounds arising in the horizontal plane (binaural)
  o The difference between the times at which sounds reach the two ears
     When the distance to each ear is the same, there are no differences in time
     When the source is to the side of the observer, the times will differ
  o Best for low-frequency sounds
 Interaural intensity or level difference (ILD): particularly helpful for high-frequency sounds because they produce acoustic shadows (binaural)
  o The difference in sound pressure level reaching the two ears
     A reduction in intensity occurs for high-frequency sounds at the far ear
       The head casts an acoustic shadow
     This effect doesn't occur for low-frequency sounds
  o ILD as a function of frequency for 3 different sound source locations
     Microphones detect differences between the 2 ears
     At lower frequencies there is no ILD; it increases as frequency increases
 Spectral cues: especially useful for sounds arising in the vertical plane (monaural)
  o Head-related transfer function (HRTF) = spectral cue
     The difference between the sound from the source and the sound actually entering the ears (since the head and pinna decrease the intensity of some frequencies)
 Cone of confusion: sounds arriving from different areas on the cone do not produce any ITD, ILD or significantly different spectral cues at the ears
  o Head movements help to eliminate this confusion
 Monaural and binaural columns in A1
  o A1 is divided into tonotopic columns for different frequencies along its length
  o Each of the tonotopic strips is divided into alternating binaural (EE) and monaural (EI) columns
  o EE are called summation columns → respond to either ear
  o EI are called suppression columns → one ear suppresses the other
  o Map for one ear → monaural
 Plasticity following monaural deprivation during the critical period → amblyaudia
  o Experiment = sewed one ear shut
  o Auditory acuity depends on binaural and monaural cues
  o Monaural deprivation, especially during critical periods, results in significant distortion of the tonotopic maps in A1 (especially for high-frequency sounds)
  o In amblyaudia, the cortex starts to ignore one ear (the brain is "deaf" to one ear)

Localization with a Mold in the Ear
 A silicone mold does allow sound to move through the ear, but it effectively eliminates the pinna
 Localization changes when a mold is placed in the ear
 Eye saccade maps with different sound locations in a dark room, in horizontal and vertical coordinates, before applying the silicone mold → accurate
 Days 0, 5, and 19 after applying the mold → day 1 is the worst, and performance gets increasingly better over days, but never as good as without the mold
  o Azimuth coordinates, which depend on binaural cues, were never affected
 Post-control (removal of the mold) → hearing goes back to almost normal; therefore the pinna plays a very important role

Central Mechanisms for Auditory Localization
 ITD tuning curves for 6 neurons in the superior olivary nucleus, each responding to a narrow range of ITDs
 Role of coincidence detector neurons in signaling the direction of sound
  o These
only fire when the impulses from the left and right auditory nerves coincide
 The physiological representation of auditory space
  o Two mechanisms have been proposed:
     Narrowly tuned ITD neurons
     Coincidence detector neurons
  o They are found in the inferior colliculus and the superior olivary nuclei
 ITD tuning curves for broadly tuned neurons (gerbil model)
  o Patterns of response of the broadly tuned curves for stimuli coming from the left, front, and right
 Cortical areas for sound localization (monkey model)
  o Interaural time difference detectors in the cortex
  o A "panoramic neuron" fires different patterns depending on the direction of the sound
     Like Morse code from each side; the neuron relies on the different patterns
  o Topographic map (spatial map)
 The what and where streams of hearing
  o Where (locates sound): starts in the posterior part of the core/belt and extends to the parietal lobe and prefrontal cortex
  o What (identifies sound): starts in the anterior part of the core/belt and extends to the prefrontal cortex

Auditory Pathways
 Auditory nerve fibers from the cochlea synapse in a sequence of subcortical structures
 Major relays:
  o Cochlear nucleus
  o Olivary nucleus (superior)
  o Colliculus – inferior
  o Medial geniculate
  o Auditory cortex – area 41/A1 = COCHLEA
  o Each of the above relay centres has been investigated thoroughly for spectral sensitivity and direction sensitivity
     Flow can go in various directions
 Probable role of each of the major sensory relays
  o Cochlear nucleus: spectral analysis (spectral cues)
  o Superior olivary nucleus: spatial analysis
     Medial – determines ITD → helpful for localization of low-frequency sounds
     Lateral – determines ILD → helpful for localization of high-frequency sounds
  o Inferior colliculus: integration of spectral and spatial analysis
  o Medial geniculate: attending to particular spatial and spectral cues and "top-down" auditory signal processing
 Major "centrifugal" feedback connections in the auditory pathways
  o Corticofugal outputs to the medial geniculate (considering the cortex as centre): attention to auditory cues
  o Olivofugal outputs to the OHC (considering the superior olivary nucleus as centre): tuning of the IHC
  o Cochleofugal outputs to the stapedius and tensor tympani (considering the cochlear nucleus to be the centre): attenuation reflex
 Auditory and visual pathways compared
  o Relays: many more in the auditory pathways
  o Cross-over between left and right in the auditory pathways
  o There is "cochleotopic" representation on the auditory cortex for audition just as there is "retinotopic" representation on the visual cortex for vision
  o There are "what" and "where" streams for audition extending from the temporal to the parietal-frontal cortex, as from the visual cortex in vision
 Auditory areas in the cortex
  o Cortical processing starts with the core area, which includes the primary auditory cortex (A1) and some nearby areas
  o Signals then travel to the belt area and then the parabelt area
  o Has the property of hierarchical processing → evidenced by the fact that the core is activated by simple sounds, but other areas need more complex sounds
 The auditory cortex is shaped by experience
  o A monkey given discrimination training at a specific frequency had more space on the tonotopic map devoted to that frequency than an untrained monkey
  o Musical training enlarges the area of auditory cortex that responds to those tones
  o 25% more cortex is activated in musicians than in non-musicians
 Visual and auditory localization as multimodal integration (PET study)
  o Discrete as well as overlapping activations
  o Bilateral activation of superior parietal, superior temporal and frontal areas
  o Also cerebellar and thalamic activation (for sound, and a little for vision)
 Balint's syndrome: poor auditory localization
  o Younger people have a better ability to localize sound; it decreases with age
 Deaf-hearing → similar to blindsight in vision
  o The patient had a bilateral lesion in the temporal lobe
  o She became completely deaf to pure tones, complex tones and spoken speech
  o She did not respond to sounds voluntarily, when asked to
(instructed by writing)
  o However, she responded to sounds reflexively
  o She could localize sounds

Sound Localization in the Dark
 Facial structure of the barn owl
  o Well adapted for detecting vertical cues
  o The face is flat like the dish of an antenna
  o Asymmetry of the feathers covering the left and right ear openings
     Both in shape and feather type (not in the same location)
  o Left ear higher and tilted downwards – more sensitive to sounds from below
  o Right ear lower and tilted upwards – more sensitive to sounds from above
  o Can very quickly track a swift rat in the dark
 Echolocation in bats
  o High-frequency sounds (>20 kHz) produced by the larynx are of two types:
     Constant frequency: similar to vowels produced by us
     Rapidly changing frequency: similar to our consonants
  o The cortex analyses the two types of echoed sounds received by the ears separately
  o The echoed constant-frequency sounds are good for detecting a moving target (moth)
  o The echoed changing sounds are good for detecting the range of a stationary target (tree, house, cable)
  o The moth has evolved its own strategies to confuse the bat
     It emits confusing counter-clicks
 Echolocation in dolphins
  o Deep water has poor visibility
  o Sounds are significantly distorted by water
  o Dolphins produce intense 200 dB high-frequency clicks through their nasal sacs, accentuated by the melon, which focuses the sound on their potential target
  o The sounds echoed by the target are picked up by the lower jaw (dolphins do not have functioning external ears) and analyzed by the highly developed brain
  o The melon magnifies the sound; the mandibular nerve picks up the echo and sends it to the cortex

Auditory Scene Analysis
 Auditory scene: the array of all sound sources in the environment
 Auditory scene analysis: the process by which sound sources in the auditory scene are separated into individual perceptions
  o Ex. each musician produces a sound stimulus, and all 3 sounds are combined in the output of the loudspeaker → the ear must be able to categorize where each is coming from
  o This does not happen at the cochlea, since simultaneous sounds are mixed together in the pattern of vibration of the BM
     This must happen higher up

Cocktail Party Problem/Effect
 Definition: a psychoacoustic phenomenon by which we are able to filter out and recognize one source of auditory input amidst competing noisy others
 Why a "problem":
  o Simultaneous sounds
  o Different pitch and timbre
  o Different intensities
  o Varying time and sequence
  o Different sources
  o Distractions
  o Echoes

Grouping and Streaming
 Definition: parts of the complex acoustic signal that are grouped together form an auditory stream
 Two categories of streaming/grouping:
  o Sequential: the process by which the auditory system groups successively occurring sounds into separate streams
  o Simultaneous: the process by which the auditory system assigns simultaneously occurring sounds to separate streams
 Cues for sequential and simultaneous streaming (used to categorize):
  o Similarity in location: sounds created by a particular source usually come from one position in space or from a slowly changing location
  o Similarity in pitch
  o Similarity in timbre
  o Similarity in amplitude
  o Proximity in time: if two sounds start at different times, it is likely that they came from different sources → onset time
  o Visual cues
  o Auditory continuity: sounds that stay constant or that change smoothly are often produced by the same source
 Pitch and timing
  o Difficult to stream: frequencies are close, timings are far apart
     Can't be categorized together
  o Easy to stream: frequencies are close, timings are short
     You can't distinguish the two
  o Easy to stream: frequencies are very different, timings are far apart
 Compound melodic line or implied polyphony → auditory stream segregation
  o When high and low tones are alternated slowly, auditory stream segregation
does not occur, so the listener perceives alternating high and low tones
  o Faster alternation results in segregation into high and low streams (two separate melodies)
 Scale illusion or melodic channeling: an illusion that occurs when successive notes of a scale are presented alternately to the right and left ears
  o Even though each ear receives notes that jump up and down in frequency, a smooth ascending or descending scale is heard in each ear
 Effect of past experience → experiment by Dowling
  o "Three Blind Mice" is played with notes alternating between octaves
  o Listeners find it difficult to identify the song
  o But after they hear the normal melody, they can then perceive it in the modified version using a melody schema → top-down processing
     Melody schema: a representation of a familiar melody that is stored in a person's memory
 Interactions between vision and sound
  o Visual capture or the ventriloquist effect: an observer perceives a sound as coming from its apparent visual location rather than from the actual source of the sound
  o Experiment by Sekuler
     Balls moving without sound appeared to move past each other
     Balls with an added "click" appeared to collide

Hearing Inside Rooms
 Direct sound: sound that reaches the listener's ears straight from the source
 Indirect sound: sound that is reflected off environmental surfaces and then reaches the listener
 When a listener is outside, most sound is direct; inside a building, however, there is both direct and indirect sound
 Where the sound is coming from → experiment by Litovsky
  o Listeners sat between 2 speakers: a lead speaker and a lag speaker
  o When sound comes from the lead speaker followed by the lag speaker with a long delay, the listener hears two sounds
  o When the delay is decreased to 5–20 msec, the listener hears the sound as coming from the lead speaker only – the precedence effect
 Architectural acoustics: the study of how sounds are reflected in rooms
  o Factors that affect perception in concert halls
     Reverberation time – the time it takes sound to decrease to 1/1000 of its original pressure
       If it is too long, sounds are muddled → reflected sound lasts too long, like an echo
       If it is too short, sounds are dead → no feedback, and it is hard to produce high-frequency sounds
 Acoustics in classrooms
  o Ideal reverberation times in classrooms are:
     0.4–0.6 secs for small classrooms
     1–1.5 secs for auditoriums
     These maximize the ability to hear voices
  o Most classrooms have times of 1 second or more
  o With instruments, reverberation time is longer because of the higher frequencies (music = 2 secs)
  o The frequency and intensity of the sound are important factors in reverberation time
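The decibel relationships in the notes above (relative amplitude 1 at 0 dB, each tenfold pressure increase adding 20 dB, and reverberation time defined as decay to 1/1000 of the original pressure) all follow from the standard formula SPL = 20·log10(p/p0). A minimal sketch; the formula and the 20 µPa reference are standard acoustics, not stated in the notes:

```python
import math

def spl_db(pressure_ratio):
    """Sound pressure level in dB for a pressure expressed relative to the
    reference pressure (20 micropascals in air, the standard SPL reference)."""
    return 20 * math.log10(pressure_ratio)

print(spl_db(1))          # 0.0 dB: relative amplitude 1 (quiet threshold)
print(spl_db(1_000_000))  # 120.0 dB: a million-fold pressure increase
# reverberation time is defined above as decay to 1/1000 of the original
# pressure, which is exactly a 60 dB drop:
print(spl_db(1 / 1000))   # -60.0 dB
```

This is also why "reverberation time" in architectural acoustics is often written RT60: the 1/1000 pressure criterion and a 60 dB decay are the same thing.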
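Tone chroma and octaves, as described above, reduce to frequency doubling: two tones share a chroma exactly when their frequency ratio is a whole power of 2. A small sketch of that check (the specific frequencies 440/880/660 Hz are familiar illustrative values, not from the notes):

```python
import math

def same_chroma(f1, f2, tol=1e-9):
    """Two tones share tone chroma if they are a whole number of octaves
    apart, i.e. their frequency ratio is a power of 2."""
    octaves = math.log2(f2 / f1)
    return abs(octaves - round(octaves)) < tol

print(same_chroma(440, 880))  # True:  one octave apart (same letter name)
print(same_chroma(220, 880))  # True:  two octaves apart
print(same_chroma(440, 660))  # False: a fifth apart, different chroma
```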
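The missing-fundamental and periodicity-pitch points above can be illustrated numerically: a tone built by additive synthesis from harmonics of 200 Hz, with the 200 Hz component itself absent, still repeats every 1/200 s, and that repetition rate is the information the notes say determines pitch. A sketch; the harmonic choice 400/600/800 Hz is an arbitrary illustration:

```python
import math

def complex_tone(t, harmonics=(400, 600, 800)):
    """Additive synthesis: sum of pure sine components. All harmonics are
    multiples of 200 Hz, but the 200 Hz fundamental itself is missing."""
    return sum(math.sin(2 * math.pi * f * t) for f in harmonics)

period = 1 / 200  # 5 ms: the waveform still repeats at the missing fundamental's rate
t = 0.00123       # arbitrary sample time
print(abs(complex_tone(t) - complex_tone(t + period)) < 1e-6)  # True
```

The waveform is identical one period of the absent fundamental later, which is why the perceived pitch stays at 200 Hz even though no 200 Hz energy is present.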
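The two middle-ear ratios quoted above (tympanic membrane : stapes footplate ≈ 17:1, ossicular lever ≈ 1.3:1) combine multiplicatively to give the overall impedance-matching pressure gain. A rough arithmetic check, taking both figures exactly as given in the notes:

```python
import math

area_ratio = 17    # tympanic membrane area : stapes footplate area (from notes)
lever_ratio = 1.3  # ossicular lever advantage (from notes)

pressure_gain = area_ratio * lever_ratio
gain_db = 20 * math.log10(pressure_gain)

print(round(pressure_gain, 1))  # 22.1-fold pressure amplification
print(round(gain_db, 1))        # about 27 dB gain at the oval window
```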
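Volley theory, described above, resolves a simple arithmetic problem: a single fiber with a firing ceiling of roughly 1000 spikes/s cannot mark every cycle of, say, a 4000 Hz wave, but several groups of fibers firing in rotation can. A sketch, using the approximate 1000 Hz ceiling from the notes:

```python
import math

MAX_FIRING_RATE = 1000  # approx. upper firing limit of one neuron, spikes/s

def groups_needed(frequency_hz):
    """Minimum number of fiber groups that, firing on alternate cycles
    (volleys), can jointly mark every cycle of the sound wave."""
    return math.ceil(frequency_hz / MAX_FIRING_RATE)

print(groups_needed(440))   # 1: a single fiber can follow this tone
print(groups_needed(4000))  # 4: four groups take successive cycles in turn
```

Above 4000 Hz phase locking fails altogether, which is why the notes describe temporal coding as operating only up to that frequency.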
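The interaural time difference described above can be approximated from geometry: for a distant source, the extra path to the far ear is roughly the ear separation times the sine of the azimuth, divided by the speed of sound. A sketch; the 0.21 m ear separation and 343 m/s speed of sound are typical illustrative values, not from the notes:

```python
import math

EAR_SEPARATION = 0.21   # metres, assumed typical distance between the ears
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def itd_seconds(azimuth_deg):
    """Approximate ITD for a far-away source at the given azimuth
    (0 deg = straight ahead, 90 deg = directly to one side)."""
    return EAR_SEPARATION * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

print(round(itd_seconds(0) * 1e6))   # 0 µs: source equidistant from both ears
print(round(itd_seconds(90) * 1e6))  # ~612 µs: source directly to one side
```

The sub-millisecond scale of these values is why the coincidence-detector neurons discussed above need such precisely timed inputs.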