PSYA01H3 Textbook Notes, Chapter 4 (Steve Joordens, University of Toronto Scarborough)
CHAPTER FOUR: SENSATION AND PERCEPTION

- Synesthesia is the perceptual experience of one sense that is evoked by another sense.
  o This was once thought to occur in as few as one in every 25,000 people.
  o The experience of seeing colours evoked by sounds, or of seeing letters in specific colours, is much more common among synesthetes than, say, a smell evoked by touching a certain shape.
  o For instance, a synesthete who sees the digits 2 and 4 as pink and 3 as green will find it easier to pick out a 2 among a bunch of 3s than among a bunch of 4s, whereas a nonsynesthete will perform these two tasks equally well.
  o Brain-imaging studies also show that in some synesthetes, areas of the brain involved in processing colours are more active when they hear words that evoke colour than when they hear tones that don't evoke colour; no such differences are seen among people in a control group.

OUR SENSES ENCODE THE INFORMATION OUR BRAINS PERCEIVE
- Sensation and perception appear to be one seamless event.
- Information comes in from the outside world, gets registered and interpreted, and triggers some kind of action: no breaks, no balks, just one continuous process.
- Psychologists, on the other hand, hold that sensation and perception are TWO separate activities.
- Sensation is simple stimulation of a sense organ.
  o It is the basic registration of light, sound, pressure, odour, or taste as parts of your body interact with the physical world.
- After a sensation registers in our central nervous system, perception takes place at the level of our brain: it is the organization, identification, and interpretation of a sensation in order to form a mental representation.
  o As an example, our eyes are skimming across these sentences right now.
  o The sensory receptors in our eyeballs are registering different patterns of light reflecting off the page.
  o Our brain, however, is integrating and processing that light information into the meaningful perception of words, such as "meaningful," "perception," and "words."
  o Our eyes, the sensory organs, aren't really seeing words; they're simply encoding different lines, curves, and patterns of ink on a page.
  o Our brain, the perceptual organ, is transforming those lines and curves into a coherent mental representation of words and concepts.
- Damage to the visual-processing centers in the brain can interfere with the interpretation of information coming from the eyes: the senses are intact, but perceptual ability is compromised.
- We all know that we have FIVE senses: vision, hearing, touch, taste, and smell.
  o But we possess several more senses besides these five.
  o Touch, for instance, encompasses distinct body senses, including sensitivity to pain and temperature, joint position and balance, and even the state of the gut.
- Our senses all depend on the process of transduction, which occurs when many sensors in the body convert physical signals from the environment into encoded neural signals sent to the central nervous system.
- In vision, light reflected from surfaces provides the eyes with information about the shape, colour, and position of objects.
- In audition, vibrations (from vocal cords or a guitar string, perhaps) cause changes in air pressure that propagate through space to a listener's ears.
- In touch, the pressure of a surface against the skin signals its shape, texture, and temperature.
- In taste and smell, molecules dispersed in the air or dissolved in saliva reveal the identity of substances that we may or may not want to eat.
- In each case, physical energy from the world is converted to neural energy inside the central nervous system.

PSYCHOPHYSICS
- As we already learned, in order to understand a behaviour researchers must first operationalize it, and that involves finding a reliable way to measure it.
- The structuralists, led by Wilhelm Wundt and Edward Titchener, tried using introspection to measure perceptual experiences, and they failed miserably.
- After all, you can describe your experience to another person in words, but that person cannot know directly what you perceive when you look at a sunset.
- Evoked memories and emotions intertwine with what you are hearing, seeing, and smelling, making your perception of an event, and therefore your experience of that event, unique.
- QUESTION: Given that perception is different for each of us, how could we ever hope to measure it?
- ANSWER: In the mid-1800s the German scientist and philosopher Gustav Fechner developed an approach to measuring sensation and perception called psychophysics: methods that measure the strength of a stimulus and the observer's sensitivity to that stimulus.

MEASURING THRESHOLDS
- Psychophysicists begin the measurement process with a single sensory signal to determine precisely how much physical energy is required to evoke a sensation in an observer.

ABSOLUTE THRESHOLD
- The simplest quantitative measurement in psychophysics is the absolute threshold, the minimal intensity needed to just barely detect a stimulus.
- A threshold is a boundary.
  o The doorway that separates the inside from the outside of a house is a threshold, as is the boundary between two psychological states ("awareness" and "unawareness").
- In finding the absolute threshold for sensation, the two states in question are sensing and not sensing some stimulus.
- To measure the absolute threshold for detecting a sound, for example, an observer sits in a soundproof room wearing headphones linked to a computer. The experimenter presents a pure tone (the sort of sound made by striking a tuning fork), using the computer to vary the loudness or the length of time each tone lasts and recording how often the observer reports hearing that tone under each condition.

DIFFERENCE THRESHOLDS
- The absolute threshold is useful for assessing how sensitive we are to faint stimuli, but most everyday perception involves detecting differences among stimuli that are well above threshold.
  o For example, parents can usually distinguish their own infant's cry from the cries of other babies, but it's probably more useful to be able to differentiate the "I'm hungry" cry from the "I'm cranky" cry from the "Something is biting my toes" cry.
- The human perceptual system excels at detecting changes in stimulation rather than the simple onset or offset of stimulation.
- As a way of measuring this, Fechner proposed the just noticeable difference, or JND, the minimal change in a stimulus that can just barely be detected.
- The JND is not a fixed quantity; rather, it depends on how intense the stimuli being measured are and on the particular sense being measured.
  o For example, consider measuring the JND for a bright light: an observer in a dark room is shown a light of fixed intensity, called the standard (S), next to a comparison light that is slightly brighter or dimmer than the standard.
  o When S is very dim, observers can see even a very small difference in brightness between the two lights: the JND is small.
  o But if S is bright, a much larger increment is needed to detect the difference: the JND is larger.
- Weber's Law states that the just noticeable difference of a stimulus is a constant proportion despite variations in intensity.
  o For instance, the JND for weight is about 2-3%.
  o If you picked up a one-ounce envelope, then a two-ounce envelope, you'd probably notice the difference between them.
  o But if you picked up a twenty-pound package, then a twenty-pound, one-ounce package, you'd probably detect no difference at all between them.
    ▪ In fact, you'd probably need about a twenty-and-a-half-pound package to detect a JND.
- When calculating a difference threshold, it is the proportion between stimuli that is important; the measured size of the difference, whether in brightness, loudness, or weight, is irrelevant.
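- A quick worked version of the envelope/package example (a sketch, assuming a Weber fraction of k = 0.025, the midpoint of the 2-3% range given above for weight):
    JND = k × S
    For a 1-ounce envelope: JND ≈ 0.025 × 1 oz = 0.025 oz, so the extra ounce in a 2-ounce envelope is far above threshold and easy to notice.
    For a 20-pound (320-ounce) package: JND ≈ 0.025 × 320 oz = 8 oz (half a pound), so adding a single ounce falls below threshold; the difference only becomes just noticeable at roughly 20.5 pounds, matching the estimate above.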
SIGNAL DETECTION
- Measuring absolute and difference thresholds requires a critical assumption: that a threshold EXISTS!
  o Humans don't suddenly and rapidly switch between perceiving and not perceiving; rather, the transition from not sensing to sensing is gradual.
  o An absolute threshold is operationalized as perceiving the stimulus 50% of the time, which means the other 50% of the time it might go undetected.
- Whether in the psychophysics lab or out in the world, sensory signals face a lot of competition, or noise, which refers to all the other stimuli coming from the internal and external environment.
- Memories, moods, and motives intertwine with what you are seeing, hearing, and smelling at any given time.
- This internal "noise" competes with our ability to detect a stimulus with perfect, focused attention.
- Other sights, sounds, and smells in the world at large also compete for attention.
- As a consequence of noise, we may not perceive everything that we sense, and we may even perceive things that we haven't sensed.
  o For instance, in a hearing test we will obviously miss some of the quiet beeps, but we will also sometimes report hearing beeps that weren't really there.
- An approach to psychophysics called signal detection theory holds that the response to a stimulus depends both on a person's sensitivity to the stimulus in the presence of noise and on a person's decision criterion.
  o That is, observers consider the sensory evidence evoked by the stimulus and compare it to an internal decision criterion.
  o If the sensory evidence exceeds the criterion, the observer responds by saying, "Yes, I detected the stimulus"; if it falls short of the criterion, the observer responds by saying, "No, I did not detect the stimulus."
- Signal detection theory allows researchers to quantify an observer's response in the presence of noise.
  o In a signal detection experiment, a stimulus, such as a dim light, is randomly presented or not.
  o Observers in a detection experiment must decide whether they saw the light or not.
  o If the light is presented and the observer correctly responds "Yes," the outcome is a hit.
  o If the light is presented and the observer says "No," the result is a miss.
  o However, if the light is not presented and the observer nonetheless says it was, a false alarm has occurred.
  o If the light is not presented and the observer responds "No," a correct rejection has occurred.
- Signal detection theory explicitly takes into account observers' response tendencies, such as liberally saying "Yes" when there is any hint of a stimulus or conservatively reserving identifications only for obvious instances of the stimulus.
- Signal detection theory proposes a way to measure perceptual sensitivity, how effectively the perceptual system represents sensory events, separately from the observer's decision-making strategy.
- Signal detection theory offers a practical way to choose among criteria that permit decision makers to take into account the consequences of hits, misses, false alarms, and correct rejections.
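- A worked sketch of separating sensitivity from the decision criterion (the formula isn't given in these notes; a standard measure in signal detection theory is d′, the difference between the z-transformed hit rate and false-alarm rate, and the trial counts here are made up):
    Suppose the dim light is presented on 100 trials and the observer says "Yes" on 80 of them (hit rate = 0.80), and the light is absent on another 100 trials but the observer still says "Yes" on 20 of them (false-alarm rate = 0.20).
    d′ = z(0.80) − z(0.20) ≈ 0.84 − (−0.84) = 1.68
    A more liberal observer says "Yes" more readily, raising both rates (say, hit rate = 0.90, false-alarm rate = 0.35): d′ ≈ 1.28 − (−0.39) ≈ 1.67, essentially unchanged. The criterion has shifted, but the underlying sensitivity has not.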
SENSORY ADAPTATION
- The aroma of freshly baked bread, diving into cold water, and blinding bathroom lights are all examples of sensory adaptation, the observation that sensitivity to prolonged stimulation tends to decline over time as an organism adapts to current conditions.
  o For example, while we are studying in a quiet room, the neighbour turns up their stereo; this gets our attention, but after a few minutes the sound fades from our awareness and we continue our studies.
  o But remember that our perceptual systems emphasize change in responding to sensory events: when the music stops, we notice.
- Our sensory systems respond more strongly to changes in stimulation than to constant stimulation.
- A stimulus that doesn't change usually doesn't require any action.
- But a change in stimulation often signals a need for action.

VISION I: HOW THE EYES AND THE BRAIN CONVERT LIGHT WAVES TO NEURAL SIGNALS
- 20/20 refers to a measurement associated with a Snellen chart, named after Hermann Snellen, the Dutch ophthalmologist who developed it as a means of assessing visual acuity, the ability to see fine detail; it is the smallest line of letters that a typical person can read from a distance of 20 feet.
- Our sophisticated visual system has evolved to transduce visual energy in the world into neural signals in the brain.
- Humans have sensory receptors in their eyes that respond to wavelengths of light energy.
  o When we look at people, places, and things, patterns of light and colour give us information about where one surface stops and another begins.
  o The array of light reflected from those surfaces preserves their shapes and enables us to form a mental representation of a scene.

SENSING LIGHT
- Visible light is simply the portion of the electromagnetic spectrum that we can see, and it is an extremely small slice.
- Think of light waves as waves of energy.
- Light waves vary in height and in the distance between their peaks, or wavelengths.
- The three properties of light waves are the following:
  o The length of a light wave determines its hue, or what humans perceive as colour.
  o The intensity or amplitude of a light wave (how high the peaks are) determines what we perceive as the brightness of light.
  o Purity is the number of distinct wavelengths that make up the light. It corresponds to what humans perceive as saturation, or the richness of colours.
- Light doesn't need a human to have the properties it does: length, amplitude, and purity are properties of the light waves themselves.
- What humans perceive from those properties are colour, brightness, and saturation.

THE HUMAN EYE
- Light that reaches the eyes passes first through a clear, smooth outer tissue called the cornea, which bends the light wave and sends it through the pupil, a hole in the coloured part of the eye.
- This coloured part is the iris, a translucent, doughnut-shaped muscle that controls the size of the pupil and hence the amount of light that can enter the eye.
- Immediately behind the iris, muscles inside the eye control the shape of the lens to bend the light again and focus it onto the retina, light-sensitive tissue lining the back of the eyeball.
- The muscles change the shape of the lens to focus objects at different distances, making the lens flatter for objects that are far away or rounder for nearby objects. This is referred to as accommodation, the process by which the eye maintains a clear image on the retina.
- If the eyeball is too long, images are focused in front of the retina, leading to nearsightedness (myopia).
- If the eyeball is too short, images are focused behind the retina, and the result is farsightedness (hyperopia).

PHOTOTRANSDUCTION IN THE RETINA
- QUESTION: How does a wavelength of light become a meaningful image?
- ANSWER: The retina is the interface between the world of light outside the body and the world of vision inside the central nervous system.
- Two types of photoreceptor cells in the retina contain light-sensitive pigments that transduce light into neural impulses:
  o Cones detect colour, operate under normal daylight conditions, and allow us to focus on fine detail.
  o Rods become active under low-light conditions for night vision.
- Rods are much more sensitive photoreceptors than cones, but this sensitivity comes at a cost.
  o Because all rods contain the same photopigment, they provide no information about colour and sense only shades of gray.
- Rods and cones differ in several other ways as well, most notably in their numbers.
  o About 120 million rods are distributed more or less evenly around each retina except in the very center, the fovea, an area of the retina where vision is the clearest and there are no rods at all.
    ▪ The absence of rods in the fovea decreases the sharpness of vision in reduced light, but it can be overcome (for example, by looking slightly away from a dim object so that its image falls outside the fovea).
  o Each retina contains only about 6 million cones, which are densely packed in the fovea and much more sparsely distributed over the rest of the retina.
    ▪ This distribution of cones directly affects visual acuity and explains why objects off to the side, in our peripheral vision, aren't so clear.
    ▪ The light reflecting from those peripheral objects has a difficult time landing in the fovea, making the resulting image less clear.
- The more fine detail encoded and represented in the visual system, the clearer the perceived image.
- The process is analogous to the quality of photographs taken with a six-megapixel digital camera versus a two-megapixel camera.
- The retina is thick with cells.
  o The photoreceptor cells (rods and cones) form the innermost layer.
  o The middle layer contains bipolar cells, which collect neural signals from the rods and cones and transmit them to the outermost layer of the retina, where neurons called retinal ganglion cells (RGCs) organize the signals and send them to the brain.
- The bundled RGC axons (about 1.5 million per eye) form the optic nerve, which leaves the eye through a hole in the retina.
  o Because it contains neither rods nor cones and therefore has no mechanism to sense light, this hole in the retina creates a blind spot, a location in the visual field that produces no sensation on the retina.

RECEPTIVE FIELDS
- Each axon in the optic nerve originates in an individual retinal ganglion cell.
- Most RGCs respond to input not from a single retinal cone or rod but from an entire patch of adjacent photoreceptors lying side by side, or laterally, in the retina.
- A particular RGC will respond to light falling anywhere within that small patch, which is called its receptive field, the region of the sensory surface that, when stimulated, causes a change in the firing rate of that neuron.
  o For instance, the cells that connect to the touch centers of the brain have receptive fields, which are the parts of the skin that, when stimulated, cause that cell's response to change in some way.
- A given RGC responds to a spot of light projected anywhere within a small, roughly circular patch of retina.
- Most receptive fields contain either a central excitatory zone surrounded by a doughnut-shaped inhibitory zone, which is called an on-center cell, or a central inhibitory zone surrounded by an excitatory zone, which is called an off-center cell.
- The doughnut-shaped regions represent patches of retina.
- The responses of an on-center retinal ganglion cell:
  o When the spot of light exactly fills the excitatory zone, it elicits the STRONGEST response.
  o Light falling on the surrounding inhibitory zone elicits the WEAKEST response, or none at all.
- The responses of an off-center retinal ganglion cell:
  o A small spot shining on the central inhibitory zone elicits a WEAK response.
  o A spot shining on the surrounding excitatory zone elicits a STRONG response in the RGC.
- The retina is organized in this way to detect edges: abrupt transitions from light to dark or vice versa.
  o Edges are of supreme importance in vision.
    ▪ They define the shapes of objects, and anything that highlights such boundaries improves our ability to see an object's shape, particularly in low-light situations.
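- A minimal numeric illustration of why center-surround organization highlights edges (a simplified sketch with made-up numbers, not from the text): treat an on-center RGC's response as roughly (light falling on the center) minus (light falling on the surround).
    Uniform bright field covering both zones: center = 10, surround = 10, response = 10 − 10 = 0 (weak, since the inhibitory surround cancels the excitatory center).
    A spot exactly filling the center: center = 10, surround = 0, response = 10 (the strongest case described above).
    A light-dark edge crossing the field, with the center on the bright side and half the surround in the dark: center = 10, surround = 5, response = 5.
    Cells sitting on a light-dark boundary therefore respond more than cells sitting in uniform light, which is what makes edges stand out.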
PERCEIVING COLOUR

SEEING COLOUR
- Colour is nothing but our perception of wavelengths from the spectrum of visible light.
  o For example, we perceive the shortest visible wavelengths as deep purple.
- As wavelengths increase, the colour perceived changes gradually and continuously to blue, then green, yellow, orange, and, with the longest visible wavelengths, red.
- The rainbow of hues and accompanying wavelengths is called the visible spectrum.
- All rods contain the same photopigment, which makes them ideal for low-light vision but bad at distinguishing colours.
- Cones contain any one of three types of pigment.
  o Each cone absorbs light over a range of wavelengths, but its pigment type is especially sensitive to visible wavelengths that correspond to RED (long-wavelength), GREEN (medium-wavelength), or BLUE (short-wavelength) light.
- RED, GREEN, and BLUE are the primary colours of light; colour perception results from different combinations of the three basic elements in the retina that respond to the wavelengths corresponding to the three primary colours of light.
- Creating colour by adding light is called additive colour mixing (for example, overlapping red and green lights appears yellow).
- Recreating any colour by mixing paints is referred to as subtractive colour mixing and works by removing light from the mix, such as when you combine yellow and red to make orange.
- The darker the colour, the less light it reflects, which is why black surfaces reflect no light.

TRICHROMATIC COLOUR REPRESENTATION IN THE CONES
- Light striking the retina causes a specific pattern of response in the three cone types:
  o One type responds best to short-wavelength (bluish) light: S-CONES.
  o The second type to medium-wavelength (greenish) light: M-CONES.
  o The third type to long-wavelength (reddish) light: L-CONES.
- This trichromatic colour representation means that the pattern of responding across the three types of cones provides a unique code for each colour (for example, yellow light produces strong responses in the L- and M-cones and a weak response in the S-cones, a pattern different from that of any other hue).
- A genetic disorder in which one of the cone types is missing (and, in some very rare cases, two or all three) causes a colour deficiency.
  o This trait is sex-linked, affecting men much more often than women.
- Colour deficiency is often referred to as colour blindness, BUT people missing only one type of cone can still distinguish many colours, just not as many as someone who has the full complement of three cone types.
- Like synesthetes, people whose vision is colour deficient often do not realise that they experience colour differently from others.

COLOUR-OPPONENT REPRESENTATION INTO THE BRAIN
- Sensory adaptation occurs because our sensitivity to prolonged stimulation tends to decline over time.
- Staring too long at one colour fatigues the cones that respond to that colour, producing a form of sensory adaptation that results in a colour afterimage.
- In the colour-opponent system, pairs of visual neurons work in opposition:
  o RED-sensitive cells against GREEN-sensitive cells.
  o BLUE-sensitive cells against YELLOW-sensitive cells.
- It may be that opponent pairs evolved to enhance colour perception by taking advantage of excitatory and inhibitory stimulation.
  o RED-GREEN cells are excited (they increase their firing rates) in response to wavelengths corresponding to red.
  o RED-GREEN cells are inhibited (they decrease their firing rates) in response to wavelengths corresponding to green.
  o BLUE-YELLOW cells increase their firing rates in response to blue wavelengths (excitatory).
  o BLUE-YELLOW cells decrease their firing rates in response to yellow wavelengths (inhibitory).
- Fatigue leads to an imbalance in the inputs to the colour-opponent neurons, beginning with the retinal ganglion cells: the weakened signal from the green-responsive cones leads to an overall response that emphasizes red.

THE VISUAL BRAIN
- Streams of action potentials containing information encoded by the retina travel to the brain along the optic nerve.
- Half of the axons in the optic nerve that leave each eye come from retinal ganglion cells that code information in the right visual field, whereas the other half code information in the left visual field.
  o These two nerve bundles link to the left and right hemispheres of the brain, respectively.
- The optic nerve travels from each eye to the lateral geniculate nucleus (LGN), located in the thalamus.
  o The thalamus receives inputs from all of the senses except smell.
- From there the visual signal travels to the back of the brain, to a location called area V1, the part of the occipital lobe that contains the primary visual cortex.
  o Here the information is systematically mapped into a representation of the visual scene.
- There are about 30 to 50 brain areas specialized for vision, located mainly in the occipital lobe at the back of the brain and in the temporal lobes on the sides of the brain.

NEURAL SYSTEMS FOR PERCEIVING SHAPE
- Perceiving shapes depends on the location and orientation of an object's edges.
  o Area V1 is specialized for encoding edge orientation.
- Neurons in the visual cortex selectively respond to bars and edges in specific orientations in space.
- Area V1 contains populations of neurons, each "tuned" to respond to edges oriented at each position in the visual field.
  o This means that some neurons fire when an object in a vertical orientation is perceived, other neurons fire when an object in a horizontal orientation is perceived, and still other neurons fire when objects in a diagonal orientation of 45 degrees are perceived.
  o The coordinated response of all these feature detectors contributes to a sophisticated visual system that can detect where a doughnut ends and celery begins.

PATHWAYS FOR WHAT, WHERE, AND HOW
- Brain researchers have used transcranial magnetic stimulation (TMS) to demonstrate that a person who can recognize what an object is may not be able to perceive that the object is moving.
  o This implies that one brain system identifies people and things and another tracks their movements, or guides our movements in relation to them.
- Two functionally distinct pathways, or visual streams, project from the occipital cortex to visual areas in other parts of the brain.
  o The ventral ("below") stream travels across the occipital lobe into the lower levels of the temporal lobes and includes brain areas that represent an object's shape and identity.
  o The dorsal ("above") stream travels up from the occipital lobe to the parietal lobes (including some of the middle and upper levels of the temporal lobes), connecting with brain areas that identify the location and motion of an object.
    ▪ Because the dorsal stream allows us to perceive spatial relations, researchers originally dubbed it the "where pathway."
    ▪ Neuroscientists then argued that because the dorsal stream is crucial for guiding movements, such as aiming, reaching, or tracking with the eyes, the "where pathway" should more appropriately be called the "how pathway."
- Some of the most dramatic evidence for two distinct visual streams comes from studying the impairments that result from brain injury.
- Visual-form agnosia is the inability to recognize objects by sight.

VISION II: RECOGNIZING WHAT WE PERCEIVE

ATTENTION: THE "GLUE" THAT BINDS INDIVIDUAL FEATURES TO A WHOLE
- Specialized feature detectors in different parts of the visual system analyze each of the multiple features of a visible object: orientation, colour, size, shape, and so on.
- The binding problem is concerned with how features are linked together so that we see unified objects in our visual world rather than free-floating or miscombined features.

ILLUSORY CONJUNCTIONS: PERCEPTUAL MISTAKES
- Researchers have discovered errors in binding that reveal important clues about how the process works.
  o One such error is known as an illusory conjunction, a perceptual mistake in which features from multiple objects are incorrectly combined.
    ▪ For example, participants briefly shown a red A and a blue X may claim to have seen a blue A or a red X instead of the red A and the blue X that had actually been shown.
- QUESTION: Why do illusory conjunctions occur?
- ANSWER: Treisman and her colleagues have tried to explain them by proposing feature integration theory, which holds that focused attention is not required to detect the individual features that comprise a stimulus, such as the colour, shape, size, and location of letters, but is required to bind those individual features together.
  o From this perspective, attention provides the "glue" necessary to bind features together, and illusory conjunctions occur when it is difficult for participants to pay full attention to the features that need to be glued together.
- When experimental conditions are changed so that participants can pay full attention to the coloured letters, they are able to correctly bind their features together, and illusory conjunctions disappear.
- Feature integration theory also helps to explain some striking effects observed when people search for targets in displays containing many items.

THE ROLE OF THE PARIETAL LOBE
- The binding process makes use of feature information processed by structures within the ventral visual stream, the "what pathway."
- But because binding involves linking together features processed in distinct parts of the ventral stream at a particular spatial location, it also depends critically on the parietal lobe in the dorsal stream, the "where pathway."
- More recent studies suggest that damage to the upper and posterior portions of the parietal lobe is likely to produce problems with focused attention, resulting in binding problems and increased illusory conjunctions.
- Neuroimaging studies indicate that these same parietal regions are activated in healthy individuals when they perform the kind of visual feature binding that patients with parietal lobe damage are unable to perform, as well as when they search for conjunctions of features.

BINDING AND ATTENTION IN SYNESTHESIA
- Some researchers have characterized synesthesia as an instance of atypical feature binding.
- Recent research shows that some of the same processes involved in normal feature binding also occur in synesthesia.
- fMRI studies of synesthetic individuals have revealed that the parietal lobe regions already implicated in normal binding of colour and shape also become active during the experience of letter-colour synesthesia.
- Applying TMS to these parietal regions interferes with synesthetic perceptions.
- Other experiments have shown that synesthetic bindings, such as seeing a particular digit in a particular colour, depend on attention.
  o For example, refer to the 2nd paragraph of page 147.
- Although our perceptual experiences differ substantially from those of synesthetes, they rely on the same basic mechanisms of feature binding and attention.