
PSY270H1 Study Guide - Voice-Onset Time, Vocal Folds, Phoneme


Department
Psychology
Course Code
PSY270H1
Professor
Christine Burton

PSY270 JUNE 12, 2012
LANGUAGE
PSYCH LOUNGE WEDNESDAY MORNING
FROM LAST WEEK
WHAT IS LANGUAGE?
DIFFERENT LEVELS: WORDS, SENTENCES, TEXTS AND STORIES
MODULAR VS DOMAIN-GENERAL ACCOUNTS AS WAYS TO TALK ABOUT LANGUAGE
BOTTOM-UP VS TOP-DOWN: PERCEIVING LANGUAGE MEANS DECODING THE SOUND TO GET THE MESSAGE, AND THE MESSAGE RECEIVED IS INFLUENCED BY PAST KNOWLEDGE
Network models
- Hierarchical model: already discussed
- Every node is linked up and down the hierarchy
- Verification takes longer if the answer requires moving between levels
- Problem: more levels should mean a longer travel time, but that is not always true. "A chicken is an animal" (two levels up) is verified faster than "a chicken is a bird" (one level up)
- So knowledge can't have a strictly hierarchical organization
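The level-counting prediction above can be sketched in a few lines of Python (the concept names and links are illustrative, not from the lecture):

```python
# Sketch of a hierarchical semantic network: each concept stores its parent
# category, and verifying "a chicken is an animal" means counting the levels
# crossed on the way up -- the distance the model predicts should drive
# response time.

parents = {
    "canary": "bird",
    "chicken": "bird",
    "bird": "animal",
    "fish": "animal",
}

def levels_between(concept, category):
    """Count how many links we cross going up from concept to category."""
    steps = 0
    while concept != category:
        if concept not in parents:
            return None  # category not reachable from this concept
        concept = parents[concept]
        steps += 1
    return steps

print(levels_between("chicken", "bird"))    # 1 level up
print(levels_between("chicken", "animal"))  # 2 levels up -> model predicts
                                            # slower, but people verify it faster
```

The mismatch between the two-level prediction and the actual (faster) "chicken is an animal" verification is exactly the problem the notes raise.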
Contd
- Spreading activation
- No hierarchy, which avoids the hierarchical model's problem of one level being faster
- When something is present in the environment, its node becomes active: see a fire engine, and its node and everything stored about it become active
- Activity then spreads out to the other nodes
- The length of a link represents the strength of the connection: the shorter the link, the faster activity spreads. After seeing a fire truck, "ambulance" is activated faster than "house"
- What determines the strength (the length) of the connections is experience, so this is a top-down process: what you hear, think, or know about alongside ambulances and fire trucks
- What this means is that everyone has a different knowledge representation
- Problem: everything is possible. Within one person the network keeps changing, so while it is nice that the model can explain every categorization result, it is a problem if it explains everything without really explaining anything
- Activation doesn't stop after one node: it goes out in all directions and keeps spreading, weakening as it goes. Another problem: at some point it has to stop, but the model does not specify when, so one thing could activate all the rest of knowledge
- Don't call this "the spreading activation model"; it is discussed in reference to the hierarchical model
- Like the hierarchical one, this is a semantic network model; what distinguishes it is spreading activation


- The other difference is how activity moves between links: in the hierarchical model it is not an automatic spread but an active search. The effect is the same; the mind is just searching through the hierarchy
- In the spreading model, the mind is not actively looking the way it is in the other one
- Both explain that verification times differ, but not why they differ; neither is quite there
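A minimal sketch of spreading activation under these assumptions (the concepts, weights, and cutoff threshold are all illustrative):

```python
# Minimal spreading-activation sketch. Link "length" is encoded as a weight in
# (0, 1]; activation decays by that weight as it spreads, and stops below a
# threshold -- the cutoff that, as the notes point out, the model itself never
# actually specifies.

links = {
    "fire truck": {"ambulance": 0.9, "house": 0.3},
    "ambulance":  {"hospital": 0.8},
    "house":      {"street": 0.5},
}

def spread(start, threshold=0.2):
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbour, weight in links.get(node, {}).items():
            a = activation[node] * weight  # weakens as it goes
            if a > threshold and a > activation.get(neighbour, 0.0):
                activation[neighbour] = a
                frontier.append(neighbour)
    return activation

act = spread("fire truck")
# "ambulance" ends up far more active than "house", matching the example above.
```

Note that without the arbitrary threshold, the loop would keep spreading activity to everything reachable, which is the "one thing activates all the rest of knowledge" problem.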
Parallel distributed processing
- Newer
- Based not on how categories are organized but on the structure of the brain
- Not nodes but neuron-like units
- Difference from semantic networks: in semantic networks the nodes contain information; here the units do not
- Units are connected with links, with varying amounts of connectivity
- Unlike models where only directly related concepts are linked, every unit is connected with every other one
- It looks hierarchical, but it isn't
- It contains layers of units:
- An input layer, which receives information
- One or more hidden layers, where the (unobservable) processing happens
- And an output layer: information comes in, is processed, and goes out
- Like neurons, units can be in one of three states: active, excitatory, or inhibitory
- Important: each connection (link) has a different weight, i.e., connection strength
- Like link length in spreading activation, a higher weight is a stronger connection. Weights run from 0 to 1, where 0 is no connection and 1 is a perfect connection: if this unit is active, all of its activity is transferred and the other unit becomes equally active
- A weight of .5 makes the receiving unit half as active
- How information is stored: as a pattern of activity. Which units are active depends on the input, the weights, and the hidden layer; the pattern represents a particular category or exemplar → distributed processing
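The layered flow of activity described above can be sketched as a tiny forward pass (the layer sizes and weight values are made-up illustrations of the 0-to-1 connection strengths):

```python
# A minimal forward pass through input -> hidden -> output layers. A weight of
# 1 passes a unit's activity on fully, 0.5 passes half, and 0 passes nothing,
# matching the 0-to-1 connection strengths described in the notes.

def forward(activities, weights):
    """Each receiving unit sums (sending activity * weight), capped at 1."""
    n_out = len(weights[0])
    return [min(1.0, sum(a * w[j] for a, w in zip(activities, weights)))
            for j in range(n_out)]

input_to_hidden = [[0.5, 1.0],   # weights from input unit 0 to the hidden units
                   [0.0, 0.5]]   # weights from input unit 1
hidden_to_output = [[1.0],
                    [0.5]]

hidden = forward([1.0, 1.0], input_to_hidden)   # pattern over the hidden units
output = forward(hidden, hidden_to_output)
```

The "knowledge" here is not stored in any single unit: it is the pattern of activity over `hidden`, which is what makes the representation distributed.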
Contd
- The system learns by a process called back propagation
- Computer simulations mimic it, like the brain
- Feed in a picture of a dog; the receptors of the input layer take the information into the hidden layer
- The weights at the beginning are random, so you get a random pattern of activity and a random output
- So the system starts as a blank slate: it needs feedback (right or wrong), so error information is fed back from the output layer to the hidden layer and the weights are changed
- This continues to adjust the weightings until that input means "dog": a certain input pattern gives the dog output
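The feedback loop above can be sketched in a much-simplified form: a single layer of weights adjusted by an error signal (full back propagation also pushes the error back through hidden layers, which is omitted here; all numbers are illustrative):

```python
import random

# Much-simplified sketch of feedback-driven learning: one layer of weights
# trained with a delta-rule update rather than full multi-layer back
# propagation. Weights start random, so the first output is random; feedback
# about the error gradually reshapes the weights.

random.seed(0)
weights = [random.random() for _ in range(3)]  # random start -> random output

def output(pattern):
    return sum(a * w for a, w in zip(pattern, weights))

dog_pattern = [1.0, 0.0, 1.0]  # illustrative input pattern for "dog"
target = 1.0                   # desired "dog" output

for _ in range(50):
    error = target - output(dog_pattern)   # feedback: how wrong were we?
    weights = [w + 0.1 * error * a         # nudge each weight toward the target
               for w, a in zip(weights, dog_pattern)]

# After training, the dog pattern produces (nearly) the target output.
```

Each pass shrinks the error a little, which is the "continues to adjust the weightings" loop in the notes.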
Contd
- At first, any input produces a random pattern of activity in the hidden and output layers
PDP
- Explains how generalization is easy
- The pattern of activation shouldn't be that different for another dog


- Resistant to faulty input
- What is it? A new McDonald's: even if the arches are green instead of yellow, enough is there to know what you are looking at
- Demonstrates graceful degradation: if something goes wrong in the system, the model can compensate. If some information is missing, or some neurons in the memory trace are not active, the model can compensate for it
- With brain damage, not everything stops working; TOT (tip of the tongue) is an example
- It can always learn, given appropriate feedback
- It models how we think, not anatomical detail
Contd
- Generalization of learning: you know "dog" as a certain pattern; a slightly different German Shepherd shouldn't produce too different a pattern of activation, so only a slightly different output
- Faulty input: if the input is a little bit wrong, you get a slight difference in activity but no difference in output; the activity pattern is essentially the same
- What changes the weightings is feedback
- If most of the pattern is intact, the output does not change
- The distributed pattern stays more or less intact
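Graceful degradation can be sketched by damaging part of a distributed pattern and checking that the output barely changes (all numbers are illustrative):

```python
# Sketch of graceful degradation: because knowledge is spread across many
# small weights, zeroing a few "damaged" connections only nudges the output,
# instead of erasing the memory the way losing a single dedicated node would.

weights = [0.1] * 10   # distributed: each connection carries only a little
pattern = [1.0] * 10   # fully active input pattern

def output(ws):
    return sum(a * w for a, w in zip(pattern, ws))

intact = output(weights)                    # about 1.0
damaged = output([0.0, 0.0] + weights[2:])  # two connections knocked out
# damaged output (about 0.8) stays close to intact: degraded, not destroyed
```

This is the tip-of-the-tongue idea in miniature: partial damage weakens retrieval without wiping it out.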