PSY100H1 Lecture Notes - Lecture 10: Connectionism, Noam Chomsky, Phoneme

Lecture 10 - 03-25-13
Continued from last lecture...
Knowledge network
Knowledge is represented via a vast network of connections and associations among all of the information you know. Information is held in nodes, and the nodes are connected much like the networks we've talked about before. Other evidence that knowledge is represented in a network comes from the sentence-verification task, which has been a way to show that a knowledge network is a plausible idea. Participants answer true/false statements (e.g. "robins are birds", "robins are animals") while researchers measure how fast they respond. True statements differ in how direct the association is: some concepts are directly linked, others are a few links away. "Cats have hearts" is thought to involve an indirect path: having a heart is a property represented at the level of animals in general, a cat is an animal, so a cat has a heart. "Cats have claws" requires only one link; it is a much more direct link because not all animals have claws, so the property is stored with the cat itself. The hypothesis was that the length of the associative path determines the speed of answering true or false, and indeed verification takes a little more time when the concepts are not directly linked. This suggests that information may be organized in this manner; it is some evidence that it could be.
Nodes can represent concepts, with labelled links such as "has-a" or "is-a" associating the concepts. The labels matter: "Max is my dog" and "my friend Max has a dog" are different statements and tell us different things; in the first, Max is represented as a dog, while in the second Max owns one. The idea came about that each node represents a concept, and the has-a and is-a links carry the information about how the concepts relate. This is not efficient, however, given the number of links there could be. You end up needing something to oversee the links, something that instructs the network what each link means, and that becomes inefficient.
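To make the idea concrete, here is a minimal sketch (my own illustration, not from the lecture) of a localist network with labelled is-a/has-a links, where verifying a statement means traversing links and the number of links traversed stands in for response time:

# Tiny localist semantic network: each node is one concept,
# labelled links ("isa", "hasa") connect them. Verification time
# is modelled as links traversed, echoing sentence-verification data.
from collections import deque

links = {
    "robin":  [("isa", "bird")],
    "cat":    [("isa", "animal"), ("hasa", "claws")],
    "bird":   [("isa", "animal")],
    "animal": [("hasa", "heart")],
}

def path_length(start, relation, target):
    """Breadth-first search; returns the number of links traversed
    to verify '<start> <relation> <target>', or None if unverifiable."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        node, depth = queue.popleft()
        for rel, nxt in links.get(node, []):
            # A direct match verifies the statement.
            if rel == relation and nxt == target:
                return depth + 1
            # "isa" links let properties be inherited from categories.
            if rel == "isa" and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return None

print(path_length("cat", "hasa", "claws"))  # 1 link: fast "true"
print(path_length("cat", "hasa", "heart"))  # 2 links: slower "true"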
A more complex network (Anderson's ACT) was built around the notion of a proposition, the smallest unit that can be true or false. The proposition is what happens between the agent and the object. Take "dog" and "bone": dog is the agent and bone is the object, and the relationship (the proposition) is "chews", true or false, or "eats", true or false. With this model it becomes easier to represent the huge network of information we have in our knowledge. We can also add to it with time and place: time and location nodes are set between the agent and the object, and with those we can differentiate events. For example, Jacob is the agent, the object is pigeons, the relationship is "feeds", the time is last spring, and the location is Trafalgar Square. All this is to say that you can build these types of networks: a node representing the proposition, nodes for the agent and object, and others for time and location, which give us when and where things happened and other kinds of information. While it works, you can imagine the network would be quite large if every single piece of information has its own node. You think about something, a node lights up, and the surrounding nodes activate in turn: serial processing. There is an alternative that is more common nowadays and thought of as a better representation of what's actually going on.
Propositional networks
Localist representations - each node is equivalent to one concept.
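As an illustration of the structure (a sketch under my own naming, not Anderson's actual formalism), a proposition can be written as a small record tying an agent and an object together through a relation, with optional time and location nodes:

# Minimal sketch of a proposition: the smallest unit that can be
# true or false, linking an agent and an object through a relation,
# optionally tagged with time and location.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Proposition:
    agent: str                      # who/what does it
    relation: str                   # what happens between agent and object
    obj: str                        # who/what it is done to
    time: Optional[str] = None      # when (optional node)
    location: Optional[str] = None  # where (optional node)

p = Proposition(agent="Jacob", relation="feeds", obj="pigeons",
                time="last spring", location="Trafalgar Square")

# A knowledge base is then just a set of such propositions;
# verifying a statement means checking membership.
knowledge = {p, Proposition("dog", "chews", "bone")}
print(Proposition("dog", "chews", "bone") in knowledge)  # True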
Connectionist networks
In a connectionist network, information is not held by a single concept node that activates others. Instead, a whole set of nodes together represents one piece of information, a large pattern of nodes standing for one thing, and those nodes are not unique to that one representation. For example, the concept of the class might involve nodes A, B, C, and D, while the concept of a car involves nodes D, E, and F, with overlap: some of the same nodes take part in representing other information. That makes things complex, because you cannot pinpoint where a given piece of knowledge is held. We call this parallel distributed processing (PDP). There is no central authority dictating how learning happens. Rather, learning happens through changes in the connection weights, the strengths of the connections between different pieces of information. Various learning algorithms determine how the weights are changed, making them stronger or weaker. For example, when cells fire at the same time, or nodes activate at the same time, the connection between them tends to become stronger: "cells that fire together wire together". Error signals, in turn, cause a node to decrease its connections to the input nodes that led to the error (back propagation). A node can send an error signal backwards, in effect saying "you gave me the wrong information"; that changes the strength of the connections between the first and second layers and throughout the whole network, and can reshape how pieces of information fit together in the network.
As for how we take in information: we believe knowledge is stored in memory somewhere, and that cognition relies on that knowledge. It works through a network of this kind, whether with single nodes representing concepts or, more plausibly, with distributed processing. We can model such networks now, and we can also use them to try to figure out how other parts of the brain are functioning and how knowledge interacts with our other systems. The brain divides and conquers: generally, when one action is happening, a number of parts light up and take control of it, each part fitting into that model.
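A toy sketch of the two weight-change rules mentioned above (illustrative sizes and numbers are my own; the error-driven step shown is the single-layer delta rule, the building block that back propagation extends to multi-layer networks):

# Hebbian learning strengthens weights between co-active nodes;
# an error signal then adjusts the connections from input nodes
# that contributed to a wrong output.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(3,))  # weights from 3 input nodes
lr = 0.1                              # learning rate

x = np.array([1.0, 0.0, 1.0])         # input node activations
y = float(w @ x)                      # output node activation

# Hebbian update: weight grows where input and output fire together.
w += lr * y * x

# Error-driven update: compare the output to a target and push the
# blame backwards along the active connections.
target = 1.0
error = target - float(w @ x)
w += lr * error * x                   # inactive inputs (x=0) unchanged

print(np.round(w, 3))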
***
Chapter 9: Language
Lecture Outline
Organization of language
Phonology
Words
Syntax
Sentence parsing
Biological roots of language
Language and thought
Language is one of the things that most makes humans what they are, because of the rich communication it allows. Animals communicate too, but their communication is nowhere near as potent as it is among humans.
The organization of language
If we start to break down what's happening with language: we think of something, have an idea, use our knowledge or something in memory, and turn those thoughts into sounds. Conversely, when someone is talking to us, we hear sounds and turn them into thoughts. There is some hierarchical structure of organization that lets us do this.
Sentences - sequences of words; things we deal with when we're writing and speaking
Words - smallest free forms
Morphemes - smallest units of meaning
Phonemes - smallest units of sound
The structure is hierarchical, with each level composed of sublevels. Phonemes are the sounds being made; English has about 40, which we combine to make words. Even that small set still lets us produce an essentially infinite number of things we could say.
Phonology
Air flows from the lungs and is pushed out through the mouth and nose, modulated by the tongue and teeth to make sounds. We also use the vocal folds.
Voicing
whether the vocal folds vibrate - z, d, b, v
or not - s, t, p, f
Manner of production
whether the air is fully stopped (b, p, d, t) or merely restricted (z, s, v, f)
Place of articulation
where in the mouth the air is restricted
closing of the lips - b, p
top teeth against bottom lip - v, f
tongue behind upper teeth - d, t, z, s
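These three features are enough to tell the consonants above apart. As a sketch (my own restatement of the lecture's examples, not from the textbook), the same information as a lookup table in Python:

# Phoneme feature table: (voiced, manner, place) for the consonants
# the notes mention.
features = {
    "b": (True,  "stopped",    "lips"),
    "p": (False, "stopped",    "lips"),
    "d": (True,  "stopped",    "tongue behind upper teeth"),
    "t": (False, "stopped",    "tongue behind upper teeth"),
    "v": (True,  "restricted", "teeth against bottom lip"),
    "f": (False, "restricted", "teeth against bottom lip"),
    "z": (True,  "restricted", "tongue behind upper teeth"),
    "s": (False, "restricted", "tongue behind upper teeth"),
}

voiced, manner, place = features["b"]
print(voiced, manner, place)  # True stopped lips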
Many words have no clear acoustic boundaries, yet speech segmentation feels effortless. When we listen to a sentence, we hear clear distinctions between the words when in fact there are none in the sound itself. We can't necessarily segment a sentence into pieces of information that make sense when we don't know the language (especially foreign languages). So how do we learn languages in the first place? The input is just one big sound wave. What are its properties, and how are we able to segment the words?