Psychology 2135A/B Lecture Notes - Information Processing, Dependent And Independent Variables, Artificial Neural Network


Department: Psychology
Course Code: PSYCH 2135A/B
Professor: Robert Brown

1. Paradigms
- Kuhn argued that scientists work within paradigms, and new paradigms that work better replace old ones, e.g. Einstein's theory of relativity
- after a paradigm shift, science enters a period of normal science
- eventually data accumulate that the new model cannot handle, so a newer model is created
- a paradigm shift is a very dramatic change in what everybody believes
- the paradigm determines what questions we ask, what terms we use, how we analyze data, etc.
A. Information processing methods
- based on computer programs and computer architecture
- architecture: computers have an input system (keyboard), a central processor that does the computing, and an output system (screen)
- program: by analogy, humans have input systems (the senses), a central processor (the mind), and an output system (commands to the muscles)
- information is stored in the computer in symbolic form (binary code: 1s and 0s); human minds must have something similar
- Pylyshyn (psychologist): a symbol has a physical form and also has a meaning; thus, symbols were said to solve the mind-body problem (not really)
- the goal is to figure out the architecture and the program running the human mind
- information flows through the organism, is processed in stages, and is stored in specific places while being processed (see the sketch below):
  sensory input -> attention -> working memory -> long-term memory -> response output
- each function is modelled on how a computer does the task
- this system encodes information into representations - what are those representations like?
- processes operate on those representations - what are those processes like? (how do we retrieve a memory, how do we find it?)
- the model is a general-purpose system - a small set of processes can be combined in a great many ways
- the model in the textbook is wrong: response output cannot come directly from long-term memory
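A minimal Python sketch of this stage model, using made-up function names for each stage; it only illustrates the idea that information passes through sensory input, attention, and working memory (which exchanges with long-term memory) before a response is produced - it is not the lecture's or the textbook's actual model.

```python
# Hypothetical sketch of the stage model: each processing stage is a simple
# function, and information flows through them in sequence.

def sense(stimulus):
    # sensory input: encode the raw stimulus
    return {"raw": stimulus}

def attend(sensory_trace):
    # attention: select part of the sensory input for further processing
    return {"attended": sensory_trace["raw"]}

def working_memory(attended, long_term_store):
    # working memory: hold the attended information and exchange it
    # with long-term memory (storage and retrieval)
    long_term_store.append(attended["attended"])   # store
    retrieved = long_term_store[-1]                # retrieve
    return {"active": attended["attended"], "retrieved": retrieved}

def respond(wm_contents):
    # response output: commands to the muscles, driven by working memory
    return "response to " + wm_contents["active"]

long_term_store = []  # crude stand-in for long-term memory
print(respond(working_memory(attend(sense("a red light")), long_term_store)))
```

In this sketch the response is produced from working memory rather than directly from long-term memory, matching the correction noted above.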
B. Connectionist models
- Neural networks: built from simple neuron-like elements called nodes
- nodes have two qualities: an activation level (energy level) and weights on their connections to other nodes (which adjust how much activation is transferred)
- experience determines the amount of energy transferred between nodes (it sets the connection weights)
- across a set of nodes, the pattern of activation shows which nodes are on and which are off
- patterns of activation across a large population of nodes constitute representations
- the network doesn't tell us what things are, just the relations between them
- when any node is turned on, it can cause other nodes to become activated (as sketched below)
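A minimal Python sketch of this spreading-activation idea, with made-up node names and hand-set weights (in a real connectionist model the weights would be learned from experience):

```python
# Hypothetical toy network: each node has an activation level, and the
# weighted connections determine how much activation is passed along.
nodes = {"dog": 0.0, "bark": 0.0, "cat": 0.0}          # activation levels
weights = {("dog", "bark"): 0.8, ("dog", "cat"): 0.3}  # connection strengths

def activate(node, amount, nodes, weights):
    # Turn a node on and spread activation along its weighted connections.
    nodes[node] += amount
    for (src, dst), w in weights.items():
        if src == node:
            nodes[dst] += amount * w   # transfer scaled by the weight

activate("dog", 1.0, nodes, weights)
print(nodes)  # {'dog': 1.0, 'bark': 0.8, 'cat': 0.3}
# the pattern of activation across all the nodes is the representation
```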