The Design of Experiments in Neuroscience
The Varieties of Scientific Experiments
Science: an interconnected series of concepts and conceptual schemes that have
developed as a result of experimentation and observation and are fruitful of further
experimentation and observation.
Pseudoscience: unscientific information masquerading as science.
Box 1.1: Rules to Help you Recognize Pseudoscience
1. Read the references. Much pseudoscientific information will not provide references.
If references are provided, make sure the authors of the referenced articles
actually support the claims being made.
2. Be alert for illogical leaps.
3. Do not be impressed by an idea’s longevity. Pseudoscientists often draw on
“ancient wisdom,” even if scientists have long ago discarded the ideas.
4. Do not be swayed by the degrees and awards claimed by the person promoting the
pseudoscience. Diplomas or awards do not make a person a scientist – a skeptical,
evidence-based approach does.
5. Nearly every pseudoscientist will try to sell you something.
Examples of neuroscience in pseudoscience: cranial electrostimulation, testicular
implant treatments for men with low libido.
Scientific methodology: a collection of logical rules, experimental designs,
theoretical approaches, and laboratory techniques that have accumulated
throughout history. Each field of science has slightly different scientific methods.
Experiment: an investigator gains new information by observing results after
changing one variable with all other variables held constant.
Epistemology: the study of how we know what we know.
Authority: a basis for accepting claims as known because a trusted source asserts them (e.g., the church).
Empiricism: the idea that all knowledge arises from experience though the senses.
It is impossible to observe nature and learn its truths without imposing our own
active perceptual processes.
Interfield theories: theories that bridge two fields of science.
Paradigm: accepted facts and approaches in a scientific field at any one time.
Neuron doctrine: the belief that the neuron is the fundamental unit of the nervous
system.
The first step in conducting an experiment is to decide what topic you will investigate.
Start broad and progressively narrow and refine your focus until you arrive at one
testable hypothesis that you will confirm or reject in your research.
Current neuroscience uses a mixture of rationalism and empiricism, and our biases
can have a strong influence on our rationalistic thinking.
Finding a mentor is critical for your early training. A good mentor will be someone
who has expert knowledge in the topic of interest to you and is interested in sharing
that knowledge with you.
Primary research articles: direct reports of experiments conducted by the
authors. These are peer reviewed.
Secondary sources: reviews of research in which published primary reports are summarized.
Peer reviewed: an article is submitted to a journal where the editor sends the
article to two or three experts in the topic. The experts read the manuscript and
determine if the research is well done, if the conclusions are merited based on the
evidence presented, and if the authors provide all the details necessary to allow
replication of the studies by other scientists. They often send detailed suggestions
for further experiments or revisions that the authors are required to address before
the paper is published.
PubMed is an example of a good database of peer reviewed articles.
Twelve Questions to Answer After Reading a Journal Article Describing an Experiment
1. What report is this? (Use full reference citation.)
2. What was the “big question” of the study? What were the specific
research questions of this study?
3. What was previously known about this question? How does answering
the research question(s) add something new to what is already known?
4. Who or what was studied? (Cite number and key characteristics.) What
was the experimental design?
5. In sequential order, what were the major steps in performing the study?
(Record these in a flow chart.) Do not just repeat details from Items 1-4
and 6-9. Create an explanatory sketch that a year from now would help
you recall how the study was done.
6. What data were recorded and used for analysis?
7. What kinds of data analysis were used?
8. What were the results? (Refer to figures.) (After analysis, what do the
data from Item 6 say about the questions addressed in Item 2?)
9. What does the author conclude? (In light of both Item 8 and the entire
study experience, what is said about Item 2?)
10. What cautions does the author raise about interpreting the study?
11. Were there any flaws in this study? How could the experimental design
be improved?
12. What particularly interesting or valuable things did you learn from
reading the report? (Consider results, method, discussion, references,
and so on.)
Falsifiability: the property of a hypothesis that allows it to be disproved by evidence.
Straw-man hypothesis: an implausible assumption that no one would really believe.
Model: a description of a process or phenomenon.
Theory: incorporates diverse phenomena and describes general organizing principles.
What makes a good theory?:
1. A theory that explains more is better than one that explains less.
2. A simpler theory is preferred to a more complex theory if both have equal
explanatory power.
3. Theories are also assessed by their fertility: do they lead to new ideas, new
applications, and/or new connections? Does the theory predict unforeseen
results? Does it lead to testable predictions?
Internal validity: how well the research study allows conclusions about the causal
relationship between the variables, or how reliable and replicable the results are.
Replicable result: is seen when the entire experiment is repeated and similar
results are obtained.
External validity: how well the research study addresses the problem you would
ultimately like to solve, or how well the research study generalizes to other
populations or other settings.
Ecological validity: how well your research mirrors the conditions in the natural
environment.
Basic Research Designs
Observational studies: research that is conducted by simply observing a
phenomenon without manipulating any variables.
Replication means observing the same phenomenon under the same conditions
and obtaining the same result.
Nominal scale: categorical scale, with the categories not in any obvious ranked
position relative to each other.
Ordinal scale: a ranked ordering of instances.
Interval scale: A measurement with equal intervals between the points on the
scale. The differences between values are meaningful but, with no meaningful zero
(the origin of the measurement scale is arbitrary).
Ratio scale: has equal intervals between the points on the scale, but with a
meaningful zero, and the ratios between measurements are meaningful.
Scale of Measurement    Possible Mathematical Operations
Nominal                 Count; test equality of category
Ordinal                 Rank order
Interval                Add; subtract
Ratio                   Add; subtract; multiply; divide
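The distinction between interval and ratio scales can be illustrated with a short sketch (the temperature and weight values are made up for illustration):

```python
# Interval vs. ratio scales: differences are meaningful on both,
# but ratios are only meaningful when the scale has a true zero.

temps_c = [10.0, 20.0]       # interval scale: Celsius zero is arbitrary
weights_kg = [10.0, 20.0]    # ratio scale: 0 kg really means "no weight"

# Subtraction is meaningful on both scales:
print(temps_c[1] - temps_c[0])        # a 10-degree difference: meaningful

# Division is only meaningful on the ratio scale:
print(weights_kg[1] / weights_kg[0])  # 2.0 -> twice as heavy: meaningful
# 20 C / 10 C also equals 2.0, but 20 C is NOT "twice as hot",
# because the origin of the Celsius scale is arbitrary.
```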
Naturalistic observation can occur in the natural environment, outside the
laboratory. When measuring spontaneous behavior, use an approach that allows
replication and quantification, such as time sampling.
Time sampling is where you observe a phenomenon for a fixed interval within each
longer period (e.g., 1 minute out of every 15).
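A time-sampling schedule like the one above can be sketched as a small function (the 1-minute-per-15 parameters are taken from the example; the function name is invented):

```python
def sampling_windows(total_minutes, cycle=15, observe=1):
    """Return (start, end) minute pairs during which observation occurs.

    Observes for `observe` minutes at the start of every `cycle`-minute
    period, which makes the observations replicable and quantifiable.
    """
    windows = []
    for start in range(0, total_minutes, cycle):
        windows.append((start, min(start + observe, total_minutes)))
    return windows

print(sampling_windows(60))  # [(0, 1), (15, 16), (30, 31), (45, 46)]
```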
Structured observations can happen in the natural environment, but the
researcher has intervened in some way to shape the situation (e.g., to elicit the
“scent-roll” behavior, set out an attractive substance and dogs will come roll in it).
Case study: an in-depth description of one particular individual.
Multiple case studies: analysis of multiple cases with similar performance deficits
or similar areas of brain damage.
Case control design: In this design the investigator first selects a group of cases
with the characteristic of interest, and then creates a control group that
resembles the cases in as many relevant ways as possible.
Double dissociation: the presentation of cases with damage to one area (Area A),
who demonstrate the ability to carry out Process 1 but not Process 2, and other
patients with damage to Area B, who cannot complete Process 1 but can easily
complete Process 2.
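The double-dissociation logic can be made concrete with hypothetical data (the group names, process labels, and "ok"/"impaired" codes are all invented for the sketch):

```python
# Hypothetical patient data showing the double-dissociation pattern:
# each group is impaired on exactly the opposite process.
performance = {
    "Area A damage": {"Process 1": "ok",       "Process 2": "impaired"},
    "Area B damage": {"Process 1": "impaired", "Process 2": "ok"},
}

def is_double_dissociation(perf, group1, group2, proc1, proc2):
    """True when the two groups show opposite patterns of sparing/impairment."""
    return (perf[group1][proc1] == "ok" and perf[group1][proc2] == "impaired"
            and perf[group2][proc1] == "impaired" and perf[group2][proc2] == "ok")

print(is_double_dissociation(performance,
                             "Area A damage", "Area B damage",
                             "Process 1", "Process 2"))  # True
```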
Independent variable: is the manipulated variable.
Dependent variable: is the measured variable.
Independent-samples design: in an experiment with the independent-samples
design, the experimenter varies one independent variable across two groups.
Controlled variables: explicitly eliminated or made identical for the two groups.
Confounding variable: changes with the independent variable.
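The random-assignment step of an independent-samples design can be sketched in a few lines (the subject IDs, group sizes, and seed are invented for illustration):

```python
import random

random.seed(0)               # fixed seed so the example split is reproducible
subjects = list(range(20))   # hypothetical subject IDs
random.shuffle(subjects)     # chance spreads potential confounds across groups
treatment, control = subjects[:10], subjects[10:]
print(treatment, control)
```

Random assignment does not eliminate confounding variables the way controlled variables do, but it turns them into unsystematic noise rather than a bias that changes along with the independent variable.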
Population: the entire collection of cases of interest.
Sample: a smaller subset drawn from that population.
Inferential Statistics: statistics that are based on the laws of probability and that
allow us to judge if samples are from different populations or if there are
“statistically significant” differences.
Random sample: random choosing of cases taken from the entire population, with
each case selected independently from the next.
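One simple inferential procedure that uses only the laws of probability is a permutation test: shuffle the pooled scores many times to estimate how often a group difference as large as the observed one would arise by chance. The scores below are made-up illustration data, not results from any study:

```python
import random
from statistics import mean

group_a = [4.1, 5.0, 4.8, 5.3, 4.6]   # hypothetical control scores
group_b = [5.9, 6.2, 5.7, 6.5, 6.0]   # hypothetical treatment scores
observed = mean(group_b) - mean(group_a)

random.seed(1)
pooled = group_a + group_b
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)                        # random relabeling of cases
    diff = mean(pooled[5:]) - mean(pooled[:5])
    if diff >= observed:                          # chance difference as large?
        count += 1

p_value = count / trials   # small p suggests the samples come from
print(p_value)             # different populations
```

A p-value below the conventional 0.05 threshold is what these notes call a "statistically significant" difference.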