University of Guelph – Psychology – PSYC 2360 (Naseem Al-Aidroos) – Final Exam Study Guide, Winter
Research Methods Final Exam: Chapters 11-14
TEXTBOOK/LECTURE NOTES
CHAPTER 11 – EXPERIMENTAL RESEARCH: FACTORIAL DESIGNS
- Most experimental research designs include more than one independent variable (aggressive behaviour can be
caused by more than one thing)
Factorial Experimental Designs:
- Factorial experimental designs: experimental designs with more than one IV (manipulation)
- Factor: each of the manipulated variables
o 1 IV (one-way design); 2 IVs (two-way design), etc.
- Factorial designs are noted in a way that shows the number of IVs (# of factors) and how many levels there are in each
factor (IV)
o 2x2 (two-way design, 2 levels of each factor)
o 2x3 (two-way design, one factor with two levels, one factor with three levels)
- Cells: number of conditions (total #)
o 2x2 (4 conditions)
o 3x3 (9 conditions)
- Don’t need as many participants when there are more IVs; one factorial experiment is more efficient than running a separate experiment for each IV
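The cell counts in the notation above are just the product of each factor's number of levels. A tiny sketch (illustrative code, not from the text):

```python
from math import prod

def n_cells(levels):
    """Total number of conditions (cells) in a factorial design.

    `levels` lists the number of levels of each factor,
    e.g. [2, 2] for a 2x2 design.
    """
    return prod(levels)

print(n_cells([2, 2]))  # 2x2 design -> 4 conditions
print(n_cells([2, 3]))  # 2x3 design -> 6 conditions
print(n_cells([3, 3]))  # 3x3 design -> 9 conditions
```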
The Two-Way Design:
- 1st IV (2 levels): violent vs. non-violent cartoon; 2nd IV (2 levels): frustrated vs. non-frustrated
- Four conditions: violent cartoon, frustrated (1); violent cartoon, non-frustrated (2); non-violent cartoon,
frustrated (3); non-violent cartoon, non-frustrated (4)
- Crossing the factors: each level of each IV occurs with each level of other IV’s
o Accomplished through random assignment
o Crossing 2 factors; 2x2 design = 4 OUTCOMES/CONDITIONS
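Crossing the factors amounts to taking the Cartesian product of the factor levels. A minimal sketch using the cartoon example (illustrative, not from the text):

```python
from itertools import product

# The two factors of the cartoon example (a 2x2 design)
cartoon = ["violent", "nonviolent"]
prior_state = ["frustrated", "nonfrustrated"]

# Crossing the factors: every level of each IV occurs with
# every level of the other IV
conditions = list(product(cartoon, prior_state))
for c in conditions:
    print(c)
print(len(conditions))  # 4 conditions
```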
Main Effects:
- Schematic diagram: a 2x2 table of the four cell means; each row and each column also has a total
(marginal) mean
o When means are combined across the levels of another factor, they are said to control for, or to
collapse across, the effects of the other factor (these combined means are called marginal means)
o Differences on the dependent measure across the levels of one factor, controlling for all other factors in the
experiment, are called the main effect of that factor
Basically, what does it show? Effect, or no?
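A marginal mean is just the cell means averaged across the levels of the other factor. A minimal sketch with invented cell means (not data from the text):

```python
# Hypothetical 2x2 table of cell means (aggression scores).
# Keys are (cartoon type, prior state); values are invented.
cells = {
    ("violent", "frustrated"): 5.0,
    ("violent", "nonfrustrated"): 4.0,
    ("nonviolent", "frustrated"): 3.0,
    ("nonviolent", "nonfrustrated"): 2.0,
}

def marginal_mean(cells, factor_index, level):
    """Mean for one level of one factor, collapsed across the other factor."""
    vals = [m for key, m in cells.items() if key[factor_index] == level]
    return sum(vals) / len(vals)

# Main effect of cartoon type: compare its two marginal means
print(marginal_mean(cells, 0, "violent"))     # 4.5
print(marginal_mean(cells, 0, "nonviolent"))  # 2.5
```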
Interactions and Simple Effects:
- Interaction: a pattern of means that may occur in a factorial experimental design where the influence of 1 IV on
the DV is different at different levels of another variable
- Simple effect: the effect of one factor within a level of another factor (ex. the effect of viewing violent vs. non-
violent cartoons for frustrated children)
The ANOVA Summary Table:
- For a 2x2 design, there may or may not be a significant main effect of the first factor, may or may not be a
significant main effect of the second factor, there may or may not be a significant interaction between the first
and second factor
- In factorial designs, each main effect and each interaction has its own F test, degrees of freedom, and p-value
- It is possible to compute, for each main effect and interaction, an effect size statistic
Understanding Interactions: Patterns of Observed Means:
- main effect vs. interactions
- Main effect is present if the average height of the line (the solid line) representing one
condition is greater than or less than the average height of the line representing another
condition (the dashed line)
- An interaction is present when the two lines are not parallel; non-parallel lines
show that the effect of one variable differs across the levels of the other
- Patterns with main effects only: ex. children showed more aggression after viewing violent (vs. non-violent)
cartoons regardless of whether they were frustrated; OR frustrated children were more aggressive than
nonfrustrated children
- Patterns with main effects and interactions: when the interaction is such that the simple effect in one level of
the second variable is opposite, rather than just different, from the simple effect in the other level of the second
variable, the interaction is called a crossover interaction
o A crossover interaction can occur together with a main effect
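The parallel-lines rule can be checked numerically from a table of cell means. A minimal sketch with invented numbers showing a crossover pattern (not data from the text):

```python
# Simple effects and interaction from a 2x2 table of cell means.
# All numbers are invented to illustrate a crossover pattern.
cells = {
    ("violent", "frustrated"): 2.0,
    ("violent", "nonfrustrated"): 5.0,
    ("nonviolent", "frustrated"): 5.0,
    ("nonviolent", "nonfrustrated"): 2.0,
}

def simple_effect(state):
    """Effect of violent vs. nonviolent cartoons within one prior state."""
    return cells[("violent", state)] - cells[("nonviolent", state)]

e_frustrated = simple_effect("frustrated")        # -3.0
e_nonfrustrated = simple_effect("nonfrustrated")  # +3.0

# Unequal simple effects (non-parallel lines) signal an interaction;
# opposite-signed simple effects signal a crossover interaction.
has_interaction = e_frustrated != e_nonfrustrated
is_crossover = e_frustrated * e_nonfrustrated < 0
print(has_interaction, is_crossover)  # True True
```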
Interpretation of Main Effects When Interactions Are Present:
- When there is a statistically significant interaction between the two factors, the main effects of each factor must
be interpreted with caution – this is because the presence of an interaction indicates that the influence of each
of the two independent variables cannot be understood alone; the main effects of each of the two factors are
said to be qualified (actually has an effect WHEN the other factor does) by the presence of the other factor
o Ex. it would be inaccurate to say that the viewing of violent cartoons increases aggressive behaviour,
even though the main effect of the cartoon variable is significant; this is because the interaction
demonstrates that this pattern is only true for nonfrustrated children.
More Factorial Designs:
- The factorial design is the most common of all experimental designs (2x2 design is the simplest factorial design)
The Three-Way Design:
- 2x2x2 (cartoon viewed: violent, nonviolent; prior state: frustrated, not frustrated; sex of child: male, female)
- The number of main effects and interactions increases in a 3-way design
- There are also two-way interactions (interactions that involve the relationship between two variables,
controlling for the third variable)
o The addition of sex of child as a factor results in 8 (rather than 4) conditions
o Interaction between sex of child and cartoon type interaction (controlling for prior state); interaction
between sex of child by prior state (controlling for cartoon viewed); interaction between cartoon and
prior state (controlling for gender)
- The three way interaction: tests whether all three variables simultaneously influence the dependent measure
o F-test is significant when the two-way interactions are different at the different levels of the third
variable
- As the number of conditions increases, so does the number of participants needed and it becomes more difficult
to interpret the patterns of the means
Factorial Designs Using Repeated Measures:
- To create equivalence in factorial research designs = random assignment OR can use repeated-measures designs
(individuals participate in more than one condition of the experiment)
- Factorial designs can be entirely between participants (random assignment is used on all of the factors); may be
entirely repeated measures (the same individuals participate in all of the conditions); or a bit of both
- Designs in which the factors are mixed (some between participants, some repeated measures) are called mixed factorial designs
Comparison of the Condition Means in Experimental Designs:
- When more than two groups are being compared, a significant F does not indicate which groups are significantly
different from each other
o Ex. although it tells us that the effect of viewing violent cartoons is significantly different for frustrated
than for nonfrustrated children, it doesn’t tell us which means are significantly different from each other
We need to go further to figure out whether viewing violent cartoons caused significantly
MORE aggression for children who were not frustrated and whether viewing the violent
cartoons significantly decreased aggression for children in the frustration condition
Further statistical tests can answer the above questions – called mean comparisons = conducted
to discover which group means are significantly different from each other
Pairwise Comparisons:
- Pairwise comparison: any one condition mean is compared with any other condition mean
o There can be a lot of comparisons; in 2x2 factorial design, there can be 6
Violent cartoons-frustrated with violent cartoons-not frustrated
Violent cartoons-frustrated with nonviolent cartoons-frustrated
Violent cartoons-frustrated with nonviolent cartoons-not frustrated
Violent cartoons-not frustrated with nonviolent cartoons-frustrated
Violent cartoons-not frustrated with nonviolent cartoons-not frustrated
Nonviolent cartoons-frustrated with nonviolent cartoons-not frustrated
o Not practical to do a statistical test on each one of these, because each test carries a Type 1 error rate of
alpha = .05; across many tests, this error can accumulate to a high probability
o Experimentwise alpha: the probability of the experimenter having made a Type 1 error on at least one of
the comparisons (when six comparisons are made, the experimentwise alpha is .30 (.05 x 6); when 20
comparisons are made, the experimentwise alpha is 1.00, which indicates that at least one significant
comparison would be expected by chance alone)
- Three ways to reduce the experimentwise alpha
o Compare only the means in which specific differences were predicted by the research hypothesis
These tests called planned comparisons/a priori comparisons
o Post hoc comparisons: mean comparisons that (by taking into consideration that many comparisons are
being made and that these comparisons were not planned ahead of time) help control for increases in
experimentwise alpha; researchers are only allowed to conduct them if the overall F test is significant
o Complex comparisons
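These counts can be verified directly: with 4 condition means there are C(4, 2) = 6 pairwise comparisons, and the chapter's .05 x 6 figure is the simple k * alpha approximation of experimentwise alpha (the exact rate for independent tests is 1 - (1 - alpha)^k). The alpha/k division at the end is the standard Bonferroni adjustment, included here as context rather than taken from the notes:

```python
from math import comb

alpha = 0.05

# A 2x2 design has 4 condition means -> C(4, 2) = 6 pairwise comparisons
k = comb(4, 2)
print(k)  # 6

# Experimentwise alpha, using the chapter's k * alpha approximation
print(round(k * alpha, 2))  # 0.3

# Exact chance of at least one Type 1 error across k independent tests
print(round(1 - (1 - alpha) ** k, 3))  # 0.265

# Bonferroni adjustment: test each comparison at alpha / k
print(alpha / k)
```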
Complex Comparisons:
- Complex comparisons: more than two means are compared at the same time
o Complex comparisons are usually conducted with contrast tests
Lecture:
- In the notation (e.g. 3x3x3x3x3), the count of numbers tells you how many things are being manipulated (here, five factors), and each number gives that factor's number of levels
- The F-value asks: did this manipulation produce the expected effect? The size of the F-value reflects how big
the effect of the manipulation was, and its p-value tells you whether the effect is significant
o If the F-value for the interaction isn't large, there is no interaction because the difference isn't large
- A significant interaction indicates: the effect of one IV on the DV differs across the levels of the other IV
- How to recognize main effects: look at the table of means, look at the line graph; interactions: look at the line
graph (lines parallel = no interaction)
- Use t-tests to compare relevant pairs of conditions
- Even when the averages (marginal means) are the same between the two, there can be an interaction if the lines aren't parallel
- No difference between the two lines = no main effect
- The more t-tests you run, the more you increase your Type 1 error
- Comparing condition means = each individual comparison has an alpha likelihood of resulting in a Type 1 error;
performing multiple comparisons results in a greater-than-alpha chance of a Type 1 error
- Familywise error rate = same thing as experimentwise alpha
- If only the interaction is significant, interpret it through mean comparisons
- Factorial ANOVA with three independent variables:
o 2 variables = 3 f-values; 1 variable = 1 f-value; 3 variables = 7 f-values
o 7 f-values in total: one for each of the main effect (there are 3); one for each possible two-way
interaction (there are 3); one for the 3-way interaction
- Three advantages of factorial designs:
o More efficient (test for more than 1 main effect in the same experiment); more comprehensive (tell us
more of the whole story – how do variables interact?); more valid (increased external validity – can
generalize conclusions to more situations)
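The lecture's F-value count follows from counting main effects and interactions: for k fully crossed factors there are 2^k - 1 F tests in total. A quick check (illustrative code, not from the notes):

```python
from math import comb

def n_f_tests(k):
    """Number of F tests in a fully crossed k-factor ANOVA: one per main
    effect plus one per possible interaction, i.e. 2**k - 1 in total."""
    return sum(comb(k, j) for j in range(1, k + 1))

print(n_f_tests(1))  # 1 (one main effect)
print(n_f_tests(2))  # 3 (2 main effects + 1 interaction)
print(n_f_tests(3))  # 7 (3 main + 3 two-way + 1 three-way)
```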
CHAPTER 12 – EXPERIMENTAL CONTROL AND INTERNAL VALIDITY
Threats to the Validity of Research:
- Four major types of threats to validity
o 1. A threat to construct validity
Occurs when the measured variables used in the research are invalid because they do not
adequately assess the conceptual variables they were designed to measure
o 2. Threats to statistical conclusion validity
Occurs when the conclusions that the researcher draws about the research hypothesis are
incorrect because either a Type 1 or Type 2 error has occurred; Type 1 = researcher mistakenly
rejects the null hypothesis; Type 2 = researcher mistakenly fails to reject the null hypothesis
o 3. Threats to internal validity
Refers to the extent to which we can trust the conclusions that have been drawn about the
causal relationship between the independent and dependent variable
o 4. Threats to external validity
Refers to the extent to which the results of a research design can be generalized beyond the
specific settings and participants used in the experiment to other places, people, and times
Experimental Control:
- Experimental control: the extent that the experimenter is able to eliminate effects on the dependent variable
other than the effects of the independent variable; the greater the experimental control, the more confident we
can be that it is the independent variable, rather than something else that caused changes in the dependent
variable
Extraneous Variables:
- One cause of Type 2 errors is the presence of extraneous variables
- Their presence increases the within-groups variability of a design and thereby reduces power
Confounding Variables:
- Confounding variables: variables other than the independent variable on which the participants in one
experimental condition differ systematically or on average from those in other conditions
- Confounding variables are created during the experiment itself and are unintentionally created by experimental
manipulations
- Confounding: the other variable is mixed up with the independent variable, making it impossible to determine
which of the variables has produced changes in the dependent variable
- Internal validity: the extent to which changes in the dependent variable can confidently be attributed to the
effect of the independent variable, rather than to the potential effects of confounding variables
- Alternative explanations: the confounding variable may not actually affect the dependent variable, and maybe
the independent variable still caused the dependent variable; but this is not known for sure
Control of Extraneous Variables:
Limited-Population Designs:
- One approach to controlling variability among participants is to select them from a limited, and therefore
relatively homogeneous, population
Before-After Designs:
- Measuring the variable you are testing before and after the study to find differences
- Similar to repeated measure design because the dependent variable is measured more than one time
- Both these increase the power of an experiment by controlling variability among participants
- In repeated measures designs, each individual is in more than one condition of the experiment; in before-after
design, each person is in only one condition (but, the dependent variable is measured more than one time, with
the first measurement serving as a baseline measure)
- Before-after designs can unfortunately cause retesting effects (fatigue might occur, practice effects, might guess
hypothesis, etc.)
Matched-Group Designs:
- Matched-group research design: participants are measured on the variable of interest before the experiment
begins and then are assigned to conditions on the basis of their scores on that variable (i.e. will pick people to be
in the experiment)
o This procedure reduces differences between the conditions on the matching variable and increases
power
o As long as the matching variable (between the two groups) correlates with the dependent measure, the
use of this group design will be beneficial
o Random assignment is normally sufficient to ensure that there are no systematic differences BETWEEN the
experimental conditions, without using this method; matching is only used if one feels that it is
necessary to attempt to reduce variability among participants WITHIN the experimental conditions
o Matched-group designs are most useful when there are measures that are known to be correlated with
the dependent measure that can be used to match the participants, when there are expected to be
large differences among the participants on the measure, and when sample sizes are small
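The matched-group procedure can be sketched as: rank participants by the matching (pretest) variable, then randomly assign the members of each block of similar scorers across conditions. The helper below is hypothetical (invented names and invented pretest scores, not from the text):

```python
import random

def matched_group_assignment(scores, n_conditions=2, seed=0):
    """Hypothetical sketch of matched-group assignment: sort participants
    by a pretest score, then within each block of similar participants,
    randomly assign one participant to each condition."""
    rng = random.Random(seed)
    ranked = sorted(scores, key=scores.get)   # participant ids, low to high
    groups = {c: [] for c in range(n_conditions)}
    for i in range(0, len(ranked), n_conditions):
        block = ranked[i:i + n_conditions]    # participants with similar scores
        conds = list(range(n_conditions))
        rng.shuffle(conds)                    # random assignment within block
        for person, cond in zip(block, conds):
            groups[cond].append(person)
    return groups

# Invented pretest scores on the matching variable
pretest = {"p1": 10, "p2": 11, "p3": 20, "p4": 21, "p5": 30, "p6": 29}
assignment = matched_group_assignment(pretest)
print(assignment)
```

Because each block contributes one participant per condition, the conditions end up balanced on the matching variable.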
Standardization of Conditions:
- Standardization of conditions: accomplished when all participants in all levels of the independent variable are
treated in exactly the same way (with the exception of the independent variable)
- To help ensure standardization, a researcher contacts all participants in all of the experimental conditions in the
same manner, provides the exact same consent form and instructions, ensures interaction with the same
experimenters in the same room, and runs the experiment at the same time of the day; as the experiment
continues, the activities of the groups are kept the same (except for the changes in the experimental
manipulation)
- The most useful tool for ensuring standardization of conditions is the experimental script/protocol: contains all
the information about what the experimenter says and does during the experiment
- Using automated devices: tape recorders or computers to run the experiment; exactly the same instructions
given to everyone
o Disadvantages: if the person is daydreaming or something else makes them miss an important part
of the instructions, there is no way to know about or correct this omission; these techniques don't allow
participants to ask questions; it is more beneficial for the experimenter to be there at the beginning of the
study to answer any questions, then they can leave and allow the experiment to progress
Creation of Valid Manipulations:
Impact and Experimental Realism:
- When the manipulation creates the hoped-for changes in the conceptual variable, we say that it has had impact
- The extent to which the experimental manipulation involves the participants in the research is known as
experimental realism
o This is increased when participants take the study seriously
o Use strong manipulations (ex. for violent vs. nonviolent shows, use an extremely violent show and an
extremely nonviolent show)
Manipulation Checks:
- Manipulation checks: measures used to determine whether the experimental manipulation has had the
intended impact on the conceptual variable of interest
o Also used to figure out if participants notice the manipulation
- Manipulation checks particularly important when no significant relationship is found between the independent
and dependent variables (did they notice the manipulation?)
- Manipulation checks are easy to do and should almost always be used
- Can be used to make alternative tests of the research hypothesis in cases where the experimental manipulation
does not have the expected effect on the dependent measure
o Ex. an experiment in which the independent variable did not have the expected effect on the dependent
variable; on the basis of a manipulation check, it is clear that the manipulation did not have the
expected impact – the manipulation didn’t produce what it was supposed to
o Should conduct an internal analysis: involves computing a correlation of the scores on the manipulation
check measure with the scores on the dependent variable as an alternative test of the research
hypothesis; turns an experimental design into a correlational study (used only when no significant
relationship between the experimental manipulation and the dependent variable is initially found)
Confound Checks:
- The manipulation must avoid changing other, confounding conceptual variables
o In this case, might use a manipulation check (asking participants something to figure out their reasoning)
o Might want to use one or more confound checks to see if the manipulation had any unintended effects
Confound checks: measures used to determine whether the manipulation has unwittingly
caused differences on confounding variables
Asking another question to see if the effect occurred for another reason (ex. you want the
participants to be bored with one task, so a manipulation check would be to ask them their level
of excitement toward the task; a confound check would be asking them how difficult they found
the task – to see if difficulty was a confounding variable, instead of just asking the question that will
hopefully support your hypothesis)
How to Turn Confounding Variables into Factors:
- The experiment should be designed so that the confounding variables turn into factors; e.g. change from having
experimenters run different conditions to each experimenter running all conditions
