
PSYC 300B Chapter Notes - Chapter 9: Multi-Factorial Research Designs


Department: Psychology
Course Code: PSYC 300B
Professor: David Medler
Chapter: 9

multi-factorial research designs: involve 2 or more factors (IVs), each
with more than 1 level, & 1 DV in a single balanced design
-analyzed as a single experiment rather than a series of 1-way designs
α remains at 0.05 for the whole analysis, thus type I error is not inflated
-types of factorial designs (determined by measurement of factors):
(i) independent/between-group design: all factors are
independent variables, & participants contribute data to only
one experimental condition
(ii) repeated-measures/within-group design: all factors are
repeated-measures variables, & participants contribute data to
every level of every factor
(iii) mixed factorial: has at least 1 independent & 1 RM factor
-naming the design is based on:
(a) # of factors
(b) # of levels of each factor
(c) how each factor is measured (ex. between-groups, RM, etc.)
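Illustration (not from the chapter): a minimal Python sketch of the naming rule above; the factor names, levels, & measurement types below are made up.

# minimal sketch of naming rules (a)-(c); all factor details are hypothetical
factors = {
    "Drug":    {"levels": ["placebo", "low", "high"], "measure": "between-groups"},
    "Session": {"levels": ["pre", "post"],            "measure": "repeated-measures"},
}

n_levels = [len(f["levels"]) for f in factors.values()]      # (b) # of levels per factor
label = " x ".join(str(n) for n in n_levels)                 # e.g. "3 x 2"
measures = {f["measure"] for f in factors.values()}          # (c) how each factor is measured
kind = "mixed factorial" if len(measures) > 1 else measures.pop() + " factorial"
print(label, kind, "-", len(factors), "factors")             # (a) -> "3 x 2 mixed factorial - 2 factors"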
data matrix: a display that includes the factors, the levels of each factor, &
the cell means for each experimental group (DV)
-factor (independent variable): labeled on the outside of the data matrix
-levels (subgroups of that factor): a sub-heading, labeled outside of matrix
-cell means: represent data for each condition in the experiment
a cell mean is the mean score for 1 level of one factor combined with
1 level of the other factor
# of cell means = the product of the numbers in the design label
(ex. a 2 x 3 design has 6 cell means)
only use a figure if you have at least 4 cell means (a 2 x 2 design) or a
significant interaction
-marginal means: the mean score for 1 level of 1 factor, averaged across
the levels of the other factor
have both row & column marginal means
# of marginal means = the sum of the numbers in the design label
(ex. a 2 x 3 design has 5 marginal means; a pandas sketch after this list
illustrates cell & marginal means)
-main effects: there is one main effect for each factor in the design, & each
main effect is part of the total treatment effect
# of main effects = # of factors
you test the null for each factor, so a unique F(obs) is computed for
every main effect
test main effects by using the marginal means (the effect of one factor
is averaged over the levels of another factor)
some main effects can be predicted a priori from theory/past research
column main effect — the same as collapsing over the row factor &
applying a 1-way ANOVA to the column marginal means
-determines whether the column factor affects the DV, averaged over the rows
row main effect — the same as comparing the row marginal means
(a t-test when the row factor has only 2 levels)
-determines whether the row factor affects the DV, averaged over the columns
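Illustration (not from the chapter): a pandas sketch of a data matrix for a hypothetical 2 x 3 between-groups design; the factors "A" & "B", the DV "score", & all values are invented.

import pandas as pd

df = pd.DataFrame({
    "A":     ["a1"] * 6 + ["a2"] * 6,
    "B":     ["b1", "b1", "b2", "b2", "b3", "b3"] * 2,
    "score": [3, 5, 6, 8, 7, 9, 4, 6, 9, 11, 12, 14],
})

# data matrix: one cell mean per combination of levels (2 x 3 = 6 cell means)
cells = df.pivot_table(values="score", index="A", columns="B", aggfunc="mean")
print(cells)

# marginal means: average over the levels of the other factor (2 + 3 = 5 of them)
row_marginals = cells.mean(axis=1)   # used to test the main effect of A
col_marginals = cells.mean(axis=0)   # used to test the main effect of B
print(row_marginals)
print(col_marginals)

With equal n per cell, averaging the cell means in a row or column gives the same marginal means that the main-effect tests use.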
interaction: the extent to which the effect of 1 factor varies/changes across
levels of a 2nd factor
-whether or not the presence of a 2nd factor changes how the 1st factor
influences the behaviour of participants in the study
-represents a test of the treatment effect & is calculated from cell means
therefore requires a computed F(obs) value
-a significant interaction means the effect of one factor changes
across levels of a second factor
-cannot predict the presence of an interaction a priori, thus they cannot be
analyzed with planned comparisons
-can only use post-hoc comparisons if we have a significant interaction term
generalizability of a factor: the extent to which the effect of 1 factor is the
same/consistent across levels of the other factor
-corresponds to a non-significant interaction — the effect of the factor is the
same regardless of which level of the other factor is present
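Illustration (not from the chapter): a sketch of testing the A x B interaction with statsmodels (assumed to be installed), reusing the invented 2 x 3 data from the sketch above.

import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "A":     ["a1"] * 6 + ["a2"] * 6,
    "B":     ["b1", "b1", "b2", "b2", "b3", "b3"] * 2,
    "score": [3, 5, 6, 8, 7, 9, 4, 6, 9, 11, 12, 14],
})

# "C(A) * C(B)" expands to both main effects plus the A x B interaction
model = ols("score ~ C(A) * C(B)", data=df).fit()
print(anova_lm(model, typ=2))   # one F & p per effect: C(A), C(B), C(A):C(B), Residual

# a significant C(A):C(B) row = the effect of A changes across levels of B;
# a non-significant row supports generalizability of A across B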
advantages — combining 2 factors into just 1 analysis, rather than having to
carry out multiple individual 1-way designs, is better because it:
(i) is more economical — you can study 2 or more factors with the same
set of participants, therefore needing fewer participants overall
(ii) reduces some unexplained variability — easier to control for
some confounding variables, & there is less opportunity for experimenter
error or variability among participants
(iii) you have the ability to test for an interaction or to examine
the generalizability of a factor — the major advantage of the design
logic of ANOVA — applied to a multi-factorial design:
-apply the random sampling model of hypothesis testing
-participants in each condition are randomly sampled from a unique population
under the null, every population has the same parameters (remains the same)
under the alternative, at least one of the population means will shift
-makes the assumptions:
(i) each condition is associated with its own population
(ii) the DV is normally distributed in each population
(iii) homogeneity of population variances for each cell
(iv) homogeneity of sample sizes (equal # of scores in each cell)
(should not violate either homogeneity assumption; a scipy check of
assumptions (ii)-(iv) is sketched after this section)
(v) n ≥ 7 for each cell
-each condition has its own population distribution, sample distribution,
& sampling distribution of the means
all the sampling distributions for each condition are then combined
in order to create a unique null hypothesis distribution for each
treatment effect
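Illustration (not from the chapter): a scipy sketch of checking assumptions (ii)-(iv) cell by cell; the four cells & their scores are invented, with n = 7 per cell.

import numpy as np
from scipy import stats

# one array of scores per cell (equal n per cell satisfies assumption (iv))
cells = {
    ("a1", "b1"): np.array([3, 5, 4, 6, 5, 4, 7]),
    ("a1", "b2"): np.array([6, 8, 7, 9, 8, 7, 10]),
    ("a2", "b1"): np.array([4, 6, 5, 7, 6, 5, 8]),
    ("a2", "b2"): np.array([9, 11, 10, 12, 11, 10, 13]),
}

for label, scores in cells.items():
    w, p = stats.shapiro(scores)               # (ii) normality of the DV in each cell
    print(label, "Shapiro-Wilk p =", round(p, 3))

stat, p = stats.levene(*cells.values())        # (iii) homogeneity of cell variances
print("Levene p =", round(p, 3))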
F(obs) — because there is more than 1 null hypothesis to test (more than 1
sampling distribution), consequently need to compute more than 1 F-ratio
-# of treatment effects = # of F(obs) = # of unique sampling distributions
-each F-test is independent of the others & they do not influence one
another, so type I error is not inflated
-there is a unique F(obs) for each effect in the design
(i) 1 for each main effect (individual variable)
(ii) 1 for each possible combination of main effects (interaction terms)
[ex. for a 3 factor design — there are 3 main effects (A, B & C),
& 4 interaction terms (A x B, A x C, B x C, A x B x C)]
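Illustration (not from the chapter): enumerating the effects (& therefore the F-ratios) for the 3-factor example above with itertools.

from itertools import combinations

factors = ["A", "B", "C"]
effects = [" x ".join(combo)
           for size in range(1, len(factors) + 1)
           for combo in combinations(factors, size)]
print(effects)
# ['A', 'B', 'C', 'A x B', 'A x C', 'B x C', 'A x B x C']
# 3 main effects + 4 interaction terms = 7 F(obs) values, one per effect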
-compute F(obs) using the formula — F(obs) = (MSeffect)÷(MSerror)
MSeffect = between-group variance estimate for the effect
-the main effect (interaction)
MSerror = within-group variance estimate
-represents pooled population S2 estimates
-computed from scores in each cell
extremely powerful because it considers the error from each
cell, rather than just one error estimate
-SST = SScells + SSerror
SScells is further divided into row main effects, column main effects, &
the interaction — SScells = SSrows + SScols + SSrows x cols
-compute an F-ratio for each treatment effect
when F(obs) ≈ 1, the ratio reflects only error variance
when F(obs) > 1, the ratio reflects error + treatment variance
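Illustration (not from the chapter): turning an MSeffect ÷ MSerror ratio into a p-value with scipy's F distribution; the MS & df values below are made up.

from scipy import stats

ms_effect, df_effect = 24.0, 1     # e.g. a row main effect
ms_error,  df_error  = 4.0, 20     # pooled within-cell error estimate

f_obs = ms_effect / ms_error                  # F(obs) = MSeffect / MSerror
p = stats.f.sf(f_obs, df_effect, df_error)    # area beyond F(obs) under the null
print("F(", df_effect, ",", df_error, ") =", f_obs, " p =", round(p, 4))
# F(obs) near 1 -> mostly error variance; F(obs) >> 1 -> treatment + error variance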
ANOVA source table — components:
-SST = total variability in the experiment — Σ(x - x̅GM)²
-SSerror = variability within cells — ΣSSi (sum of the SS inside each cell)
-SScells = deviation of cell means from x̅GM — Σncells(x̅cells - x̅GM)²
SSrows = main effect of the row factor — Σnrows(x̅rows - x̅GM)²
SScols = main effect of the column factor — Σncols(x̅cols - x̅GM)²
SSrows x cols = interaction — SST - SSe - SSrows - SScols
-dfT = total variability in the experiment — nT - 1
-dferror = how much each score varies from its cell mean — nT - # of cells
-dfcells = # of cells - 1
dfrows = deviation of row marginal means from x̅GM — # of rows - 1
dfcol = deviation of column marginal means from x̅GM — # of col - 1
dfrows x col = everything left over after considering the variability
explained by each main effect — (# of rows - 1) x (# of col - 1)
-MSerror = mean squared error — SSerror ÷ dferror
-MSrows = mean squared rows — SSrows ÷ dfrows
-MScols = mean squared columns — SScols ÷ dfcols
-MSrows x cols = mean squared interaction — SSrows x cols ÷ dfrows x cols
-η²rows = rows effect size — SSrows ÷ SST
-η²cols = columns effect size — SScols ÷ SST
-η²rows x cols = interaction effect size — SSrows x cols ÷ SST
-R²rows = rows effect size — SSrows ÷ (SST - SScols - SSrows x cols) or
SSrows ÷ (SSe + SSrows)
-R²cols = columns effect size — SScols ÷ (SST - SSrows - SSrows x cols) or
SScols ÷ (SSe + SScols)
-R²rows x cols = interaction effect size — SSrows x cols ÷ (SST - SSrows - SScols) or
SSrows x cols ÷ (SSe + SSrows x cols)
η² ≤ R² because:
-multiple R² partials out of the total variability the variability that is
accounted for by the other effects
-so R² reflects how much of the remaining (effect + error) variability a
specific treatment effect accounts for, which makes R² at least as large as η²
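Illustration (not from the chapter): a numpy sketch of the whole source table for an invented 2 x 2 between-groups design with n = 3 scores per cell, ending with η² & partial R² for each effect (the output shows η² ≤ R²).

import numpy as np

# scores[(row level, column level)] -> that cell's scores (all numbers invented)
scores = {
    ("r1", "c1"): np.array([3.0, 4.0, 5.0]),
    ("r1", "c2"): np.array([6.0, 7.0, 8.0]),
    ("r2", "c1"): np.array([4.0, 5.0, 6.0]),
    ("r2", "c2"): np.array([10.0, 11.0, 12.0]),
}
all_scores = np.concatenate(list(scores.values()))
gm = all_scores.mean()                                   # grand mean
rows = sorted({r for r, c in scores})
cols = sorted({c for r, c in scores})
n_cell = len(next(iter(scores.values())))
n_T = all_scores.size

ss_T = ((all_scores - gm) ** 2).sum()                                   # total variability
ss_error = sum(((x - x.mean()) ** 2).sum() for x in scores.values())    # within cells
ss_cells = sum(n_cell * (x.mean() - gm) ** 2 for x in scores.values())  # between cells

row_means = {r: np.concatenate([scores[(r, c)] for c in cols]).mean() for r in rows}
col_means = {c: np.concatenate([scores[(r, c)] for r in rows]).mean() for c in cols}
n_row = n_cell * len(cols)                               # scores behind each row marginal
n_col = n_cell * len(rows)                               # scores behind each column marginal

ss_rows = sum(n_row * (m - gm) ** 2 for m in row_means.values())
ss_cols = sum(n_col * (m - gm) ** 2 for m in col_means.values())
ss_int = ss_cells - ss_rows - ss_cols                    # = SST - SSe - SSrows - SScols

df_rows, df_cols = len(rows) - 1, len(cols) - 1
df_int = df_rows * df_cols
df_error = n_T - len(scores)                             # nT - # of cells
ms_error = ss_error / df_error

for name, ss, df in [("rows", ss_rows, df_rows), ("cols", ss_cols, df_cols),
                     ("rows x cols", ss_int, df_int)]:
    f_obs = (ss / df) / ms_error
    eta_sq = ss / ss_T                                   # eta^2 divides by SST
    r_sq = ss / (ss_error + ss)                          # partial R^2 removes other effects
    print(name, " F =", round(f_obs, 2), " eta^2 =", round(eta_sq, 3), "<= R^2 =", round(r_sq, 3))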