PSYC 2230 Lecture Notes - Lecture 10: Interrupted Time Series, Random Assignment, Internal Validity


RESEARCH METHODS
LECTURE 10: TUESDAY JULY 31ST, 2012
TOPIC: QUASI-EXPERIMENTAL DESIGNS AND APPLIED RESEARCH
You cannot randomly assign people to groups
QUASI-EXPERIMENTAL DESIGNS
Procedures resemble those of true experiments
But lack the degree of control found in true experiments
Generally occur when the IV involves: subject variables (e.g. personality type), an environmental event (e.g. hurricanes or having a particular classroom teacher), or the passage of time
Hedrick, Bickman and Rog (1993) “a quasi-experimental design is not the method of choice,
but rather a fallback strategy”
Cannot infer cause and effect, BUT well-designed quasi-experiments enable you to demonstrate that rival interpretations are rendered unlikely
These designs are correlational
NONEQUIVALENT GROUP DESIGNS
Post-test-only nonequivalent control group design (a.k.a. static group comparison)
X O   (treatment group)
  O   (nonequivalent control group)
X = treatment, O = measurement/observation
Because there is no random assignment to groups, confounding variables may explain any difference observed.
No random assignment → we do not know whether the two groups are equivalent at the beginning
The most common threat to the internal validity of this type of design is SELECTION
This is because the two groups may differ on any number of variables, such as age, gender, or IQ, but we do not know. Matching is unlikely to work because we cannot match on every variable
Example: IV = training program vs. no training program; DV = smoking measure. Two groups of participants, with no control over who is in each group: the experimental group = people who volunteer for the program, and the control group = people who did not sign up
Selection problem: smokers who choose to participate may differ in some important way from those who do not; the volunteers may be more motivated to quit than the non-volunteers. This can make the treatment appear successful when the difference is really due to other factors
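To make the selection threat concrete, here is a minimal simulation sketch (not from the lecture; the group sizes, motivation ranges, and smoking formula are invented purely for illustration) showing how an apparent treatment effect can arise entirely from who signs up:

```python
import random
from statistics import mean

random.seed(1)

def smoking_level(motivation):
    """Cigarettes per day: more motivated people smoke less, program or no program."""
    return max(0.0, 20.0 - 8.0 * motivation + random.gauss(0, 2))

# Assumption: volunteers for the program are more motivated to quit than non-volunteers
treatment = [smoking_level(random.uniform(0.5, 1.0)) for _ in range(100)]  # signed up
control = [smoking_level(random.uniform(0.0, 0.5)) for _ in range(100)]    # did not sign up

print(f"treatment mean: {mean(treatment):.1f}  control mean: {mean(control):.1f}")
# The treatment group smokes less even though the program did nothing in this simulation:
# the entire difference comes from who selected into each group (a selection threat).
```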
Pre-test/post-test nonequivalent control group design
O X O   (treatment group)
O   O   (nonequivalent control group)
(first O = pre-test, second O = post-test)
Addition of the pre-test measurement allows the researcher to compare the groups' observations before treatment
The design also allows the researcher to compare the pre-test scores and post-test scores for both groups
It is important to note that you may not be able to compare every variable and aspect between the two groups
Example: Research methods and Reasoning Ability
Intervention: critical thinking seminar
Research Methods students receive the intervention (i.e. participate in the critical thinking seminar)
Developmental Psychology students are used as a nonequivalent control group (i.e. do not
attend the seminar)
(Graph shown: pre- and post-seminar critical thinking scores for both groups)
Both groups increased, but the slope of the line for the developmental group is steeper, suggesting that critical thinking scores would have gone up regardless of the seminar rather than participation in the program greatly increasing scores
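One way to use the added pre-test is to compare each group's gain from pre to post (a difference-in-differences-style comparison). A minimal sketch with invented critical thinking scores, not the lecture's data:

```python
# Hypothetical pre/post critical thinking scores (illustrative values only)
research_methods = {"pre": 60.0, "post": 72.0}   # attended the critical thinking seminar
developmental = {"pre": 58.0, "post": 73.0}      # nonequivalent control group

gain_treatment = research_methods["post"] - research_methods["pre"]   # 12.0
gain_control = developmental["post"] - developmental["pre"]           # 15.0

print(f"treatment gain: {gain_treatment}, control gain: {gain_control}")
print(f"difference in gains: {gain_treatment - gain_control}")
# When the control group gains as much as (or more than) the treatment group,
# the seminar itself cannot be credited with the treatment group's improvement.
```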
What threats to internal validity cannot be ruled out?
May have different experiences (selection-history effect)
May mature at different rates (selection-maturation effect)
May be measured more or less sensitively by the instruments (selection-instrumentation effect)
May drop out of the study at different rates (differential subject mortality)
May differ in terms of regression to the mean (differential regression)
How severe are these problems? Heinsman and Shadish (1996) compared true experiments with quasi-experiments
Quasi-experiments produced results very similar to those of true experiments when the quasi-experiments were characterized by: small differences between the treatment and control groups on the pre-tests, low attrition rates, and low levels of participant self-selection into conditions
INTERRUPTED TIME SERIES DESIGNS
Extension of the simple one group pre post design
Participants are pre-tested a number of times and then post-tested a number of times after being
exposed to the treatment intervention
O1 O2 O3 O4 X O5 O6 O7 O8 → observations before and after treatment (O = observation, X = treatment)
Useful when one cannot randomize participants but it is possible to obtain a series of assessments of the DV before and after treatment
Goal: evaluate the influence of the treatment by comparing the observations made before
treatment with the observations made after
Example: Intervention: a course to change students' study habits, implemented during the summer (after semester 4); DV: semester GPA
(Graph shown: semester GPA before and after the intervention)
We can establish a baseline trend for GPA before the intervention and a post-intervention trend after it, and conclude that the treatment had a true effect only if the post-intervention pattern differs from the baseline trend
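A minimal sketch of that baseline-versus-post comparison, with invented GPA values (not the lecture's graph): fit a trend to the pre-intervention observations and check whether the post-intervention observations depart from what that trend alone would predict.

```python
# Hypothetical semester GPAs: four semesters before the study-habits course, four after
pre_gpa = [2.4, 2.5, 2.5, 2.6]    # O1-O4 (baseline)
post_gpa = [3.0, 3.1, 3.1, 3.2]   # O5-O8 (after intervention X)

n = len(pre_gpa)
xs = list(range(1, n + 1))

# Least-squares line fitted to the baseline observations only
mean_x, mean_y = sum(xs) / n, sum(pre_gpa) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, pre_gpa))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# What the baseline trend alone would predict for semesters 5-8
predicted = [intercept + slope * x for x in range(n + 1, 2 * n + 1)]

for sem, (obs, pred) in enumerate(zip(post_gpa, predicted), start=n + 1):
    print(f"semester {sem}: observed {obs:.2f}, baseline trend predicts {pred:.2f}")
# A consistent gap between observed and trend-predicted values suggests the
# intervention, rather than the pre-existing trend, changed GPA.
```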
What threats are more easily ruled out?
Maturation: we assume maturational changes are gradual, not abrupt discontinuities
Testing: if testing influences responses, these effects are likely to show up in the initial
observations (i.e. before the intervention)
Regression: if scores regress to the mean, they will do so in the initial observations
Although the design helps rule out these three types of threats, there are at least two we must still consider:
History threats are only somewhat lessened
Instrumentation threats are also likely in some studies
How many measurements are needed for a time series design?
Depends on the amount of random fluctuation (noise) in the outcome being measured and on how much of an impact the intervention is expected to have
Typically somewhere between 6 and 15 measurements
The more assessments you can take before and after the intervention, the stronger the design
MULTIPLE TIME SERIES DESIGNS
Add a comparison group to the simple interrupted time series design (one group gets the intervention and the other does not)
Campbell and Stanley consider this an excellent quasi-experimental design, one of the best and most feasible
O1 O2 O3 O4 X O5 O6 O7 O8   (treatment group)
O1 O2 O3 O4   O5 O6 O7 O8   (nonequivalent control group)
Each of these designs is an extension of the one before it
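Extending the sketch above, a hedged illustration (values invented) of why the comparison series helps: compute the shift at the intervention point in each series and compare them.

```python
def shift_at_intervention(series, k):
    """Mean of the observations after position k minus the mean of those up to k."""
    pre, post = series[:k], series[k:]
    return sum(post) / len(post) - sum(pre) / len(pre)

# Hypothetical outcome series (O1-O8); the treatment group receives X after O4
treatment_series = [2.4, 2.5, 2.5, 2.6, 3.0, 3.1, 3.1, 3.2]
control_series = [2.4, 2.5, 2.6, 2.6, 2.7, 2.7, 2.8, 2.8]

print(f"treatment shift: {shift_at_intervention(treatment_series, 4):.2f}")  # ~0.60
print(f"control shift:   {shift_at_intervention(control_series, 4):.2f}")    # ~0.23
# An outside (history) event would tend to shift both series; a much larger
# shift in the treated series points to the intervention itself.
```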
CONCLUSIONS ABOUT QUASI-EXPERIMENTAL DESIGNS
More flexible than experimental designs
Less internal validity than experimental designs
The researcher can take steps to eliminate a threat that isn't automatically eliminated by the design
Often a threat cannot be eliminated
WHAT IS PROGRAM EVALUATION?
Program evaluation is the careful collection of information about a program, or some aspect of a program, in order to make necessary decisions about that program: Should it be run again in the future? Should it be altered?
Application of social and behavioral research methods to determine the effectiveness of a
program or an intervention
Effectiveness can be defined in terms of the following questions: To what extent did the
program achieve its goals?
What aspects of the program contributed to its success?
How can the effect of a program be improved?
This field has grown rapidly in managed health care, etc.
Myths: that program evaluation is a useless activity generating a lot of boring data with useless conclusions, or that it is a highly unique and complex process that occurs at a certain time in a certain way and might only be applicable or implemented in a certain way → too specialized
In reality it is also helpful → it can improve delivery mechanisms to be more efficient and less costly
Verify if the program is really running as originally planned
Produce data or verify results that can be used for the public
Produce valid comparisons between programs to decide which should be retained
KEY STEPS IN PROGRAM EVALUATION
Step 1: Needs assessment: determining that a problem exists and designing a solution for the
problem. If you know the problem is drug abuse among young teens you may want to find out
what drug is being used, the age range of the targeted population and then design a program that
is specific to that group
Step 2: Program Monitoring: ensuring that the program is carried out as designed. 3 major
aspects: reaches its intended client population and that it is implemented completely and