
PSYC 201W Chapter 10 Notes

Department: Psychology
Course: PSYC 201W
Professor: A. George Alder
Semester: Spring

Description
Chapter 10 – Experimentation and Validity

Critical Thinking, Inference, and Validity

Categories of Inference
• Different kinds of inferences:
  1. Inferences about constructs
  2. Statistical inferences
  3. Causal inferences
  4. Inferences about generalizability

Types of Validity
• Four types of validity:
  1. Construct
  2. Statistical conclusion
  3. Internal
  4. External

Construct Validity
• Applies to both measuring and manipulating variables
• Construct validity – concerns whether the constructs (the conceptual variables) that researchers claim to be studying are, in fact, the constructs they are truly manipulating and measuring
  o Affected by how faithfully the operational definitions of the independent and dependent variables represent the constructs the researchers intend to study

Example: Colour and Achievement Performance
• In the colour–achievement performance experiments, construct validity involves asking two questions:
  1. Is exposing students to a large red, green, or black code number on a booklet a valid manipulation of "colour"?
  2. Are scores on anagram and arithmetic tasks a valid measure of "achievement performance"?

Statistical Conclusion Validity
• Statistical conclusion validity – concerns the proper statistical treatment of data and the soundness of the researchers' statistical conclusions
• Key question:
  o When the researchers concluded that there was or was not a statistically significant relation between the independent and dependent variables, was this conclusion based on appropriate statistical analyses?

Statistical Issues
• Inferential statistical tests for determining statistical significance typically require that certain assumptions be met in order for a particular test to be used in a valid manner
  o Example: the proper use of some statistical tests assumes a certain minimum number of observations in each cell of the research design
• Robust – a statistical test is robust if it can yield accurate results even when a data set violates the test's statistical assumptions
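To make the point about test assumptions concrete, the short Python sketch below (not part of the original notes; the data, condition names, and choice of tests are illustrative assumptions) checks the equal-variance assumption with Levene's test and compares Student's t-test, which assumes equal variances, with Welch's t-test, which is more robust when that assumption is violated.

# Illustrative sketch only: simulated anagram scores for two hypothetical
# booklet-colour conditions with unequal variances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
red = rng.normal(loc=12, scale=2, size=30)    # smaller spread
green = rng.normal(loc=14, scale=6, size=30)  # larger spread

# Levene's test asks whether the equal-variance assumption is plausible.
print(stats.levene(red, green))

# Student's t-test assumes equal variances; Welch's t-test relaxes that
# assumption and is more robust when group variances differ.
print(stats.ttest_ind(red, green, equal_var=True))   # Student's t
print(stats.ttest_ind(red, green, equal_var=False))  # Welch's t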
Internal Validity
• Internal validity – concerns the degree to which we can be confident that a study demonstrated that one variable had a causal effect on another variable
• Inferences about causality have internal validity when:
  o The research design and experimental procedures are sound and thus enable us to rule out plausible alternative explanations for the findings
• Poor internal validity results from the presence of confounding variables that provide a reasonable alternative explanation for why participants' responses differed, overall, across the various conditions of the experiment

Examples: Heredity–Learning and Colour–Achievement
• Example: the heredity–learning experiment
  o Was it really the selective breeding manipulation that caused the two strains of rats to differ in their overall maze performance?
  o Or could other factors, such as environmental factors, have been responsible?

External Validity
• External validity – concerns the generalizability of the findings beyond the present study
• Examples:
  o Generalization across populations:
    ▪ Does exposure to red impair intellectual performance among schoolchildren, among adults who aren't students, and among people who live in cultures where red isn't artificially associated with stimuli that signal threat?
  o Generalization across settings:
    ▪ Does exposure to red impair performance at other intellectual tasks, at physical strength tasks, or at tasks requiring fine perceptual-motor coordination?
  o Generalization across species:
    ▪ Do findings on brain functioning, hormonal influences, drug effects, schedules of reinforcement, and developmental processes obtained in nonhuman animal experiments generalize to humans?

Ecological Validity and the Realism of Experiments
• Ecological validity – concerns the degree to which responses obtained in a research context generalize to behaviour in natural settings
  o Do people behave in real life as they do in our experimental laboratories?
• Ecological validity is also often discussed in reference to how well the research setting and methods correspond to what people encounter in daily life
  o Examples:
    ▪ Tasks
    ▪ Stimuli
    ▪ Procedures used in a laboratory experiment
• Mundane realism – the surface similarity between the experimental environment and real-world settings
• Psychological realism – the degree to which the experimental setting is made psychologically involving for participants, thereby increasing the likelihood that they will behave naturally rather than self-monitor and possibly distort their responses

Establishing Generalizability
• Evidence for or against external validity accrues over time as scientists replicate and build on the original research
• Replication – the process of repeating a study in order to determine whether the original findings are upheld
• No single study can answer all questions about external validity, but even in their initial research on a topic experimenters can take steps to increase confidence in the external validity of their findings
  o Example: scientists can replicate their own research within their initial research project

Basic Threats to Internal Validity
• Several types of confounding variables can threaten a study's internal validity
• Quasi-experiment – a study that has some features of an experiment but lacks key aspects of experimental control

Seven Sources of Threat
1. History
2. Maturation
3. Testing
4. Instrumentation
5. Regression to the mean
6. Attrition
7. Selection

History
• History – events that occur while a study is being conducted and that are not part of the experimental manipulation or treatment
• Whether history rises to the status of a plausible confounding variable depends on the events that took place during this period
• Can be a problem if an experiment is poorly executed
• A general history effect that influences all conditions equally cannot explain why the conditions differed

Maturation
• Maturation – ways that people naturally change over time, independent of their participation in a study
• Examples:
  o Changes in cognitive and physical capabilities that occur with aging
  o Fluctuations in alertness and fatigue that accompany biological rhythms
  o Normal recovery from physical illness or psychological disorders
• Also includes the general accrual of knowledge and skills as we gain more experience over time
• Experiments do NOT prevent maturation, but by randomly assigning participants to conditions, any maturation effects can be assumed to be equivalent across the various conditions

Testing
• Testing – concerns whether the act of measuring participants' responses affects how they respond on subsequent measures
  o Example: a pretest versus the actual test
• Many experiments do NOT include a pretest because, due to random assignment, the participants in the various conditions are assumed to be equivalent, overall, at the start of the experiment

Instrumentation
• Instrumentation – changes that occur in a measuring instrument during the course of data collection
  o Example: weighing yourself weekly for a year on a cheap scale appears to show an 8-pound loss, but the scale's cheap springs have worn and you have actually lost only 3 pounds
• As long as random assignment or proper counterbalancing procedures are used, any instrumentation effects that occur over the course of an experiment should, overall, affect participants in all conditions to an equivalent degree

Regression to the Mean
• Regression to the mean – the statistical concept that when two variables are not perfectly correlated, more extreme scores on one variable will be associated, overall, with less extreme scores on the other variable
• The degree of regression to the mean should be equivalent in all conditions as long as participants are randomly assigned
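As an illustration of this statistical idea (not from the notes; the numbers and cut-off are made up), the Python sketch below simulates two imperfectly correlated test scores and shows that people selected for extreme scores on the first test score closer to the mean, on average, on the second.

# Illustrative sketch only: regression to the mean with two imperfectly
# correlated measurements of the same simulated "ability".
import numpy as np

rng = np.random.default_rng(1)
true_ability = rng.normal(100, 15, size=10_000)
test1 = true_ability + rng.normal(0, 10, size=10_000)  # score = ability + noise
test2 = true_ability + rng.normal(0, 10, size=10_000)  # independent noise, so r < 1

extreme = test1 > 130                    # select people extreme on the first test
print(round(test1[extreme].mean(), 1))   # far above the population mean of ~100
print(round(test2[extreme].mean(), 1))   # closer to 100: regression to the mean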
Attrition
• Attrition (also called subject loss) – occurs when participants fail to complete a study
  o Examples: equipment malfunction, or a participant may feel uncomfortable and withdraw
• Can threaten the internal validity of even a well-designed experiment
• Differential attrition – occurs when attrition rates, or the reasons for discontinuing, differ significantly, overall, across the various conditions
  o Can result in nonequivalent groups by the end of the experiment
• Experimenters should determine why participants discontinue and examine any available pretest scores to determine whether continuing and discontinuing participants differ, overall

Selection
• Selection – situations in which, at the start of a study, participants in the various conditions already differ on a characteristic that can partly or fully account for the eventual results
• Experiments involve multiple conditions, and when between-subjects designs are used, the key to preventing a selection confound is to create equivalent groups at the start
• This can be achieved by randomly assigning participants to conditions
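The role of random assignment in preventing a selection confound can be illustrated with a small simulation (not part of the original notes; the pretest measure and sample size are hypothetical): after random assignment, the two groups have roughly equal means on a pre-existing characteristic.

# Illustrative sketch only: random assignment tends to produce groups that are
# equivalent, on average, on pre-existing characteristics.
import numpy as np

rng = np.random.default_rng(2)
baseline = rng.normal(20, 5, size=200)         # hypothetical pretest depression scores

order = rng.permutation(200)                   # shuffle participant indices
treatment, control = order[:100], order[100:]  # split into two equal conditions

print(round(baseline[treatment].mean(), 2))    # roughly equal group means at the start
print(round(baseline[control].mean(), 2))      # -> equivalent groups before treatment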
Example 2: Psychotherapy for Depression
• Randomized controlled trial (also called a randomized clinical trial) – an experiment in which participants are randomly assigned to different conditions for the purpose of examining the effectiveness of an intervention
  o Conducted in:
    ▪ Clinical, counseling, health, and educational psychology
    ▪ Psychopharmacology
    ▪ Medicine
    ▪ Nursing
• Wait-list control group – a group of randomly selected participants who do not receive the treatment during the experiment, but expect to and do receive it after treatment of the experimental group(s) ends

Other Issues Concerning Experimental Control

Demand Characteristics
• When people consent to participate in an experiment, they enter a social setting that involves its own implicit norms – unwritten rules – about how research participants ought to behave
• Good subject role – involves providing responses that help to support the perceived hypothesis of the study
  o Arises from people's hope that their responses will contribute to science and to the study's success
• To minimize this response bias, experimenters typically conceal the hypothesis and the study's specific purpose from participants until the debriefing session
• Demand characteristics – cues that influence participants' beliefs about the hypothesis being tested and the behaviours expected of them
  o Examples:
    ▪ The experimenter's behaviour
    ▪ The laboratory layout
    ▪ The nature of the experimental tasks
• If demand characteristics lead participants to guess the hypothesis accurately, this may create a plausible alternative explanation if the hypothesis is supported

Addressing Demand Characteristics
• Suspicion probes – conversational strategies conducted during debriefing in which experimenters explore participants' beliefs about the study and its hypothesis
  o The most common approach to determining whether demand characteristics influenced participants' behaviour
• Other approaches to addressing …