PSYD33H3 Chapter Notes - Chapter 0: Inter-Rater Reliability, Convergent Validity, Theoretical Definition
Document Summary
Reading 1: Reliability and Validity of Measurement (https://opentextbc.ca/researchmethods/chapter/reliability-and-validity-of-measurement/)

Inter-rater reliability: the extent to which different observers are consistent in their judgments. For example, you could record two university students as they interact with one another, then have two or more observers watch the video and rate each student's level of social skills.

Content validity: the extent to which a measure covers the construct of interest, assessed by carefully checking the measurement method against the conceptual definition of the construct.

Discriminant validity: the extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct.

Main takeaways: Psychological researchers do not simply assume that their measures work. Instead, they conduct research to show that they work, and if they cannot show that they work, they stop using them. There are two distinct criteria by which researchers evaluate their measures: reliability and validity.
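Inter-rater reliability for numerical ratings, such as the social-skills scores in the example above, is often quantified by correlating the two observers' scores. A minimal Python sketch, using made-up ratings (the data and function name here are purely illustrative, not from the reading):

```python
# Hypothetical data: two observers each rate the same 8 students'
# social skills on a 1-10 scale after watching the recorded videos.
rater_a = [7, 5, 8, 6, 9, 4, 7, 6]
rater_b = [6, 5, 7, 6, 9, 5, 8, 6]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# A correlation near 1 indicates the observers' judgments are consistent,
# i.e., good inter-rater reliability; a low correlation would suggest
# the raters are not measuring the construct the same way.
r = pearson(rater_a, rater_b)
print(round(r, 2))  # ≈ 0.88 for this sample data
```

For categorical judgments (e.g., classifying a behavior as "aggressive" or "not aggressive"), researchers would instead use an agreement statistic such as Cohen's kappa rather than a correlation.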