PSY 250 Lecture Notes - Lecture 3: Inter-Rater Reliability, Concurrent Validity, Predictive Validity
Document Summary
Reliability: the consistency (stability) of a measure of behavior; an unreliable measure contains measurement error. Variability: the amount of dispersion of scores around a central value. The goal is to reduce the error of a measure and thereby increase its reliability.
Three kinds of reliability: (1) test-retest reliability, (2) internal consistency reliability (useful for questionnaires/exams to tell how "good" a test was), (3) interrater reliability.
Internal consistency reliability: reliability assessed at one point in time with multiple parts of the same measure. An exam is a tool to measure student aptitude in the material; you can compare the scores on one half of the exam with the other half (split-half reliability). Cronbach's alpha: based on the average correlation (r) of each item of an exam/measure with all the other items; commonly reported for intelligence/aptitude assessments.
Interrater reliability: an indicator of reliability that examines the agreement of observations made by more than one rater or judge; very popular for studying animals and children.
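Test-retest reliability is usually reported as the Pearson correlation between scores from two administrations of the same measure. A minimal sketch, using made-up scores for six hypothetical students tested twice (the same function also works for split-half reliability, correlating the two exam halves):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for 6 students tested twice, two weeks apart.
time1 = [78, 85, 62, 90, 71, 80]
time2 = [75, 88, 65, 92, 70, 78]
print(round(pearson_r(time1, time2), 3))  # high r = stable (reliable) measure
```

An r near 1.0 means students kept roughly the same rank order across administrations, which is what test-retest reliability asks for.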
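Cronbach's alpha, mentioned above, can be computed directly from item variances and the variance of the total score. A small sketch with made-up data (three exam items answered by five hypothetical students):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per exam item (all the same length).
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    item_var_sum = sum(pvariance(scores) for scores in items)
    totals = [sum(student) for student in zip(*items)]  # each student's total score
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Hypothetical data: 3 items, 5 students.
items = [
    [3, 4, 5, 2, 4],
    [3, 5, 4, 2, 5],
    [4, 4, 5, 3, 4],
]
print(round(cronbach_alpha(items), 3))
```

Higher alpha (conventionally above about .70) means the items hang together, i.e. the measure is internally consistent.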
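Interrater reliability for categorical judgments is often quantified with Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A sketch with an invented example: two judges coding ten observations of a child's behavior (the codes and data are hypothetical):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each category's marginal proportions.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of 10 child-behavior observations by two judges.
judge1 = ["play", "cry", "play", "play", "cry", "play", "cry", "play", "play", "cry"]
judge2 = ["play", "cry", "play", "cry", "cry", "play", "cry", "play", "play", "play"]
print(round(cohens_kappa(judge1, judge2), 3))
```

Kappa of 1.0 is perfect agreement and 0 is chance-level agreement, which is why it is preferred over raw percent agreement when studying hard-to-code subjects like animals and children.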