PSY 2810 Lecture Notes - Lecture 8: Inter-Rater Reliability, Convergent Validity, Internal Validity
Document Summary
A measure must be reliable to be valid, but a reliable measure is not necessarily valid. Interrater reliability: a measure should produce consistent scores even when the person doing the scoring changes. Reasons for poor interrater reliability: the measure itself is unreliable, the raters are biased, the raters are poorly trained, or the scoring is too subjective. Internal reliability: the extent to which a participant answers multiple items on the same measure consistently. If the items are well constructed, internal reliability is high. Cronbach's alpha is how we quantify the consistency among items (the closer to 1, the better). We examine both convergent and discriminant validity in order to study the whole construct. Convergent validity requires a strong correlation with established measures of the same construct (a higher r value means a tighter association). If the correlation is weak, the measure may be incomplete. If convergent validity is too low, we can add new items to make the measure more exhaustive, which can be combined into a condensed (composite) score. But if we add too many items, we can hurt the measure's selectivity.
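The internal-reliability idea above can be made concrete. A minimal sketch of Cronbach's alpha, using the standard formula (k/(k-1)) * (1 - sum of item variances / variance of total scores); the function name and the toy response matrix are illustrative, not from the lecture:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of each person's total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering 3 items intended to tap one construct
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 1],
])
print(round(cronbach_alpha(scores), 3))  # → 0.967
```

Because the three items move together across respondents, alpha comes out close to 1, matching the note that well-constructed items yield high internal reliability.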
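The convergent-validity check described above amounts to correlating the new measure with an established one. A small sketch with made-up scores (both score vectors are hypothetical):

```python
import numpy as np

# Hypothetical data: a new scale vs. an established measure of the same construct
new_scale = np.array([10, 14, 8, 16, 12, 18, 9, 15])
established = np.array([22, 28, 18, 31, 25, 34, 20, 29])

# Pearson r between the two measures; a strong positive r supports convergent validity
r = np.corrcoef(new_scale, established)[0, 1]
print(round(r, 3))
```

Here the two sets of scores rise and fall together, so r is close to 1 (a "tight" correlation); a weak r would suggest the new measure is incomplete, as the notes say.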