PSY 370 Lecture Notes - Lecture 31: Inter-Rater Reliability
Document Summary
The most common measure of internal consistency is Cronbach's alpha. It estimates the average of all possible split-half correlations and is based on all of the variance among items, so the reliability coefficient is not influenced by how you split the halves. KR-20 is for dichotomous (right vs. wrong) items; alpha also works for rating-scale items. They are the same thing: the math for KR-20 is just a little easier (see the first sketch below).

Inter-rater reliability separates true score from error caused by differences in raters: how big an effect does the rater have on the test score? With 2 raters, correlate the ratings of the two raters; this is a slightly more complicated calculation. A perfect correlation of 1 tells us that both raters put people in the same rank order, but that can mean good reliability with poor agreement: the person rated lowest by rater 1 was also the person rated lowest by rater 2, even if the two raters never assigned the same score (see the second sketch below).
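A minimal numpy sketch of the KR-20/alpha relationship (the data matrix and function names are hypothetical, invented for illustration). Alpha is computed from the item variances; for 0/1 items each item variance is just p*q, which is exactly the KR-20 substitution, so the two formulas return the same number:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_people, k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0)        # variance of each item across people
    total_var = scores.sum(axis=1).var()  # variance of the total test scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def kr20(scores):
    """KR-20: same formula, but the item variance is p*q for right/wrong items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    p = scores.mean(axis=0)               # proportion getting each item right
    total_var = scores.sum(axis=1).var()
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

# Hypothetical data: 6 people x 4 right/wrong items
X = [[1, 1, 1, 0],
     [1, 0, 1, 1],
     [0, 0, 1, 0],
     [1, 1, 1, 1],
     [0, 0, 0, 0],
     [1, 1, 0, 1]]
print(cronbach_alpha(X), kr20(X))  # identical: KR-20 is alpha for 0/1 items
```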
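To make "good reliability, poor agreement" concrete, a second small sketch with made-up ratings: rater 2 scores everyone exactly 3 points higher than rater 1, so the rank order matches perfectly (correlation of 1) while the raters never agree on an actual score.

```python
import numpy as np

rater1 = np.array([2, 4, 5, 7, 9])   # hypothetical ratings for 5 people
rater2 = rater1 + 3                  # rater 2 is uniformly 3 points more lenient

r = np.corrcoef(rater1, rater2)[0, 1]
exact_agreement = np.mean(rater1 == rater2)

print(r)                # 1.0 (up to floating point): same rank order, good reliability
print(exact_agreement)  # 0.0: never the same score, poor agreement
print(np.argmin(rater1) == np.argmin(rater2))  # True: both raters rank the same person lowest
```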