PSYB01H3 Textbook Notes (UTSC Psychology, Instructor: Anna Nagy)
Chapter 5: Measurement Concepts
Reliability: the consistency/stability of a measure
o i.e. using the same measure again yields the same results
Formal definition of reliability:
o true score: a person's real score on the variable
o measurement error: the variability around the true score introduced by the measure (measured score = true score + measurement error)
e.g. two people given IQ tests may both average 100, but with a low-reliability measure the scores will fluctuate more from testing to testing (greater variability/measurement error)
Achieving reliability
o careful recording, careful asking of questions, careful procedures, and using multiple measures
A correlation coefficient can be used to assess reliability
Pearson product-moment correlation coefficient (r)
o ranges from -1.00 to +1.00
o the absolute value indicates the strength of the relationship
o 0 indicates no relationship
o when used to assess reliability, also called a reliability coefficient
Two administrations of a reliable measure should be highly (positively) correlated
Different types of reliability
oTest-Retest Reliability
oInternal Consistency Reliability
oInterrater Reliability
Test-Retest Reliability
Test-retest reliability: assessed by measuring the same individuals at two different points in time


o The correlation should be at least 0.80
Could be artificially high because people remember the test; using alternate forms of the test (alternate forms reliability) helps
Some constructs aren't expected to stay stable over time (e.g. mood), so test-retest is inappropriate for them, and retesting can be impractical
Internal Consistency Reliability
Internal consistency reliability: reliability assessed from responses to multiple items at a single point in time
o using many items, we can be more confident we are measuring the right variable
Indicators of internal consistency reliability:
o Split-half reliability: the correlation of the total score on the first half of the test with the total score on the second half
Combining the halves may give the illusion of higher reliability than there really is
o Cronbach's alpha: based on the average of the inter-item correlation coefficients
i.e. the correlation of each item with every other item is calculated, and these correlations are averaged
o Item-total correlations: the correlation of each item with the total score
items that don't correlate with the total can be dropped to raise reliability
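The two item-based indicators above can be sketched directly from their definitions. This uses the standardized form of Cronbach's alpha (built from the average inter-item correlation r_bar and the number of items k); the 3-item, 5-respondent data set is invented for illustration:

```python
import math
from itertools import combinations

def pearson_r(x, y):
    """Pearson correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def standardized_alpha(items):
    """Cronbach's alpha (standardized form):
        alpha = k * r_bar / (1 + (k - 1) * r_bar)
    where r_bar is the average of all pairwise inter-item correlations.
    `items` is a list of per-item score lists, one list per item."""
    k = len(items)
    rs = [pearson_r(a, b) for a, b in combinations(items, 2)]
    r_bar = sum(rs) / len(rs)
    return k * r_bar / (1 + (k - 1) * r_bar)

def item_total_correlations(items):
    """Correlation of each item with the total score across all items."""
    totals = [sum(scores) for scores in zip(*items)]
    return [pearson_r(item, totals) for item in items]

# Hypothetical 3-item scale answered by 5 respondents
items = [
    [2, 3, 4, 4, 5],   # item 1
    [1, 3, 3, 5, 5],   # item 2
    [2, 2, 4, 5, 4],   # item 3
]
print(standardized_alpha(items))
print(item_total_correlations(items))
```

An item whose item-total correlation is low would be a candidate for removal, since it is apparently not measuring the same variable as the rest of the scale.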
Interrater Reliability
Interrater reliability: the extent to which raters agree in their observations
o i.e. agreement between judges
o Cohen's kappa: a commonly used indicator of interrater reliability
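Cohen's kappa compares the observed proportion of agreement to the agreement expected by chance from each rater's category frequencies. A small sketch, using invented codings of ten observations by two judges:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for the
    agreement expected by chance alone:
        kappa = (p_o - p_e) / (1 - p_e)"""
    n = len(rater1)
    # observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement, from each rater's marginal category proportions
    c1, c2 = Counter(rater1), Counter(rater2)
    categories = set(rater1) | set(rater2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of 10 observations by two judges
judge1 = ["agg", "agg", "calm", "calm", "agg", "calm", "agg", "calm", "calm", "agg"]
judge2 = ["agg", "agg", "calm", "agg", "agg", "calm", "agg", "calm", "calm", "calm"]
print(cohens_kappa(judge1, judge2))  # prints 0.6
```

Here the judges agree on 8 of 10 observations (p_o = 0.8), but half that agreement would be expected by chance (p_e = 0.5), so kappa comes out well below the raw agreement rate.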
Reliability and Accuracy of Measures
Reliability does not imply accuracy
o a pump that dispenses the same amount every time is reliable, but if that amount is not the amount it is supposed to dispense, it is inaccurate
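The pump example can be made numeric: reliability corresponds to low variability across fills, while accuracy corresponds to a small bias relative to the target. The target and fill values below are made up for illustration:

```python
import statistics

# Hypothetical pump set to dispense 10.0 L per fill
target = 10.0
fills = [9.1, 9.2, 9.1, 9.0, 9.1]

variability = statistics.stdev(fills)   # small -> reliable (consistent fills)
bias = statistics.mean(fills) - target  # far from 0 -> inaccurate (wrong amount)
print(round(variability, 2), round(bias, 2))
```

The fills barely vary (high reliability) yet every fill is almost a litre short (low accuracy), showing the two properties are independent.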