PSYC 2P25 Lecture Notes - Lecture 2: Inter-Rater Reliability, Tiger Woods, Construct Validity

Document Summary

Reliability: a measurement is reliable if it agrees with other measurements of the same variable.

Internal-consistency reliability: when scores on a measurement are calculated as a sum (or mean) of various parts (items), the scores should depend strongly on the common element of the items. Internal-consistency reliability indicates the extent to which scores represent that common element. To make a measurement more internally consistent, include items that are correlated with each other: items that correlate strongly with one another are measuring a common characteristic. If the items are uncorrelated with each other, they do not share a common characteristic and might be measuring several different characteristics instead.

Inter-rater reliability: when a characteristic is measured by obtaining ratings from several raters, scores on the total (or average) rating should depend strongly on the raters' common judgment. Inter-rater reliability indicates the extent to which the overall scores represent the common element of the scores given by the various raters.
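The summary stops at the definitions, but both ideas can be made concrete with a short calculation. The sketch below is not from the lecture; it assumes item scores (or ratings) are stored in a respondents-by-items NumPy array, and the function names are hypothetical. It computes Cronbach's alpha as one common index of internal consistency, and the average pairwise correlation among raters as a simple index of shared judgment.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    A high alpha means total scores depend strongly on whatever the
    items have in common, i.e. good internal-consistency reliability.
    """
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

def mean_inter_rater_correlation(ratings: np.ndarray) -> float:
    """Average pairwise correlation among raters, given an
    (n_targets, n_raters) matrix of ratings -- a simple index of how
    much the raters share a common judgment."""
    ratings = np.asarray(ratings, dtype=float)
    corr = np.corrcoef(ratings, rowvar=False)      # rater-by-rater correlations
    upper = corr[np.triu_indices_from(corr, k=1)]  # each pair counted once
    return upper.mean()

if __name__ == "__main__":
    # Simulated data: 5 items that all reflect one common characteristic
    rng = np.random.default_rng(0)
    trait = rng.normal(size=200)                                    # common element
    items = trait[:, None] + rng.normal(scale=0.8, size=(200, 5))   # correlated items
    print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
    print("Mean inter-rater r:", round(mean_inter_rater_correlation(items), 2))
```

With these simulated, strongly correlated items the alpha comes out high; if the items were generated independently of any common trait, both indices would drop toward zero, matching the point in the notes that uncorrelated items (or raters with no shared judgment) do not yield reliable total scores.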
